Standards, the Net and All That
Dr. Larry Masinter
Principal Scientist
Xerox Palo Alto Research Center
masinter@parc.xerox.com


Ted Selker: Hello, and welcome back. It's harder to keep on schedule with the breaks maybe than with the talks. I'm really proud to invite you to listen to Larry Masinter, a legendary programmer, talking about not programming, but standards.

Larry Masinter: I work on a lot of things and a lot of it's fun, and then I tell people that I work on standards and they say, "I'm sorry." I get a lot of sympathy for working on standards, so I thought I would talk a little bit about standards, because Ted didn't want me to give one of the talks I'd given before. So I stole these slides; I actually put them together last night at about ten o'clock. Standards I have known: when I started working in the Internet Engineering Task Force and asked to be chair of one of the standards committees, they wanted to know what my previous experience was. My previous experience had been with the Common Lisp standard, because I used to work on Lisp and Interlisp and Common Lisp. Then I was chair of the URI committee that did the standards on URLs and tried to work on URNs and URAs, and it kind of fell apart. I worked on some standards in the industry consortia on document management, DEN and DMA, that are still ongoing. I'm still active as chair of the HTTP working group in the IETF, and I've worked on a variety of other standards in the Internet arena: HTML internationalization, versioning and access control, and a couple of other things.
I usually start out these kinds of talks with what I mean by the web, since I was going to be talking about web standards, and what I think is important. What is important to me about the web and the Internet is that there is one network and everyone is on it. It is not about hypertext, clearly, because we've had hypertext for ten years. It is not about graphics and images and multimedia, because we've had those for a long time. It is about one network, with everyone on it and able to communicate, and with more than one kind of communication. A lot of people think the Internet is mainly about browsing. I don't think it is, because we've had this long discussion about e-mail. People do a lot of kinds of communication with each other. The net is about people communicating: usually they retrieve things, or send them out, or broadcast them around, maybe do some kind of collaboration, with whatever kind of medium. That is the flag I want to leave here.
The issue with the net and standards is the same issue we see in a lot of other places, what I think you call the tragedy of the commons; it is the standards dilemma. There is a commons, a field, and people share the commons and let their cows graze on it. If each individual acts to optimize their own good by letting their cows graze the most, then they destroy the common good: the commons gets overgrazed and no one can graze cows at all. Similarly, on the Internet, the common good is one network that everyone is on, where we can all communicate with each other and I don't have to know what kind of browser you have before I can put up a web page for you. That is the common good. The local optimization is that I want everyone in the world to be using the Internet and using my stuff: not the generic audio tool, but my super-whizzy audio tool. But when there are too many individuals, each optimizing and trying to get the world to use their own incompatible stuff, you destroy the common good, and you get lots of different kinds of web browsers out there, and my page is "optimized for use with blank," where the various values of blank have changed over time. So that's the background of why I think this is important and interesting.
I wanted to talk a little bit about the difference between designing a standard and designing a system, because you will often get engineers who are good at designing a system, and they'll come into the standards process and try to write the standard the same way they would write a product design. We've seen a lot of that. The problem is that if you are a good engineer and you have a choice between doing A or doing B, you say, "well, what are my requirements, what do my customers need," and you design it to optimize the choice and decide: we'll do A, or we'll do B. You make a choice and engineer it into your product. But if you're designing a standard, you actually have many more degrees of freedom. You can write the standard that says, "you must do A." Or you can write the standard that says, "you must do B." You can write the standard that says, "you can do A or B, but you must say which one you did" - that at least lets people figure out what they are dealing with. Or you can say, "you can do A or B and it is undefined which one you did," or "you can do A or B or C and the results are undefined if someone depends on them." A simple example is in designing programming language standards: the order of evaluation of expressions. You'll get one version of C where the expressions get evaluated from left to right, another where they go from right to left, and another where it depends on the settings of the optimization flags, and any programmer who depends on the order is depending on something the language doesn't define. The designer of one compiler can say, "well, for this platform it's better to evaluate things from left to right," and another can say, "well, for that platform it's better from right to left." When you write the programming language standard you can't come to any agreement on it, so you wind up saying that the order is undefined, and programmers either have to write some diagnostic that, before they compile the program, figures out what order of evaluation the compiler generates and then rewrites the program to work on that platform, with a lot of CPP hassling, or else they have to program very defensively. We see the same kinds of things happening with HTML and a variety of things on the web: what does a "br" in the middle of a "li" mean, and is there white space before or after the paragraph? Then you get content providers who want to optimize their stuff for one platform, reverse engineering what one version of their favorite browser did, and then coming along at the next release and getting mad because the standard didn't say what it meant, so their assumptions are now invalid and it doesn't look good, and they complain, and a lot of nonsense. So there is a difficulty in designing standards. Clearly it is often better to make a choice; sometimes it is really better not to make a choice but to be explicit that the choice was not made, and communicating that to people who are trying to write things that are portable is difficult.
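
A small illustration of the evaluation-order trap described above. Python itself fixes left-to-right evaluation, so this sketch only simulates the two orders a C compiler might legally choose; every name in it is made up for the example.

    def run(order):
        # Simulate a compiler that evaluates the operands of f() + g() in the
        # given order; C leaves that order unspecified, so both are "legal".
        state = {"x": 0}
        def f():
            state["x"] += 1
            return state["x"]
        def g():
            return state["x"] * 10
        results = {}
        for name in order:
            results[name] = f() if name == "f" else g()
        return results["f"] + results["g"]

    print(run(["f", "g"]))   # a compiler that does f first: 1 + 10 = 11
    print(run(["g", "f"]))   # a compiler that does g first: 1 + 0  = 1
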
This is just a series of stories. I wanted to tell you about the battle of "must" versus "should." Standards not only have to tell you how something is supposed to work; they also say what it means to conform to the standard, so there is the specification and there are conformance statements, and very often a conformance statement in the body of the standard will say that you must do something. That means, and there is a little formal definition in most of these things, that in order to conform your program has to exhibit this behavior. Then there is another kind of language that says you should do this, and usually they write these in all capitals. This is not English "must" and English "should"; this is technical "must" and "should." It says, "if you want to conform, either you do this, or else you have a very good reason why you don't, and you say what that reason is." We had a big debate in the HTTP standards group over must versus should, over a lot of different items having to do with trying to design the web protocol in a way that will work in a lot of different situations. You have content providers, origin servers, who actually have the data; you have browsers on the other end; and in the middle people will often have a proxy that is a local cache, or a group cache, or a company's cache for data that people in their company have seen recently. The browser goes through the proxy in order to get to the origin data, and if the proxy has it, the request doesn't actually have to go outside the local net to the origin server. That is an optimization: you get less traffic and less handling with faster response, since you don't have to go out of the net to get something you got before. Now in the standard, the question comes up about whether or not it is okay for a proxy to deliver data to the browser that is marked as out of date. Is it okay? Here we have Playboy, here we have the cache server, and here we have the browser. Is it okay for the owner of the cache to say, "well, I'm just not going to bother with that site because they are not important for our company's business - we'll just deliver stale data along the way"? Originally we came to the decision that we were going to say no, in order to have good data, to have integrity. Take the fellow from Amazon.com, the largest bookstore on the net. They were running an application where you had shopping baskets, where you selected which books you wanted to buy, and then later on you would come back and say, "now I want to pay for those." It is not a very good idea for the cache to give you the shopping basket you were using yesterday, because you already either ordered those books or decided not to; you want the latest information. So they were setting dates on that data. We had this idea that the cache should not deliver things that are out of date - you must not do that. But you wind up with some situations where it is really physically impossible. You have a browser and it has a cache. The cache is on a machine, you are running it on one of these Metricom radios and the radio goes out; you have something that is periodically disconnected from the net. It is not a very good idea, when you click on the page, for it to come up and say, "well, sorry, I have this data but I can't give it to you because it is out of date, it's stale." So we wound up with this second-level thing: it's not that the cache must not deliver stale data, it should not deliver stale data, but if it does deliver stale data, then it must warn the recipient that the data is stale.
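
A minimal sketch of the rule just described, assuming a toy cache entry with an explicit lifetime; the function and field names are invented for illustration, and HTTP/1.1 itself carries the warning as a Warning header on the response:

    import time

    def serve_from_cache(entry, can_reach_origin, now=None):
        # Fresh entries may be served; stale ones should be revalidated instead;
        # a disconnected cache may serve stale data only if it warns the recipient.
        now = time.time() if now is None else now
        age = now - entry["stored_at"]
        if age <= entry["max_age"]:
            return entry["body"], {}              # fresh: fine to serve
        if can_reach_origin:
            return None                           # should not serve stale; go revalidate
        # Disconnected (say, the radio link is down): serving stale is tolerated,
        # but the response must carry a warning that the data is out of date.
        return entry["body"], {"Warning": '110 - "Response is stale"'}

    entry = {"body": "<html>...</html>", "stored_at": 0, "max_age": 3600}
    print(serve_from_cache(entry, can_reach_origin=False, now=7200))
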
Well, what if it can't warn the recipient, because the recipient is not really a browser, it is an automatic text indexing program, and warnings can't be delivered to the text indexing program? So what if we say, "well, you should not deliver stale data, but if you do, then you should warn, and if you can't warn, then you must ..." You can regress down this infinitely. The question in the design of standards is very often how far you want to regress in setting conformance, and what it might mean if you require conformance at a level where a cache can't really be implemented. Yes, John?

John McCarthy: Is it really infinite? Or only four?

Larry Masinter: It's really only four, but that's infinite! (laughter) They are big turtles. Various people have talked about how we're moving from the age of hardware to software to social, and I think we see this in protocol design as well. When you go to computer science school and they teach you how to design network protocols, you are usually trying to optimize performance and reliability. That is usually what protocol design is about, just as it is what programming is about: performance, reliability, matching the functions you need. But in the case of some kinds of protocols designed today, there seem to be other kinds of factors that come into play that I don't know how to characterize except as social and economic. In this same example we have three players - the origin server, the proxy cache, and the client - and they are owned by different people that don't have the same goals. That was the real puzzle in a lot of these design situations: the participants didn't have the same goals, so how do you design the standard when each of the participants is going to optimize its own goals? The client is in the business of browsing the web and wants the information it is browsing. The proxy is very often trying to optimize bandwidth. I don't know if you know this, but it is actually mainly a problem for island states; the UK and New Zealand in particular are organizing national hierarchies of cache servers for their countries, because bandwidth off of the island is a lot more expensive than bandwidth on the island. If someone else in New Zealand has hit that web page before, you would much rather get it from them than from overseas, since you pay per packet. In their case, and this is also common in other situations, they are trying to optimize the bandwidth used to go off the island. On the other hand, the people who are running the web servers have different purposes. They want to serve the maximum number of people; those who are doing advertising want to gather data about who's browsing their site - they want demographic information, they want to track who that is. And you see the conflict between these different goals turn into a conflict between the client, the proxy, and the origin. There is a phenomenon called "cache busting," where the origin server, in order to gather demographic data in spite of the best attempts of all of the proxy servers along the way, winds up generating random URLs for each page in order to keep the caches from caching things. It was already saying you shouldn't cache this, but the proxy cache says, "I'm going to cache it anyway, because it will cost me too much to go off site." So you see this set of "I'll do this" and "I'll do that" countermeasures that somehow the design of the protocol and the standard has to stop. How do we design the standard and the protocol in a way in which, no matter how much each of these individuals games the system in order to optimize their own good, what we've managed to do is optimize the overall efficiency of the network, or whatever it is that people want? That is the real puzzle here. I'm not sure we managed to do that in the HTTP design, but at least it was a good try as we go from version 1.0 of HTTP to 1.1.
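
In its crudest form, the cache busting described above can be sketched like this (the URL and query parameter are made up): the origin hands out a unique URL for every request, so no proxy can ever satisfy it from cache and every hit comes back to the origin to be counted.

    import uuid

    def cache_busted_url(base_url):
        # Append a unique query string so no two requests ever match a cache entry.
        return f"{base_url}?nocache={uuid.uuid4().hex}"

    print(cache_busted_url("http://www.example.com/frontpage.html"))
    print(cache_busted_url("http://www.example.com/frontpage.html"))  # same page, different URL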

Speaker: I just want to get a sense of how serious this is: assuming that this kind of cache busting increases at whatever rate it's going, how serious is this kind of gaming for the performance of the web?

Larry Masinter: People did an analysis showing that if the web is going to continue to grow at the rate it is growing in number of users, then we need about four orders of magnitude of performance improvement over where it is today, and that only one or even two of those orders of magnitude would come from improvements in the backbone of the Internet and the raw bandwidth to homes and browsers.

Speaker: I'll also add that it is not just a question of bandwidth, but of violation of standards: if you as an author can't depend on your data getting delivered, you have to essentially toss out any technical improvements and go to the lowest common denominator from a few years back.

Larry Masinter: So the first thing was that if we need four orders of magnitude here, the whole idea of caching at best was going to help us with a factor of four. And some people were seeing caching as only being worth about 20-30%. But we think that overall the national caches are looking for something as good as that factor-of-four improvement.

Martin Haeberli: Martin Haeberli of Netscape here. Caching serves a number of roles, including the performance role you talked about. I'd also like to point out that it serves an auditing role in some contexts, and it serves a role for off-line access in other contexts, as you pointed out. The one point I'd like to make about the island states issue is that there is yet another degree of freedom there, which is that the cost, arguably, in some contexts is more artificial -- that is, there is not necessarily a technical reason why the costs have to be what they are. The good news is that some island states have come to talk to us about how they are going to spend billions of dollars on digging tunnels in the ground to put glass in. I'm suggesting that they would actually be better off spending those billions of dollars figuring out how to subsidize what they are trying to do, which is build a local content infrastructure, given this international tariff issue. (remainder not understandable)

[Audience question inaudible]

Larry Masinter: Right now I actually think it's a marginal deal. You get to the point where lots and lots of sites are moving toward more dynamic content, and if they do that stupidly, you wind up with not only the front page - the one that says, "welcome Mike, nice to see you today, it's been three days since I've seen you" - not being cachable, but also, since they didn't think about it, all of the bitmaps turning out not to be cached either, even though they are static.

Speaker: ...as you move to dynamic pages, if this kind of standard is not -- I just want to know if this is a big deal, that's all.

Larry Masinter: The statistics really are not there; it's conjectural whether or not that is a big deal. We've had some big arguments about what it was. The point is that we figured we could design the protocol in a way that gets rid of most of the motivation for cache busting. That is, if the stuff really is dynamic you can mark it as dynamic, and the stuff that is static you can mark as static, and then you can count on things that you said were dynamic being treated as dynamic, because the proxy caches wouldn't be motivated to deliver stale data when most of the stuff is marked correctly. You sort of get into this motivational issue.
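
The marking Masinter refers to exists in HTTP/1.1 as the Cache-Control header; here is a rough sketch of how an origin might label its responses so proxies have no reason to second-guess them (the helper function and values are illustrative):

    def cache_headers(is_dynamic, max_age=86400):
        if is_dynamic:
            # e.g. the personalized "welcome Mike" front page: do not reuse it
            return {"Cache-Control": "no-cache"}
        # e.g. the static bitmaps: safe for any shared cache to keep for a day
        return {"Cache-Control": f"public, max-age={max_age}"}

    print(cache_headers(is_dynamic=True))    # {'Cache-Control': 'no-cache'}
    print(cache_headers(is_dynamic=False))   # {'Cache-Control': 'public, max-age=86400'}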

Speaker: What I do with the net is read. And I believe that the net can satisfy us readers with the present bandwidth, provided we are not interested in the letters flashing green and yellow every two seconds or something like that. So when I was trying to work from England last summer, I would sometimes have a lot of trouble. I would ascribe it to people dynamically sending movies across the Atlantic while I was merely trying to read something. So, is there any scheme that gives priority to the more modest uses?

Larry Masinter: People are normally thinking about giving priority to people who pay more rather than ...

Speaker: Well, I'm willing to pay a pretty high rate, and my bit transfer is quite modest.

Larry Masinter: I think a lot of the thinking here was influenced by folks who are running servers that become topical. So the folks from NASA, when they are doing the Jupiter flyby, have a problem with the bitmaps they are delivering - which are static and which they would like to be cached - being delivered reliably. Or the people running the election servers, where people come to the site and want to get the latest election results, but the background information is static. That is the kind of situation where even people who normally just read sometimes want to read the latest. If you get millions of people wanting to read the latest, then we have to be able to deliver caching reliably, in a way in which, when it is the latest, you get it, and when it is three months old, you don't.

Speaker: There is one tag line, John, which is that there is other work going on in terms of quality of service, and economic models and technologies to support it, that might address that issue.

Larry Masinter: Right, but that is people who pay more.

Speaker: People who are trying to measure the traffic at their site are defeated by caching to some extent which is why they are doing the cache busting?

Larry Masinter: That is one of the reasons, demographics, and ...

Speaker: That kind of measurement will only increase as more and more money flows. The people are demanding more animation and red-to-green shifting, and the commercial people are demanding more and more measurement, the kind of Nielsen stuff - is that true?

Larry Masinter: We are actually working on the demographics - hit metering and other kinds of demographic data gathering outside of HTTP. There is no reason why you have to measure that data while the traffic is being generated, as long as the caches are willing to cooperate: you have some kind of contract between the origin server and the owner of the proxy. The origin server will offer not to do cache busting as long as the proxy offers to gather the demographic data that the origin server would have gotten otherwise. And what is the standard for that kind of contract? I don't think we've quite figured that out yet.
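
One way to picture that kind of contract is the sketch below; it is purely illustrative (the class, its methods, and the reporting header are invented, not a published protocol): the proxy keeps serving its cached copy, counts the hits the origin never sees, and reports the tally the next time it goes back to the origin.

    from collections import Counter

    class MeteringProxy:
        def __init__(self):
            self.hits = Counter()

        def serve(self, url, cached_body):
            self.hits[url] += 1          # a hit the origin server never observes directly
            return cached_body

        def revalidation_headers(self, url):
            count, self.hits[url] = self.hits[url], 0
            return {"X-Hit-Count": str(count)}   # hypothetical header carrying the tally

    proxy = MeteringProxy()
    proxy.serve("/frontpage.html", "<html>...</html>")
    proxy.serve("/frontpage.html", "<html>...</html>")
    print(proxy.revalidation_headers("/frontpage.html"))   # {'X-Hit-Count': '2'}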

Speaker: Larry, can I go back to the standards question for a second? A lot of the questions you talk about in caching are similar to the questions in the design of computer systems and schedulers, and in caching you also talk about policy mechanisms and prediction mechanisms, and yet you haven't talked about any of that. Does that come into play? Or is it being ignored? Or is it different? Or what?

Larry Masinter: Well, in the design of a computer system you have some control over users who willingly violate - who go out of their way to violate and destroy - the policy mechanisms that the system operators have put into place. But there is no way you can do that out on the net. The players are completely outside of your domain of control, and so the issue of system policy being imposed on users just doesn't apply.

Speaker: No, what I'm thinking of is that in a design of a time sharing system, when you enter the kernel, you go in through system calls. There are certain policies you can implement, saying this job will be swapped out or this job will be - whatever. Like John, I prefer just to read text and switch off all of the graphics loading and if there is some way for me to signal that to the system, I could do some kind of essentially pre-fetching or provide some other kind of information to all of the guys between me and the server if such a channel exists.

Larry Masinter: Even if the channel existed, what motivation would they have to pay any attention to it?

Speaker: I might be willing to spend more money for that.

Larry Masinter: But let me give you another example of this, which has to do with __ rendering and style sheets. We have a battle on our hands between the users, who want to control how they read things, and the publishers, who want to control how the users see their stuff. It is a battle over who gets to control the look - the unstoppable publisher and the immovable user. Does your browser have a control to turn off the background that the author has provided? Does the author then have the right to override your turning off the background - "anyway, damnit, because I really want this to be white on black, and even if he said ignore the default backgrounds, I really, really mean it this time"? You see this in style sheets and overrides. We get into __ means this must be presented this way, but most browsers have controls where you can override the default font selections, except when the font - oh well, we'll get around that, we'll give them embedded GIFs and bitmaps, and then they can't change it, because we have real control over how it looks. We will see this kind of battle continuing; it's not like this is over. It is one of the real puzzles of standards design.

Speaker: (not understandable)

Larry Masinter: Well, I don't know if I want to have it, but that is what the value of it is.

Speaker: No, but what John said was really accurate. There are different constituencies, and maybe they need different systems on the web. Trying to get one system to handle everything we now want to handle, and God knows what else there will be in the future that we will have to handle, seems to me like trying to have a babble of languages all in common. And it looks to me as if trying to make a standard at this time to handle all of the things we now know is not only impossible, but not very practical. You claim that the market will decide who gets what; that is not really the answer to that problem. The market can decide that in different systems, where the cost of doing things - whether it's movies or voice or speech, or just reading, as John McCarthy said - is all you want. So I feel very poorly about you spending your enormously valuable time trying to satisfy every constituency.

Larry Masinter: So you'd like a telephone you could use to call your friends, but not anybody else. You can get those kinds of communication devices, and they make a lot of sense in limited domains, and clearly there is not one system that will meet all needs - clearly not. Clearly there are domains of incompatible communication. All I was saying was that what I think is important about the net, why people are excited about the Internet, and why it is different from all of the other communication systems we have had over the last twenty years, is that there is one net. Right? About five or six years ago there was a report on network information access systems that listed about thirty or forty different systems. We had WAIS and Gopher and Hyper-G and an enormous number of things, and it wasn't very functional as far as meeting the need of someone who had something they wanted to say to the world in a way in which they could expect that most of the community could get at it - even if they were visually impaired, but not if they spoke a language that you did not know how to write in. So it was a limited domain, and we still do have domains of language. The net provides our chance at universal communication, as ubiquitous as the telephone. And that is all: the telephone has not replaced the television as a way of communicating, and we don't get mass dial-ins in order to get the news. There are lots of other modes of communication, but at least here is one that is uniform for one kind of communication. The net offers to be a mode of communication that is uniform and universal for a lot of other things.
I wanted to say this thing about standards and standards committees; it was a lesson from a long time ago. Writing something in a standards document doesn't mean anything. It has no force. There is no way to cause anyone to do anything by writing it down in the standard. We get a lot of people who are confused by that, and they come to the standards committee saying, "I want everyone to implement this feature and therefore I want it in the draft." The standard does not cause people to write any products; it has no force of law. I think there was a time when some European countries had a law where, if there was a __, a German standard for something, and you had a product that did something like that, then it had to match the standard. I don't think Internet standards have that effect; they certainly don't have that effect. All a standards document does for you is add a word to the language. If you were going to sell a product, then to advertise the product you could write the name of the standard on it rather than attaching the spec. We also see that this is often violated: people will advertise that they implement HTTP 2.0 even before 2.0 has ever been published or anyone has imagined it. You see this kind of stuff happening a lot. But the main purpose of writing the stuff down in specs anyway is to give people a term that they can put into a contract. There are a lot of different organizations that try to get into the standards business, from the IETF to consortia to individual companies. A company can publish something and call it a standard, and it could be a standard. This is a story about the Common Lisp standard, which I'm not sure I would claim was a successful standardization effort, mainly because the standard came out long after most people cared about the thing it was standardizing.
I tell people that my greatest contribution to the Common Lisp standards process was designing the form you had to fill out in order to get a change to the language. I worked for Xerox, and Xerox had a corporate training program where you had to go and learn about problem-solving processes, and I felt like this form really embodied the Xerox problem-solving process in the following way. We had a document that we were starting with, and people wanted to propose changes to it. The first part of the form was the problem statement: you had to say what was wrong with the language, what problem you were trying to solve, without describing your proposal for how you were going to fix it. That was very difficult for some people. It was a good filter for "there is this neat feature that we need," because you had to describe what function this feature filled before you told people what the feature was. The second part of the form was your proposal, what you were proposing to do, which you had to describe without arguing for it. You couldn't say, "there is this great feature, add this following wonderful thing." You had to say, "add this following thing," without any positive adjectives. There were no arguments. Then there were sections about costs, where you had to say what it would cost users if you added this or made this change, what it would cost implementers to implement it, and what the benefits were for users and implementers, and so on. What was the performance implication, what is current practice, what did other people do? And at the bottom there was a discussion section where you could put all of the war stories, all of the testimonials, the statement that this famous figure in the Lisp community likes this feature. This was a useful way to think about taking an existing system and migrating it forward in the standards arena. Even though we don't require people in the HTTP committee to fill out the form in order to make a change to the spec, it is useful to try to separate out what the problem is that you are having. One of the difficulties I've gotten into -

Speaker: __ IBM. I often try to think, if I'm offering a solution to something, how about the anti-solution? In other words, do I have a competing solution that I can propose? I find that when people propose a specific solution, as opposed to a specific solution and another one that might be a little bit worse, they tend to be more focused on their solution than they often should be.

Larry Masinter: Actually, we often got several proposals - once there was one proposal, you often got several. I've gotten into a lot of arguments about the form. It happened in the URL committee. There is a standard in one of the __ about URLs about how you are supposed to write URLs in plain text, and there is a big, big debate about it. It says, well, you are supposed to put a left angle bracket, the letters "URL" and a colon, then the URL, and then a right angle bracket - and no one does this. Newspapers, people in newspapers, they don't print URLs that way. And the argument went, in terms of the form, like this: we understand there is a problem. The problem is, if you are scanning along in a document, how do you pick out the URLs? Where is the end of the URL? Is the period the end of the sentence, or is it part of the URL? And the argument was: we have this problem; the solution doesn't solve the problem, but it is a really, really important problem, don't you think? Don't you understand how important this problem is? Yes, but the solution is - you often have this kind of debate where the importance of the problem has overshadowed, in the minds of those who are proposing a solution, the fact that the proposal doesn't actually work.

Speaker: (not understandable)

Larry Masinter: It does not work because it doesn't actually give you any reliable way of extracting URLs out of text.
Content needs standards. It is pretty clear in the Internet community that protocols need standards, because you can't __. You can have local domains with a lack of interoperability by having web sites that have one kind of __ and another kind of __, and you click on the one that you want. That is almost inevitably the only solution for multiple languages: there is the French version and the English version, and one size won't fit all for all languages. But in the end, content needs standards, mainly for preservation. Interoperability is important, but the issue is that if there are no standards, then twenty years from now, if you want to pull up that paper again, that nifty web page that someone cited in an important paper, are you really going to be...
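
To make the extraction problem concrete, here is a small sketch (the URLs are made up): without a delimiter a scanner cannot tell whether a trailing period ends the sentence or belongs to the URL, which is what the angle-bracket convention discussed above was meant to fix; and since almost nobody writes URLs that way, the ambiguity remains in practice.

    import re

    prose = "Details are at http://www.example.com/paper.html. Comments welcome."
    print(re.findall(r"https?://\S+", prose))
    # ['http://www.example.com/paper.html.']   <- trailing period swallowed

    wrapped = "Details are at <URL:http://www.example.com/paper.html>. Comments welcome."
    print(re.findall(r"<URL:([^>]+)>", wrapped))
    # ['http://www.example.com/paper.html']    <- unambiguous, but nobody writes this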

<*** recording tape changed ***>

Larry Masinter: ....We see a lot of this. It is the meta-standard, in which you say which nonstandard thing you did, and when there are no standards it is critical at least to have meta-standards, so that when you get a page you know which variant of HTML it was in, or which browser the author intended you to view it with, so that at least later on you can figure out how you should have read it, even if you can't read it now. And it is not a substitute for convergence. We do see this cycle, time and time again. People get worried: isn't the net exploding, because you have all these people inventing twelve different kinds of audio - too many kinds of audio - and all of these different proprietary 3-D things, why don't they use... What you see, over and over again, is that there is innovation and divergence, people invent new things, and then they standardize and converge. And you can't standardize before you find something that works. Standards groups can't engineer. It is not reasonable to start a standards effort to solve a problem where you don't actually have a solution at hand; basically, all a standards group can do is verify that a solution that works in one domain will work in others. My example of a standards group that couldn't engineer is the URN committee. There was a problem - there's a real problem - and I had the real epiphany when I was on the Web for Librarians mailing list, in the middle of the URN committee work. Someone on the list had this problem: "I went out and browsed the Web and I found a bunch of really neat links for my people that were really interesting to them. And then I went back six months later and half of them didn't work anymore." And then someone says, "Oh, the URN committee is working on solving this problem. They are going to have location-independent names. Instead of referring to something by its URL, where it is, we are going to refer to it by its URN, a location-independent name." Every problem in computer science is supposedly solvable by one level of indirection, so you go through one level of indirection and map from the name, the location-independent name, to where the thing is right now. And I finally realized at that moment what was wrong with this picture, which is that most stuff goes away. A student graduates, and his home page about microbreweries at the university is gone. It is not that it has moved; it is gone. The company goes out of business. Over a twenty or thirty year period, most stuff is going to go away. And no system of one level of indirection is going to solve the problem that we don't have an infrastructure of preservation. People talk about disintermediation: there is the writer, and then there is the publisher and the library and the bookstore and all of the people in between, between the writer and the reader, and now on the Web we have disintermediated - gotten rid of all those people. You put out your Web page and people can read what you have to say. But along that pathway there was also a massive amount of inertia, momentum, that kept things available when the guy who wrote it in the first place didn't care about it any more. And we have gotten rid of all of that as well.

Speaker: Isn't that the job of the Library of Congress?

Larry Masinter: Yeah, right, right. Clearly, they are getting out of the business of accepting documents themselves for registration. All they want is the MD5 of the document, thank you, because that is all you need to prove the copy, right? Well, that is my last slide and I think I am a little over time.

Ted Selker: Not so very much. Thank you very much.
