NPUC '96 Speaker's Discussion Panel
Ted Selker: I want to -- this is the time when we are supposed to start the panel. I want the speakers of today to edge their way up here and prepare for that. I want to put up questions for this talk and put them into the beginning of the panel, and I want to invite two people up onto the bandstand in the intermission, and that is this man, Charles Kalko, and Lucinda Rios. There are a tremendous number of people who put a tremendous amount of work into this workshop, and I really can't -- the way to explain how much work Lucinda and Charles have done is to say that of the 20 or 30 people who helped put this together, there are two people who spent last night here working on these badges, I mean 5pm to 5am, but I really think the goal of this workshop, by giving us better vision, can be -- I'm not really a flower person -- can be extended, I hope, to these two people who have done just so much. (applause) So without further ado, I think that Phil has a question. Could we have two microphones for the panelists, and then the other ones you can use for the audience?
Phil Agre: I had a question for Ed; I found your talk very stimulating. It provoked a lot of thinking. You were talking about the FAA's process and you said something great -- I don't remember the exact words -- but any time anything goes wrong, everyone gets to learn from it; there's some kind of a pooling function that goes on. And the FAA does that in a very centralized way: there's someone whose job it is to go out and learn a lesson and then make everyone follow the lesson. But then another model is the one where, if you take any given job, you have everyone in the world with that job be in contact with one another so that they can have a running conversation going on, and this is basically what the Internet is, right? And that's a much more de-centralized way of doing much the same thing. Now it seems like the FAA has to do it that way because it is very much a matter of making sure that it happens and making people do it -- the structure of interests is set up that way -- whereas in the de-centralized one there's a different structure of interests. You talked about a lot of different kinds of applications of this sort of idea, and a lot of your metaphors were of things like the ISO, which is a different metaphor from the FAA -- I mean, it's a different social organization -- and you said at the end that maybe there's a business model in here and maybe there's not. How do you design the social organization of the process you're after once you realize the FAA is a metaphor and not necessarily the right model for all cases?
Ed Fredkin: Well, that's a very good question and I clearly don't know the answer. I think there has to be an organization, and it could be something like UL, which is an independent organization; it could be governmental; it could be some kind of international organization like ICAO, which is the international organization that worries about things that have to do with airlines and flying internationally. So, the only thing I can say is that I know there has to be some place with the authority to grant this certification, and it has to be independent in some sense -- whether it's government or some private thing, I don't know. What you said reminded me of one point I just wanted to mention: there's a magic about software in general, which is that if you write a program, it keeps running faster as faster computers come out, which means that it's better. This has the property that if you ever create such a system for a plant, the software gets qualitatively better with time, with the guy who did it not doing anything, because all he did was specify what the plant is, and as the newer agents get certified and replace the old ones, the software gets qualitatively better by itself, which is a kind of remarkable possibility.
Speaker: I would like to comment that this is an intensely political proposal. We could focus that by asking: how could we get the nuclear plant to accept it?
Speaker: And going in a different direction, I would like to remark that what you're talking about also maps onto component software distribution and incremental distribution -- in the software community today there are similar lessons about the practical challenges of how you would distribute this knowledge.
Ed Fredkin: I'm thinking, obviously, as a kind of technologist, that this could happen and if it did happen it would be a good thing, I can also think of it sort of from an entrepreneurial point of view, which is it's clear that it's valuable to the world so people ought to be able to make money at it. The politics, I don't know much about, to be honest.
Danny Bobrow: Ed, two questions. One is, do you actually think we know enough about software at this point to know how to certify anything? And secondly, this whole notion that things will improve because we will learn again has its isolation bug. If the way an agent works is that it knows -- that when there is something going wrong, there's going to be this kind of cavitation going on, I see a certain kind of vibration in the whole pressure system. And somebody puts some feedback into the system that keeps it steady until it fails; something that used to detect the problem and turn the plant off now can't detect a thing until the plant actually fails. So there's a set of interactions here which requires some knowledge that this is not going to be a linear process.
Ed Fredkin: On the first part of your question: you see, the kind of engine in my airplane -- it's a Pratt and Whitney PT6 -- was certified 30 years ago. But they didn't get it right, and they've been modifying it and fixing it ever since. But this process has ended up with the most reliable engine ever made. So I'm not saying that anyone knows how to write programs that work; I don't believe that. But I do believe that you can have a process where you find bugs and where the probability of introducing new ones as you fix the old ones is kept fairly low. But very often the letter that comes to say you have to do such and such is to undo something they just suggested three months before. The process isn't perfect, and they're no better at engineering than software people are, that's for sure, but nevertheless, the fact that they collect all this data from every engine running in every kind of airplane, any kind of failure, and they make use of it and look at it, results in steady improvement. And that's what I'm talking about, pretty much. On the second part: you sort of have to catch problems at all kinds of different levels and deal with them; that's the main answer to that. It's not just some high-level thing -- you need local knowledge of things locally, just like a fuse blowing, and you need global information, too. So it all has to be worked out, that's for sure.
Speaker: Ed, isn't this what happens anyway when companies release free betas on the Internet and let their users debug them, and then take all the bug reports and fix things based on the incidents that are reported in the newspapers...
Ed Fredkin: The difference is that the procedures that are used in airplanes are there because people keep getting killed, including important ones, not just ordinary ones, right? (laughter) So, take this recent accident, after lots of airport accidents like it: they didn't have a GPS -- I mean, a GPS you can buy for 1,000 dollars -- but it wasn't in their budget or something. The point is that they have had the incentive to go to extremes to make it really work, and that's what we ought to emulate, even though we're not talking about people being killed all that often.
Ted Selker: Larry, aren't you saying that we really don't need standards because in a market economy of ideas that they will create themselves by people having bug reports?
Larry Masinter: That's how standards get created. That's the problem.
Ed Fredkin: I have one more comment to make on certification, really quickly. I had the unique experience of working briefly with Dijkstra, who is known for saying that testing only demonstrates the presence of bugs, not their absence. The idea of certifying something as provably correct would be nice, but in the last couple of years I've actually come to appreciate the merit of doing operational stuff, or even the beta stuff that Larry mentioned. I'm actually seeing people like ourselves -- or at Apple, when I was at Apple -- running a maze of web servers in an empirical environment where you make one change to one and you watch whether the behavior changes or not. There's no guarantee, but you can actually now build a community of people who have that extra piece, and there are opportunities to formalize that in lots of places.
Charlie Rosen: Ed, 20 or 25 years ago, I thought that I could make an autonomous robot. A lot of people thought so and we worked at it and then we got smarter and smarter over the years and found maybe it was a little too hard at that point with what we had. It's still too hard to make a fully autonomous robot. What bothers people, what I've heard, is an implication that you can put on the net enough stuff so that machines can talk to other machines and do monitoring and inspection and failures and so forth and that bothers the heck out of me because I've visited hundreds of factories. My belief is that your idea may begin to fly, and should, I think, when you think in terms of what happened in the robot business, where you get into teleoperators that are augmented by autonomy. And that's what somebody else raised a little while ago. I'm afraid you're going to have to have people interfacing with these networks between machines.
Ed Fredkin: Let me address that. Of course, I was involved -- my license plate is ROBOT, which it's been for the last 30 years -- I was involved in early robotics work, and I agree with you completely. Some problems are harder than others; conversely, some are easier. I had the experience of designing a seawater desalination plant that embodied these ideas. All such plants in the world, with the exception of the one built by this company, have operators that start them up. Many plants have an 8-hour process for turning them on, with guys going around reading a pressure gauge, writing it down in a notebook, then opening a valve and finding the next thing on their checklist, and they walk around and do that for a long time to get the plant going. But basically, we designed a plant, using a lot of these principles, that ran itself. It was very successful from the engineering point of view, and all the plants that were built are still operating -- they make all the water in the Cayman Islands and Tortola in the British Virgin Islands, and have been for about seven years now. The point is that the problem was one hell of a lot simpler than the robotics problem. If someone said "let's start out with a nuclear power plant", the answer is "no thanks", but there are plenty of easy things to start with here where you can absolutely do it. Let's start with those. The robotics field doesn't have that many problems that are that easy; most of them are harder. So, that's the point. This is where you can cut your teeth on the easy ones and move on toward harder ones.
Speaker: I have a question for Martin. You made a presentation on the future of browsers and Netscape technology. In the wake of the presentations, do you feel -- I would like your intuitive reaction and your intellectual reaction as a technologist -- do you believe that there will be a significant market for browsers that respect the social aspects that Cliff has empirically demonstrated in his research? Or will I be facing a dominant, unfriendly Navigator 7.0 at the turn of the decade? (audience laughter)
Martin Haeberli: Michael, my reaction -- one answer is to cheat and hand the question to Lucente, who handles our experience work. But I think intuitively there is some value in the idea that Cliff -- actually, profound ideas that Cliff suggests -- and it's probably worth a dinner, if he's there, to talk about the ethical concerns. But nonetheless, I actually believe that, whether it's a browser or not, it's wise to take constructive hints from that in terms of software design. Whether Navigator 7 will be 90% influenced by Cliff's ideas and 10% by other needs for innovation is, I think, unlikely. But I think it's likely that, whether it's Cliff's work or other work about both individual and social interactions, it is important for us to go beyond where we've been. We've been kind of stuck, and actually for me it is surprising how little has been done. There's been lots of work in observing single users; there's lots of well-known research like "oh, let's go look at the out-of-box experience", whether it's a web client, a word processor, or a web server.
Clifford Nass: Let me address the ethical problem with a particular example I think illustrates the research. Historically, when someone says "oh, we need an agent for a product", the agent is always male. And when you ask anybody, "why is it that every time you have to choose one, it's male?", the answers are theories, probably none of them very good. So when we were involved in Bob 1.0, one of the issues was: should there be male characters and female characters? And one group said, "well, according to the research, males are seen as more confident and blah blah blah, so we will use only male characters". Well, the answer to that is -- well, it is true that males are stereotyped a certain way -- however, think of the marvelous opportunity to break stereotypes by including female characters. Of all the ways that you could change society's stereotypes, this is the cheapest one! You have no pipeline problem, you have no problem with people going up through the ranks; just make them female dogs instead of male dogs, O.K.? And in fact, in the case of Bob, that's exactly what they did: half the characters were female. That decision was just as much informed by our research as was the default decision of making them all male. So ethical issues are informed by the way humans think and behave. That's point one. On the issue of whether Netscape 7 will be informed by research, I think it will be, but I'm sure that my stuff and my colleague Byron's stuff won't be the only game in town. I mean, the problem is, there hasn't been a lot of really interesting and new stuff done since windows...
Ted Selker: Can I quickly see a show of hands? What percentage of the panel believes that Cliff's work will be significantly incorporated in human interface design in the next three years?
Ted Selker: Who says my questions have to be fair?
Speaker: ...I wouldn't raise my hand to that question, quite candidly, because while I believe that the work of Cliff and his immediate and extended colleagues will be to some extent incorporated over the next three years, significantly to me means a big chunk, a third, a half, three-quarters, and I don't think that's likely.
Ted Selker: I have a comment on your statement about women and interfaces. The only avatar I've seen in a user interface that I really love is one that's a receptionist. There's an avatar in Japan: you come into a building and there's this person, and it's a receptionist; it does what a receptionist does, you know, it's kind of competent and trustworthy in that way. And what you find is that it's in the service roles -- when the door opens in your car, when you see gender role models with Bob or something. I think randomizing it might do it, if that's what they're doing.
Speaker: Let's just improve the interfaces first. I think the Netscape interface is so dreadful that adding people to it will just be superfluous, let's fix it. (audience laughter)
Speaker: I think that we're going to see collaborative things before we're going to see avatars; there are enough real characters out on the net that we don't need fake ones. (audience laughter)
Speaker: (beginning is inaudible) ...this social stuff is much more than characters. It's obvious from all of this. Characters are particular manifestations, with advantages and disadvantages; there are rules of politeness and rules of flattery, etc. The spell checker is something we likely know well from products, and there are products coming right now -- which means I can't talk about them -- we're doing everything from the most technical measurement device to the most social thing, using these concepts, even though not characters. So I would bet that there will not be characters.
Speaker: I think that 20 years ago we had systems that said, "oh, this variable is misspelled, shall I respell it for you?" And a lot of people got mad at the system for talking as if it were a person, saying "shall I" as opposed to some kind of passive voice -- after all, it's a machine, and why is it talking to me as if it were a person? So in that sense, a lot of systems for a long time have had some aspects of trying to be personal, and some kind of pushback -- either negative or positive reactions to that.
Speaker: People -- we've done research on this -- people don't like computers that use the term "I". They do like characters that use the term "I", because there is a psychological category for that, and they do like voices that use the term "I", because there is the psychological "I will do this for you". In the studies we do, the computer says "this computer", and there are a number of reasons for that. So the answer is: if a person kept saying to you, "Shall I do this for you?", you would deck them, right? For the exact same reason you would deck a computer -- namely, it's annoying, it's passive-aggressive, and it's not O.K. (laughter) So we should just use the rules that apply to the people we like, as opposed to this "computers are special" attitude. It's always been "computers are special, let's make them special". If instead we made them as banal as people, they would be a lot more likable.
Ted Nelson: Also, it's personifying the computer when you're talking about a specific program. "Do you want this process shut down now?" or "Are you sure you want this to happen, in which case the Macintosh will go away for a while and buzz its disks forever?"
Audience: For a while there were cars that used to say "fasten your seat belt", and what got discovered is that there is nothing worse than a stupid thing that talks to you. (laughter) Like the doors in The Hitchhiker's Guide to the Galaxy, I think.
Speaker: Another question for Cliff. It seems your model of human-computer interaction is very linguistically centered, and there seem to be some problems with that and with other psychological models, like flow. The question about the hammer that Craig was talking about earlier is relevant to that. Some of our interactions with computers are not linguistic. They may be more like interacting with a dog or interacting with a new tool, and in fact the joy of the interaction comes precisely from the fact that there is a flow interaction happening -- as in a game, for example -- which has nothing at all to do with linguistic interaction, and if I thought I was treating a person that way, I would feel ethically responsible for it. So I was wondering if you could address that. It seems like saying all human-computer interaction maps to human-human interaction leaves out very significant portions of what our interaction with computers is like, including ones that actually have very entertaining and useful consequences.
Clifford Nass: O.K. On the linguistic issue: one of the best cues that you're dealing with another person is language. That's why there was some confusion about treating cameras socially -- we don't, nor are we evolved to. That is, people who evolved to treat trees as full-fledged people would not do too well, right? So the idea is, we've evolved in a world where there are cues that let us know there is a person there. One of the best cues is language. There are others -- filling a social role; we did that in television studies. As for the flow issue, we can get into flows where we work with people and we don't speak; in fact, if any of you have written a book or built something with someone else, there are times when you don't talk. Everybody knows what the next guy is going to hit, where, etc. So we have rules, procedures, and standards for how we treat other people not linguistically but physically, etc. -- everything from bumping into people to working as teams. So the answer is: again, those same nonlinguistic things happen in the human-human realm. The confusion is to think it's somehow different for computers. It is different for hammers, because hammers do not have the social cues that our brains evolved to respond to -- (Inaudible question from audience) -- I think this goes back to Ted's point. A video game machine is not the thing; the characters on the video game machine evoke the pathos, fears, arousal, concern, and love that people do. Does the physical box? No, there's nothing in the box that's social, nor have we evolved to treat it that way. So there's this confusion between the process, the character representation, and the thing, and that's really where the difference comes in.
Ted Nelson: That kind of focuses what I would like to say. I guess maybe my tastes are at the extreme of the scale but I just want to remove all the social cues so that the thing is like a wonderfully flexible hammer.
Clifford Nass: And hammers are great, right? There's nothing wrong with them. (Inaudible question from audience) Sure, but you'd have to give up the things you give up with hammers. You can't chat with a hammer even if you want to --
Speaker: Ever hit your finger with a hammer? It's not a chat (?)
Clifford Nass: That's right (laughing).
Speaker: I'm wondering whether it's really linguistic or just the fact that the system is responding to the nuances of what you do. People have such relationships with dogs and I don't think there's language passing between them.
Clifford Nass: Yeah, the list of things -- there's a lot of debate about what makes socialness -- but the traditional list is language, or interactivity, or a clear social role, that is, something that humans traditionally do even if a human isn't doing it, and voice, faces; those are generally the thoughts. And no one knows what combination triggers what extent (?) of social rules. And it's a continuum as well: some manifestations will get more social rules applied, some less -- which is another answer to Mike's earlier question; we don't know what the limit is -- you know, how far do you get with just text, how much more do you get with voice. That's a great open question we're beating on, but --
Ted Nelson: The other point of this is sophistication. In other words, there is a social presence when I'm using Microsoft Word, it is the social presence of the designers, or shall we say the chaotic group that gets to do the software pieces individually.
Clifford Nass: Well, yeah, we've actually done research on that. One of the debates was: are programmers psychologically relevant when you're dealing with a computer? Among designers, who are always conscious of the nature of the self -- that very small group -- yes. But what we did was take experienced users, and in one case we told them: you're working with the computer; be aware of the programmer. And in the other case we didn't say anything. We got huge differences: in particular, people were much more critical of the software when they were thinking of the programmer. There are a lot of possible reasons why. The most important conclusion is that the programmer is not normally psychologically available -- in the same way that the fact that there was a producer or director of a movie is not psychologically relevant for most people, except guys who make movies. Granted, when Steven Spielberg goes to a movie, likely he's not enjoying the plot, he's thinking about the filming and the directing -- and I don't know anything about movies -- but that's the difference between him and me, at least.
Speaker: I wanted to go back to this notion about certification and distributed use, and think a little about the kinds of software construction processes that happen over the net -- new software, the kind of situations where you see hundreds of thousands of programmers around the net joining in, constructing something, and fixing it. That isn't quite the beta-test situation, but we at least see some evidence of it happening in some situations and not arising in others, and I wonder whether that is an instance of what you were calling for.
Ed Fredkin: I don't really know -- I think that's a good thing, but what we're talking about here is that the normal mode of discovery of a bug will be that something wrong happened with the machine and it caused mischief, and this would have to be analyzed and figured out in some technical way and then dealt with.
Ted Selker: I'm concerned that a lot of the innovation comes and goes due to people having access to the ability to make these changes. For a few minutes now and then, we have access to the source code for some part of the world that we are playing with, before people figure out how to lock down the Java or lock down the browsers or lock down the lists with the source code for things. And during those moments there's this explosion of creativity that has to do with the community that is possible with sharing; on the other hand, we have to worry about business models, or we don't have food to bring to our children.
Ted Nelson: Here's a point about present-day software. Present-day software is a bizarre black-hole model where, in fact, each software manufacturer is trying to get you to leave your data in their format and then "export" it to the other formats. So whether it's Autocad or Microsoft Word, they want to capture you in their format. Now, in contrast, one of the things about the Osmic proposal that I distributed here is that, along these lines, everybody can make a piece of software and a piece of interface for it, because the data is right there where everybody can reach it.
Speaker: Contrasted with some large-scale model like G2, the model that you're proposing here -- it seems like there's a smaller-scale one that is also interesting to compare it to, something like what Echelon does. Could you comment on how it would compare with that?
Speaker: Ideas like Echelon and other sorts of little digital gadgets that communicate and so on are really the wave of the future. I would like to address one other comment about putting together these pieces and having them work. In the early days of digital logic -- this may strike you all as very strange -- you would design a circuit that might have two flip-flops and a number of gates. Guess what was in that circuit: a lot of potentiometers that, after you built it, you had to adjust so that the different gates and flip-flops would work properly together. Then, after they got past that, it used to be that when DEC built a VAX, the parts -- the disk drive and the others -- went into the system test area, where for two, three, or four weeks they worked to get all those pieces to work together. But today, miracle of miracles, when you buy a disk and put it in your PC -- it works! So the point is, we've learned to make modules of higher and higher complexity that interface, in hardware and software alike, and what I'm talking about is that the same process can take place here: the innovation won't be in how you write the software for this pump, but in how you connect together a hundred pumps and valves and tanks into an innovative plant, where it all works on the first try, because you've done for those things the same thing you've been able to do for digital logic.
Speaker: You know, I accept everything that you just said, except that, spending a lot of time on the West Coast, I seem to recall that in a 10-week span the entire electrical grid of the West Coast collapsed. I mean, not just degraded -- collapsed, blackout. This was explicitly not supposed to happen --
Speaker: Where's the guarantees? (laughter)
Speaker: -- not supposed to happen, and, what's more, if, God forbid, it ever collapsed, it was supposed to be able to be brought up very, very quickly. So there goes every single assumption about scaling, and the whole notion of emergent behavior when one begins to network seemingly simple things into complex ecologies or environments. I mean, as much as I agree with that example, I have two really nifty counters to that hypothesis. I consider an electrical grid to be mission critical, and I don't know if I'm prepared to --
Speaker: Well, in this electrical failure --
Speaker: Please, not an electrical failure, two electrical failures --
Speaker: In any case, evidently a transmission line failed, and that fact, known to some, was not communicated to lots of plants, so they couldn't take the right action. So there's -- fine, the point is this: if you look at the complexity of just one Pentium Pro chip, if you look at the complexity of a modern computer system, or you look at the complexity of the Internet, most of it works most of the time, which is a kind of miracle. And the point is that if you take an electrical power network with all its plants and everything, it's the most simple thing you can imagine compared to the Internet, with the computers and the complexity that is in the processors and so on and so forth. The point is that you can master these things one piece at a time, interconnect them, and get them to work. Take modern airplanes: they're an example of very complex things, and there are probably as many electrical nodes in a modern airliner as there are in the whole grid on the West Coast. We had a similar blackout on the East Coast many years ago, and not one since then. Those problems can be solved, and that's all there is to it. And they can be solved in large part by a combination of standardizing locally, getting control over local things, and ending up with the right kind of networks to communicate and make things happen globally.
Speaker: I think we're moving in the direction of connecting the computers together so that they talk to each other, instead of people at power plants calling each other up. But there was actually a third power outage in Palo Alto a day later, when something in Menlo Park called up another computer in Palo Alto and said: turn the power off. They never said if it was some thirteen-year-old kid with a modem or something, but...
Speaker: Well, airplanes have been crashing since the Wright Brothers and they still crash. Nevertheless, we make lots of progress and the overall air transportation has become a very safe and reliable thing. So, examples of things going wrong, I don't think are very impressive.
Speaker: To go into the airline example -- I've actually studied the Airbus design -- come on, there is a fundamental design philosophy choice when you decide to fly by wire, automate the cockpit, and override the pilot, versus having the pilot running these things. So your design sensibility determines the effectiveness of these things, and there are a lot of American pilots who don't want to fly the Airbus 300 series for a lot of those reasons. And you well know this: your design sensibility has an enormous impact --
Speaker: That's an interesting point. I just read in the Japan Times about the Airbus crash in China, where the copilot accidentally hit the turn-around-and-go-back switch and then they tried to override it manually. What this argues for, I think, is a much deeper orientation of the user to what is going on, rather than an interface that treats users as stupid.
Speaker: Well, the Airbus doesn't treat the user as stupid. All kinds of things in all kinds of airplanes have been automated. The throttles in modern airplanes aren't connected to the engines, even in the Boeing planes, and so on, and a lot of people would want a mechanical connection to the fuel valve, but the trouble is, if the pilot twitches once he's liable to cause the engine to fail. The point is, as you make this kind of progress toward better and better digital controls, you will have problems along the way, because we're exploring new territory. Fly-by-wire will essentially be the dominant thing as we finally learn it better and trust it more, and everywhere you turn there is more movement toward smarter components, smarter navigation devices, and so on. Airplanes used to have navigators; now a Boeing 747-400 has a two-person crew, and it's a giant airplane. Obviously, the fact that things go wrong with new technology doesn't mean you shouldn't keep pushing, because that's where the future is and it will work better later.
Speaker: I just wanted to make a point about your last comment about the electrical system. I think it would be a tragedy not to do anything because one particular system behaves in a certain way. There are millions and millions of self-contained systems that could be very well controlled. I remember back to my engineering-school days trying to do a network analyzer, and it's the twitchiest thing: you have this huge mechanical inertia of all the generators and motors connected to the system, it's a resonant system running at 60 hertz, and anything you do that disturbs it propagates throughout the whole system. So I think that's a special case that shouldn't be used to decry the possibility of connecting water gauges and wells in Arizona or whatever else it is. And there are a lot of things that will be very, very useful to connect through the system. You talk about --
Speaker: I think there are two points to make. One of them is that the systems are complicated and, in my experience, often the documentation and the processes have not been kept current, and one could argue that the approach Ed is suggesting would at least move toward self-configuration information being more readily and more currently transmitted, which can't hurt. The second observation, which is a lesson from the computer networking space -- and I'm sure the people at PG&E are also thinking about this right now -- is that it's much wiser to think about these from the point of view of when something fails, rather than if something fails, and really do some thorough scenarios. I don't envy the job of the chief executives of PG&E, but if I were in their shoes I would be putting together a very small team of people whose job it was to ask: what are the next six things that are going to fail, and when? What do we do when they fail? And how do we get people trained up on the communication across teams to do collaborations? So one of the lessons in what Ed is saying is that there are opportunities for more formalization of this institutional memory and the processes around it, to get much more rapid response and maybe create a business model where people can collaborate constructively, whether it is about airplanes, or computer networks, or whatever else.
Ted Nelson: To return to Michael's example of the airbus, and your response, you referred to a technology as technology and I just want to say that I don't think interfaces are technology, they're art.
Speaker: Larry? Larry? In view of what you've heard about this idea of the network's use, do you still want to take a stand on one net?
Larry Masinter: I don't see any reason why you need a separate network to connect the devices that connect people.
Speaker: I do, and I will tell you.
Larry Masinter: Great, except for the issues about reliability and hackers and encryption and so on. Those are issues that are important in the network that connects the people. Those problems have to be solved in the network, no matter what network it is.
Speaker: Already, there are people using the Internet for this purpose.
Larry Masinter: In fact, I was going to talk about some of the instances. Mainly where you see that happening is in the electronic commerce area, where they connect the payments system from the merchant to the ...
Speaker: No, they are machines, Larry.
Speaker: I have to disagree because of one thing that Ed said. He wants to have certification, and that concerns the usability of the net and the degree to which errors can be propagated, and viruses and all the rest. This makes for much higher standards than normal communications and movies and God knows what. My feeling is your network, Ed, is going to have to be a network for machines with very high levels of performance, way beyond what he's going to have to dig up to be accepted.
Speaker: Well, I am going to respectfully disagree with your position and agree with Larry's, and I am interested in Ed's, which is this: as Larry pointed out, we have this problem of trust, of authentication, of reliability on the current network. There are in fact, in conjunction with businesses and human processes, technological solutions that can in some sense create what you might call virtual subnets that have some, but not all, of the properties Ed requires -- and I understand Ed's requirements, again, for his own reasons.
Speaker: Let me just reply to that for one second. What I envision is this: the system absolutely must work on the regular Internet, no two ways about it. But in addition to being able to do that, it needs its own private network for the following reasons and the following users. You have to be able to make guarantees to some users that they will have response times and bandwidth available absolutely without any exception. And I recall the Internet has problems -- like, I remember once things were slow because some group at MIT was bringing down the East Coast with various experiments and so on. So there is no reason not to make use of the Internet, except that you also need a guarantee, because the stakes will be so high in terms of perhaps human lives or property and so on.
Ted Selker: This is a very authentic talk, we're talking about a lot of authentication and stuff. I want to also make sure that the excitement and the aggressiveness of our conversation gets spread out to the other side of the room that also has been thinking about other aspects of how user interfaces are going to go. So after this comment, I really want to respectfully get-
Larry Masinter: ... one net, not one Internet. Just like there is one transportation system, and it has airplanes that carry passengers and it carries freight and UPS, but they happen to use the same airports and there is interoperation and you can have some things transported over the others. We have one network, whether or not it has some sub-pieces that offer different guarantees of reliability and performance and reservations. And I think we're going to see, as IPv6 gets deployed, that there will be areas where you will be able to do reservation and some places where you won't.
Speaker: Yes, I was wondering, with all of this connectivity and the introduction of agents, did you see more of the distributed agents or centralized agents or agents on personal computers?
Larry Masinter: If you take the broom of the sorcerer's apprentice and you chop it into little pieces, are they separate brooms? Or is it just the broom in separate pieces?
Speaker: I won't address that question. The answer is yes to everything you said. Agents will be on servers that will be on your PC's that will be inside some little one chip micro that will probably be a power PC, one chip micro someday, and they will be all over the place. And some of them will sit there and wait for twenty years and suddenly tell you that something bad is happening that you need to know about and other will be working every minute. So I think they will be everywhere and in every form, that's my opinion.
Larry Masinter: I'd like to point out my opinion that the word security is kind of like the word love. Everybody has a lot of definitions, everybody knows that they want it and has an idea of the kind of security that they want -- perfect -- and then other people are saying that we have this language that has security, but it is security in a very narrow sense of the word, and they never have enough time to explain that it is not what everybody wants.
Speaker: (not understandable - laughter - applause)
Larry Masinter: We love our customers -- it's a different thing. We've had that issue (laughter). We have that same kind of issue in computer systems when you talk about performance. Oh, this system has good performance and this system has bad performance, but in fact what performance means is really quite different depending on whether you are doing three-dimensional graphics or disk accesses. And there are multiple dimensions of security, and different applications will need different kinds. My point is that when somebody says we have security, the masses will take that and run with it, or worse. And that is what happens. Take Java or any other -- Unix -- well, we have the ... of the next thing. So you have to be really careful about getting people to buy into things; it is really irresponsible to throw the word security around. We know what we mean. People are predisposed to misunderstand you!
Danny Bobrow: Security often might mean the amount of money that I have got invested in something, or transactions that I am making or is insured by somebody. And as long as my money is guaranteed or the product will be delivered and some insurance company is satisfied with the certification, then actually, is it secure? To my standard, maybe I say yes it is secure. And is it actually more secure than my phone line that is not secure at all, but the public is satisfied with the assurances? It is really a level of assurances. I mean, no?
Larry Masinter: No, it really does mean something, and there are different applications: some people really want privacy, and some people want authentication, and some people want reliable payment. It is true that in one situation all you want is reliable payment, and fine, and we happen to use an umbrella term, security, to talk about all of these things; but it's also true that things that offer reliable payment don't necessarily also offer privacy.
Speaker: That is right. If you are going to blanket the whole topic of secure transactions and encryption, that is a much bigger issue, and I didn't mean to step on that slice, because, to be honest, what is safe from other people's eyes? That is a level of perception, honestly.
Speaker: There was another topic that Danny was going about to bring up, but-
Danny Bobrow: I just want to go back to the notion of whether people could play roles in these reliable systems. And I think there is a whole issue here about what roles the agents can play in terms of ensuring that what people are doing is appropriate or can be compensated for, and in fact it is not that any particular part of the system has to stay working. There are, again, agents that have to look and see whether it can be reconfigured, and do these things. So as we start to learn about how things can fail in many different ways, these agents can play roles in terms of compensation, not just warning, and do these other things.
Speaker: An example is an agent that I suggested, that nobody is adopting, for e-mail. I have been flamed several times, and my immediate visceral response is to say something even meaner and more vicious. What I would like to have is a little checklist of words, and if I'm about to send the message, I would like the computer to say to me, 'Are you sure you really want to send this message?' And then I hit yes. So there is some sort of self-censorship. There would be very little overhead, but the whole idea of building in an agent that boosts in any kind of way my chances of being civil might not be a bad thing.
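[Editor's sketch] The checklist agent described here can be drafted in a few lines. This is a hypothetical illustration, not anything shown at the panel; the flame-word list, the scoring rule, and the threshold are all invented for the example:

```python
# Hypothetical sketch of the e-mail "civility check" agent described
# above. The flame-word list and the threshold are invented for
# illustration only.

FLAME_WORDS = {"idiot", "stupid", "moron", "garbage", "clueless"}

def flame_score(message):
    """Count how many flame-listed words appear in the message."""
    words = {w.strip(".,!?;:'\"").lower() for w in message.split()}
    return len(words & FLAME_WORDS)

def should_confirm(message, threshold=1):
    """True if the mailer should ask 'Are you sure you really want
    to send this message?' before sending."""
    return flame_score(message) >= threshold
```

A mailer would call `should_confirm` just before sending and pop the confirmation dialog when it returns true; hitting yes still sends the message, so the agent only nudges toward civility, it never censors.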
Larry Masinter: You need a pressure sensor in your keyboard to figure out how hard you were hitting the keys when you were typing it.
Ted Selker: In fact, back in 1980 when I was writing my master's thesis, I was doing some of the work at Smith and some at UMass Amherst, because there were spell checkers that had the language of the communities of the computer science department at UMass Amherst and of the more liberal-artsy community at Smith, and I'd use them for different purposes to notice different things about my document -- to notice what I was doing anyway. I'm noticing that it is getting late. Is there anything that somebody hasn't gotten around to asking about some topic that happened during the day, something exciting in the demo area, or some conversation they had over by the singing chairs that they want to share with us? Should we just --
Speaker: I just realized that we had Doug Crockford's talk and Ted Nelson's talk, and I wanted to ask you guys if you see your two schemes as compatible basically? And also in terms of the realization of them. I mean the visions seem similar, and I am very sympathetic with it about where we want the net to go, where we want content to, and authorship, and readership, and production and consumption. So if you could both address each other's points and talk about where that is headed.
Doug Crockford: From our perspective, we are not doing anything exclusively in hypertext. We have seen examples where, even with a profound vision about it and wanting to do it right, there is not a big payoff. I think that is right, isn't it? So it would be dumb; there is no way we are going to displace or do much with HTML and that stuff. We are mainly looking at a mirror world: there will be the web, and, you know, there is also ftp and all of that other stuff, and we'll provide yet another domain which provides an interesting set of services that does not attempt to displace or subvert or do anything to the web. It is just another way of using the Internet.
Ted Nelson: I didn't understand what Doug said. The impression I got was that you are going to have a company that will deliver stuff and that it will be a good company.
Doug Crockford: You have to come to the meetings, Ted.
Ted Selker: You mean the parties?
Doug Crockford: We've talked about it before. We are creating on-line worlds. We are creating avatar based things. Cliff's research helps me understand a little better why our worlds work. In the early forms, we had little cartoon characters running around and people were able to invest a lot of emotional energy into watching these cartoons and they had no trouble at all in recognizing that there were human beings behind them and relating to them in a very deep way. So they looked like computer programs, but they really were human beings and your research kind of suggests why they were able to see through them so easily.
Ted Nelson: I would like to just say a word about software design as art. There are two separate aspects, and all of the terms that people use are silly, like intuitive. Nothing is intuitive; it is retroactively obvious once you have seen it. The real issue is not interface, it is the design of the conceptual structure behind it -- the construct logic of the conceptual structure. Designing a game is construct logic. It is not interface. So I think that this is a much tighter form of structure. Designing complex structure -- complex abstract structure -- is what is really going on in software design. It has nothing, nothing to do with computer science as presently taught. So basically, I believe that the correct preparation for designing interactive software is film school, since software is a branch of movie making, literally. A movie is events on a screen that affect the hearts and minds of the viewer, and software is events on a screen that affect the hearts and minds of the viewer, plus interaction. Data structures are part of conceptual structure. And so essentially building the construct logic with a conceptual structure is a very intricate business, and right now we are just pinning tails on the donkey.
Speaker: Let me just put in a plug for social science as well as the arts in the design of software, because those rules are formal, except for those few geniuses who intuit all this stuff -- the people who we later say, 'Boy, did you get it right!' For most of us, it really helps to read a long list of formal rules about how people behave, so that what we build turns out to be designed well. For the geniuses of the world, it is a little easier.
Speaker: I would like to recommend Scott McCloud's book, "Understanding Comics." He has been called the Marshall McLuhan of comics, and he has a neat concept called masking which I think is one of the reasons the Commodore 64 version of Habitat worked. If you have a really abstract character, people can identify with it better, and the more concrete and three-dimensional and better rendered it is -- your evolution, I saw, was going in the wrong way, toward really realistic __, because then you have to hire artists for photographs. But what works really well is this thing called masking: you have a very realistic background and then very abstract characters over it. Tintin is a good example that Scott McCloud used. So he made this continuum of realism with the happy face at one end that anybody can identify with. People are able to identify with characters that only present themselves by words.
Speaker: We are big fans of McCloud, in fact we didn't show what the next step of our evolution looks like. It doesn't look photo real. Because exactly the thing he is pointing out. But it will be more artful than the Commodore 64 allowed us to be.
Speaker: Ed, just the tune in Peter and the Wolf. Characters don't have to have music.
Speaker: Burning houses can make us cry.
Speaker: I wanted to respond to the comment about interface design being a subset of movie making. When movie making first came around, people would imitate the stage, and you ended up with a very restrictive set of movies, and I think the same thing can happen to interface design if you think of it too much as a movie.
Speaker: Then the motion picture trust hired the cameraman to make the movie because he understood the equipment. We are in exactly that stage in software now. Technical guys are designing the software because they understand the technicalities, not realizing what finally came to Hollywood in 1904 when the director was invented: that you need someone who unifies the psychological and conceptual effects.
Speaker: An answer you gave earlier -- when Ted Nelson was asking about whether the person does think about the computer program, you said they had an even stronger negative response to some of the personalization of the program. So that sort of tells me that I shouldn't go back to my programming shop and add some flattery to the programs the developers are using -- perhaps they are inclined to think more about the program than the general public. Probably your case study was done on regular people, not programmers.
Speaker: No, we used programmers. They are very easy to get at Stanford, so we do use them. The issue is this. If you are a film director, you can watch a movie two ways. Or let me use myself. I was a professional magician for many years. I can go to a magic show and do one of two things. One is put on my sort of "learn more magic stuff" hat, in which case I am watching hand movements, I'm listening to the particular choice of words, I'm looking at the order of the thing. I'm not in the show, I'm not enjoying the show particularly. And that takes enormous effort. When I leave the theater after that show, I am exhausted. My brain is burned out, etc. Or I can enter a magic show without worrying about it, in which case I am an audience member and magical things appear and disappear and it's cool and all of those things. I'm not as exhausted. It is an easier and more natural thing for me to be a normal audience member. Similarly, when a person is task focused, they are in this natural mode. Now, I think Winograd said your breakdown here is very nice. When things go bad, novices, not experienced programmers, basically are stuck and they flail around aimlessly. One of the tools in the expert tool kit is: if I were the programmer, what would I have done? Or, in fact, just today, someone said, 'Well, you know, if it is Apple software, they do it just this way, so why don't we try to find the control panel here.' Now, that is not what he or I are thinking when we're working on the Mac. Only when things collapse do we do that. So the answer is, when things break down, the expert will orient to this other mode. But in normal use, they are like everyone else -- people who write spell checkers still love flattering spell checkers.
Speaker: So what you are saying then is that it still might just work.
Speaker: We've done stuff on hard core engineers using oscilloscopes and it has worked great. They love it.
Speaker: I am going to go out on a limb here and say that you are going to see the first zeroth-order edition of this happen next calendar year at the latest. Right? What that means is that you will see people doing practical micropayments of sub a penny, and perhaps as low as a hundredth of a cent, starting next year, if not earlier -- but not yet to the very honorable goal that Ted has articulated.
Speaker: We clearly need alternatives to credit cards, because small stuff just doesn't work with credit cards, big stuff doesn't work with credit cards, anonymous stuff doesn't work very well with credit cards. So we need alternatives. I don't think micropayments by themselves are going to make electronic commerce take off. Most people don't like being nickeled and dimed, and that is what it is really about.
Speaker: For a single source, they could do their own aggregation, so they could charge micro amounts for single articles as long as they are willing to aggregate them into a typical bill. CompuServe does micropayments, but distributed micropayments are really the issue, and then it is really a question of how distributed and how anonymous.
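[Editor's sketch] The single-source aggregation described here can be illustrated as follows. This is a hypothetical example; the class name, the unit choice, and the billing threshold are all invented. Amounts are kept in integer millicents (thousandths of a cent) so sub-cent charges add up exactly:

```python
# Hypothetical sketch of single-source micropayment aggregation:
# sub-cent charges accumulate per user and are billed only once the
# balance reaches a conventional amount. Integer millicents
# (1/1000 of a cent) avoid floating-point drift on tiny charges.

from collections import defaultdict

class MicropaymentAggregator:
    def __init__(self, bill_threshold=500_000):  # 500,000 millicents = $5
        self.bill_threshold = bill_threshold
        self.balances = defaultdict(int)  # user -> accumulated millicents

    def charge(self, user, amount):
        """Record one micro charge, e.g. 10 millicents (0.01 cent) per article."""
        self.balances[user] += amount

    def bill_if_due(self, user):
        """Issue a bill once the balance crosses the threshold.
        Returns the amount billed in millicents (0 if nothing is due)."""
        if self.balances[user] >= self.bill_threshold:
            due = self.balances[user]
            self.balances[user] = 0
            return due
        return 0
```

The integer-unit design choice matters at these scales: summing hundredths of a cent as floats drifts, while integer millicents stay exact no matter how many articles are charged before the bill goes out.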