

Computers as Social Actors
Clifford Nass
Stanford University

Ted Selker: And thank you very much Ted. At the conclusion of Ed Fredkin's talk, we will invite all these wonderful people that have been speaking today up here and continue the discussion for another 45 minutes. Now I would like to introduce Clifford Nass, a sociologist.

Clifford Nass: Even sociologists can say something about computers. The title of the talk is actually slightly different from the one Ted put down, because he encouraged me to make the strongest and most aggressive case I could. So the title is "Computers are Social Actors." Not "as" or "like" or some other metaphor. I mean this in the strongest possible sense, and throughout the talk you will see how strongly I mean it. Two commercials: one is that the research presented today is supported by the Center for the Study of Language and Information's Industrial Affiliates Program, of which IBM is a member. The other commercial is that my book, which comes out next week, describes this research, so you should all rush to the bookstore next week. The basic idea is this, and that's the top one there: individuals' interaction with technologies is fundamentally, emphasis on fundamentally, social and natural, and in a minute I'll define what I mean by social and natural; I can return to it in questions. The second point you'll see is that these responses are automatic and unconscious. Simply put, all of you in the audience will deny that you would do what you will see people like you, that is, experienced computer users, do up here. The reason you'll deny it is that these are responses you're not consciously aware of and that you can't control. So what do I mean by fundamentally social? What I mean is, go to the social science section of the library. The argument of this talk is that the people who know the most by far about human-computer interaction are social scientists. Unfortunately, none of them know that. Little did they know that they had been spending all their time writing deeply about human-computer interaction and just were not aware of it. So what I am going to try to do today is show you how and where things should be.
So what I'm going to do is go to the social science section of the library, pull out a journal, and find some paper concerning humans' attitudes or behaviors towards other people. That would be social psychology, sociology, communication, and other disciplines. Here is the best part: pull out a crayon, and wherever you see the word "human," scratch it out and scribble in the word "computer" or "communication technology." Do that in two different parts of the article. First, in the theory section, which describes the predictions these social scientists have worked long and hard to come up with about how people will treat other people. What I want to argue is that now you have a theory of how people will treat computers. Now, anyone can make up a theory. The hard part is proving it true. So go to the method section, where psychologists and sociologists worked long and hard figuring out how to design an experiment that would prove it, and just mindlessly go through, cross out a person, and plop in a computer. Steal their measures, steal their statistics, steal everything. All you are doing is plopping in a computer. We are not talking here about fancy computers. You will see that these are all text-based, no-artificial-intelligence, extraordinarily simple computers, but plop them in. Then essentially specify what social characteristics you're going to manifest, and see how pathetic the social characteristics we are talking about here are. We are not talking about pictures, human representations, or full AI. The computers refer to themselves as "this computer" rather than "I." They don't give people a name. Strip out everything that you would think would make it social. Then run the same experiment and demonstrate, using statistics, quantitatively, that the exact same rules work. Then draw conclusions, first for design, then for methods, and then for markets, which I will not talk about today. Let me give you an example of this. Take politeness.
If you go to the social psychology section of the library, you're going to find the following rule: when individuals ask about themselves rather than having others ask, the responses are more positive and more homogeneous. Put another way: if I say to you, "How do you like my talk so far?" presumably you're going to say a nice thing to me. But you can whisper something less nice to the person next to you. So there's a social politeness norm. For the homogeneity one, just imagine that people talking to me will not give their honest opinion, they will give the opinion I want to hear, but across the audience there will be a wide range of opinions about my talk, hopefully all positive. So we go ahead and do this experiment with computers. We take a text-based computer, have it teach a person, in this case with a simple tutorial system, and then have the computer ask questions about how well the subject perceived the performance--how helpful was the information it taught, how useful, how informative, etc. In one case, the computer that asks the questions is the computer people worked with. In the other case, people used a different, physically identical computer on the other side of the room. These are all experienced computer users who understand that this computer would feel no worse hearing about itself than the other computer would hearing about it. They all thought, correctly in fact, that both were programmed by the same person. We then gave people various questionnaire assessments on ten-point scales: how much they liked the computer, how much fun it was to use, how intelligent it was, etc. So the paradigm here is tutoring, testing, evaluation, and questionnaire. The language the computer used was text. It was interactive in a very minimal way: when it gave you a fact, it would ask, "How much did you know about this before?" before it gave you the next fact. In fact, everyone got the same information, and it was all text-based. So what happened?
Amazingly, responses when people were asked by the computer they had just worked with were significantly more positive. Significance here means statistical significance, not a fluke. They were also more homogeneous. We asked people who did the experiment if they changed their answers to make that first computer feel better. Everyone said, "No, don't be an idiot. Only a moron would change their answers to make a computer feel better." But that's exactly what they did. Not because they are morons, but because they are humans, and most importantly, because they did not evolve for twentieth-century technology. And that is the point of all our research. The human brain evolved in a world where anything that exhibited even the most minimal social cues--it used language, it interacted, it filled a role--was a person. Our brain goes, "Aha, I know what to do with that: treat it like a person." There is no on/off switch in the brain for media or for computers that says, "Computer, don't do this stuff!" That's not the way human brains are built. So what I'm going to do for the rest of the talk is show more of these. Hopefully it's obvious that in a normal audience I would go through all the different ways you would design if these things were true. I'm going to assume that an audience like this can deduce all the design implications having to do with politeness, Grice's maxims, etc. If not, the book has them played out. Let me tell you about the other concepts that I will be presenting. I could present a few more if people have questions about specific areas. Note that this is a rather strange way to talk about human-computer interaction. It looks more like the way you would talk about human-human interaction, but that is exactly the way we should be approaching human-computer interaction. So, politeness: we can show that people have personal-distance cues. I will present flattery. We can show that when a computer praises itself, it is believed less than when a different computer praises it.
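The "more positive and more homogeneous" claims reduce to two simple comparisons: a difference in means and a difference in spread between the two groups of ratings. A minimal sketch, using invented ten-point ratings rather than the study's actual data, and assuming a Welch-style t statistic as the positivity test:

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's two-sample t statistic: difference of means divided by
    the standard error, allowing unequal variances."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical ten-point ratings, NOT the study's actual data:
same_computer  = [8, 9, 8, 9, 8, 9, 8, 9]   # asked by the computer just used
other_computer = [5, 8, 4, 9, 6, 7, 3, 8]   # asked by a different computer

t = welch_t(same_computer, other_computer)                   # t > 0: more positive
spread_ratio = stdev(same_computer) / stdev(other_computer)  # < 1: more homogeneous
```

In practice one would compare t against a t distribution to get a p-value, and test the homogeneity claim with a proper variance test (e.g., Levene's); this sketch only shows the two quantities being compared.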
We can go through interface personality. Imitation: when a computer changes to imitate you, people are flattered; they like the computer more. Imitation is flattery, just like with people. Blame, which I will present, hopefully. Good-bad, the fundamental emotions: these are the two basic dimensions in human-human interaction, negativity and arousal, and they apply in human-computer interaction. Specialists: simply by labeling something, in this case even a television set, a specialist, people thought it was more effective. Teammates I will talk about today. Gender: female-voice computers are thought of as better teachers of love and relationships and worse teachers of technical subjects than male-voice computers, and perhaps more depressingly, people who are praised by male-voice computers think they did better than people praised by female-voice computers. And this is not my fault either. Multiple voices: two different voices on the same computer are seen as two different social actors; therefore, when voice number two praises voice number one, people think the computer did better than when voice number one praises itself. They also think it's nicer. Voice output and input I will talk about. And there are various characteristics of media presentations that affect people's perceptions, and I can go into any of these if people are interested. Let me cut to some more of the social rules now. Flattery: it turns out people are suckers for flattery. What that means is, when you are flattered, even if you know you are being flattered by a person, you will think you did just as well as if you were sincerely praised. You will think the person is just as good and nice, and you will like them just as much. Even though we all deny it, and we have terrible pejorative terms like "brown-noser" for people who flatter us. Nonsense; we all love it. Conversely, for unwarranted criticism we are skeptics.
When we get criticized, we evaluate the criticism more carefully; therefore, if I think the criticism is unwarranted, I will reject it. Psychologists call this hedonic asymmetry: positive and negative are different. You don't need to know the term. What you need to know is that flattery is different from criticism. So what we did was have people work with the computer. In one case we told them that your evaluation will be based on a great deal of study; the evaluation system is one of the best-developed systems in the world. In the other case we said we haven't had time... Good question. Let me answer the question first in the case of flattery. In the case of flattery, this is Western cultures; I don't know the literature on Eastern cultures. Some of the other concepts, politeness for instance, are universals, and some of the others do vary, and this is an important question: which of these are cultural universals? I will argue that social responses to technology are universal. The particular manifestations do differ. The flip side is, when you internationalize, when you design for particular countries, your translation must include social translations as well as the more technical translation of words. So in the other case we tell people, look, we haven't written the evaluation software yet; the evaluations you get are totally random. And then people got either praise or criticism, so everyone got one of four cases: random or careful, praise or criticism. Well, you probably guessed: flattery works just as well as true praise. Users thought that they did better than when they received no evaluation, and they thought they did just as well as when sincerely praised; their ---- was equal, they liked the computer as much, they enjoyed working with the computer as much, and the computer's performance was perceived as equal, and both were more positive than no evaluation at all.
However, insincere criticism, just like with people, is different from sincere criticism: users perceived their performance as better under insincere criticism. We suck up praise but reject criticism when it's random. We also feel better about ourselves; we just ignore it. We don't like people who criticize us unwarrantedly, either on performance or other things. This suggests, as an example, a radical new spell checker. Spell checkers are a bad thing as far as flattery goes: they only criticize, and sometimes they criticize even when you spell the word right. Let's flip that around. Imagine a spell checker that goes through and says, "----, you spelled that correctly! Fantastic! Only five percent of the country can spell ---- correctly," and at the end it says, "You're a much better speller than ever." People love it. They think it catches more long words, everything. OK, next one. I've been giving you a run-through of the wonderful papers in the library that you could run through and grab things from. So now I'm going to run through the personality psychology section. Now, personality in computers is one of the great holy grails. Since I got into the field, people have said, "Oh, computers with personality--but it's so hard; it's artificial intelligence and complex representations and all this really hard stuff." Well, it turns out it couldn't be, because humans can assess the personality of other humans within a minute or two with great accuracy, and we can't be using an enormous database to do it.
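The flattering spell checker imagined above can be sketched in a few lines. Everything below is a hypothetical stand-in: the word list, the invented "fraction of people who can spell it" figures, and the feedback phrasing are placeholders, not a real lexicon or product.

```python
# Toy sketch of a "praising" spell checker: instead of only flagging
# errors, it praises correct spellings, effusively for rare words.
# KNOWN_WORDS maps word -> invented fraction of people who can spell it.
KNOWN_WORDS = {"necessary": 0.05, "rhythm": 0.08, "cat": 0.95}

def check(text):
    """Return one feedback line per word: effusive praise for rare
    correctly spelled words, mild praise for easy ones, and a gentle
    flag for words not in the lexicon."""
    lines = []
    for word in text.lower().split():
        rarity = KNOWN_WORDS.get(word)
        if rarity is None:
            lines.append(f"{word}: not in my dictionary -- worth a second look.")
        elif rarity < 0.10:
            lines.append(f"{word}: correct! Only {rarity:.0%} of people can spell that.")
        else:
            lines.append(f"{word}: spelled correctly.")
    return lines

print("\n".join(check("necessary cat rythm")))
```

The design point is Nass's: the same dictionary lookup, reframed so that the common case is praise rather than silence, and criticism is softened.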
So it turns out that psychologists have been very nice, being the closet human-computer interaction researchers that they are, and have listed for us the criteria that manifest various personalities. Let me just tell you a few things about personality, all written by psychologists. Two aspects of personality are fundamental; they are called the interpersonal circumplex: dominance versus submissiveness--you're either a dominant person or a submissive one--and friendly versus unfriendly. Those are the two basic dimensions. They are orthogonal, and they are cross-cultural; manifestations differ a little bit, but those dimensions have been tested in about seventy different countries, and they always work. We can mark personality with simple cues, and most important is what is called the law of similarity-attraction: we like people who are like us. That's the law of similarity: "birds of a feather do flock together," but opposites don't attract. How should we manifest personality on computers if we don't know any artificial intelligence, and all we have is text? Easy.

Speaker: Does this actually help in real success?

Clifford Nass: Well, it's a good question. The answer is that some cases we've studied and some cases we are about to. In the cases we have studied, people cooperate more with the computer. Cooperation has been associated in other studies with learning. So to the extent people are cooperating, which happens with both similar personalities and teammates, it does lead to greater learning. We are about to embark on a series of studies that focus on learning specifically. But we also know that positive affect leads to all sorts of good things, including greater success. So to the extent that you can manipulate positive affect, it leads to success as well.

Speaker: In regards to the social aspect, have you done any experiments of a sort of ménage à trois kind? Because right now we are talking more about collaborative groupware. Have you done anything with group psychology, where you have people with mixed temperaments working together with the computer?

Clifford Nass: We haven't, but the area we are most excited about is CSCW, computer-supported cooperative work, where we have multiple computers and multiple people. And the interesting question there is, "Do you start getting stereotyping and in-group/out-group phenomena?" So that it becomes we the people versus them the computers, or my computer and I are a team, therefore de facto we are better, stronger, smarter, and not going to cooperate with these other people. We haven't started that yet, but that is another avenue. Psychology is roughly a hundred years old and big, and we are small, so we are busily pulling things off the bookshelf. But people are welcome to come in and help. Similarly, when computer and user are alike, you get these effects with blame and other things. Now, here is a very interesting one on control. Most of the literature in HCI, totally uninformed by human-human interaction, says, "Control is a good thing." My friendly antagonist, Ben Shneiderman, is always saying that user control is the ultimate good. It turns out that is not true, because, at least in terms of affect, if you think people are going to succeed, then you want to give them control; they will feel good. But if you think the likelihood of failure is high and you want people to feel good, take away control. That way they can...

[inaudible question]

Clifford Nass: Good question. It depends on the particular study we are talking about. In the case of the tutoring situation, it means a high or low score on a test. In the case of blame, it was a different type of test: we said experts from the U.S. Army figured out the desert survival problem, and we are going to compare your score to theirs. So there is objective success. As it turns out, my own particular bias is not towards objective success but perceived success. I personally am much more interested in how people feel than in their learning per se, which is why learning comes last rather than first. But certainly actual success is a valid variable. I think it has been overemphasized, but it is certainly valid. Not at the expense of feeling good and liking things, though. Those are important and good things.

Speaker: Do you advocate routinely giving personality tests to users?

Clifford Nass: It depends on what you mean by routinely. If someone said to me, "I can make you work better with your computer, feel better about the interaction, and have a good time, and all you have to do is answer these 8 questions," yes, I would feel comfortable with that. Now, the potential for illicit forces, Microsoft or something, capturing this information and destroying the world is real. So there is that aspect. There is certainly an issue of privacy, and privacy is always an issue. Same thing in human-human interaction: the more you reveal to me, the more potential power I have over you, and the more miserable I can make your life. Also, the nicer I can make your life. Which is why marriages are some of the happiest things and divorces are some of the most unhappy things. They are both based on too much information, which can be used either for good or ill. And I think the same rules apply. Information is a very complicated thing. What I am saying, though, is that if you trust, and I think the issue of trust is an important one, giving information can optimize an interface and make people work better, be happier, etc.

Speaker: It seems very clear that these effects hold for short-term interactions with the computer. Do we have longitudinal studies? I know people have dated bots on MUDs for about 6 months and eventually figured out that they were dating a computer. In interacting with a machine initially, we are going to use all that wiring that you talked about, but over 6 months, a year, a long-running interaction, the differences between human-computer interaction and human-human interaction, one would think, are going to become apparent. So have you studied that?

Clifford Nass: No. It is a great question that takes a lot of money and a lot of time. But it is a great area. There have been some studies with respect to Bob and over-time use: do people get tired of it?

Speaker: It seems it is a little dangerous to extrapolate from a very short-term interaction and conclude that that is true for long-term human-computer interaction.

Clifford Nass: Well, that is a good question, the answer to which is that the types of rules you are seeing are unlikely to be extinguished (that is a fancy psychological term). Things may go away, but there is little motive for them to; that is, it is only when you make some egregious error that things get bad. Even in the case where you are dating a robot, that in and of itself is probably not an egregious error, because you have had this social interaction, which is presumably what you wanted by doing this. It becomes an egregious error when someone comes up to you and goes, "Ha-ha-ha, you are stupid." Now, does that protect you the next time? No. You will then encounter someone else who will manifest social rules, and your evolved brain will fall for it just as much as it did the last time. Not because you failed to understand, but because that is the way the brain is built. So I think the emphasis on these effects going away is actually much overestimated. We can't really parse it out; our brains are too small. They not only evolved a particular way, but are too small to continually filter. Let me give you my favorite example of this. When you go to a horror movie, you get very scared. How do you calm yourself down? The best way is to say, "It's only a movie, it's only a movie." Now, why doesn't your brain say, "You moron, what the hell else do you think it was?" The answer is that your brain has a great answer for that: real life. Because your brain was built by the rules of human evolution. When it saw something, it didn't sit there and go, "Is it real?" Until recently, we didn't have to worry about that stuff. So the point is that your brain is built to accept all this stuff as real, and what is hard is to filter it. Now, what happens when you say, "It is only a movie, it is only a movie"?
You don't get the plot; you miss out on all the cool stuff, because your brain is not big enough to keep on going, "It is only a movie, something happened, it is only a movie, something happened." It is because our brains are built the way they are that you can't do this sort of what we call discounting for media. Let me just zip through one more of these, and then if people have particular concepts they are interested in, I will do those. There is a lot of work on voice input. A lot of companies, including IBM, are building software that has voice input. But one of the things that we know in real life is that when we speak with someone, there is an increased social presence. That is why it is harder to break up with someone if you have to talk to them, and harder to fire someone; it is easier to write a letter. That may not be nice, but it is certainly easier. We wanted to see whether people would feel emotionally funny in the same way with computers. The particular rule we explored here is: the more social the situation, the more social conformance. If you are alone in a house, you can eat your dinner of beans with a wooden spoon, with a towel wrapped around you, over the sink. You can't do that with a lot of people around, or you get in trouble. So you conform more with larger social groups. So we wanted to see whether, when people spoke to a computer as opposed to typing text, they would in fact be more circumspect. So we had an input mode of text or voice and an output mode of text or voice, and we asked people personal questions about themselves. People were told they would not be recorded, and everyone believed that; they just thought it was a speech recognition system. We did not have a speech recognition system, so we recorded them, but we destroyed the tapes immediately afterwards, because keeping them would be unethical. And what we were interested in here is the extremity of responses. It turns out it is not socially normative to give extreme responses, more so for dominance and submissiveness.
But to say "absolutely" to personal questions is considered not okay. And also, people tend to give the more socially accepted response. So we asked people a variety of survey questions; one experiment involved feminist issues and one involved political issues, two different experiments. And the moral of the story, summarizing both: voice input led to less extreme responses and more socially acceptable responses. So that should be a lesson, not a moral; sorry about that. That is, people were more circumspect when speaking. Voice output had no effect on this. Interestingly, we also asked people how honest they felt they could be, depending on which condition they were in. Here is a case where, even though they denied all this, when you looked at the scores for people who used voice input, they actually felt they could be less honest. So somewhere they were able to access the rule that they were being circumspect, not just following it unconsciously. So let me summarize; we will have a few minutes for questions. First of all, very important: people can't tell us what they think and feel. If we did focus groups on all these experiments, I wouldn't be presenting here today, because the results would be: people aren't polite, they are not subject to flattery, they don't believe in personalities, they don't get personal. Because when you ask people, that is what they tell you. They are not wrong; they just don't know. You have to use methods that get at what people are thinking and feeling, and failing that, in some of our other work we use measures that really don't ask: electroencephalograms, electrocardiograms, and skin conductance. Second point, as someone was asking: quality is really a perceptual issue. What is good and what is bad is perceptual, not technological. It is what people perceive as good or bad that is important. Third point: people are human first.
People always say that experts are different, which is why we run all these studies with experts.
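The "extremity of responses" measure in the voice-input study can be operationalized as distance from the scale midpoint. A minimal sketch, assuming a 1-10 scale; the condition data below are invented purely to illustrate the measure, not results from the study:

```python
# Extremity of a rating: its distance from the midpoint of a 1-10 scale.
MIDPOINT = 5.5

def extremity(ratings):
    """Mean absolute distance of the ratings from the scale midpoint.
    Higher values mean more extreme (less circumspect) answering."""
    return sum(abs(r - MIDPOINT) for r in ratings) / len(ratings)

# Invented responses for two of the four input/output conditions:
text_input  = [1, 10, 2, 9, 1, 10]   # freer, more extreme answers
voice_input = [4, 7, 5, 6, 4, 7]     # more circumspect answers
```

Comparing `extremity(text_input)` against `extremity(voice_input)` across conditions is the shape of the reported finding: voice input pulls answers toward the middle of the scale.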

<*** recording tape changed ***>

Ken Kahn: Ken Kahn here. I was wondering if you could comment on lessons that all of us might learn from the experience with Microsoft's Bob.

Clifford Nass: He won't let me, but you can grab me afterwards.

Ted Selker: That's too big a question and we are really out of time. Phil, I need your comment though; please go ahead, I want you to speak.

Phil Agre: This may be as big a question as Ken's. Cliff, I found your presentation ethically troubling all the way down. I want to ....

Clifford Nass: It's not my fault (laughs in the background)

Phil Agre: No, I think it is. At least, it's my concern. Let me just try a scenario on you. In the literature you are talking about, there is a great deal of research on the conditions under which people are more likely to obey instructions. What do you think about embedding those principles in user interfaces? Are you comfortable with that?

Clifford Nass: Okay, I think I can give you a really short answer. It is critically important and socially valuable to know all the terrible ways that people can be manipulated. That is not to say, nor have I advocated at all in this talk, that we necessarily should use those methods. The discovery that people can be manipulated is one of the most important social findings of the 20th century, and I'm delighted we know it. I'm also delighted that we know we should avoid it; that's good too. There is no ethical component to the discovery that these things exist; there is an ethical component in using them, and I am not advocating which ones you use and which ones you don't. That's for the individual ..

Ted Selker: Except, except when you are in your consulting role.

Clifford Nass: Well but even there, I'll give you a really short anecdote: male characters are trusted more than female characters.

Ted Selker: So on the character question: for example, if I am designing a user interface, I really want to focus and work on tasks and be task-oriented. Now, if I've got this little guy over here, that's disorienting me. I'm sorry, guys, that's not really helping with my task. Now, when is it appropriate to have an avatar helping me in a task? That has to do with when the task is generally social, probably, plus I'm sure we can learn more about that.

Clifford Nass: No, it's the same thing as when I want to know the meaning of a word: sometimes I look in the dictionary, and sometimes I go to the guy next door, not for reasons of speed but because I feel like being social with the guy next door. Even though I may be working on a task, I may just feel like it. Similarly, social manifestations should be there when you feel like it. With that said, one lesson from Bob is that the characters there were way over the top. They spent their lives saying, "Look at me, I am a character; look at me, I am a character." We don't like that in people, and we certainly don't like it in software either. Social presences that are available when we want them and not when we don't are the people we like best, and those are the people we should model.

Ted Selker: This is a fantastic talk, Thank you so much.

Clifford Nass: Thank you and I'll be glad to talk to people afterwards. And buy the book.

Ted Selker: Buy the book. I know there are a lot more people interested and a lot more questions about ....

Clifford Nass: The media equation.

Ted Selker: Ooh...

Clifford Nass: Mediated life equals real life.

Ted Selker: Anyway, no more speaking, let's get the microphone off him. I want.....

Clifford Nass: Cambridge University Press, but it's in regular bookstores.

Ted Selker: No advertising, ...give it to him...

Clifford Nass: Copyrighted. (laughter)

Ted Selker: I really hope that the discussion of this talk carries over into the discussion period that follows. I trust and expect that we will have more chances to talk about all these issues, and I think they revolve around a lot of things that we are going to be seeing more and more reasons to think about. With that, we are going to talk about something very different. We are going to talk about....

User System Ergonomics Research (USER)