Agents: from Pandemonium to ... whither?
Considering the user-hostile computing environment of the 1940s, it is amazing that any of our ideas of what artificial intelligence could do are still interesting... The concept of an intermediary that would act as an agent, doing things you wanted done, still thrives today. Still, I am dreaming of agents that can understand and interpret high-level goals and purposes. What is important? What is correct? What should be done? I want an agent to remind me, "Hey, boss, but yesterday you said..." or "Professor, you want me to lie to the IRS..." or "But, honey, that's wrong..." An important thing that seems missing from the current concept of an agent is that a good agent for you incorporates your set of purposes and goals. I want an agent that can learn and adapt as I might, that can at least occasionally infer what I would want it to do from the updated purposes it has learned from working for me, and that will do as I want rather than the silly things I might say.
Oliver Selfridge came to MIT from England at the age of 14 to study with the greats. He organized the first public session on AI with Marvin Minsky in 1953. His early papers were on neural nets (1948) and on pattern recognition and learning (1955). His Pandemonium paper of 1958 is recognized as the beginning of breakthroughs in several fields. Oliver has spent his career creating projects at research laboratories, including Lincoln Laboratory, Bolt Beranek and Newman, and GTE Laboratories, and has been a lecturer at several universities. He is the author of several technical books and a number of children's books.