When discussing the nature of an individual's beliefs about intelligence, knowledge or the learning process (e.g., Norman, 1993; Bruning et al., 1999), I have noticed in a number of discussion forum threads on EDEDC and ULOE11 that it can be a useful device to put oneself in the position of an artificial intelligence agent, a knowledge-based computer system or a robot. Others have observed that adopting a "Cyborg Pedagogy" can be useful in educational situations (e.g., Angus et al., 2001).

Think like a Robot


Photo from the I, Robot movie on the Internet Movie Database

To get you into an appropriate frame of mind you might explore Donna Haraway's Cyborg Manifesto (1991).

Philip K. Dick's novel "Do Androids Dream of Electric Sheep?" (which was the basis for the Blade Runner movie) explores a number of personal identity and ethical issues related to robots, their emotions, and their relationships to humans. Some of these issues are raised in the posting entitled: Blade Runner Film (1982) Philosophical Issues: Personal Identity.

John McCarthy, pioneer of AI, died on October 24th, 2011. Typical of John's desire to communicate about his field was a short sci-fi story he wrote in 2001, "The Robot and the Baby", which has many interesting themes and, to me, epitomises his breadth of interests, politics and fascinating opinions. A capable companion robot – "Robot Model number GenRob337L3, serial number 337942781--R781 for short" – was one of many deployed to assist people, deliberately made unappealing and emasculated by the constraints society had placed on robot use.

The story begins:

"Mistress, your baby is doing poorly. He needs your attention."
"Stop bothering me, you …" … "Love the … baby, yourself"

John amusingly includes a long line of reasoning by R781, in the bracketed notation of LISP, on the probabilities of the baby being harmed if the robot disobeys its key constraints:

(= (Command 337) (Love Travis))
(True (Not (Executable (Command 337)))
      (Reason (Impossible-for robot (Action Love))))
(Will-cause (Not (Believes Travis (Loved Travis))) (Die Travis))
(= (Value (Die Travis)) -0.883)
(Will-cause (Believes Travis (Loves R781 Travis)) (Not (Die Travis)))

With this reasoning R781 decided that the value of simulating loving Travis, and thereby saving the baby's life, was greater by 0.002 than the value of obeying the directive not to simulate a person. There follows a progressively escalating series of events in which the whole world watches the authorities' handling of the situation, commenting on it in real time on social media, anticipating Twitter by some years.
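
To make the arithmetic of that choice concrete, here is a minimal sketch in Python. It is not code from the story: only the -0.883 outcome value and the 0.002 margin come from McCarthy; the -0.881 penalty for violating the no-simulation directive is a hypothetical figure chosen simply to reproduce that margin.

# A sketch of R781's value comparison (hypothetical figures except
# where noted; this is not code from McCarthy's story).

VALUE_DIE_TRAVIS = -0.883       # from the story: (= (Value (Die Travis)) -0.883)
VALUE_SIMULATE_PERSON = -0.881  # hypothetical penalty for violating the directive

# Obeying the directive: Travis does not believe he is loved, and dies.
value_obey = VALUE_DIE_TRAVIS

# Simulating love: Travis lives, but the directive is violated.
value_simulate = VALUE_SIMULATE_PERSON

margin = value_simulate - value_obey
print(f"obey: {value_obey}, simulate: {value_simulate}, margin: {margin:+.3f}")
# prints: obey: -0.883, simulate: -0.881, margin: +0.002
# The positive margin is why R781 chooses to simulate loving Travis.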

Read McCarthy (2001) if you want to explore an informed opinion on the ethics, issues and dilemmas involved in human-robot interaction, which we may one day face. The story has many thought-provoking elements. I personally feel for the emasculated robot that is left in the Smithsonian.

Think like an Octopus (Tentacle)


Photo by Brendan Cole from a post on Orion Magazine

Now perhaps go one step further and look at an argument or discussion point from the perspective of an animal. Again, to get you into an appropriate frame of mind you might explore Donna Haraway's Companion Species Manifesto (2003).

A recent article by Montgomery (2011), asking "How Smart is an Octopus?" and exploring its (high) level of intelligence, is fascinating. Montgomery quotes a diver friend as indicating:

Three-fifths of an octopus's neurons are not in the brain, they're in its arms. It is as if each arm has a mind of its own. For example, researchers who cut off an octopus's arm (which the octopus can regrow) discovered that not only does the arm crawl away on its own, but if the arm meets a food item, it seizes it, and tries to pass it to where the mouth would be if the arm were still connected to its body.

So you might go a step further still and consider a discussion position from the distributed point of view of an octopus's tentacle!

Think like Skynet


Photo from Wired Science: Data as Art: 10 Striking Science Maps

Now let's push this all the way and try to take the completely disembodied position, for argumentation, of being a network of computers. Or, if you really want world domination, or to get back at the humans, you could adopt the perspective of "Skynet" as envisaged in James Cameron's Terminator movies.

At the 2011 Edinburgh International Film Festival, one of the events was a screening of the 1984 sci-fi classic The Terminator. The event was followed by a discussion between the audience, myself and the roboticist Sethu Vijayakumar. A podcast (EUSci, 2011) of an interview with us just before the event covers some of the issues raised by such sci-fi portrayals of robots: their relationship to humans, what their thought processes might be, and the ethical matters involved in robotics... and looks at the possibility of Skynet coming about, or whether it might already be here.

Think Differently

N. Katherine Hayles' (1999) "argument" is that information needs an embodiment. I thought she was offhand in her remarks on "how ... was it possible for someone of Moravec's obvious intelligence to believe ..." otherwise (see Moravec, 2011). I did not see a rationale for her dismissal of other thinking beyond her own asserted viewpoint. I would have preferred to see some argumentation from her on how the "context" of a "body" provides defining characteristics for how knowledge is used when so embodied. That could have been interesting with respect to education in both its face-to-face and distance learning forms.

By discussing an embodied context for knowledge, Hayles might have been able to argue what it is about a (human or animal) body that means it can uniquely carry something that another device cannot. If the "information" is "stored" somewhere, whether in wetware in the form of mushy grey matter or in a computer, or is in transit between communication devices, does the information still exist?

This transference of the argument to a non-human agent can also help avoid over-emphasising human traits, or arguments that assume the superiority of our own species. The more we observe of animals and consider artificial agents, the more we will come to realise we are just another type of soft machine. Recent studies apparently show we can even share blood transfusions with chimpanzees, so closely are they related to us. Dolphins may have a different type of intelligence, but should we put such intelligent creatures in zoos?

Using the device of the posthuman pedagogy described in this article could lead to thought-provoking discussions.

References

Angus, T., Cook, I., Evans, J. et al. (2001) "A Manifesto for Cyborg Pedagogy?", International Research in Geographical and Environmental Education, Vol. 10, No. 2, pp.195-201.

Blade Runner Film (1982) "Philosophical Issues: Personal Identity" See http://www.philfilms.utm.edu/1/blade.htm

Bruning, R. H., Schraw, G.J. and Ronning, R.R. (1999) "Cognitive psychology and instruction", Merrill.

Dick, Philip K. (1968) "Do Androids Dream of Electric Sheep?", Doubleday.

EUSci (2011) "AI in the Movies - Discussion on The Terminator Film at the Edinburgh International Film Festival", Austin Tate, Professor of Knowledge-Based Systems & Sethu Vijayakumar, Professor of Robotics, University of Edinburgh. http://www.eusci.org.uk/podcasts/eusci-podcast-extra-conversation-ai-researchers

Haraway, Donna (1991) "A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century", in Simians, Cyborgs and Women: The Reinvention of Nature, New York: Routledge, pp.149-181.

Haraway, Donna (2003) "The Companion Species Manifesto: Dogs, People, and Significant Otherness", Chicago: Prickly Paradigm Press, 2003.

Hayles, N. Katherine (1999) "Toward Embodied Virtuality", in How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, pp.1-25, 293-297, Chicago, Ill.: University of Chicago Press.

McCarthy, John (2001) "The Robot and the Baby", http://www-formal.stanford.edu/jmc/robotandbaby.html

Montgomery, Sy (2011) "Deep Intellect: Inside the Mind of the Octopus", Orion Magazine, November/December 2011. http://www.orionmagazine.org/index.php/articles/article/6474/

Moravec, Hans (2011) Hans Moravec Home Page. http://www.frc.ri.cmu.edu/~hpm/

Norman, Don (1993) "Things that make us smart: defending human attributes in the age of the machine", Addison-Wesley.


Sidebar on Emotions

I have been asked before if I believe AI systems and robots can have emotions. My answer from my BAT FAQs is:

It depends on whether you see emotions as something only humans can express... if they are fundamental aspects like desire, fear, pleasure and pain, these can be used as guides to influence programs and robots performing in real environments which the robot can sense. Computers that are aware of their users' emotions could improve their dialogues... and it could be a useful mode of expression to engage their users more effectively.


Partly based on Digital Cultures Blog Post: AI, Cyborgs and Robots, Austin Tate, 16-Nov-2011