Minds, Brains, and Turing

Posted 9 May 2011 at 21:59 UTC by steve

Stevan Harnad was invited to present a talk at the recent Online Consciousness Conference. His talk is titled Minds, Brains, and Turing (PDF format). In it, Stevan explains how and why he thinks Turing got it wrong when he set the modern agenda for cognitive science. He starts with a description of the famous Turing Test (which he calls T2). Along the way he offers comments on Searle's Chinese Room thought experiment and brings in the idea of "what it feels like" - an idea on which he places a great deal of importance:

"Yet, although it may be an illusion that some of the things I do, I do because I feel like it, it is certainly not an illusion that it feels like some of the things I do, I do because I feel like it. And that feeling is as real as the feeling that I have a toothache even when I don’t have a tooth."

Later he proposes the "Robotic Turing Test" (T3) as an improvement on Turing's original. Stevan believes an entity cannot be intelligent if it merely communicates intelligently; it must also have human-like "sensorimotor capacities". Finally he proposes the neurobehavioral Turing Test (T4), which tests for human-likeness in communication, sensorimotor capacities, and "neurobehavioral performance capacity". The apparent downside is that T4 ends up as more of a test for identifying human intelligence than for determining whether non-human entities are intelligent. It's easy to imagine a silicon-brained robot with alternate sensorimotor capabilities that's as intelligent as a human, yet would have no chance of passing T4. In fact, most well-known fictional robots and non-human life forms (e.g. HAL, Mr. Spock) would fail T4 despite general agreement that they seem intelligent. A variety of interesting responses and comments to the talk have been posted.

Would Vegans eat robots?, posted 9 May 2011 at 22:48 UTC by steve » (Master)

The best exchange I noticed in the comments and replies references Mr. Data, a fictional robot from Star Trek, whom Stevan proposes would pass T3 (but presumably not T4) and who, at least in some ST:TNG episodes, could feel emotion:

S.Mirsky: “[W]ould we treat Data… like a toaster or other zombie machines instead of as a fellow person? (not about whether we would be justified… but whether it would make sense…)”

S.Harnad: Well the moral question would certainly trouble a vegan like me: I definitely would not eat a T3 robot. (...more of the exchange omitted...) As for me, Data would be enough — not just to prevent me from eating him or beating him, but for according him the full rights and respect we owe to all feeling creatures (even the ones with blunted affect, and made of metal).

Interestingly, it seems Stevan acknowledges both that an entity passing T4 might still not "feel", precluding it from being intelligent/conscious/alive (whatever word we prefer), and that an entity such as Data, who could only pass T3, might feel after all.

