Machines That Think Like People - Bad Idea?

Posted 2 Jul 2008 at 22:23 UTC by steve

The Guardian published a piece by Charles Arthur titled, "Artificial intelligence: God help us if machines ever think like people", in which he questions the idea that humans are a good model on which to base machine consciousness. Why? "We don't build skyscrapers based on the same principles as the human spine; if we did, then they'd be constantly falling down or showing signs of significant weakness. We don't build transport systems that work like the human body, using muscle-like elastic bands snapping back and forth to power them." He notes that even the human body is a mess because "evolution is a terrible designer". His premise is based on a recent book by Gary Marcus, Kluge: The Haphazard Construction of the Human Mind, which investigates how the brain's ancient, evolutionarily old features conflict with its relatively recently acquired, language-based ones. In the end he suggests that giving a machine a mind as badly designed as ours would be an act of cruelty.

conscious?, posted 3 Jul 2008 at 13:10 UTC by JamesBruton » (Master)

"I just think that humans are a terrible example to follow if you want to develop something that's conscious".

Depending on the definition of 'conscious', a conscious machine may have no choice but to turn out human-like - because the definition itself potentially includes everything that makes us what we are.

The point about skyscrapers and transport systems seems to be a red herring. You could argue that buildings made to bend to withstand earthquakes are similar to a human spine, and that any stored-energy element used to power transport is just a complex version of the potential energy stored in a stretched elastic band.

Agreed!, posted 3 Jul 2008 at 17:43 UTC by Nelson » (Journeyer)

I would have to agree with much of what Mr Arthur writes. I can think of no good reason why an intelligent machine would have to be organized or function in a manner analogous to an intelligent animal.

On the other hand, the human brain, with all of its evolutionary baggage, is amazingly capable.

Although progress in neuroscience has been slow, I am inclined to believe that a day will come when we have a thorough understanding of how the human brain operates, and are able to construct artificial minds that are far more effective and efficient.

Non-problem as far as I see it., posted 4 Jul 2008 at 00:55 UTC by MDude » (Journeyer)

I'm pretty sure most attempts at "replicating human thought" aren't trying to copy every detail of us outright, just the parts that whoever is building that particular intelligence thinks are important. Most likely, the flaws in the human reasoning system (or at least the most obvious ones) will be treated as details that can be abstracted away and replaced with whatever is more efficient, much like how most robot developers don't intentionally use distorted cameras in order to simulate the human eye. So no, I don't think our machines will be riddled with weird quirks of human biology. They will, however, be riddled with weird quirks of human culture. ;)
