The Limits of Artificial Intelligence

Posted 5 Dec 2004 at 04:38 UTC by steve

A new ACM Ubiquity article addresses what the author, Alexandru Tugui, believes are limitations of artificial intelligence that cannot be overcome. The article is primarily concerned with AI as a simulation of biological intelligence rather than with the creation of real machine intelligence. Even so, some of his objections seem a bit odd, such as the claim that AI can never truly simulate biological intelligence because it is limited to 1s and 0s whereas biological intelligence can have intermediate values. A CD player is a computer that deals only with 1s and 0s, yet it seems to simulate analog music quite easily.
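The CD-player point is easy to make concrete. Here is a minimal sketch (the 44.1 kHz rate and 16-bit depth are CD conventions; the 440 Hz tone and sample count are my own illustration, not from the article) showing that 1s and 0s can track an "analog" waveform to within a tiny, bounded quantization error:

```python
import math

# Approximate an "analog" sine wave with signed 16-bit samples, as a CD
# does, and measure the worst-case quantization error.
def quantize(x, bits=16):
    levels = 2 ** (bits - 1)          # signed range, like CD audio
    return round(x * levels) / levels

# 100 samples of a 440 Hz tone at the CD sampling rate of 44.1 kHz
samples = [math.sin(2 * math.pi * 440 * t / 44100) for t in range(100)]
digital = [quantize(s) for s in samples]

max_error = max(abs(a - d) for a, d in zip(samples, digital))
# Rounding to the nearest level keeps the error within half a step, 1/2**16
assert max_error <= 1 / 2 ** 16
```

Adding more bits shrinks the error further, so "intermediate values" are only out of reach in the same trivial sense that a CD can't store an infinitely precise waveform.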

Penrose's arguments, posted 6 Dec 2004 at 01:46 UTC by outsider » (Journeyer)

It seems to me like he is arguing in a similar way to Penrose (The Emperor's New Mind). But don't forget that Penrose did say intelligence could be artificially created, though only by overcoming the limitations of deterministic systems.

Determinism and intelligence, posted 6 Dec 2004 at 04:43 UTC by steve » (Master)

I've seen determinism raised as an argument against free will but never against intelligence itself. How does he explain human intelligence? Does he claim the Universe itself is non-deterministic? (Come to think of it, he's in the school of thought that relies on some kind of quantum weirdness as an escape from determinism, isn't he?)... I just read Daniel C. Dennett's book, Freedom Evolves, which spends a lot of time dealing with these ideas, and he doesn't see determinism as particularly relevant to either. He thinks the problem stems from people confusing determinism with the idea that sequences of events are inevitable or unavoidable.

Misconceptions about computers, posted 6 Dec 2004 at 12:41 UTC by motters » (Master)

Some philosophers and even neuroscientists (who ought to know better) share common misconceptions about what computers are. They get too hung up on the fundamental aspects of computing: Turing machines and the 1s and 0s. What they miss is the most powerful aspect of TMs: they can simulate other types of machines, provided those machines can be described by some definite procedure.

They also usually fail to recognise that within any complex system there can be multiple levels of organisation, all of which are equally valid in their own right. As an example, if I take my car to a garage because it won't start, I don't go to the mechanic and say "the atoms in this part of my car are not moving with a specific amount of energy in order to bring about a self-sustaining electro-chemical reaction", but instead say something like "the damn thing won't work, can you check the spark plugs?". Both levels of description are valid, and you could describe the higher levels (spark plugs and pistons) as emergent properties of the interactions between the lower levels (atoms and forces).

In short, computers should be thought of as modelling tools rather than just things that shuffle symbols or 1s and 0s.
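The point that a general machine can run any machine supplied as "some definite procedure" can be sketched in a few lines. This toy example is my own, not from the comment: a generic simulator executes a finite-state machine that is given purely as data (a transition table), here one that accepts strings with an even number of 1s.

```python
# A general-purpose simulator: the machine being simulated is just data.
def simulate(table, start, accepting, inputs):
    state = start
    for symbol in inputs:
        state = table[(state, symbol)]   # follow the "definite procedure"
    return state in accepting

# One particular machine, described as a transition table: it tracks
# whether it has seen an even or odd number of 1s so far.
even_ones = {('even', '0'): 'even', ('even', '1'): 'odd',
             ('odd', '0'): 'odd', ('odd', '1'): 'even'}

assert simulate(even_ones, 'even', {'even'}, '1100')       # two 1s: accept
assert not simulate(even_ones, 'even', {'even'}, '1101')   # three 1s: reject
```

Swap in a different table and the same simulator models a different machine, which is the sense in which a computer is a modelling tool rather than a mere symbol shuffler.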

What about ADCs and floating point numbers?, posted 6 Dec 2004 at 13:20 UTC by c6jones720 » (Master)

I think the binary argument was particularly weak. With 1s and 0s you can represent whatever analogue quantity you like, subject to quantisation levels. Most of the artificial neural nets using delta rules that I've seen are digital simulations and make use of this sort of thing. I think a bigger limitation is our own understanding of how best to devise a method to solve the problems of AI.
