16 Mar 2006 Tetra2o2 » (Journeyer)

Last night before going to bed, I was thinking about how to create the artificial intelligence for my robot. As I thought more about it, I found myself experiencing a serious moment of doubt. So far, I have assumed without much critical evaluation that creating a true machine intelligence (I don't like the term 'artificial intelligence' because I don't think that, if machines were to become truly intelligent, there would be much use for the prefix 'artificial') would be a good thing. But with the recent articles on the Defense Department's use of robot-like devices for warfare, I am not so certain anymore. It seems to me that machine intelligence, especially once such intelligence becomes sentient, would have vast potential for abuse. One could argue that computers already have potential for abuse (e.g., crackers and virus spreaders), but I think this misses the point. Giving machines a high level of independent intelligence (which is what I'm really talking about) would make them potentially much more dangerous than computers in general, because the capabilities of most computers are limited by the skill of the programmer. A robot or computer with a high level of machine intelligence might be able to do things far beyond the expectations of the human programmer.

Sentient computers and robots are a serious matter. I suspect that when computers and robots become recognizably sentient, people will be afraid of them. Humans will feel threatened by them, just as some were by the great chess-playing programs (e.g., Deep Blue) in the mid-to-late nineties. It seems not only possible but inevitable that computers will become sentient. In fact, I think it is possible with the technology we have now; the problem is one of having the right software rather than the right hardware. Just think of how intelligent some species of birds are. Their brains are relatively small compared to those of many mammals, and yet some exhibit extremely high levels of intelligence, comparable to dolphins and chimps. (See http://www.alexfoundation.org/index2.htm.) Thus, it seems that what robots and computers now lack, as far as sentience goes, is the right programming. True, some aspects of robotics, such as fast visual or audio processing, require sheer processing power (at least the way vision software and cameras are currently built), but the capacity for logic can be programmed easily. I think that when someone figures out a way to control a robot such as Honda's ASIMO with a supercomputer, and writes the correct software, we could have a robot exhibiting sentience tomorrow.

If you observe the reaction that people have toward cloning, imagine how they will react to an autonomous robot that appears to be conscious. I think they will react negatively and fearfully when they observe it acting in unanticipated and spontaneous ways, as one would expect from an entity that is really conscious. They will be forced to deal with a non-human intelligence, which will, at a minimum, threaten their religious views of human special-ness and, if some Skynet-like machine emerges, threaten their lives.
