Older blog entries for TheDuck (starting at number 0)

I noticed in the article http://robots.net/article/2003.html that there is an argument for Utilitarianism and that robots should observe this social concept. The concept itself is doomed to failure. There is mention of a "greater good," but in whose context? Who decides what is "good" for everybody? Is my robotics hobby more important than your liposuction? The article talks about one person being sacrificed so that five people can have his organs. I beg your pardon? Did anybody ask the human organ production plant what his views on this concept are? The last person I recall sanctioning the use of one person or their organs for another person's interests was Hitler, and I don't recall everybody being pleased with that (though, undoubtedly, some were). Please read 'Atlas Shrugged' by Ayn Rand for more on this.

Also, do we realize we are attempting to assign such things as rights to a robot? Let me put it another way: do you think a hammer and screwdriver should be governed by Utilitarianism or Objectivism? Sounds ridiculous, yes? No matter what, a robot is a device (like a phone) that does exactly what we tell it to do. If we 'emulate human response', we have not created a sentient being. So if the argument is that the designer or programmer might consider these rules as principles to guide the construction of a robot, that makes sense to me. But keep in mind it is not the "robot's fault" any more than it's the hammer's fault. "Guns don't kill people, people kill people," as they say.

What do you think?
