Most of the robotics work I've done has been on immobile, 5-degree-of-freedom robots. I developed drivers, later used in a couple of other students' work, for the Rhino, a small, toyish robot designed for use in education. This machine was controlled with an old VME setup and was later ported to QNX, a real-time Unix-like operating system. As mentioned earlier, CWRU has a project-based class, where we worked on the Rhino and a Motoman. There were some interesting projects, such as following a laser with a sensor in order to measure the robot's link lengths, part sorting using image processing, and cutting circles from foam with a soldering iron. The final project was to implement part manufacturing: use AutoCAD to draw something, divide it into slices, and then have the robot cut the slices from foam. While each piece worked (I think), the whole pipeline was never coordinated to run from start to finish.

Currently, I'm looking at biologically inspired AI. The idea is that, presently, there are certain tasks robots just aren't very good at, like path finding. The most advanced robots currently can't drive 200 miles through the desert without falling into a ditch or something (see the DARPA Grand Challenge for details), and the Mars rover moves something like 100 feet per day. Your dog, however, can easily run half a mile through a forest to find you if you call it loudly enough. And it can do it even if you have a friend trying to stop it, meaning it's pretty fast. How does the dog's brain do that? If we (humanity) knew that, we might be able to make our machines (robots) do it too.
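As an aside, the slicing step from that class's final project, dividing a drawn part into flat layers for the robot to cut, can be sketched roughly like this. This is a minimal illustration, not what we actually ran: the triangle-mesh input format, the function names, and the slab thickness are all my assumptions here.

```python
# Sketch of the slicing step: cut a 3-D part (here, a triangle mesh)
# into flat layers a robot could trace out of foam.

def slice_triangle(tri, z):
    """Return the 2-D segment where triangle tri crosses the plane at height z,
    or None if it doesn't cross."""
    pts = []
    for i in range(3):
        (x1, y1, z1), (x2, y2, z2) = tri[i], tri[(i + 1) % 3]
        if (z1 - z) * (z2 - z) < 0:  # this edge straddles the plane
            t = (z - z1) / (z2 - z1)
            pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(pts) if len(pts) == 2 else None

def slice_mesh(triangles, thickness):
    """Cut the mesh into horizontal layers of the given thickness.
    Each layer is (z, list of contour segments at that height)."""
    zs = [z for tri in triangles for (_, _, z) in tri]
    layers = []
    z = min(zs) + thickness / 2  # slice through each slab's midpoint
    while z < max(zs):
        segs = [s for tri in triangles if (s := slice_triangle(tri, z))]
        layers.append((z, segs))
        z += thickness
    return layers

# Example: a unit tetrahedron sliced into 0.5-thick layers
tet = [
    [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
    [(0, 0, 0), (1, 0, 0), (0, 0, 1)],
    [(0, 0, 0), (0, 1, 0), (0, 0, 1)],
    [(1, 0, 0), (0, 1, 0), (0, 0, 1)],
]
layers = slice_mesh(tet, 0.5)  # two layers, three contour segments each
```

A real pipeline would also have to chain the segments into closed contours and convert them to joint trajectories for the robot, which is roughly the coordination that never quite came together in our project.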