Older blog entries for motters (starting at number 5)

Have just got my copy of "Designing Sociable Robots" by Cynthia Breazeal, which describes the construction and software of the MIT Kismet robot. The vision system on the robot looks fairly simple, consisting only of motion detection, skin tone detection and primary colour detection. Nevertheless the videos on the accompanying CD look good, with some interesting behaviors being displayed.

For distance measurement it looks like she has used the same sort of simple subtraction which I've used on Rodney. This just gives an overall indication of whether there is an object close to the robot and how fast it might be moving.
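
I don't have her code of course, but the idea is simple enough to sketch in a few lines of Python with NumPy. The `threshold` parameter here is made up for illustration:

```python
import numpy as np

def motion_amount(prev_frame, curr_frame, threshold=30):
    """Crude frame subtraction: the fraction of pixels that changed
    noticeably between two consecutive greyscale frames."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    changed = diff > threshold            # pixels that moved noticeably
    return changed.mean()                 # fraction of the image changed

# A nearby object covers more pixels, so it changes more of the image
a = np.zeros((64, 64), dtype=np.uint8)
b = a.copy(); b[10:50, 10:50] = 255       # large (near) object appears
c = a.copy(); c[30:34, 30:34] = 255       # small (far) object appears
print(motion_amount(a, b) > motion_amount(a, c))  # True
```

There's no actual range measurement involved: a closer or faster object simply changes a larger fraction of the image, which is all the cue you get.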

I've started reading the much touted book by Stephen Wolfram, "A new kind of science". It's a considerable tome, being just about the right size and weight to prop open a heavy door, and thus far I've only read the first couple of chapters.

There seem to be some pretty big claims made in the first chapter, including one about possible AI applications, but so far nothing really exciting seems to have happened. Nevertheless I've written a little VB program to demonstrate the one-dimensional cellular automata which are described. Most of the patterns produced are pretty boring, but there are a few which appear chaotic. I think my rule numbers aren't quite the same as those in the book, but the patterns are the same.
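
For anyone curious, the whole update rule fits in a few lines. This is a Python rather than VB sketch, using what I understand to be Wolfram's standard rule numbering, which may be exactly where my numbers diverge from the book's:

```python
def step(cells, rule):
    """One update of an elementary (one-dimensional, two-state,
    nearest-neighbour) cellular automaton under Wolfram's numbering:
    the new cell is bit (left*4 + centre*2 + right) of the rule number."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

# Rule 30: a single live cell grows into a chaotic-looking triangle
row = [0] * 15 + [1] + [0] * 15
for _ in range(8):
    print(''.join('#' if c else '.' for c in row))
    row = step(row, 30)
```

Rule 30 is one of the chaotic-looking ones; most of the 256 rules settle into the boring repetitive patterns I mentioned.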


After a certain amount of faffing about I've managed to get Rodney's slow speed visual tracking working. I managed to overcome the competing optical flows problem by ignoring the actual velocity of the target and just using its distance from the centre of the camera's coordinates as the error signal.
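
Stripped of all the servo details, the loop amounts to a proportional controller on the pixel offset. A rough Python sketch, where the `gain` value and image size are purely illustrative:

```python
def track_step(target_x, target_y, image_w, image_h, gain=0.1):
    """Servo command increments derived from the target's offset from
    the image centre; the target's actual velocity is deliberately
    ignored, which is what sidesteps the competing optical flows."""
    err_pan = target_x - image_w / 2
    err_tilt = target_y - image_h / 2
    return gain * err_pan, gain * err_tilt

# Target 40 px right of and 30 px below centre of a 320x240 image
d_pan, d_tilt = track_step(200, 150, 320, 240)
print(d_pan, d_tilt)   # prints 4.0 3.0
```

Because the error is position rather than velocity, the head's own motion shrinks the error instead of fighting it.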

First I tested the tracking only on one axis (head pan). This worked ok, so I've also done the same for the eye tilt axis. There are other axes, such as neck tilt, which I could include, but I really want to keep the head movements to a minimum and as smooth as possible so that the target isn't lost due to camera shake. There is some slight shakiness in the very slow speed movements of the head, but this is probably inevitable given the rather crude way in which the miniSSC controller works.

To make the visual tracking a little more reliable I've increased the size of the local region within which the program searches for matches, and upped the sampling resolution a little.
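
The matching itself is just block comparison over a local search window. A sketch of the idea in Python/NumPy, minimising the sum of absolute differences; the block and search sizes are illustrative, not the values actually running on Rodney:

```python
import numpy as np

def best_match(prev, curr, x, y, block=8, search=6):
    """Find where the block at (x, y) in `prev` moved to in `curr` by
    minimising the sum of absolute differences (SAD) over a local
    search window. A larger `search` tolerates faster motion at the
    cost of more comparisons."""
    ref = prev[y:y + block, x:x + block].astype(int)
    best, best_sad = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if y + dy < 0 or x + dx < 0:
                continue                      # window fell off the image
            cand = curr[y + dy:y + dy + block,
                        x + dx:x + dx + block].astype(int)
            if cand.shape != ref.shape:
                continue                      # window fell off the image
            sad = np.abs(cand - ref).sum()
            if best_sad is None or sad < best_sad:
                best, best_sad = (dx, dy), sad
    return best

# A textured scene shifted 3 px right and 2 px down is recovered exactly
rng = np.random.default_rng(0)
prev = rng.integers(0, 255, (32, 32)).astype(np.uint8)
curr = np.roll(prev, (2, 3), axis=(0, 1))
print(best_match(prev, curr, 10, 10))   # (3, 2)
```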

Of course this type of tracking is only for slow moving objects. For things which move faster the robot needs to use a different, so-called "ballistic", tracking system. Here the system determines the position error and then does a fast move of the head, ignoring anything which is seen during the move (mostly just blur).
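
The ballistic move is barely a function. In this Python sketch the `Servo` class is a made-up stand-in for the miniSSC interface and the degrees-per-pixel calibration is invented:

```python
import time

class Servo:
    """Hypothetical stand-in for a miniSSC pan servo."""
    def __init__(self):
        self.angle = 0.0
    def move_by(self, degrees):
        self.angle += degrees

def ballistic_move(servo, target_x, image_w,
                   degrees_per_pixel=0.1, settle_time=0.05):
    """One 'ballistic' saccade: convert the pixel error straight into a
    head angle, command the whole move at once, then wait out the blur
    before trusting the camera again."""
    error_px = target_x - image_w / 2
    servo.move_by(error_px * degrees_per_pixel)   # single fast move
    time.sleep(settle_time)   # frames seen during the move are ignored

s = Servo()
ballistic_move(s, 260, 320)   # target 100 px right of centre
print(s.angle)                # 10.0 degrees
```

The key difference from slow tracking is that nothing is closed-loop here: the move is computed once, executed fast, and vision only resumes after the head settles.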

Have been testing out the visual tracking on Rodney. I switched off all his other behaviors, effectively bringing him to a complete stop except for the tracking routine.

When I wave a toy in front of the robot it actually tries to follow it at slow speed, and I can see that the fovea region is continuously focused on the object. Unfortunately the robot's head tracks the object for a short time, then moves back again. At first I thought this was just too much gain, but no amount of tuning seemed to alleviate the problem.

What's actually happening is a sort of battle of the optical flows. The vision system is seeing a flow in one direction and initiates the command to move the robot's head, but then when the head moves this produces an apparent optical flow in the *opposite* direction. This causes the annoying back and forth behavior.

At the moment I'm not quite sure how to get around this problem. I don't really want the robot to ignore everything while its head is moving, because this is supposed to be a slow speed tracking mechanism independent of the ballistic trajectory system, which already works ok for big movements. I think some sort of predictive mechanism is needed to anticipate the self-induced optical flow produced by the robot's own movement.
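
One possible shape for that predictive mechanism, sketched in Python: subtract the flow you would expect from the commanded head motion (an "efference copy") before treating the remainder as real target motion. The calibration constant and sign convention here are invented for illustration:

```python
def compensated_flow(observed_flow, commanded_head_velocity,
                     pixels_per_degree=10.0):
    """Remove the optical flow the robot expects from its own head
    motion (an efference copy of the motor command) so that only
    externally-caused motion remains."""
    # Panning the head right makes the scene appear to flow left
    self_induced = -commanded_head_velocity * pixels_per_degree
    return observed_flow - self_induced

# Head panning right at 2 deg/s makes a stationary scene appear to
# flow left at 20 px/s; after compensation that reads as no motion
print(compensated_flow(-20.0, 2.0))   # 0.0
```

With that subtraction in place, a stationary target no longer looks like it is moving backwards while the head pans, which is exactly the back-and-forth battle described above.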

- Bob

A few more tweaks to the motion detector. I added the ability to track a moving object for a limited period of time, with the tracking being manually initiated by pressing a button.

When the new motion detector is integrated with the robot I'll need to decide on some criteria which it can use to decide whether a particular target is worth tracking or not. It will also be interesting to see whether the robot can continuously track an object with its head moving at slow speed.

Also experimented with doing some classification on the fovea region, but not much in the way of results with that yet.

I've rewritten the motion detection system for my Rodney robot so that it's now much more similar to the type of motion tracking found on more expensive robots like Cog. This is really a substantial improvement on the previous system and is able to detect distinct target areas within the image.

Previously I just used a system which calculated a global "centre of motion" coordinate within the whole image and used this to direct the robot's head. The new system is more accurate in that it isolates individual moving parts of the image and moves a small foveal region to zoom in on that area. At present the region of the image selected to be highlighted by the fovea is just the biggest moving part of the image, but potentially other criteria could be used to select from multiple "attention boxes" which may appear from time to time.
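
The core of the new detector can be sketched as: difference the frames, group the changed pixels into connected regions, and hand the biggest region's bounding box to the fovea. A pure-Python illustration, where the threshold and 4-connectivity are my choices rather than necessarily what runs on Rodney:

```python
import numpy as np
from collections import deque

def biggest_moving_region(prev, curr, threshold=30):
    """Threshold the frame difference, flood-fill changed pixels into
    4-connected regions, and return the bounding box (x0, y0, x1, y1)
    of the largest one: the candidate 'attention box' for the fovea."""
    moving = np.abs(curr.astype(int) - prev.astype(int)) > threshold
    seen = np.zeros_like(moving, dtype=bool)
    best_box, best_size = None, 0
    h, w = moving.shape
    for sy in range(h):
        for sx in range(w):
            if not moving[sy, sx] or seen[sy, sx]:
                continue
            # flood fill one connected region of changed pixels
            queue, pixels = deque([(sy, sx)]), []
            seen[sy, sx] = True
            while queue:
                y, x = queue.popleft()
                pixels.append((y, x))
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if 0 <= ny < h and 0 <= nx < w \
                            and moving[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        queue.append((ny, nx))
            if len(pixels) > best_size:
                ys = [p[0] for p in pixels]
                xs = [p[1] for p in pixels]
                best_size = len(pixels)
                best_box = (min(xs), min(ys), max(xs), max(ys))
    return best_box

# Two moving blobs: the fovea goes to the bigger one
prev = np.zeros((40, 40), dtype=np.uint8)
curr = prev.copy()
curr[5:15, 5:15] = 255      # big moving object
curr[25:28, 25:28] = 255    # small moving object
print(biggest_moving_region(prev, curr))   # (5, 5, 14, 14)
```

Swapping "biggest region wins" for some other criterion is just a matter of changing the comparison, which is what makes the multiple-attention-boxes idea straightforward to try later.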

Typically a lot of the motion which the robot sees is me sitting in front of it, so the fovea seems to spend a lot of time focused on my face. Later on I could maybe use this for face recognition, or recognition of different facial expressions.

Although the new motion detector is designed for use on the robot, the program here is completely independent of it, so all you need is a single Video For Windows compatible webcam to try it out.

