Older blog entries for motters (starting at number 12)

I've tried the recently released open edition of Borland's Kylix 3. It looks nice, and is virtually identical to its Windows equivalent, C++ Builder, which I've been using to develop Rodney's vision system.

Actually, I chose C++ Builder precisely because I knew that Borland were developing a Linux version. This should mean that I can port Rodney's code to Linux with minimal effort, without losing the modern drag-and-drop style of development.

There are a couple of unknowns when dealing with Linux. The first is how Video4Linux works, and whether it supports simultaneous access to two cameras. The second is how serial comms works under Linux. Under Windows I'm using the MSComm control, and presumably there is something equivalent in Linux.
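
From what I've read, serial access under Linux goes through the POSIX termios API rather than a control like MSComm. Something along these lines might be all that's needed (the device path, baud rate and the miniSSC-style packet are just guesses for illustration, not tested code):

```cpp
// Sketch: open a serial port under Linux using the POSIX termios API.
// Device path, baud rate and the miniSSC-style packet are illustrative guesses.
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdio>

int openSerial(const char *device, speed_t baud)
{
    int fd = open(device, O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return -1; }

    termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                   // raw mode: no line editing or translation
    cfsetispeed(&tio, baud);
    cfsetospeed(&tio, baud);
    tio.c_cflag |= (CLOCAL | CREAD);   // ignore modem control lines, enable receiver
    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}

int main()
{
    int fd = openSerial("/dev/ttyS0", B9600);      // port and speed are just examples
    if (fd < 0) return 1;

    unsigned char packet[3] = {255, 0, 128};       // miniSSC-style: sync, servo, position
    write(fd, packet, sizeof(packet));

    close(fd);
    return 0;
}
```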

At last, after months of searching the web and asking on newsgroups, I've finally managed to get Rodney's two QuickCams running at the same time and on the same computer. This may not seem like much, but for me it's a significant breakthrough, which should permit some interesting new experiments.

Check out the demo on http://www.fuzzgun.btinternet.co.uk/rodney/vision.htm

In the pipeline are plans for the next version of the Rodney robot, which may eventually use FireWire webcams so that I can get a higher frame rate at larger resolutions, without the noise which results from video compression. FireWire cameras should work with the same WDM capture system I've already developed.

- Bob

I've made use of a snake algorithm to improve the object detection ability of my Rodney robot. This helps to avoid the spurious background detections which were previously a problem.
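
For anyone curious, the general idea of a greedy snake iteration looks something like this (the energy terms and weights here are illustrative, not the exact ones in Rodney's code):

```cpp
// Sketch of one greedy iteration of a snake (active contour).
// The energy weights and the image energy function are illustrative assumptions.
#include <vector>
#include <cmath>

struct Pt { int x, y; };

// One pass over the contour: each point moves to whichever position in its
// 3x3 neighbourhood minimises a weighted sum of continuity, curvature and
// image energy (e.g. negative edge strength, supplied by the caller).
void snakeIteration(std::vector<Pt> &contour,
                    float alpha, float beta, float gamma,
                    float (*imageEnergy)(int x, int y))
{
    const int n = (int)contour.size();
    for (int i = 0; i < n; i++)
    {
        const Pt prev = contour[(i + n - 1) % n];
        const Pt next = contour[(i + 1) % n];
        Pt best = contour[i];
        float bestE = 1e30f;

        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++)
            {
                Pt cand = { contour[i].x + dx, contour[i].y + dy };
                // continuity: distance to the previous point
                float cont = std::hypot((float)(cand.x - prev.x),
                                        (float)(cand.y - prev.y));
                // curvature: second difference along the contour
                float cx = (float)(prev.x - 2 * cand.x + next.x);
                float cy = (float)(prev.y - 2 * cand.y + next.y);
                float curv = cx * cx + cy * cy;

                float e = alpha * cont + beta * curv + gamma * imageEnergy(cand.x, cand.y);
                if (e < bestE) { bestE = e; best = cand; }
            }
        contour[i] = best;
    }
}
```

Repeating that sort of pass a few dozen times pulls the contour in towards strong edges, which is what stops it wandering off onto the background.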

Have a look at some of the results at http://www.fuzzgun.btinternet.co.uk/rodney/vision.htm

After writing the optical flow and segmentation routines for the vision system of my Rodney robot I've now combined the two algorithms to form an object detection system.

The fundamental assumption is that parts of the image which are moving in approximately the same manner usually belong to the same object. Of course this isn't always true, but as a general commonsense rule it holds most of the time.

At the moment the combined method merely highlights areas of the image which it thinks belong to the same object. It doesn't actually carry out any recognition of the identified areas, but that is the logical next step.
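
In rough terms the grouping step amounts to something like the sketch below. The block-based flow, the fixed similarity threshold and the crude one-pass labelling are all simplifications for illustration, not the actual routine:

```cpp
// Sketch: group image blocks whose motion vectors are similar, on the
// assumption that similarly-moving blocks belong to the same object.
// Block layout, threshold and the crude one-pass labelling are illustrative.
#include <vector>
#include <cmath>

struct Flow { float vx, vy; };

// Returns one label per block. A block inherits the label of its left or top
// neighbour if their flow vectors are close enough, otherwise it starts a new group.
std::vector<int> groupByFlow(const std::vector<Flow> &flow,
                             int blocksX, int blocksY, float threshold)
{
    std::vector<int> label(flow.size(), -1);
    int nextLabel = 0;

    for (int by = 0; by < blocksY; by++)
        for (int bx = 0; bx < blocksX; bx++)
        {
            int idx  = by * blocksX + bx;
            int left = idx - 1;
            int top  = idx - blocksX;

            if (bx > 0 &&
                std::hypot(flow[idx].vx - flow[left].vx,
                           flow[idx].vy - flow[left].vy) < threshold)
                label[idx] = label[left];           // similar motion to the left block
            else if (by > 0 &&
                std::hypot(flow[idx].vx - flow[top].vx,
                           flow[idx].vy - flow[top].vy) < threshold)
                label[idx] = label[top];            // similar motion to the block above
            else
                label[idx] = nextLabel++;           // start a new group
        }
    return label;
}
```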

Work on Rodney's vision system continues apace. The latest algorithm to be completed is a texture-based segmentation routine. This breaks the image down into a number of coloured blobs.

After some tweaking of the algorithm I've managed to get it working quite well, even on the often poor quality images captured from the robot's webcams. This segmentation routine may be very useful for detecting face-like areas of the image for face recognition. The characteristic shapes produced by different hand movements may also be strong candidates for recognition using this algorithm.
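
The actual routine uses texture measures, but the blob-extraction idea is easiest to show with plain colour similarity. A simplified sketch (the tolerance value and the RGB distance measure are just examples):

```cpp
// Sketch: grow a blob outwards from a seed pixel, flooding to neighbours whose
// colour is within a tolerance of the seed. The real routine is texture-based;
// this colour-only version just illustrates the blob-extraction idea.
#include <vector>
#include <queue>
#include <utility>
#include <cstdlib>

struct RGB { unsigned char r, g, b; };

// labels: one int per pixel, 0 = unassigned. blobId should be >= 1.
void growBlob(const std::vector<RGB> &img, int width, int height,
              int seedX, int seedY, int tolerance,
              std::vector<int> &labels, int blobId)
{
    const RGB seed = img[seedY * width + seedX];
    std::queue<std::pair<int, int> > open;
    open.push(std::make_pair(seedX, seedY));

    while (!open.empty())
    {
        int x = open.front().first;
        int y = open.front().second;
        open.pop();

        if (x < 0 || y < 0 || x >= width || y >= height) continue;
        int idx = y * width + x;
        if (labels[idx] != 0) continue;             // already part of a blob

        const RGB &p = img[idx];
        int dist = std::abs(p.r - seed.r) + std::abs(p.g - seed.g) + std::abs(p.b - seed.b);
        if (dist > tolerance) continue;             // too different from the seed

        labels[idx] = blobId;
        open.push(std::make_pair(x + 1, y));
        open.push(std::make_pair(x - 1, y));
        open.push(std::make_pair(x, y + 1));
        open.push(std::make_pair(x, y - 1));
    }
}
```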

I've written a small demo program to calculate optical flow from an attached webcam. The program works quite nicely and zips along at 20fps on my 1.7GHz P4. At present it's written using the only C++ compiler I have to hand, Borland C++ Builder version 4, and I've got no idea what DLLs you would need to run the executable standalone since the help system is pretty vague on that score.
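
Roughly speaking the idea is to match small blocks between successive frames. A simplified sketch of that sort of thing is below; the block size and search radius are just examples, and this is not the demo's actual source:

```cpp
// Sketch of block-matching optical flow between two greyscale frames.
// Block size and search radius are examples; this is not the demo's source.
#include <vector>
#include <cstdlib>
#include <climits>

struct Vec2 { int vx, vy; };

// Sum of absolute differences between a block of the previous frame at (bx,by)
// and the same block of the current frame displaced by (dx,dy).
static long blockSAD(const unsigned char *prev, const unsigned char *curr,
                     int width, int bx, int by, int dx, int dy, int blockSize)
{
    long sad = 0;
    for (int y = 0; y < blockSize; y++)
        for (int x = 0; x < blockSize; x++)
            sad += std::abs(prev[(by + y) * width + (bx + x)] -
                            curr[(by + y + dy) * width + (bx + x + dx)]);
    return sad;
}

// For each block, search a small neighbourhood for the displacement with the
// lowest SAD and record that displacement as the block's flow vector.
std::vector<Vec2> opticalFlow(const unsigned char *prev, const unsigned char *curr,
                              int width, int height, int blockSize, int radius)
{
    std::vector<Vec2> flow;
    for (int by = radius; by + blockSize + radius < height; by += blockSize)
        for (int bx = radius; bx + blockSize + radius < width; bx += blockSize)
        {
            long bestSAD = LONG_MAX;
            Vec2 best = {0, 0};
            for (int dy = -radius; dy <= radius; dy++)
                for (int dx = -radius; dx <= radius; dx++)
                {
                    long sad = blockSAD(prev, curr, width, bx, by, dx, dy, blockSize);
                    if (sad < bestSAD) { bestSAD = sad; best.vx = dx; best.vy = dy; }
                }
            flow.push_back(best);
        }
    return flow;
}
```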

Anyhow, the general idea is to use optical flow information to help my Rodney robot recognise certain types of gesture. The prime candidates are head nodding/shaking and finger/arm pointing. These elementary social events, together with other information from speech recognition, all help to give the robot important clues about what is going on and where its attention should be focused.

If anyone wants to have a go at porting the program to MFC or KDE I'd be interested to have a look at the results, since I haven't really done much C++ programming for years.


Still plodding on with Dr Cynthia's book whilst sitting in the museum park in York with some bizarre medieval festival going on in the background. I'm sure that people of the Tudor period didn't smoke cigarettes or use mobile phones. But anyway, there's a lot of interesting and potentially useful stuff in there, and the way that the behaviors and emotions are implemented seems neat.

I was watching some of the videos on the CD which comes with the book. The frame rate for Kismet's vision system seems OK, even with quite a lot of stuff going on at 128x128 resolution. By contrast Rodney's vision system is much slower than the reputed 40Hz rate of Kismet, so I've started porting the vision system which I wrote in VB to C++. I'm using the only C++ compiler I have - Borland C++ Builder version 4 - and the improvement in performance simply from changing language is impressive. Using C++ I can easily do all the necessary processing and still maintain the 33Hz frame rate of the QuickCam. In fact the performance is so good that it might become practical to detect eyes within an image of a face by their occasional blinking.

So the next version of Rodney's vision system will probably be in C++. The only question is how to make the C++ program communicate with the robot's other modules. For this I might use UDP rather than Microsoft's proprietary COM technology, because that would make the program easier to port to Linux at a later date.
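
Part of the appeal of UDP is that sending a message between modules is only a few lines with ordinary BSD-style sockets, and essentially the same calls exist under Winsock. A minimal sketch, with the port number and message format made up for illustration:

```cpp
// Sketch: send a datagram from one module to another over UDP using BSD-style
// sockets. The port number and message format are made up for illustration.
// Under Windows the same calls are available via Winsock (after WSAStartup).
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cstring>

int main()
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in dest;
    std::memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port = htons(5000);                    // port chosen for illustration
    dest.sin_addr.s_addr = inet_addr("127.0.0.1");  // another module on the same machine

    const char *msg = "HEAD_PAN 12";                // made-up message format
    sendto(sock, msg, std::strlen(msg), 0,
           (const sockaddr *)&dest, sizeof(dest));

    close(sock);
    return 0;
}
```

The receiving module would just do the mirror-image recvfrom on the same port.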

Have just got my copy of "Designing Sociable Robots" by Cynthia Breazeal, which describes the construction and software of the MIT Kismet robot. The vision system on the robot looks fairly simple, consisting only of motion, skin tone and primary colour detection. Nevertheless the videos on the accompanying CD look good, with some interesting behaviors being displayed.

For distance measurement it looks like she has used the same sort of simple subtraction which I've used on Rodney. This just gives an overall indication of whether there is an object close to the robot and how fast it might be moving.
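
The subtraction itself is about as simple as it sounds. Roughly, something like this, where whether the two images are the left and right cameras or successive frames is an assumption on my part:

```cpp
// Sketch: the "simple subtraction" measure. Sum the absolute per-pixel
// difference between two greyscale images and use the total as a rough
// proximity/activity score. Whether the two images are the left and right
// cameras or successive frames is an assumption for illustration.
#include <cstdlib>

long imageDifference(const unsigned char *imgA, const unsigned char *imgB,
                     int width, int height)
{
    long total = 0;
    for (int i = 0; i < width * height; i++)
        total += std::abs(imgA[i] - imgB[i]);
    return total;   // larger values suggest something close by, or more movement
}
```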

I've started reading the much-touted book by Stephen Wolfram, "A New Kind of Science". It's a considerable tome, being just about the right size and weight to prop open a heavy door, and thus far I've only read the first couple of chapters.

There seem to be some pretty big claims made in the first chapter, including one about possible AI applications, but this far into the book nothing really exciting seems to have happened. Nevertheless I've written a little VB program to demonstrate the one-dimensional cellular automata which are described. Most of the patterns produced are pretty boring, but there are a few which appear chaotic. I think my rule numbers aren't quite the same as those in the book, but the patterns are the same.
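
For anyone who wants to try it, the whole automaton is only a few lines. A C++ equivalent might look like this (the rule number, width and step count are arbitrary examples):

```cpp
// Sketch of a one-dimensional (elementary) cellular automaton of the kind
// described in the book. The rule number, width and step count are just examples.
#include <vector>
#include <cstdio>

int main()
{
    const int rule  = 30;    // e.g. rule 30 produces one of the chaotic-looking patterns
    const int width = 79;
    const int steps = 40;

    std::vector<int> cells(width, 0);
    cells[width / 2] = 1;    // start with a single live cell in the middle

    for (int t = 0; t < steps; t++)
    {
        for (int i = 0; i < width; i++)
            std::putchar(cells[i] ? '#' : ' ');
        std::putchar('\n');

        std::vector<int> next(width, 0);
        for (int i = 0; i < width; i++)
        {
            int left   = cells[(i + width - 1) % width];
            int centre = cells[i];
            int right  = cells[(i + 1) % width];
            int index  = (left << 2) | (centre << 1) | right;   // neighbourhood as 0..7
            next[i] = (rule >> index) & 1;                      // look up the rule bit
        }
        cells = next;
    }
    return 0;
}
```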


After a certain amount of faffing about I've managed to get Rodney's slow-speed visual tracking working. I got around the competing optical flows problem by ignoring the actual velocity of the target and just using its distance from the centre of the camera's coordinates as the error signal.
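
In outline the tracking update is just a proportional nudge towards the centre, something like the sketch below. The gain, servo limits and miniSSC-style packet are simplified guesses at what's involved, and under Windows the serial write would go through MSComm rather than a POSIX write():

```cpp
// Sketch: slow-speed tracking as a proportional nudge. The target's offset
// from the image centre is the error signal; a small step is sent to the pan
// servo. Gain, servo limits and the miniSSC-style packet are illustrative.
#include <unistd.h>

void trackPan(int serialFd, int targetX, int imageWidth,
              unsigned char &panPosition, float gain)
{
    int error = targetX - imageWidth / 2;     // pixels off-centre
    int step  = (int)(gain * error);          // small proportional step

    int newPos = (int)panPosition + step;
    if (newPos < 0)   newPos = 0;             // clamp to servo range
    if (newPos > 254) newPos = 254;
    panPosition = (unsigned char)newPos;

    unsigned char packet[3] = {255, 0, panPosition};   // miniSSC-style: sync, servo, position
    write(serialFd, packet, sizeof(packet));
}
```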

First I tested the tracking only on one axis (head pan). This worked ok, so I've also done the same for the eye tilt axis. There are other axes, such as neck tilt, which I could include, but I really want to keep the head movements to a minimum and as smooth as possible so that the target isn't lost due to camera shake. There is some slight shakiness in the very slow speed movements of the head, but this is probably inevitable given the rather crude way in which the miniSSC controller works.

To make the visual tracking a little more reliable I've increased the size of the local region within which the program searches for matches, and upped the sampling resolution a little.

Of course this type of tracking is only for slow moving objects. For things which move faster the robot needs to use a different, so-called "ballistic", tracking system. Here the system determines the position error and then does a fast move of the head, ignoring anything which is seen during the move (mostly just blur).
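
In outline the ballistic case is a single open-loop move, something like this (the pixels-to-servo scaling, the delay and the moveServo callback are all illustrative guesses):

```cpp
// Sketch of a "ballistic" move: measure the position error once, command a
// single fast head move, and ignore camera frames until the move completes.
// The scaling, the delay and the moveServo callback are illustrative guesses.
#include <unistd.h>

void ballisticSaccade(int targetX, int imageWidth, int &panPosition,
                      void (*moveServo)(int axis, int position))
{
    int error = targetX - imageWidth / 2;   // measured once, before the move
    panPosition += error / 4;               // assumed pixels-to-servo scaling

    moveServo(0, panPosition);              // fast move of the pan axis
    usleep(300 * 1000);                     // wait for the move to finish; any
                                            // frames captured meanwhile (mostly
                                            // blur) are simply discarded
}
```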

