Older blog entries for motters (starting at number 38)

A bit more weekend roboting. I've now made the SLAM mapping multithreaded, so the system will scale nicely over multiple CPU cores. Testing on a dual-core system shows a good distribution of the processing load. This means that the number of particles used to model the robot's pose uncertainty at any point in time is scalable, so as more cores become available the mapping will become more robust and accurate.
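In outline the parallelisation is straightforward: each particle carries its own pose hypothesis and map, so the per-particle prediction, map update and scoring can be handed off to a pool of workers. A rough Python sketch of that structure follows - the Particle class and its methods are hypothetical stand-ins rather than the actual Sentience code, and in CPython a process pool would be needed for true parallelism, but the shape of the code is the same:

    # Sketch of spreading per-particle SLAM updates across CPU cores.
    # Particle and its methods are hypothetical stand-ins, not the Sentience code.
    from concurrent.futures import ThreadPoolExecutor
    import os

    def update_particle(args):
        particle, odometry, stereo_observation = args
        particle.predict(odometry)                 # sample a new pose hypothesis
        particle.update_map(stereo_observation)    # cast stereo depths into its grid
        return particle.score(stereo_observation)  # likelihood used later for resampling

    def update_all_particles(particles, odometry, observation):
        # one worker per core: more cores means more particles for the same frame rate
        with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
            jobs = [(p, odometry, observation) for p in particles]
            return list(pool.map(update_particle, jobs))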

Have done a little tidying up on the Sentience code and made a few speed improvements. My financial status is now recovering after a couple of years of economic delinquency, and it looks like I'll soon be in a position to perhaps order an off-the-shelf robot such as the Corobot or 914 and begin integration testing of the stereo vision and mapping software.

The main aim here is to develop visual perception software which will enable useful and economical robotics in a home or office environment, using cameras rather than laser scanners.

A new stereo vision/grid mapping result:

http://sentience.googlegroups.com/web/corridor_sequence1.gif

Although this looks 2D, it's actually a colour 3D occupancy grid map shown from above as the robot moves down a corridor. As the robot moves, its exact pose becomes uncertain, and it must maintain accuracy by continually localising.
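The "continually localising" part is essentially a particle filter: pose hypotheses which still explain the incoming stereo views get reinforced, and the rest die out. A minimal sketch of the resampling step is below - purely illustrative, with a copy() method on particles assumed rather than taken from the real code:

    # Sketch of low-variance (systematic) resampling: hypotheses with high
    # weight are duplicated, poor ones are dropped, so the particle cloud
    # keeps tracking the true pose. Illustrative, not the Sentience code.
    import random

    def resample(particles, weights):
        total = sum(weights)
        if total == 0:
            return particles                       # this frame carried no information
        step = total / len(particles)
        position = random.uniform(0, step)
        survivors, cumulative, i = [], weights[0], 0
        for _ in particles:
            while cumulative < position:
                i += 1
                cumulative += weights[i]
            survivors.append(particles[i].copy())  # clone the selected hypothesis
            position += step
        return survivors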

I remain optimistic - perhaps foolishly so looking at the state of my finances - that I'll have a working mobile robot using this system by the end of the year. Genuine spatial awareness for mobile robots using inexpensive vision sensing is within reach.

Some further stereo camera testing. These occupancy grids of an empty orange juice carton show the current state of play. They were produced just by holding the stereo camera and moving it around slightly to get good coverage. The system estimates the camera's movement through space and so is able to project individual depth readings into an independent coordinate frame.
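The projection itself is just a rigid-body transform: each 3D point measured in the camera's own coordinate frame gets rotated and translated by the current estimate of the camera's pose. Something like the following, assuming a roll/pitch/yaw orientation estimate (names and conventions illustrative only):

    # Sketch of projecting a stereo depth reading into a fixed world frame
    # using the estimated camera pose. Illustrative only.
    import numpy as np

    def rotation_matrix(roll, pitch, yaw):
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def camera_to_world(point_camera, camera_position, camera_orientation):
        # point_camera: (x, y, z) in metres, in the camera's frame
        R = rotation_matrix(*camera_orientation)
        return R @ np.asarray(point_camera) + np.asarray(camera_position)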

After some physical re-alignment of the cameras and a calibration test the depth accuracy of Sentience is looking good. Here you can see a little animation.

This should allow me to build quite accurate 3D occupancy grids.

Still working on the Sentience stereo vision project, making refinements and improvements, mainly to increase the accuracy of the depth maps. I'm really quite pleased with some of the recent changes. Although there is still quite a lot of "noise" in the depth images, detection of objects or people seems very reliable, even under poor artificial lighting conditions.

I'm trying to get the depth maps as accurate as they can possibly be, so that the occupancy grids produced from them have a similar or even better level of accuracy. There's nothing particularly special about the webcams which I'm using. There is no special frame capture synchronisation or colour correction, but I can still get reasonable results. This should mean that a sophisticated three-dimensional perception system can be built using very cheap "off the shelf" hardware.
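For reference, the depth of a matched feature comes straight from its disparity between the left and right images: depth = focal length x baseline / disparity. The numbers below are nominal figures for a cheap webcam pair, purely for illustration:

    # Stereo depth from disparity: depth = f * B / d.
    # Focal length and baseline are placeholder values, not the actual rig.
    FOCAL_LENGTH_PIXELS = 700.0   # assumed focal length, in pixels
    BASELINE_METRES = 0.1         # assumed spacing between the two cameras

    def depth_from_disparity(disparity_pixels):
        if disparity_pixels <= 0:
            return None           # no usable match, or feature effectively at infinity
        return FOCAL_LENGTH_PIXELS * BASELINE_METRES / disparity_pixels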

Well, it's been quite some time since I last posted on this diary. My work situation has been all over the place since the end of last year, so I haven't done all that much on robotics projects. I'm now unemployed, so I've got plenty of time but no money. Probably nothing much will happen until I can get a new job of some sort to finance my robotics habit.

Robocore is on hold at the moment, and over the last few months I started a few other projects. One of them is a vision system for microscopes called "littlepeek". The main aim of that system is to analyse photos of living cells and extract information about the nuclei which can be used for cancer diagnosis. It's an interesting problem, and could bring real medical benefit if I could persuade anyone to back it. The other project is a simplified version of my stereo vision system. This is a boxed version of the vision system on my two humanoids, and could also potentially become a commercial product.

I'm now working on transferring the Robocore system from simulation onto my Flint robot. One interesting thing which I've discovered almost by accident is that the robot is able to move its eyes in a similar way to a person.

If you rotate your head, your eyes don't move in a similarly smooth fashion; instead they move in a series of jumps, known as saccades. The robot is also now able to do this. As its head is rotated it can keep its eyes fixated on an object. As the head carries on moving it eventually becomes unable to track the object, and the eyes move suddenly to another visible target.

The result looks quite lifelike, but apart from the cosmetic appearance it's also a useful feature. I'm hoping to develop this a bit further so that the robot can visually judge distances to objects with reasonable accuracy, using only a single camera and movements of the head.
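The behaviour boils down to a small state machine: counter-rotate the eyes to hold fixation while the target can still be tracked, then jump to the next salient target when it can't. A sketch of that logic, with the target-finding function left as a hypothetical callback rather than anything from the real system:

    # Sketch of fixation-plus-saccade gaze control. Illustrative only;
    # find_new_target is a hypothetical stand-in for visual target selection.
    MAX_EYE_ANGLE = 0.6  # radians; beyond this the eyes cannot counter-rotate further

    def update_gaze(eye_angle, head_rotation_step, target_visible, find_new_target):
        """Return (new_eye_angle, saccade_occurred)."""
        if target_visible and abs(eye_angle - head_rotation_step) < MAX_EYE_ANGLE:
            # smooth counter-rotation keeps the target fixated as the head turns
            return eye_angle - head_rotation_step, False
        # target lost, or eyes at their limit: saccade to another visible target
        return find_new_target(), True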

11 Dec 2004 (updated 11 Dec 2004 at 18:12 UTC)

Over the last couple of weeks I've been adding more bits to the Robocore simulation. I've now got a much better value assignment system going, such that in the simulation the robot is able to learn that things with different colours have a strong discriminative value (they taste good or bad), but that their texture and shape don't. This is quite a nice system which would work with any kind of categorisation.
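One simple way to get that behaviour is to keep a running value estimate per category within each feature channel: a channel whose categories reliably predict reward ends up with a big spread of values (high discriminative value), while an irrelevant channel's values all hover around zero. A toy sketch of the idea, not the actual Robocore code:

    # Toy sketch of learning which feature channels carry value information.
    # Not the actual Robocore implementation.
    LEARNING_RATE = 0.1

    class ValueSystem:
        def __init__(self, channels):
            # e.g. channels = ["colour", "texture", "shape"]
            self.values = {c: {} for c in channels}

        def learn(self, categories, reward):
            # categories: {"colour": "blue", "texture": "smooth", ...}
            # reward: +1 tastes good, -1 tastes bad
            for channel, category in categories.items():
                v = self.values[channel].get(category, 0.0)
                self.values[channel][category] = v + LEARNING_RATE * (reward - v)

        def discriminative_value(self, channel):
            vals = list(self.values[channel].values())
            if len(vals) < 2:
                return 0.0
            return max(vals) - min(vals)   # big spread = channel predicts reward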

The part which is currently occupying me is speech understanding and production. I'm trying to make a model of the way speech works which is reasonably realistic, such that damage to the maps or connections within Broca's or Wernicke's areas produces aphasias similar to those seen in people after brain damage or strokes. Fundamentally, speech production is just about motor control. As the philosopher John Searle says, "I open this flap in the bottom half of my head and a racket comes out". Rather than trying to explicitly represent words, I'm just having the system detect phonemes - the small components of speech which correspond directly to specific movements of the larynx and mouth.

The first experiment just involves repetition: the system listens to some speech and then tries to speak exactly the same sentence itself. This may sound easy and trivial, but it actually involves learning complicated sequences of phonemes.
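As a first cut the repetition loop can be nothing more than buffering the recognised phoneme sequence and pushing it back out through the articulation side; the recognise and articulate functions below are hypothetical stand-ins for the perceptual and motor maps:

    # Minimal sketch of the repetition experiment: hear a phoneme sequence,
    # then reproduce the same motor sequence. Both callbacks are hypothetical.
    def repeat_utterance(audio, recognise_phonemes, articulate):
        phonemes = recognise_phonemes(audio)   # e.g. ["h", "e", "l", "ou"]
        for phoneme in phonemes:
            articulate(phoneme)                # drive the larynx/mouth model
        return phonemes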

Work on Robocore continues. I've implemented secondary neural maps for colour, shape and texture categorisation, and have also included an IT (inferior temporal) map which is a kind of meta-map classifying the three earlier ones. Shape categories are formed using the radial wave method described in Steve Grand's recent book about his robot.
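My reading of the radial idea is roughly this: sample the distance from the shape's centroid to its outline at regular angles, giving a one-dimensional "wave" which can be normalised and handed to the shape map for categorisation. A rough sketch of that flavour of signature - a paraphrase of the general idea, not Grand's exact method:

    # Rough sketch of a radial shape signature: centroid-to-outline distances
    # sampled at regular angles form a 1-D wave for the shape categoriser.
    # A paraphrase of the general idea, not Grand's exact method.
    import math

    def radial_signature(outline_points, num_angles=36):
        cx = sum(x for x, _ in outline_points) / len(outline_points)
        cy = sum(y for _, y in outline_points) / len(outline_points)
        signature = [0.0] * num_angles
        for x, y in outline_points:
            angle = math.atan2(y - cy, x - cx) % (2 * math.pi)
            bin_index = int(angle / (2 * math.pi) * num_angles) % num_angles
            signature[bin_index] = max(signature[bin_index],
                                       math.hypot(x - cx, y - cy))
        peak = max(signature) or 1.0
        return [d / peak for d in signature]   # scale-normalised radial wave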

The resulting simulation works (the robot moves towards blue blocks and avoids red ones), but to become a more general system capable of running on a real robot some additional components are needed. The next stage is to implement the notorious "thalamocortical loop" which forms the crux of the dynamic core theory. This might be tricky, and involves some close scrutiny of Edelman's often cryptic writing style. One key property of the dynamic core is the ability to automate common behaviours. For example, learning to ride a bike initially involves a lot of conscious effort, but once you have learned it just becomes automatic and you don't really have to think about it.

It seems that the forum idea on my web site was a mistake. Someone emailed me offering to host a forum, but now it seems that their server is down and the forum has been inaccessible for an awfully long time. Apologies to anyone who tried to post but couldn't.

