Older blog entries for motters (starting at number 64)

11 May 2009 (updated 11 May 2009 at 22:21 UTC) »

I now have both of the stereo cameras calibrated, and am fairly confident that I'm getting good quality disparities, which should at least suffice for navigation purposes. I wrote an extra program which allows me to visualise the stereo disparity and manually alter calibration parameters to observe the effects. Hence, if there are problems I can check the camera calibration more thoroughly than was possible previously.

The next step is to do integration testing with all of the systems running - stereo vision server, motion control server, servo control server, steersman and ultrasonics server. With luck I should be able to create some real maps soon.

One of the problems with 3D occupancy grids is that they can take up a lot of memory or disk space. Imagine a space the size of an average house divided into small 1cm cubes, and that's quite a lot of cubes to keep track of.

Much of the space inside homes is actually empty, or rather filled with air, but from the robot's point of view knowing about probably-empty space is just as important as (maybe even more important than!) knowing about what is occupied and therefore a potential obstacle. Some savings can be made by not storing information about terra incognita - areas of the map which have so far not been explored - but assuming that we want the robot to have a good understanding of an entire house this still leaves us with quite a heap of data.

At this point the unimaginative can simply appeal to Gordon Moore and his famous "law". The capacity of storage devices, such as hard disk drives, is always increasing, and it does look as if even the smallest storage devices around today could handle the number of cubes that we would like to deal with. Even so, loading from and saving to the storage device is still going to be relatively slow, and the robot needs to be able to access the data more or less in real time if it's going to be useful. We could also be lazy and just load the whole lot into a large amount of RAM, but ideally it would be good if low cost devices could be used, such as netbooks, which only have modest memory and local storage capacity. This would help robotics to continue becoming more economical and therefore marketable.

So what to do? Since the occupancy data in this case is being produced from stereo vision, a way to get better storage economy might be to store only a random sample of the stereo disparities observed from a dense disparity image. If we know the location and pose from which the observation was originally made, based upon the results of SLAM, then a local 3D occupancy grid can be regenerated dynamically from a fairly small amount of data as the robot moves around the house. This means that storage access times are going to be much shorter, and potentially a lot of stereo disparity data could be buffered in memory.
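As a rough illustration of that sampling idea, here's a minimal sketch in Python (not the robot's actual code). It assumes the dense disparity map is a 2D array with invalid pixels at or below zero, and the function and field names are made up for the example.

```python
# Hypothetical sketch: randomly sample valid disparities from a dense
# disparity image and tag them with the pose estimated by SLAM.
import numpy as np

def sample_disparities(disparity_map, pose, n_samples=300):
    """disparity_map: 2D float array, invalid pixels <= 0.
       pose: (x, y, theta) from SLAM at the moment of observation."""
    ys, xs = np.nonzero(disparity_map > 0)          # valid disparity pixels
    if len(xs) == 0:
        return None
    idx = np.random.choice(len(xs), size=min(n_samples, len(xs)), replace=False)
    # each feature is (x, y, disparity) with sub-pixel precision
    samples = np.column_stack((xs[idx], ys[idx], disparity_map[ys[idx], xs[idx]]))
    return {"pose": pose, "features": samples.astype(np.float32)}
```

The local occupancy grid would then be regenerated on the fly by ray casting each stored feature from its recorded pose as the robot revisits that part of the house.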

Some back-of-the-envelope calculations go as follows:

If we randomly sample 300 stereo disparities from a dense disparity image, and represent the image coordinates and disparity as floating point values (sub-pixel accuracy), this translates into

300 stereo features x 3 values (x,y,disparity) x 4 bytes per value

= 3600 bytes per observation, or 3.5K

If we also want to store colour information, so that coloured 3D occupancy grids can be produced, this increases to 4500 bytes or 4.4K. There is also the robot's pose information to store, but this is only a small number of bytes, so it doesn't make a big overall difference. This seems quite tractable. Potentially the robot could make several thousand observations as it maps the house, and this only translates into a few tens of megabytes, which is well within the limitations of what a netbook could handle. Even if the number of observations rises into the tens of thousands this still looks feasible.
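The same arithmetic in a few lines of code, just to make the assumptions explicit. The 3 bytes of colour per feature is inferred from the 4500 byte figure, and the observation count is only an illustration.

```python
# Back-of-the-envelope storage estimate, mirroring the figures above.
features_per_observation = 300
bytes_per_float = 4
xyd_values = 3                      # x, y, disparity
colour_bytes = 3                    # one byte each for R, G, B (assumed)

geometry = features_per_observation * xyd_values * bytes_per_float   # 3600 bytes
with_colour = geometry + features_per_observation * colour_bytes     # 4500 bytes

observations = 10000
total_mb = observations * with_colour / (1024 * 1024)
print(geometry, with_colour, round(total_mb, 1))   # 3600 4500 ~42.9 MB
```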

Just spotted a couple of major bugs in the occupancy grid mapping ray throwing routine. They were real howlers!

Still, they've only been there for the last five years.

Ah, the importance of having users...

First successful following of a pre-recorded path. This is just open loop at the moment, since there is no integration with the visual odometry.

I added some user friendly audio messages, such as "The journey begins", "I am pausing" and "I am resuming my journey".

Squashed another odometry bug. It turns out that there is an occasional momentary glitch in the values coming from the Phidgets high speed encoders, whereby the sign of the encoder count flips from positive to negative or vice versa. This isn't a rollover of the accumulated count, and I think it's most likely due to EM interference from the motors, since it's not at all repeatable.

Fortunately there is a simple fix for this by having the software look for isolated sign inversions.
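A sketch of roughly what that filter could look like, assuming a short buffer of recent accumulated counts is kept; the function name and buffer handling are hypothetical rather than taken from the actual code.

```python
# Hypothetical sketch of the sign-glitch filter: an encoder reading whose
# sign disagrees with both of its neighbours is treated as a transient
# glitch and its sign is flipped back.
def filter_sign_glitches(counts):
    """counts: list of accumulated encoder counts, oldest first."""
    cleaned = list(counts)
    for i in range(1, len(cleaned) - 1):
        prev, curr, nxt = cleaned[i - 1], cleaned[i], cleaned[i + 1]
        # isolated sign inversion: both neighbours agree, middle one disagrees
        if curr != 0 and prev * curr < 0 and nxt * curr < 0 and prev * nxt > 0:
            cleaned[i] = -curr
    return cleaned
```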

Reading about odometry models. All of the methods described either involve some amount of manual effort or require setting up special markers by which the robot can judge its actual pose and compare it against the odometry-derived pose.

The thought occurs that it may be possible to have the robot learn its own odometry model in a fully automated way. By looking for systematic deviations between the visual localization and the odometry-based localization, an odometry model could probably be learned over time. This may also be relatively easy to do.
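Purely as a sketch of the idea, one way to start might be to accumulate paired incremental motion estimates from odometry and from visual localization, then fit simple per-axis scale factors by least squares. The names and the linear model here are assumptions, not anything the robot actually does yet.

```python
# Speculative sketch: learn a simple odometry correction from paired motion
# estimates. Each sample is (distance travelled, heading change) as reported
# by odometry and by visual localization over the same interval.
import numpy as np

def fit_odometry_scales(odo_deltas, visual_deltas):
    """odo_deltas, visual_deltas: N x 2 arrays of (distance, heading_change)."""
    odo = np.asarray(odo_deltas, dtype=float)
    vis = np.asarray(visual_deltas, dtype=float)
    # independent least-squares scale factors for distance and for heading
    dist_scale = np.dot(odo[:, 0], vis[:, 0]) / np.dot(odo[:, 0], odo[:, 0])
    turn_scale = np.dot(odo[:, 1], vis[:, 1]) / np.dot(odo[:, 1], odo[:, 1])
    return dist_scale, turn_scale
```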

25 Jan 2009 (updated 25 Jan 2009 at 18:34 UTC) »

Recording of wheel odometry data looks good, and I can plot out the journey the robot has travelled. This will be subject to errors, but it's a start.

When replaying a pre-recorded journey I think what I'll do initially is have the robot perform an optical scan of the immediate vicinity from a static position using its pan and tilt head. This can be used to build up an initial occupancy grid which is as dense as possible, allowing the pose to be visually estimated as well as it can be. There's an obvious hazard in that if the initial pose estimate is completely wrong, the subsequent mapping could go awry.

I might also have the robot perform a visual search if the localization estimate is too uncertain - so that it would appear to pause and do a "double take".

Woooo. What's this I see? A new look to robots.net? The new look is ok, but when you click on recent blog entries they're grotesquely centre justified!

Anyway, the odometry stuff is coming along nicely. Fairly run-of-the-mill differential drive malarkey. Most of this I can develop and test purely in simulation, which helps with software validation.
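For reference, the dead-reckoning update itself is the usual textbook differential drive formula; the sketch below uses made-up parameter names and values for ticks per revolution, wheel radius and wheel base.

```python
# Minimal differential-drive dead-reckoning pose update (illustrative only).
import math

def update_pose(x, y, theta, d_left_ticks, d_right_ticks,
                ticks_per_rev=980, wheel_radius=0.1, wheel_base=0.4):
    metres_per_tick = 2.0 * math.pi * wheel_radius / ticks_per_rev
    d_left = d_left_ticks * metres_per_tick
    d_right = d_right_ticks * metres_per_tick
    d_centre = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheel_base
    # approximate the short arc as a straight segment at the mid-heading
    x += d_centre * math.cos(theta + d_theta / 2.0)
    y += d_centre * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta + math.pi) % (2.0 * math.pi) - math.pi
    return x, y, theta
```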

The path through which the robot moves is reconstructed from the wheel odometry, and then I do some post-processing to ensure there is a minimal sensible distance between points along the path, using a b-spline to smooth out any sudden turns. The resulting path should be fairly graceful. At this stage the path is calculated in a very simplistic way, totally ignoring any attempt to model the inherent uncertainties. This is a start, but will get more complex later on.
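Something along these lines, in Python with scipy, is roughly what the post-processing amounts to; the spacing threshold, smoothing factor and output resolution are illustrative guesses rather than the values I'm actually using.

```python
# Sketch of the path post-processing: enforce a minimum spacing between
# odometry points, then smooth the result with a b-spline.
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_path(points, min_spacing=0.05, smoothing=0.01, n_out=200):
    """points: list of (x, y) positions from wheel odometry."""
    kept = [points[0]]
    for p in points[1:]:
        if np.hypot(p[0] - kept[-1][0], p[1] - kept[-1][1]) >= min_spacing:
            kept.append(p)
    kept = np.asarray(kept)
    # default cubic spline needs at least four retained points
    tck, _ = splprep([kept[:, 0], kept[:, 1]], s=smoothing)
    u = np.linspace(0.0, 1.0, n_out)
    x_s, y_s = splev(u, tck)
    return np.column_stack((x_s, y_s))
```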

I might also attempt to build an odometry model, explicitly taking into account the systematic errors, but for initial tests I'll stick with a naive implementation and see whether the accuracy seems acceptable.

Working mainly on the odometry, trying to get dead reckoning performance as good as it can be. In this case just controlling the individual motors as closed-loop systems seems to be insufficient for accurate movement. It looks like I need a higher-level differential drive controller which coordinates the action of the two motors together. I'll probably do this by adding two additional PID control objects, one for forward velocity control and the other for angular velocity. The outputs from these higher-level objects will become an auxiliary error term within the PID loop which controls each individual motor.
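A rough sketch of that cascaded arrangement, with placeholder gains and a toy PID class, just to show how the higher-level velocity controllers could feed auxiliary error terms into the per-wheel loops; it isn't the real motion control code.

```python
# Illustrative cascaded differential drive control: forward and angular
# velocity PIDs feed auxiliary error terms into each wheel's own PID.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

forward_pid = PID(1.0, 0.1, 0.05)   # forward velocity controller
angular_pid = PID(1.0, 0.1, 0.05)   # angular velocity controller
left_pid = PID(2.0, 0.5, 0.0)       # individual wheel controllers
right_pid = PID(2.0, 0.5, 0.0)

def drive_update(v_target, w_target, v_measured, w_measured,
                 left_speed_error, right_speed_error, dt):
    v_corr = forward_pid.update(v_target - v_measured, dt)
    w_corr = angular_pid.update(w_target - w_measured, dt)
    # higher-level corrections become auxiliary error terms for each wheel
    left_cmd = left_pid.update(left_speed_error + v_corr - w_corr, dt)
    right_cmd = right_pid.update(right_speed_error + v_corr + w_corr, dt)
    return left_cmd, right_cmd
```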

Added some code which compresses the stereo images and odometry into a single file when training is stopped. This makes different training sets much easier to administer, and also saves on storage space. Given the size of current hard disks, storage is not an issue, but it might be in future if I want to install the software onto a flash disk based system.
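The bundling itself is simple enough; something like the following sketch (file names and layout are hypothetical) captures the idea using a gzipped tar archive.

```python
# Sketch of bundling a training run into a single compressed archive.
import tarfile, os

def archive_training_set(image_dir, odometry_file, output_path):
    with tarfile.open(output_path, "w:gz") as tar:
        tar.add(odometry_file, arcname=os.path.basename(odometry_file))
        for name in sorted(os.listdir(image_dir)):
            if name.endswith((".jpg", ".png")):
                tar.add(os.path.join(image_dir, name), arcname="images/" + name)
```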

There's still an annoying bug where the I/O server and motion control server appear to lock up. I'm not yet certain what the cause of this is. CPU usage, even when the cameras are running, appears to be nominal (actually running Firefox or Nautilus eats far more CPU power!). This might be a thread lock, or possibly a bug in the Phidgets driver (fortunately these are open source, so I should be able to locate the problem if this is the case). The other possible candidate is a USB bandwidth overload, but the problem still seems to occur even when the robot is not moving (i.e. no encoder/motor events are being generated).

