Older blog entries for motters (starting at number 59)

Reading about odometry models. All of the methods described either involve some amount of manual effort or require setting up special markers by which the robot can judge its actual pose and compare that against the odometry-derived pose.

The thought occurs that it may be possible to have the robot learn its own odometry model in a fully automated way. By looking for systematic deviations between the visual localization and the odometry-based localization, an odometry model could probably be learned over time. This may also be relatively easy to do.
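Roughly what I have in mind, as a sketch in Python (nothing is implemented yet, and the wheel base value, the segment format and the per-wheel scale model are all just assumptions for illustration): over each segment of a journey the odometry reports raw left/right wheel travel, the visual localization reports the actual forward distance and heading change, and per-wheel scale corrections then fall out of a linear least-squares fit.

```python
import numpy as np

WHEEL_BASE = 0.4  # metres - illustrative value, not the real robot's

def fit_wheel_scales(segments, wheel_base=WHEEL_BASE):
    """Fit per-wheel scale corrections k_l, k_r from logged segments.

    Each segment is (d_left, d_right, s_visual, dtheta_visual):
    raw odometric wheel travel plus the visually measured forward
    distance and heading change over the same interval.
    Model (linear in k_l and k_r):
        s      = (k_l*d_l + k_r*d_r) / 2
        dtheta = (k_r*d_r - k_l*d_l) / wheel_base
    """
    A, b = [], []
    for d_l, d_r, s, dtheta in segments:
        A.append([0.5 * d_l, 0.5 * d_r])
        b.append(s)
        A.append([-d_l / wheel_base, d_r / wheel_base])
        b.append(dtheta)
    (k_l, k_r), *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return k_l, k_r
```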

25 Jan 2009 (updated 25 Jan 2009 at 18:34 UTC) »

Recording of wheel odometry data looks good, and I can plot out the path the robot has traveled. This will be subject to errors, but it's a start.

When replaying a pre-recorded journey I think what I'll do initially is have the robot perform an optical scan of the immediate vicinity from a static position using its pan and tilt head. This can be used to build up an initial occupancy grid which is as dense as possible, allowing the pose to be estimated visually as accurately as possible. There's an obvious hazard in that if the initial pose estimate is completely wrong, the subsequent mapping could go awry.
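Something like the following sketch is what I mean. The grid dimensions, the sweep angles and the grab_points function are all stand-ins - in reality the points would come from the stereo disparity output:

```python
import math
import numpy as np

GRID_SIZE = 200    # cells - illustrative
CELL_SIZE = 0.05   # metres per cell - illustrative

def static_scan(grab_points, pan_angles_deg=range(-90, 91, 15)):
    """Build a dense occupancy grid from a stationary pan/tilt sweep.

    grab_points(pan_rad) is a placeholder for whatever returns
    (x, y) obstacle points in metres in the camera frame at a
    given pan angle - assumed here to come from stereo disparity.
    """
    grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.uint8)
    for pan_deg in pan_angles_deg:
        pan = math.radians(pan_deg)
        for x, y in grab_points(pan):
            # rotate the camera-frame point into the robot frame
            rx = x * math.cos(pan) - y * math.sin(pan)
            ry = x * math.sin(pan) + y * math.cos(pan)
            cx = int(GRID_SIZE / 2 + rx / CELL_SIZE)
            cy = int(GRID_SIZE / 2 + ry / CELL_SIZE)
            if 0 <= cx < GRID_SIZE and 0 <= cy < GRID_SIZE:
                grid[cy, cx] = 255  # mark the cell occupied
    return grid
```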

I might also have the robot perform a visual search if the localization estimate is too uncertain - so that it would appear to pause and do a "double take".

Woooo. What's this I see? A new look to robots.net? The new look is ok, but when you click on recent blog entries they're grotesquely centre-justified!

Anyway, the odometry stuff is coming along nicely. Fairly run-of-the-mill differential drive malarkey. Most of this I can develop and test purely in simulation, which helps with validating the software.
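For reference, the run-of-the-mill update in question, sketched in Python rather than the robot's actual code:

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """Standard differential-drive dead reckoning step.

    d_left/d_right are wheel travel (metres) since the last update,
    derived from encoder ticks; wheel_base is the wheel separation.
    """
    d_centre = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheel_base
    # integrate at the midpoint heading for a slightly better estimate
    x += d_centre * math.cos(theta + d_theta / 2.0)
    y += d_centre * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2.0 * math.pi)
    return x, y, theta
```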

The path through which the robot moves is reconstructed from the wheel odometry. I then do some post-processing to ensure there is a minimum sensible distance between points along the path, and fit a b-spline to smooth out any sudden turns. The resulting path should be fairly graceful. At this stage the path is calculated in a very simplistic way, with no attempt to model the inherent uncertainties. This is a start, but it will get more complex later on.
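Something along these lines, with SciPy standing in for whatever spline routine ends up being used (the spacing and smoothing values are illustrative and would need tuning):

```python
import math
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_path(points, min_spacing=0.1, samples=200):
    """Thin out closely spaced path points, then fit a smoothing b-spline.

    points: list of (x, y) positions from odometry; min_spacing in
    metres. Needs at least four surviving points for the default
    cubic spline; the smoothing factor s is illustrative.
    """
    thinned = [points[0]]
    for p in points[1:]:
        if math.hypot(p[0] - thinned[-1][0], p[1] - thinned[-1][1]) >= min_spacing:
            thinned.append(p)
    xs, ys = zip(*thinned)
    tck, _ = splprep([xs, ys], s=0.01)   # cubic b-spline by default
    u = np.linspace(0.0, 1.0, samples)
    sx, sy = splev(u, tck)
    return list(zip(sx, sy))
```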

I might also attempt to build an odometry model, explicitly taking into account the systematic errors, but for initial tests I'll stick with a naive implementation and see whether the accuracy seems acceptable.

Working mainly on the odometry, trying to get dead reckoning performance as good as it can be. Just controlling the individual motors as closed-loop systems seems to be insufficient for accurate movement. It looks like I need a higher-level differential drive controller which coordinates the action of the two motors together. I'll probably do this by adding two additional PID control objects, one for forward velocity and the other for angular velocity. The outputs from these higher-level objects will become an auxiliary error term within the PID loop which controls each individual motor.
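In outline the scheme would look something like this sketch - gains, units and exactly how the auxiliary term gets mixed into each motor's error are all still to be decided:

```python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv


def drive_update(v_target, w_target, v_meas, w_meas,
                 left_meas, right_meas, dt,
                 v_pid, w_pid, left_pid, right_pid, wheel_base):
    """One step of the cascaded control scheme described above.

    The outer loops correct overall forward (v) and angular (w)
    velocity; their outputs are folded into each wheel's error as
    an auxiliary term before the inner per-motor PID runs.
    """
    v_corr = v_pid.update(v_target - v_meas, dt)
    w_corr = w_pid.update(w_target - w_meas, dt)
    # nominal per-wheel targets from differential drive kinematics
    left_target = v_target - w_target * wheel_base / 2.0
    right_target = v_target + w_target * wheel_base / 2.0
    # fold the outer corrections into each wheel's error term
    left_err = (left_target - left_meas) + v_corr - w_corr * wheel_base / 2.0
    right_err = (right_target - right_meas) + v_corr + w_corr * wheel_base / 2.0
    return left_pid.update(left_err, dt), right_pid.update(right_err, dt)
```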

Added some code which compresses the stereo images and odometry into a single file when training is stopped. This makes different training sets much easier to administer, and also saves on storage space. Given the size of current hard disks, storage is not an issue, but it might be in future if I want to install the software onto a flash disk based system.
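The idea is simply to sweep a training session's files into one compressed archive. A minimal sketch, assuming a flat directory of image files plus an odometry log (the names and layout here are illustrative, not what the real software uses):

```python
import os
import zipfile

def pack_training_set(session_dir, archive_path):
    """Bundle one training run's stereo images and odometry log
    into a single compressed file."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in sorted(os.listdir(session_dir)):
            zf.write(os.path.join(session_dir, name), arcname=name)
```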

There's still an annoying bug where the I/O server and motion control server appear to lock up. I'm not yet certain what the cause of this is. CPU usage, even when the cameras are running, appears to be nominal (actually running Firefox or Nautilus eats far more CPU power!). This might be a thread lock, or possibly a bug in the Phidgets driver (fortunately these are open source, so I should be able to locate the problem if this is the case). The other possible candidate is a USB bandwidth overload, but the problem still seems to occur even when the robot is not moving (i.e. no encoder/motor events are being generated).

20 Nov 2008 (updated 20 Nov 2008 at 22:04 UTC) »

Poised to do the first data-gathering run with the GROK2 robot, I fired up the stereo vision server only to find that there seems to be some sort of fundamental regression with V4L1 webcams in Ubuntu Intrepid. Testing with fswebcam shows that frame grabbing from V4L1 devices is definitely broken.

This means that I could spend a lot of time trying to find out how to fix what I think may be a kernel problem, or alternatively downgrade to the previous version of Ubuntu. It's another frustrating setback, but there have been many during the development of this robot, so I'm quite accustomed to such things.

[supplemental] Yep. Reverting to kernel version 2.6.24 and fswebcam behaves normally. This is definitely a kernel snafu.

Added a speed control mode to the motion control software for GROK2. This allows the robot to be easily and smoothly jogged around using the joystick. At this point I have various software services which handle different aspects of the robot's operation. These can potentially run concurrently (i.e. the system scales well across multiple CPU cores) and could also run on different computers on a network - a sort of virtual robot network (VRN), if you like.
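The jogging itself is just a mapping from joystick axes to per-wheel speed demands, something like this sketch (the limits, wheel base and deadband are illustrative values):

```python
MAX_SPEED = 0.5    # m/s, illustrative
MAX_TURN = 1.0     # rad/s, illustrative
WHEEL_BASE = 0.4   # metres, illustrative
DEADBAND = 0.1     # ignore small stick movements around centre

def joystick_to_wheel_speeds(stick_x, stick_y):
    """Map joystick axes (-1..1) to left/right wheel speed demands.

    stick_y drives forward velocity and stick_x the turn rate; the
    deadband stops the robot creeping when the stick is centred.
    """
    if abs(stick_x) < DEADBAND:
        stick_x = 0.0
    if abs(stick_y) < DEADBAND:
        stick_y = 0.0
    v = stick_y * MAX_SPEED
    w = stick_x * MAX_TURN
    left = v - w * WHEEL_BASE / 2.0
    right = v + w * WHEEL_BASE / 2.0
    return left, right
```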

The next step is to write another service called "training". This recruits some of the other services and allows the robot to be moved with the joystick whilst gathering data from its various sensors. The resulting data sets can then be used to optimize the SLAM algorithm so that good maps result.

From a strict efficiency point of view the software on GROK2 is not optimal, but the way that I've written it should make it robust to changes of hardware and also scalable across multiple cores and networked computers.

14 Sep 2008 (updated 14 Sep 2008 at 21:08 UTC) »

Version 0.2 of the Surveyor stereo vision software has been released! (http://code.google.com/p/sentience/wiki/SurveyorSVS)

This version contains quite a few improvements which mean that one of these devices could be integrated with other software in a fairly straightforward way. There are also changes which mean that the same system could be used with webcams, although this is only supported on Linux at the moment.

http://code.google.com/p/sentience/wiki/WebcamStereoVisionUtilities

Further developments on the software for the Surveyor stereo camera, in preparation for release 0.2. The new features concentrate mainly on usability. It's now possible for other programs to connect and receive stereo disparity data, and I added an audible beep on successful calibration so that you don't necessarily need to be looking at a screen.
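Conceptually a client just opens a socket and reads frames. The sketch below assumes a hypothetical port number and length-prefixed framing purely for illustration - this is not the documented wire format, so see the SurveyorSVS wiki page for the actual connection details:

```python
import socket
import struct

def _recv_exact(sock, n):
    """Read exactly n bytes from a stream socket, or None if it closes."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            return None
        buf += chunk
    return buf

def disparity_frames(host="localhost", port=10001):
    """Yield raw disparity frames from a stereo vision server.

    The port and the 4-byte big-endian length prefix assumed here
    are hypothetical stand-ins for the real protocol.
    """
    with socket.create_connection((host, port)) as sock:
        while True:
            header = _recv_exact(sock, 4)
            if header is None:
                break
            (length,) = struct.unpack(">I", header)
            payload = _recv_exact(sock, length)
            if payload is None:
                break
            yield payload
```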

http://code.google.com/p/sentience/wiki/SurveyorSVS

An initial release of software for the Surveyor stereo vision system. There is plenty of scope for improvement, but hopefully this is usable.

http://code.google.com/p/sentience/wiki/SurveyorSVS

