Older blog entries for motters (starting at number 62)

Just spotted a couple of major bugs in the occupancy grid mapping ray throwing routine. They were real howlers!

Still, they've only been there for the last five years.

Ah, the importance of having users...
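
For illustration (this is not the code in question), a ray-throwing routine in an occupancy grid typically walks the cells between the sensor and a range reading using Bresenham's line algorithm, something like this minimal Python sketch:

```python
def throw_ray(x0, y0, x1, y1):
    """Trace the grid cells along a ray from (x0, y0) to (x1, y1)
    using Bresenham's line algorithm."""
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        cells.append((x, y))
        if x == x1 and y == y1:
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return cells
```

The classic howlers in this sort of routine are swapped axes, off-by-one errors at the endpoints, or mishandled negative directions - all easy to miss when the maps still look roughly right.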

First successful following of a pre-recorded path. This is just open loop at the moment, since there is no integration with the visual odometry.

I added some user friendly audio messages, such as "The journey begins", "I am pausing" and "I am resuming my journey".

Squashed another odometry bug. It turns out there is an occasional momentary glitch in the values coming from the Phidgets high speed encoders, whereby the sign of the encoder count flips from positive to negative or vice versa. This isn't a rollover of the accumulated count, and since it's not at all repeatable I think it's most likely due to EM interference from the motors.

Fortunately there is a simple fix for this by having the software look for isolated sign inversions.
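
The fix can be sketched like so (a simplified illustration in Python, not the actual robot code): a sample whose sign disagrees with both of its neighbours is treated as a glitch and flipped back.

```python
def filter_sign_glitches(counts):
    """Repair isolated sign inversions in a stream of encoder counts.

    A non-zero sample whose sign differs from both its neighbours is
    assumed to be a momentary glitch and is negated back. Genuine,
    sustained sign changes (real reversals) are left alone.
    """
    cleaned = list(counts)
    for i in range(1, len(cleaned) - 1):
        prev, cur, nxt = cleaned[i - 1], cleaned[i], cleaned[i + 1]
        if cur != 0 and prev * cur < 0 and cur * nxt < 0:
            cleaned[i] = -cur
    return cleaned
```

For example, `filter_sign_glitches([100, 102, -103, 105, 107])` restores the lone inverted sample to 103, while a real direction reversal (several consecutive negative counts) passes through untouched.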

Reading about odometry models. All of the methods described either involve some amount of manual effort, or setting up special markers by which the robot can judge its actual pose and compare that against the odometry derived pose.

The thought occurs that it may be possible to have the robot learn its own odometry model in a fully automated way. By looking for systematic deviations between the visual localization and the odometry based localization, an odometry model could probably be learned over time. This may also be relatively easy to do.
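
As a rough sketch of the idea: given paired displacement estimates from the two sources, a simple per-axis linear correction could be fitted by least squares (illustrative Python, not part of the robot's actual code):

```python
import numpy as np

def fit_odometry_model(odometric, visual):
    """Fit a per-axis linear correction v = a * o + b mapping
    odometry-derived displacements onto visually observed ones,
    via least squares. Returns one (a, b) pair per axis."""
    odo = np.asarray(odometric, dtype=float)
    vis = np.asarray(visual, dtype=float)
    model = []
    for axis in range(odo.shape[1]):
        # Design matrix: [displacement, 1] for slope and offset
        A = np.vstack([odo[:, axis], np.ones(len(odo))]).T
        a, b = np.linalg.lstsq(A, vis[:, axis], rcond=None)[0]
        model.append((a, b))
    return model
```

A real odometry model would want separate terms for translation and rotation errors (and their cross-coupling), but even a crude linear fit like this would pick up systematic wheel diameter or wheel base miscalibration over time.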

25 Jan 2009 (updated 25 Jan 2009 at 18:34 UTC) »

Recording of wheel odometry data looks good, and I can plot out the path along which the robot has travelled. This will be subject to errors, but it's a start.
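
For illustration, differential drive dead reckoning from encoder tick pairs amounts to something like this (the ticks-per-metre and wheel base values below are placeholders, not GROK2's actual calibration):

```python
import math

def dead_reckon(ticks, ticks_per_metre=1000.0, wheel_base=0.3):
    """Integrate (left, right) encoder tick deltas into a 2D pose path.

    Each element of ticks is the tick count accumulated by the left and
    right wheels over one sample interval. Returns a list of
    (x, y, theta) poses starting from the origin.
    """
    x = y = theta = 0.0
    path = [(x, y, theta)]
    for left, right in ticks:
        dl = left / ticks_per_metre
        dr = right / ticks_per_metre
        d = (dl + dr) / 2.0                # distance moved by the robot centre
        theta += (dr - dl) / wheel_base    # change in heading
        x += d * math.cos(theta)
        y += d * math.sin(theta)
        path.append((x, y, theta))
    return path
```

Integrating like this accumulates error without bound, of course, which is exactly why the visual localization needs to be folded in later.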

When replaying a pre-recorded journey I think what I'll do initially is have the robot perform an optical scan of the immediate vicinity from a static position using its pan and tilt head. This can be used to build up an initial occupancy grid which is as dense as possible, allowing the pose to be visually estimated as well as it can be. There's an obvious hazard in that if the initial pose estimate is completely wrong, the subsequent mapping could go awry.

I might also have the robot perform a visual search if the localization estimate is too uncertain - so that it would appear to pause and do a "double take".

Woooo. What's this I see? A new look to robots.net? The new look is ok, but when you click on recent blog entries they're grotesquely centre justified!

Anyway, the odometry stuff is coming along nicely. Fairly run-of-the-mill differential drive malarkey. Most of this I can develop and test purely in simulation, which assists the software validation.

The path along which the robot moves is reconstructed from the wheel odometry. I'm then doing some post-processing to ensure there is a sensible minimum distance between points along the path, and using a b-spline to smooth out any sudden turns. The resulting path should be fairly graceful. At this stage the path is calculated in a very simplistic way, totally ignoring any attempt to model the inherent uncertainties. This is a start, but will get more complex later on.
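
Sketched in Python, using SciPy's spline fitting (the spacing and smoothing parameters are illustrative, not what the robot actually uses):

```python
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_path(points, min_spacing=0.05, samples=100):
    """Resample a recorded (x, y) path.

    First drop points closer than min_spacing to the previously kept
    point, then fit a smoothing B-spline through the survivors and
    evaluate it at evenly spaced parameter values.
    """
    filtered = [points[0]]
    for p in points[1:]:
        if np.hypot(p[0] - filtered[-1][0], p[1] - filtered[-1][1]) >= min_spacing:
            filtered.append(p)
    pts = np.array(filtered)
    # s > 0 permits smoothing rather than exact interpolation
    tck, _ = splprep([pts[:, 0], pts[:, 1]], s=0.01)
    u = np.linspace(0.0, 1.0, samples)
    x, y = splev(u, tck)
    return np.column_stack([x, y])
```

The minimum spacing step matters because encoder noise produces clusters of nearly coincident points when the robot is stationary, which would otherwise distort the spline fit.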

I might also attempt to build an odometry model, explicitly taking into account the systematic errors, but for initial tests I'll stick with a naive implementation and see whether the accuracy seems acceptable.

Working mainly on the odometry, trying to get dead reckoning performance as good as it can be. In this case just controlling the individual motors as closed loop systems seems to be insufficient for accurate movement. It looks like I need a higher level differential drive control which coordinates the action of the two motors together. I'll probably do this by adding two additional PID control objects, one being for forward velocity control and the other being for angular velocity. The outputs from these higher level objects will become an auxiliary error term within the PID loop which controls each individual motor.
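
A rough sketch of that cascaded arrangement (illustrative Python with made-up gains, not the actual motion control code):

```python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

class DifferentialDriveController:
    """Two outer PID loops (forward and angular velocity) whose outputs
    become auxiliary error terms inside each motor's own PID loop."""

    def __init__(self, wheel_base=0.3):
        self.wheel_base = wheel_base
        self.forward = PID(1.0, 0.1, 0.0)   # whole-robot forward velocity
        self.angular = PID(1.0, 0.1, 0.0)   # whole-robot angular velocity
        self.left = PID(2.0, 0.5, 0.0)      # individual motor loops
        self.right = PID(2.0, 0.5, 0.0)

    def update(self, v_target, w_target, v_actual, w_actual,
               vl_target, vr_target, vl_actual, vr_actual, dt):
        v_corr = self.forward.update(v_target - v_actual, dt)
        w_corr = self.angular.update(w_target - w_actual, dt)
        half = self.wheel_base / 2.0
        # Inject the coordinating corrections as extra error terms
        left_err = (vl_target - vl_actual) + v_corr - w_corr * half
        right_err = (vr_target - vr_actual) + v_corr + w_corr * half
        return self.left.update(left_err, dt), self.right.update(right_err, dt)
```

The point of the outer loops is that if one wheel lags (say, from uneven friction), the per-motor loops alone would let the robot veer; the angular velocity loop sees the resulting heading drift and biases both motors to correct it together.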

Added some code which compresses the stereo images and odometry into a single file when training is stopped. This makes the handling of different training sets much easier, and also saves on storage space. Given the size of current hard disks, storage is not an issue, but it might be in future if I want to install the software onto a flash disk based system.
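
Something along these lines (illustrative Python using the standard zipfile module; the directory layout and file names are hypothetical):

```python
import os
import zipfile

def archive_training_set(image_dir, odometry_file, archive_path):
    """Bundle all stereo image files plus the odometry log into a
    single compressed zip archive for the training run."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in sorted(os.listdir(image_dir)):
            zf.write(os.path.join(image_dir, name), arcname=name)
        zf.write(odometry_file, arcname=os.path.basename(odometry_file))
```

One archive per training run keeps the data sets self-describing and trivially copyable between machines.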

There's still an annoying bug where the I/O server and motion control server appear to lock up. I'm not yet certain of the cause. CPU usage, even when the cameras are running, appears to be nominal (actually running Firefox or Nautilus eats far more CPU power!). This might be a thread lock, or possibly a bug in the Phidgets driver (fortunately these are open source, so I should be able to locate the problem if this is the case). The other possible candidate is a USB bandwidth overload, but the problem still seems to occur even when the robot is not moving (i.e. no encoder/motor events are being generated).

20 Nov 2008 (updated 20 Nov 2008 at 22:04 UTC) »

Poised to do the first data gathering run with the GROK2 robot, I fired up the stereo vision server only to find that there seems to be some sort of fundamental regression with V4L1 webcams in Ubuntu Intrepid. Testing with fswebcam shows that frame grabbing from V4L1 devices is definitely broken.

This means that I could spend a lot of time trying to find out how to fix what I think may be a kernel problem, or alternatively downgrade to the previous version of Ubuntu. It's another frustrating setback, but there have been many during the development of this robot, so I'm quite accustomed to such things.

[supplemental] Yep. Reverting to kernel version 2.6.24 and fswebcam behaves normally. This is definitely a kernel snafu.

Added a speed control mode to the motion control software for GROK2. This allows the robot to be easily and smoothly jogged around using the joystick. At this point I have various software services which handle different aspects of the robot's operations, which can potentially be run concurrently (i.e. scales well with multiple CPU cores) and which could also run on different computers on a network - a sort of virtual robot network (VRN) if you like.

The next step is to write another service called "training". This recruits some of the other services and allows the robot to be moved with the joystick whilst gathering data from its various sensors. The resulting data sets can then be used to optimize the SLAM algorithm so that good maps result.

From a strict efficiency point of view the software on GROK2 is not optimal, but the way that I've written it should make it robust to changes of hardware and also scalable across multiple cores and networked computers.
