Older blog entries for motters (starting at number 53)

Added a speed control mode to the motion control software for GROK2. This allows the robot to be easily and smoothly jogged around using the joystick. At this point I have various software services which handle different aspects of the robot's operation. These can potentially run concurrently (i.e. they scale well with multiple CPU cores) and could also run on different computers on a network - a sort of virtual robot network (VRN), if you like.
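The jogging idea can be sketched roughly like this, for a differential-drive base. This is only an illustrative sketch, not the actual GROK2 code: the speed limit and the axis conventions are assumptions.

```python
MAX_SPEED = 0.5  # metres/second; an assumed top jog speed, not the real value

def jog_velocities(joy_x, joy_y):
    """Map joystick axes (each in -1..1) to left/right wheel speeds
    for a differential drive: push forward to drive, deflect sideways to turn."""
    forward = joy_y * MAX_SPEED
    turn = joy_x * MAX_SPEED * 0.5
    left = forward - turn
    right = forward + turn
    # Scale both wheels together so neither exceeds the speed limit,
    # preserving the turn/drive ratio when forward + turn saturates.
    scale = max(1.0, abs(left) / MAX_SPEED, abs(right) / MAX_SPEED)
    return left / scale, right / scale
```

Smooth jogging then comes down to calling this at a fixed rate and feeding the results to the motor controller service.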

The next step is to write another service called "training". This recruits some of the other services and allows the robot to be moved with the joystick whilst gathering data from its various sensors. The resulting data sets can then be used to optimize the SLAM algorithm so that good maps result.

From a strict efficiency point of view the software on GROK2 is not optimal, but the way that I've written it should make it robust to changes of hardware and also scalable across multiple cores and networked computers.

14 Sep 2008 (updated 14 Sep 2008 at 21:08 UTC) »

Version 0.2 of the Surveyor stereo vision software has been released! (http://code.google.com/p/sentience/wiki/SurveyorSVS)

This version contains quite a few improvements which mean that one of these devices could be integrated with other software in a fairly straightforward way. There are also changes which mean that the same system could be used with webcams, although this is only supported on Linux at the moment.


Further developments on the software for the Surveyor stereo camera, in preparation for release 0.2. The new features concentrate mainly upon usability issues. It's now possible for other programs to connect and receive stereo disparity data, and I added an audible beep on successful calibration so that you don't necessarily need to be looking at a screen.
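A client connecting for disparity data might look something like the sketch below. The framing here (a feature count followed by fixed-size records) is purely an assumption for illustration - it is not the actual SVS wire format - and the host/port are placeholders.

```python
import socket
import struct

def parse_disparity_packet(stream):
    """Parse one disparity packet from a file-like byte stream.
    Assumed framing (illustrative only): a little-endian uint32 feature
    count, then one (x, y, disparity) record per feature as three
    little-endian uint16 values."""
    (count,) = struct.unpack("<I", stream.read(4))
    features = []
    for _ in range(count):
        x, y, d = struct.unpack("<HHH", stream.read(6))
        features.append((x, y, d))
    return features

def fetch_disparities(host, port):
    """Connect to a (hypothetical) disparity service and read one packet."""
    with socket.create_connection((host, port)) as sock:
        return parse_disparity_packet(sock.makefile("rb"))
```

The point is simply that once the stereo software exposes a socket, any other program on the network can consume range data without knowing anything about the cameras.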


An initial release of software for the Surveyor stereo vision system. There is plenty of scope for improvement, but hopefully this is usable.


Stereo camera calibration is looking good, and I now have a method for calibrating the pan and tilt mechanism using the same data, so it looks like I'm on track for the first test run soon.

As an aside I've also ordered a couple of 8 megapixel cameras, so that I can evaluate whether higher resolutions will provide significantly better quality stereo vision. There's always a tradeoff between speed and quality, and it might turn out that higher resolutions do not add much to the mapping quality, especially over short ranges of a few metres.
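The resolution tradeoff can be reasoned about with the usual stereo range error approximation, dZ = Z² · dd / (f · b): doubling the horizontal pixel count doubles the focal length in pixels and so halves the depth quantisation at a given range. The numbers below are illustrative, not measurements from my cameras.

```python
def depth_resolution(z, focal_px, baseline, disparity_step=1.0):
    """Approximate depth uncertainty at range z for one disparity step:
    dZ = z^2 * dd / (f * b), with z and baseline in metres and the
    focal length in pixels."""
    return (z * z * disparity_step) / (focal_px * baseline)
```

For example, with an assumed 0.1 m baseline and a 500 pixel focal length, the depth step at 2 metres is 8 cm; doubling the resolution to a 1000 pixel focal length brings it down to 4 cm. Over ranges of a few metres that gain may or may not matter for mapping, which is exactly what I want to evaluate.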

More hardware hacking. I added some buttons for starting and stopping the robot, a joystick to be used for teaching specific routes (amongst other things) and an additional infrared motion sensor. The motion sensor will be used to detect the presence of people in a room when the robot is stationary, just like a burglar alarm. Once the robot knows that there is someone in the general vicinity it can use its cameras and pan/tilt mechanism to locate them.


The physical construction of the robot is now complete. It looks like this.


Next weekend I'll do the first dead reckoning runs to determine how quickly position and pose errors typically accumulate. This information will then be used as part of the motion model. I may also need to do additional tuning of the main drive motors, since the original tuning parameters were for an unloaded situation with the robot sitting on a pile of books.
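The dead reckoning itself is just the standard differential-drive odometry update; a minimal sketch is below. The wheel base value in the test is an assumption, not the robot's actual geometry, and the real motion model will add the error terms measured in these runs.

```python
from math import cos, sin

def odometry_update(x, y, theta, d_left, d_right, wheel_base):
    """Advance a differential-drive pose estimate given the distance
    travelled by each wheel since the last update (all in metres)."""
    d = (d_left + d_right) / 2.0              # distance moved by the centre
    dtheta = (d_right - d_left) / wheel_base  # change in heading
    # Integrate using the midpoint heading, a common small-step approximation.
    x += d * cos(theta + dtheta / 2.0)
    y += d * sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta
```

Repeatedly applying this accumulates both position and heading error, and the test runs should show how fast that happens in practice.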

Things are now moving along pretty well. Using the Phidgets motor controller and a pair of encoder modules I can get good closed loop position control of the robot, and this morning I ran the first few tests with the robot actually rolling along the floor, rather than jacked up on a couple of books as it was whilst I was writing the motion control software and tuning the PID gains. I'm quite pleased with the results so far, and it does look like I'll be able to achieve reasonable dead reckoning performance, which can then be integrated with the vision system to give reliable navigation.
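For anyone unfamiliar with it, the PID loop being tuned here has this general shape; the gains in the test are placeholders, not my tuned values.

```python
class PID:
    """Minimal positional PID controller, stepped at a fixed interval dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, measured):
        """Return the drive output for one control cycle."""
        error = target - measured
        self.integral += error * self.dt                  # accumulated error
        derivative = (error - self.prev_error) / self.dt  # rate of change
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Tuning on a pile of books versus on the floor changes the load the loop sees, which is why the gains need revisiting now that the wheels carry the robot's weight.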

Currently the robot looks like this:





There's still much more to be done with the software, but I think most of the hardware hacking is now out of the way. I only have some cable tidying to do, and will perhaps make the head covering a little more robust to protect the cameras. At present the robot is still tethered to a mains supply, and I'll probably leave it that way until testing navigation over significant distances becomes an issue.

Over the last few weekends I've been stuck with a microcontroller problem. Basically I'm just trying to use a couple of interrupts to count encoder pulses, then pass the counts back to a PC via RS232. The programming on this stuff is fairly archaic and requires proprietary software tools which are flaky and no longer supported (the classic proprietary "software death"). I don't really want to spend weeks or months on this, so as a plan B I've opted to use a couple of Phidgets encoder counter boards instead.


This is a slightly more expensive solution, but it will allow me to use full quadrature rather than just single-ended pulses, and communication with the PC will be far faster than RS232 at 9600 baud.
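The difference matters because single-ended counting only sees pulses on one channel and cannot tell direction, whereas full quadrature decodes the two phase-shifted channels through a transition table, giving both direction and four counts per encoder cycle. A minimal software sketch of the decode (the boards do this in hardware):

```python
# Legal Gray-code transitions of the (A, B) channel pair, mapped to
# count deltas: one sequence direction increments, the other decrements.
QUAD_DELTA = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def count_quadrature(samples):
    """Accumulate a signed count from an ordered iterable of (A, B) bit pairs."""
    count = 0
    prev = None
    for a, b in samples:
        state = (a << 1) | b
        if prev is not None and prev != state:
            # Illegal two-bit jumps (missed samples) are ignored here;
            # real decoders may flag them as errors instead.
            count += QUAD_DELTA.get((prev, state), 0)
        prev = state
    return count
```

Running the same pulse train backwards produces the negated count, which is exactly the direction information that single-ended counting throws away.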

I've also noticed that the servo which I'm using to pan the stereo cameras is underpowered, resulting in laggy control. At the moment I'm just using bog-standard 3kg/cm RC servos for pan and tilt, which you can get from any hobby store. I've ordered a couple of 13kg/cm RC servos with the same physical dimensions and metal gears, which should give better controlled movement. Good control of the stereo cameras will be essential for 3D mapping performance.

If all goes well and there are no further holdups I'm hoping to be able to begin some dead reckoning tests in maybe a month.

Of course all of this hoo-ha is symptomatic of the fact that at present there is no reasonably sized, PC-based robot platform which you can buy on a hobbyist's budget as an "off the shelf" package.

21 Jan 2008 (updated 21 Jan 2008 at 23:12 UTC) »

I've added a second, rearward looking stereo camera to the robot. This isn't yet calibrated, but that's a fairly straightforward procedure which I always intended to carry out with the cameras in situ. I'll probably also need a separate calibration procedure to characterise the pan and tilt behaviour as accurately as possible. This will allow the grid maps to be updated properly, taking head pose uncertainty into account.


One consideration when building the stereo head was whether or not the cameras should be rolled to a 45 degree angle. For simple stereo correspondence methods, which detect vertically oriented features along each row of the image, rolling the cameras can be a good strategy, since it allows a mix of both vertically and horizontally aligned features in the environment to be ranged. However, for the stereo correspondence algorithm which I'm using, rolling the cameras doesn't make all that much difference. This is mainly because I'm using simulated multi-scale centre/surround fields, which turn each image into a kind of contour map, and also explicitly taking vertical context into account. In this case vertical context extends significantly beyond the usual patch matching windows, so features which may appear identical when narrowly viewed along each image row can be disambiguated from their wider surrounding context. There are still classic dilemmas, such as the "picket fence" effect, but vertical context tends to rule out all but the most mesmerizing situations.
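To give a flavour of the centre/surround idea: each pixel is compared against a local average of its wider surroundings, so uniform regions go to zero and only intensity transitions (the "contours") survive. The sketch below is a crude 1-D difference-of-boxes version along a single image row, not my actual implementation; the radii are arbitrary, and a multi-scale version would repeat it with several surround sizes.

```python
def centre_surround(row, centre=1, surround=3):
    """Crude 1-D centre/surround response along an image row:
    the mean of a small centre window minus the mean of a larger
    surround window, computed at every pixel (edges are clipped)."""
    n = len(row)
    out = []
    for i in range(n):
        c0, c1 = max(0, i - centre), min(n, i + centre + 1)
        s0, s1 = max(0, i - surround), min(n, i + surround + 1)
        c_mean = sum(row[c0:c1]) / (c1 - c0)
        s_mean = sum(row[s0:s1]) / (s1 - s0)
        out.append(c_mean - s_mean)
    return out
```

On a constant row the response is zero everywhere, while a step edge produces a signed swing around the transition - which is why the transformed image behaves like a contour map that correspondence can be run on.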
