Older blog entries for motters (starting at number 55)

Added some code which compresses the stereo images and odometry into a single file when training is stopped. This makes different training sets much easier to administer, and also saves on storage space. Given the size of current hard disks, storage is not an issue, but it might be in future if I want to install the software onto a flash disk based system.
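
In outline the archiving step looks something like this (a minimal sketch in Python; the file layout - left/right JPEGs plus an odometry.csv per session - is illustrative rather than the actual format used):

    import glob
    import os
    import tarfile

    def archive_training_set(session_dir, archive_path):
        # Bundle the stereo images and the odometry log from one
        # training session into a single compressed file.
        with tarfile.open(archive_path, "w:gz") as tar:
            for name in sorted(glob.glob(os.path.join(session_dir, "*.jpg"))):
                tar.add(name, arcname=os.path.basename(name))
            odometry = os.path.join(session_dir, "odometry.csv")
            if os.path.exists(odometry):
                tar.add(odometry, arcname="odometry.csv")

    # e.g. archive_training_set("session_001", "session_001.tar.gz")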

There's still an annoying bug where the I/O server and motion control server appear to lock up. I'm not yet certain what the cause of this is. CPU usage, even when the cameras are running, appears to be nominal (actually, running Firefox or Nautilus eats far more CPU power!). This might be a thread lock, or possibly a bug in the phidgets driver (fortunately these are open source, so I should be able to locate the problem if this is the case). The other possible candidate is a USB bandwidth overload, but the problem still seems to occur even when the robot is not moving (i.e. no encoder/motor events are being generated).
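
One way to narrow this down is a heartbeat watchdog: each server beats inside its main loop, and a monitor thread reports any loop that goes quiet, which at least separates a thread deadlock from a stall further down in the driver or USB stack. A rough sketch, not the actual server code:

    import threading
    import time

    class Watchdog:
        def __init__(self, timeout=5.0):
            self.timeout = timeout
            self.last_beat = {}
            self.lock = threading.Lock()
            threading.Thread(target=self._monitor, daemon=True).start()

        def beat(self, name):
            # Called from inside each server's main loop.
            with self.lock:
                self.last_beat[name] = time.time()

        def _monitor(self):
            while True:
                time.sleep(1.0)
                now = time.time()
                with self.lock:
                    for name, t in self.last_beat.items():
                        if now - t > self.timeout:
                            print("WATCHDOG: %s silent for %.1fs" % (name, now - t))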

20 Nov 2008 (updated 20 Nov 2008 at 22:04 UTC)

Poised to do the first data gathering run with the GROK2 robot, I fired up the stereo vision server only to find that there seems to be some sort of fundamental regression with V4L1 webcams in Ubuntu Intrepid. Testing with fswebcam shows that frame grabbing from V4L1 devices is definitely broken.

This means that I could spend a lot of time trying to find out how to fix what I think may be a kernel problem, or alternatively downgrade to the previous version of Ubuntu. It's another frustrating setback, but there have been many during the development of this robot, so I'm quite accustomed to such things.

[supplemental] Yep. After reverting to kernel version 2.6.24, fswebcam behaves normally. This is definitely a kernel snafu.
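
The check itself is trivial to script for anyone wanting to reproduce this (fswebcam's -d flag selects the capture device; everything else here is just plumbing):

    import os
    import subprocess

    def grab_ok(device="/dev/video0", out="test.jpg"):
        # Attempt a single frame grab and report whether a
        # non-empty image file was actually produced.
        if os.path.exists(out):
            os.remove(out)
        subprocess.call(["fswebcam", "-d", device, out])
        return os.path.exists(out) and os.path.getsize(out) > 0

    print("frame grab OK" if grab_ok() else "frame grab FAILED")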

Added a speed control mode to the motion control software for GROK2. This allows the robot to be easily and smoothly jogged around using the joystick. At this point I have various software services which handle different aspects of the robot's operation, which can potentially run concurrently (i.e. scale well with multiple CPU cores) and which could also run on different computers on a network - a sort of virtual robot network (VRN) if you like.
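
The core of a speed control mode like this is just a mapping from joystick axes to wheel speeds, plus a slew limiter so that commanded speeds ramp rather than jump. A simplified sketch (the axis convention and limits are illustrative):

    def jog_speeds(x, y, max_speed=1.0):
        # Differential drive mix: y (-1..1) is forward/backward,
        # x (-1..1) turns, blending into the forward motion.
        left = max(-1.0, min(1.0, y + x)) * max_speed
        right = max(-1.0, min(1.0, y - x)) * max_speed
        return left, right

    class SlewLimiter:
        # Limit how fast a commanded speed may change per update,
        # so the robot accelerates smoothly instead of jerking.
        def __init__(self, max_delta=0.05):
            self.max_delta = max_delta
            self.value = 0.0

        def update(self, target):
            delta = max(-self.max_delta, min(self.max_delta, target - self.value))
            self.value += delta
            return self.value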

The next step is to write another service called "training". This recruits some of the other services and allows the robot to be moved with the joystick whilst gathering data from its various sensors. The resulting data sets can then be used to optimize the SLAM algorithm so that good maps result.
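
A skeleton of that training loop might look as follows (the three callables stand in for whatever interfaces the real services expose, and the CSV layout is illustrative):

    import csv
    import time

    def record_training_run(get_odometry, save_stereo_frame, stop_requested,
                            log_path="training_log.csv"):
        # Poll the other services while the robot is joysticked
        # around, appending timestamped odometry records and noting
        # which stereo frame belongs with each one.
        with open(log_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["time", "x", "y", "heading", "frame_file"])
            frame_no = 0
            while not stop_requested():
                x, y, heading = get_odometry()
                frame_file = "frame_%06d.jpg" % frame_no
                save_stereo_frame(frame_file)
                writer.writerow([time.time(), x, y, heading, frame_file])
                frame_no += 1
                time.sleep(0.1)  # roughly 10 records per second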

From a strict efficiency point of view the software on GROK2 is not optimal, but the way that I've written it should make it robust to changes of hardware and also scalable across multiple cores and networked computers.

14 Sep 2008 (updated 14 Sep 2008 at 21:08 UTC)

Version 0.2 of the Surveyor stereo vision software has been released! (http://code.google.com/p/sentience/wiki/SurveyorSVS)

This version contains quite a few improvements which make it fairly straightforward to integrate one of these devices with other software. There are also changes which allow the same system to be used with ordinary webcams, although this is only supported on Linux at the moment.

http://code.google.com/p/sentience/wiki/WebcamStereoVisionUtilities

Further developments on the software for the Surveyor stereo camera, in preparation for release 0.2. The new features concentrate mainly on usability. It's now possible for other programs to connect and receive stereo disparity data, and I added an audible beep on successful calibration so that you don't need to be looking at a screen.

http://code.google.com/p/sentience/wiki/SurveyorSVS
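
From a client's point of view, receiving the disparity stream is just a socket connection and a parse loop. The sketch below invents a line-based "x y disparity" wire format and a port number purely for illustration - the real protocol is whatever the server actually speaks:

    import socket

    def read_disparities(host="127.0.0.1", port=10002):
        # Connect to the stereo vision server and yield disparity
        # records as they arrive (hypothetical wire format).
        with socket.create_connection((host, port)) as sock:
            buf = b""
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                buf += data
                while b"\n" in buf:
                    line, buf = buf.split(b"\n", 1)
                    if not line.strip():
                        continue
                    x, y, disparity = map(float, line.decode().split())
                    yield x, y, disparity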

An initial release of software for the Surveyor stereo vision system. There is plenty of scope for improvement, but hopefully this is usable.

http://code.google.com/p/sentience/wiki/SurveyorSVS

Stereo camera calibration is looking good.

http://farm4.static.flickr.com/3030/2641981504_fd93c7d74b_o.jpg

and I now have a method for calibrating the pan and tilt mechanism using the same data.

http://streebgreebling.blogspot.com/2008/07/characterising-pantilt-mechanism.html

so it looks like I'm on track for the first test run soon.

As an aside I've also ordered a couple of 8 megapixel cameras, so that I can evaluate whether higher resolutions will provide significantly better quality stereo vision. There's always a tradeoff between speed and quality, and it might turn out that higher resolutions do not add much to the mapping quality, especially over short ranges of a few metres.
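
The back-of-envelope reasoning: stereo depth is focal_px * baseline / disparity_px, so a one pixel disparity error corresponds to a depth error of roughly depth squared over (focal_px * baseline), and doubling the horizontal resolution doubles focal_px. With an assumed 10cm baseline and illustrative focal lengths:

    def depth_error_per_pixel(baseline_m, focal_px, depth_m):
        # depth = focal_px * baseline_m / disparity_px, so one pixel
        # of disparity error shifts depth by about depth^2 / (f * B).
        return depth_m ** 2 / (focal_px * baseline_m)

    # Doubling horizontal resolution roughly doubles the focal
    # length in pixels, halving the per-pixel depth error.
    for focal_px in (500.0, 1000.0):
        for depth_m in (1.0, 3.0):
            err = depth_error_per_pixel(0.1, focal_px, depth_m)
            print("f=%4.0fpx depth=%.0fm error=%.3fm" % (focal_px, depth_m, err))

At a range of a metre the error is only a couple of centimetres even at the modest resolution, which is why the extra pixels may not buy much for short range mapping.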

More hardware hacking. I added some buttons for starting and stopping the robot, a joystick to be used for teaching specific routes (amongst other things) and an additional infrared motion sensor. The motion sensor will be used to detect the presence of people in a room when the robot is stationary, just like a burglar alarm. Once the robot knows that there is someone in the general vicinity it can use its cameras and pan/tilt mechanism to locate them.

http://farm4.static.flickr.com/3280/2427779514_d28b368557.jpg
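
The detection logic amounts to a simple two-stage loop: idle on the cheap infrared sensor, then sweep the pan/tilt head with the cameras. A sketch with placeholder callables standing in for the real sensor and vision services:

    import time

    def wait_and_locate(pir_triggered, set_pan_tilt, person_in_view,
                        pan_steps=8, tilt_angles=(-10.0, 10.0)):
        # Burglar-alarm style: sleep until the infrared motion
        # sensor fires, then scan for the person with the cameras.
        while not pir_triggered():
            time.sleep(0.5)
        for tilt in tilt_angles:
            for i in range(pan_steps):
                pan = -90.0 + i * (180.0 / (pan_steps - 1))
                set_pan_tilt(pan, tilt)
                time.sleep(0.3)  # allow the head to settle
                if person_in_view():
                    return pan, tilt
        return None  # nobody found during the sweep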

The physical construction of the robot is now complete. It looks like this.

http://farm3.static.flickr.com/2169/2408346868_efca6ed26f.jpg

Next weekend I'll do the first dead reckoning runs to determine how quickly position and heading errors typically accumulate. This information will then be used as part of the motion model. I may also need to do additional tuning of the main drive motors, since the original tuning parameters were for an unloaded situation with the robot sitting on a pile of books.
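
For reference, the dead reckoning update itself is the textbook differential drive step - the wheel base value below is an assumption, not GROK2's actual geometry:

    import math

    def update_pose(x, y, heading, left_m, right_m, wheel_base_m=0.4):
        # left_m/right_m are the distances each wheel travelled
        # since the last update, derived from the encoder counts.
        distance = (left_m + right_m) / 2.0
        heading += (right_m - left_m) / wheel_base_m
        x += distance * math.cos(heading)
        y += distance * math.sin(heading)
        return x, y, heading

Errors in this update compound with every step, which is exactly what the test runs are meant to quantify.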

Things are now moving along pretty well. Using the phidgets motor controller and a pair of encoder modules I can get good closed loop position control of the robot, and this morning I ran the first few tests with the robot actually rolling along the floor, rather than jacked up on a couple of books as it was whilst I was writing the motion control software and tuning the PID gains. I'm quite pleased with the results so far, and it does look like I'll be able to achieve reasonable dead reckoning performance which can then be integrated with the vision system to give reliable navigation.
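
The position loop closes around the encoder feedback with a PID controller. A stripped-down version (the gains here are placeholders, not the values actually tuned on the robot):

    class PID:
        def __init__(self, kp=1.0, ki=0.0, kd=0.1):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = None

        def step(self, target, measured, dt):
            # Standard PID: proportional on the current error,
            # integral over accumulated error, derivative on its rate.
            error = target - measured
            self.integral += error * dt
            if self.prev_error is None:
                derivative = 0.0
            else:
                derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative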

Currently the robot looks like this:

http://farm4.static.flickr.com/3208/2373724362_3b23854c8d_b.jpg

http://www.youtube.com/watch?v=8FRJxWcAzI4

http://farm4.static.flickr.com/3024/2372889549_b50154a8a3_b.jpg

http://www.youtube.com/watch?v=CByUznJRs_g

There's still much more to be done with the software, but I think most of the hardware hacking is now out of the way. I only have some cable tidying to do, and will perhaps make the head covering a little more robust to protect the cameras. At present the robot is still tethered to a mains supply, and I'll probably leave it that way until testing navigation over significant distances becomes an issue.
