Older blog entries for motters (starting at number 73)

A brief explanation about the new combined stereo and omnidirectional vision system on the GROK2 robot.

http://streebgreebling.blogspot.com/2010/02/combined-stereo-and-omnidirectional.html

Having tried the classical space carving/voxel colouring techniques, and found them wanting, I'm going to try simpler methods such as edge, line and motion detection. If the robot is stationary, an observed moving object, such as a person, should show up well and be easy to triangulate.
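For illustration, here's a minimal sketch of the frame differencing that underlies this kind of motion detection: with a stationary camera, pixels whose brightness changes between consecutive frames mark the moving object. The image size and threshold are just example values, not anything from the actual code.

    // Frame differencing for a stationary camera: flag pixels whose brightness
    // changes by more than a threshold between consecutive frames.
    #include <cstdlib>
    #include <vector>

    const int WIDTH = 320;
    const int HEIGHT = 240;
    const int THRESHOLD = 30;  // minimum grey level change to count as motion

    // Returns a binary mask (1 = moving pixel) from two greyscale frames.
    std::vector<unsigned char> motion_mask(
        const std::vector<unsigned char>& prev,
        const std::vector<unsigned char>& curr)
    {
        std::vector<unsigned char> mask(WIDTH * HEIGHT, 0);
        for (int i = 0; i < WIDTH * HEIGHT; i++) {
            if (std::abs((int)curr[i] - (int)prev[i]) > THRESHOLD)
                mask[i] = 1;
        }
        return mask;
    }

The clusters of marked pixels in the two camera views can then be matched and triangulated in the usual way.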

1 Feb 2010 (updated 1 Feb 2010 at 23:35 UTC)

Looks like it's been a while since I last blogged here, so here's an update. In the last six months I've mainly been writing more stereo vision code. This was primarily for use with the Surveyor SVS, but I also wrote a version which runs on a PC under Linux.

http://code.google.com/p/libv4l2cam/

v4l2stereo was initially written as an easy way to test the stereo algorithm before transferring it to the Blackfin, but it later developed into a piece of software in its own right. You can see an example of the stereo disparities obtained with the Minoru webcam here:

http://www.youtube.com/watch?v=EUcLAarcj7U

The Minoru only has a short 6cm baseline, so the effective stereo range is not very great (probably less than two metres), but it works well on Linux. As is always the case with webcams, the image capture is not synchronised, so if the cameras are moving quickly the delay does become a problem, but for most slow-moving robots it should be OK.
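To put the 6cm baseline in perspective: for rectified stereo, depth Z = f*B/d, where f is the focal length in pixels, B the baseline and d the disparity. A back-of-envelope calculation (the focal length here is an assumed typical value for a webcam at 640x480, not a measured one):

    // Back-of-envelope stereo range calculation. All numbers are assumptions,
    // not measured values for the Minoru.
    #include <cstdio>

    int main() {
        double f = 600.0;   // assumed focal length in pixels at 640x480
        double B = 0.06;    // Minoru baseline in metres
        double Z = 2.0;     // distance of interest in metres

        double disparity = f * B / Z;           // disparity in pixels at Z
        double depth_err = (Z * Z) / (f * B);   // depth change per pixel of disparity

        printf("disparity at %.1fm: %.1f px\n", Z, disparity);     // 18.0 px
        printf("depth error per pixel: %.2f m\n", depth_err);      // 0.11 m
        return 0;
    }

So at two metres a one-pixel disparity error already corresponds to around 11cm of depth error, which is why the usable range is short.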

A nice recent feature of v4l2stereo is that it can be run in "headless" mode, with no GUI output to the screen, and it can also stream images over a network using GStreamer.

I also replaced Rodney's head with a simplified version which has the Minoru webcam mounted on it.

http://www.youtube.com/watch?v=gNRdcwrTOM0

As of December 2009 I've also been experimenting with omnidirectional vision. I saw one of the videos from Pi Robot a while back, and had been meaning to try out something similar using a Christmas tree decoration as a spherical mirror. This actually works very well, and I've now created a new project called Omniclops for this code, since I didn't want to mix it up with anything else.

http://code.google.com/p/omniclops/

Fortunately the geometry for a spherical mirror is pretty simple to deal with, and the results look promising. So promising, in fact, that it's a cause for regret that I didn't try this many years ago. I've been aware of this type of vision for at least a decade, but the mirrors always looked too exotic or expensive to be worth bothering with, and making a parabolic mirror by hand without milling machinery seemed unlikely to be very successful.
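To show what I mean by simple, here's a sketch of the core calculation: a view ray from the camera is intersected with the sphere and reflected about the surface normal, giving the direction out into the world for that pixel. The sphere position and radius below are just illustrative.

    // Ray reflection off a spherical mirror. The camera is at the origin;
    // the sphere centre and radius are illustrative assumptions.
    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3 scale(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
    double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Given a unit view ray d from the camera, find where it meets the sphere
    // and return the reflected direction r = d - 2(d.n)n.
    bool reflect_off_sphere(Vec3 d, Vec3 centre, double radius, Vec3* reflected)
    {
        // Solve |t*d - centre|^2 = radius^2 for the nearest intersection t.
        double b = -2.0 * dot(d, centre);
        double c = dot(centre, centre) - radius * radius;
        double disc = b * b - 4.0 * c;
        if (disc < 0.0) return false;              // ray misses the mirror
        double t = (-b - std::sqrt(disc)) / 2.0;   // nearest hit
        Vec3 hit = scale(d, t);
        Vec3 n = scale(sub(hit, centre), 1.0 / radius);  // outward surface normal
        *reflected = sub(d, scale(n, 2.0 * dot(d, n)));
        return true;
    }

    int main() {
        Vec3 r;
        // Example: ray straight ahead towards a mirror 30cm away, radius 3cm.
        if (reflect_off_sphere({0, 0, 1}, {0, 0, 0.3}, 0.03, &r))
            printf("reflected direction: %f %f %f\n", r.x, r.y, r.z);
        return 0;
    }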

Whilst fooling around with omnidirectional vision using a Christmas decoration, a thought occurred to me: could I somehow combine stereo vision with omnidirectional vision, so that objects could be ranged without needing to do structure from motion? At first I thought of using a couple of mirrors with one of the stereo cameras, but then I wondered: why not just use a single camera with a wide field of view, looking at multiple mirrors spaced some distance apart? This seems like a good way to do things for the following reasons:

- You only need a single camera

- There are no camera synchronisation issues

- There are no illumination/colour correction issues

- Ultra wide field of view compared to conventional stereo vision

- Very cheap to build

On the downside, the resolution of the image within each mirror is rather low, but this probably isn't a major handicap. Also, the geometry is more complex than for ordinary stereo vision, but not prohibitively so; there's a sketch of the basic triangulation after the video link below. I lashed up a prototype from aluminium and cardboard, using five mirrors made from Christmas decorations (carefully) sawn in half to make hemispheres. You can see the resulting effect here:

http://www.youtube.com/watch?v=QIuljh7Piso
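For those wondering about the maths: ranging a feature seen in two of the mirrors boils down to plain 2D triangulation between two virtual viewpoints, since each mirror acts as a virtual camera at a known position. A minimal sketch, with all the numbers assumed rather than measured:

    // Triangulating a feature from its bearings as seen in two mirrors.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double PI = 3.14159265358979;
        double baseline = 0.10;               // assumed spacing between mirror centres (m)
        double bearing0 = 80.0 * PI / 180.0;  // bearing of the feature from mirror 0
        double bearing1 = 100.0 * PI / 180.0; // bearing of the same feature from mirror 1

        // Mirror 0 at the origin, mirror 1 at x = baseline, bearings measured
        // anticlockwise from the baseline direction. The law of sines gives
        // the range from mirror 0.
        double range0 = baseline * sin(bearing1) / sin(bearing1 - bearing0);
        double x = range0 * cos(bearing0);
        double y = range0 * sin(bearing0);
        printf("feature at (%.3f, %.3f) m, range %.3f m\n", x, y, range0);
        return 0;
    }

In practice the bearings come from the pixel positions of the feature within each mirror, via the reflection geometry sketched earlier.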

This is effectively the same as having five cameras with overlapping fields of view and fisheye lenses. Currently I'm thinking that this approach may be well suited to voxel colouring/space carving volumetric techniques, since it complies with the simple plane ordering constraint and the positions of the mirrors are known.
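As a rough sketch of how the carving would go: each voxel is projected into every mirror view and removed if the views disagree too much in brightness (the photo-consistency test). The projection function here is just a stub standing in for the real mirror geometry:

    // Space carving over the mirror views: a voxel survives only if the
    // pixels it projects to in every view agree in brightness.
    #include <vector>

    const int NUM_VIEWS = 5;   // five hemispherical mirrors
    const int GRID = 64;       // voxels per axis (assumed)

    // Stub: brightness seen at voxel (x,y,z) from view v. The real version
    // would trace a ray via the appropriate mirror into that view's image.
    float brightness_in_view(int v, int x, int y, int z) { return 0.0f; }

    // Mark a voxel empty if its projections disagree too much in brightness.
    void carve(std::vector<unsigned char>& occupied, float max_variance)
    {
        for (int z = 0; z < GRID; z++)
            for (int y = 0; y < GRID; y++)
                for (int x = 0; x < GRID; x++) {
                    float sum = 0.0f, sum_sq = 0.0f;
                    for (int v = 0; v < NUM_VIEWS; v++) {
                        float b = brightness_in_view(v, x, y, z);
                        sum += b;
                        sum_sq += b * b;
                    }
                    float mean = sum / NUM_VIEWS;
                    float variance = sum_sq / NUM_VIEWS - mean * mean;
                    if (variance > max_variance)
                        occupied[(z * GRID + y) * GRID + x] = 0;  // carve it away
                }
    }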

Workwise, I'm pretty much unemployed now - like a lot of software engineers at present - so I can work on this full time and see whether I can get any useful volumetric modelling.

Calibrating the pan and tilt mechanism (again)

http://streebgreebling.blogspot.com/2009/07/pan-and-tilt.html

I've adapted older code to simplify this procedure and make it more integrated with the rest of the system.

Views from both cameras as an animated gif.

http://groups.google.com/group/sentience/web/anim1.gif

The effective stereo range with the cameras spaced 12cm apart looks like it's 4-5 metres. Objects in the far distance shouldn't appear to move between the two views.

GROK2 gets new cameras.

http://streebgreebling.blogspot.com/2009/07/grok2-gets-new-cameras.html

A brief guide to using the Minoru stereo webcam.

http://code.google.com/p/sentience/wiki/MinoruWebcam

It seems to me that this device might be quite useful for robot projects. It wasn't very long ago that such a device would cost a couple of thousand dollars or more.

In addition to the feature-based stereo I may also try implementing a dense stereo algorithm. As for using the Minoru to replace the cameras on GROK2, my feeling is that the baseline is probably a little on the short side, but that it would probably work.
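The simplest dense method would be sum-of-absolute-differences block matching along the scan lines of rectified images. A minimal sketch, with the window size and disparity range as example values:

    // Dense stereo by SAD block matching on rectified greyscale images.
    #include <climits>
    #include <cstdlib>
    #include <vector>

    const int W = 320, H = 240;
    const int MAX_DISP = 40;   // assumed maximum disparity in pixels
    const int RADIUS = 3;      // 7x7 matching window

    // For each pixel in the left image, pick the disparity whose window in
    // the right image gives the lowest sum of absolute differences.
    std::vector<unsigned char> dense_stereo(
        const std::vector<unsigned char>& left,
        const std::vector<unsigned char>& right)
    {
        std::vector<unsigned char> disp(W * H, 0);
        for (int y = RADIUS; y < H - RADIUS; y++) {
            for (int x = MAX_DISP + RADIUS; x < W - RADIUS; x++) {
                int best_d = 0, best_sad = INT_MAX;
                for (int d = 0; d < MAX_DISP; d++) {
                    int sad = 0;
                    for (int dy = -RADIUS; dy <= RADIUS; dy++)
                        for (int dx = -RADIUS; dx <= RADIUS; dx++) {
                            int i0 = (y + dy) * W + (x + dx);
                            int i1 = (y + dy) * W + (x + dx - d);
                            sad += std::abs((int)left[i0] - (int)right[i1]);
                        }
                    if (sad < best_sad) { best_sad = sad; best_d = d; }
                }
                disp[y * W + x] = (unsigned char)best_d;
            }
        }
        return disp;
    }

This brute-force version is slow, but it shows the principle; the feature-based approach only matches at edges and corners instead of at every pixel.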

This weekend I've been experimenting with the Minoru stereo webcam, which I think could turn out to be very useful for robotics purposes.

You can find my review here:

http://streebgreebling.blogspot.com/2009/05/minoru-stereo-webcam-review.html

At the moment I'm still undecided as to whether I'll use the Minoru to replace the existing cameras on GROK2. I've ordered some wide-angle lenses which I'll try out. If the new lenses fit then this device might be ideal.

Have done more testing this weekend and fixed some important bugs. It transpired that there was still a "glitch" issue, with the direction of one of the encoders reversing seemingly at random, but I've managed to work around that. Also, the motion control and joystick servers are now talking to each other as they should.
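The workaround amounts to a glitch filter along these lines: a reported direction change is only believed once it has persisted for several consecutive readings. This is a sketch rather than the actual code, and the confirmation count is an assumed value:

    // Encoder direction glitch filter: only accept a reversal after it
    // persists for CONFIRM_COUNT consecutive readings.
    #include <cstdlib>

    const int CONFIRM_COUNT = 3;

    struct DirectionFilter {
        int current_dir = 1;  // +1 forward, -1 reverse
        int pending = 0;      // consecutive readings disagreeing with current_dir

        // Feed a raw encoder tick delta; returns the delta with a filtered sign.
        int filter(int raw_delta) {
            if (raw_delta == 0) return 0;
            int dir = (raw_delta > 0) ? 1 : -1;
            if (dir != current_dir) {
                // Possible glitch: don't believe the reversal until it persists.
                if (++pending >= CONFIRM_COUNT) {
                    current_dir = dir;
                    pending = 0;
                }
            } else {
                pending = 0;
            }
            return std::abs(raw_delta) * current_dir;
        }
    };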

I need to write a new utility program which will allow me to visualise the occupancy grids after doing an initial joystick guided training run. This should allow me to check that things look as I expect them to.
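The core of such a utility could be as simple as writing the grid out as a greyscale PGM image, occupied cells dark and free cells light. A minimal sketch, assuming cells hold occupancy probabilities between 0 and 1:

    // Dump a 2D occupancy grid to a PGM image for visual inspection.
    #include <cstdio>
    #include <vector>

    void save_grid_pgm(const char* filename,
                       const std::vector<float>& grid, int width, int height)
    {
        FILE* fp = fopen(filename, "wb");
        if (!fp) return;
        fprintf(fp, "P5\n%d %d\n255\n", width, height);
        for (int i = 0; i < width * height; i++) {
            // Occupied cells (probability near 1) come out dark.
            unsigned char v = (unsigned char)(255.0f * (1.0f - grid[i]));
            fputc(v, fp);
        }
        fclose(fp);
    }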

I've also ordered one of the Minoru 3D webcams. Apparently these are UVC compliant and will work on Linux. If I can get a pair of images out of the device this would make an excellent replacement for the existing stereo cameras, whilst also solving my V4L1 issues. However, even if the Minoru does look like a usable stereo camera I'll need to assess image quality and field of view compared to the existing cameras which I'm using.

Wheel odometry is now calibrated, and repeatability looks good over short distances, such that the rate of increase in pose uncertainty should be manageably small.
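For reference, this is the standard differential-drive update that the calibration feeds into; the metres-per-tick and wheel base constants below are placeholders for the calibrated values:

    // Standard differential-drive odometry update.
    #include <cmath>

    const double METRES_PER_TICK = 0.0005;  // wheel travel per encoder tick (assumed)
    const double WHEEL_BASE = 0.30;         // distance between the wheels (assumed)

    struct Pose { double x, y, theta; };

    // Update the pose estimate from left/right encoder tick deltas.
    void update_odometry(Pose* p, int left_ticks, int right_ticks)
    {
        double dl = left_ticks * METRES_PER_TICK;
        double dr = right_ticks * METRES_PER_TICK;
        double dist = (dl + dr) / 2.0;
        double dtheta = (dr - dl) / WHEEL_BASE;

        // Move along the average heading over the interval.
        p->x += dist * std::cos(p->theta + dtheta / 2.0);
        p->y += dist * std::sin(p->theta + dtheta / 2.0);
        p->theta += dtheta;
    }

Any error in the calibrated constants accumulates with distance travelled, which is why repeatability over short runs is the thing to check.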

I'm getting closer to having a working robot, although there has been a recent setback in that the new version of Ubuntu (9.04) doesn't seem to support the webcams which I'm using. This is odd, because they worked without a hitch in previous versions. A simple solution would be to downgrade the OS, although I'm reluctant to do that. I suspect that there has been some change in the kernel, perhaps related to gspca and V4L1 devices.

As a workaround I've continued development on Windows. This is OK, because the Windows version had not received much attention and so was lagging behind in some features. My current strategy is to ensure that the robot's software works both on Windows and on Linux, in order to maximise the possible range of use cases.

There's some extra work to be done on path integration, and no doubt there will be additional bugs to fix once I start testing on the robot in earnest (as opposed to simulation/unit testing). Both stereo cameras are working and calibrated, and are returning reasonable stereo features. The camera images seem to suffer from occasional glitches, which might be something to do with their age (most of them were bought four years ago) or, more likely, electrical interference between the USB cables and the nearby pan and tilt servos. Either way, the glitches are not sufficiently serious to cause major concern at this point.

11 May 2009 (updated 11 May 2009 at 22:21 UTC)

I now have both of the stereo cameras calibrated, and am fairly confident that I'm getting good quality disparities, which should at least suffice for navigation purposes. I wrote an extra program which allows me to visualise the stereo disparity and manually alter calibration parameters to observe the effects. Hence, if there are problems I can check the camera calibration more thoroughly than was possible previously.
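As an example of the kind of parameter being adjusted: a vertical offset can be applied to one image before matching, since stereo correspondence is searched along horizontal scan lines and even a small vertical misalignment degrades the disparities. A minimal sketch, with the image size as an example value:

    // Shift the right image vertically before matching, filling exposed rows
    // with zeros, so the effect on the disparities can be observed.
    #include <vector>

    const int W = 320, H = 240;

    std::vector<unsigned char> apply_vertical_offset(
        const std::vector<unsigned char>& img, int offset_y)
    {
        std::vector<unsigned char> out(W * H, 0);
        for (int y = 0; y < H; y++) {
            int src_y = y - offset_y;   // positive offset shifts the image down
            if (src_y < 0 || src_y >= H) continue;
            for (int x = 0; x < W; x++)
                out[y * W + x] = img[src_y * W + x];
        }
        return out;
    }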

The next step is to do integration testing with all of the systems running - stereo vision server, motion control server, servo control server, steersman and ultrasonics server. With luck I should be able to create some real maps soon.

