Older blog entries for motters (starting at number 46)

Things are now moving along pretty well. Using the Phidgets motor controller and a pair of encoder modules I can get good closed-loop position control of the robot, and this morning I ran the first few tests with the robot actually rolling along the floor, rather than being jacked up on a couple of books as it was whilst I was writing the motion control software and tuning the PID gains. I'm quite pleased with the results so far, and it does look like I'll be able to achieve reasonable dead reckoning performance, which can then be integrated with the vision system to give reliable navigation.
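For anyone curious, the dead reckoning itself is just standard differential-drive odometry from the two encoder counts. A minimal sketch of the idea follows; the wheel radius, wheel base and ticks-per-revolution figures are placeholders, not the robot's actual geometry:

using System;

// Minimal differential-drive odometry sketch. Wheel radius, wheel base and
// encoder resolution are placeholder values, not the robot's real geometry.
class Odometry
{
    const double WheelRadiusMm = 50.0;   // placeholder
    const double WheelBaseMm = 300.0;    // placeholder distance between the two wheels
    const double TicksPerRev = 1000.0;   // placeholder encoder counts per wheel revolution

    public double X, Y, Theta;           // estimated pose (mm, mm, radians)

    // Call with the change in encoder counts since the previous update
    public void Update(int leftTicks, int rightTicks)
    {
        double mmPerTick = 2.0 * Math.PI * WheelRadiusMm / TicksPerRev;
        double dLeft = leftTicks * mmPerTick;
        double dRight = rightTicks * mmPerTick;

        double dCentre = (dLeft + dRight) / 2.0;          // distance moved by the robot centre
        double dTheta = (dRight - dLeft) / WheelBaseMm;   // change in heading

        X += dCentre * Math.Cos(Theta + dTheta / 2.0);
        Y += dCentre * Math.Sin(Theta + dTheta / 2.0);
        Theta += dTheta;
    }
}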

Currently the robot looks like this:

http://farm4.static.flickr.com/3208/2373724362_3b23854c8d_b.jpg

http://www.youtube.com/watch?v=8FRJxWcAzI4

http://farm4.static.flickr.com/3024/2372889549_b50154a8a3_b.jpg

http://www.youtube.com/watch?v=CByUznJRs_g

There's still much more to be done with the software, but I think most of the hardware hacking is now out of the way. I only have some cable tidying to do, and will perhaps make the head covering a little more robust to protect the cameras. At present the robot is still tethered to a mains supply, and I'll probably leave it that way until testing navigation over significant distances becomes an issue.

Over the last few weekends I've been stuck on a microcontroller problem. Basically I'm just trying to use a couple of interrupts to count encoder pulses, then pass the counts back to a PC via RS232. The programming on this stuff is fairly archaic and requires proprietary software tools which are flaky and no longer supported (the classic proprietary "software death"). I don't really want to spend weeks or months on this, so as a plan B I've opted to use a couple of Phidget encoder counter boards instead.

http://www.active-robots.com/products/phidgets/encoder-1057-details.shtml

This is a slightly more expensive solution, but it will allow me to use full quadrature rather than just single-ended pulses, and the rate of communication with the PC will be far higher than RS232 at 9600 baud.
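To illustrate what full quadrature buys over counting single-ended pulses: with both the A and B channels available, every state transition can be decoded into a signed count, so direction comes for free and the resolution is four times higher. The Phidget board does this in hardware; the sketch below is just the decoding logic, and which direction counts as positive depends on the wiring.

// Illustrative 4x quadrature decoding from the A/B channel states.
static class Quadrature
{
    // Transition table indexed by (previousState << 2) | currentState,
    // where a state is encoded as (A << 1) | B
    static readonly int[] Table =
    {
         0, -1, +1,  0,
        +1,  0,  0, -1,
        -1,  0,  0, +1,
         0, +1, -1,  0
    };

    public static int Step(int prevA, int prevB, int a, int b)
    {
        int prev = (prevA << 1) | prevB;
        int curr = (a << 1) | b;
        return Table[(prev << 2) | curr];   // -1, 0 or +1 counts
    }
}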

I've also noticed that the servo which I'm using to pan the stereo cameras is underpowered, resulting in some laggy control. At the moment I'm just using bog-standard 3 kg·cm RC servos for pan and tilt, which you can get from any hobby store. I've ordered a couple of 13 kg·cm RC servos with the same physical dimensions and metal gears, which should give better controlled movement. Good control of the stereo cameras will be essential for 3D mapping performance.

If all goes well and there are no further holdups I'm hoping to be able to begin some dead reckoning tests in maybe a month.

Of course, all of this hoo-ha is symptomatic of the fact that at present there is no reasonably sized PC-based robot platform which you can buy on a hobbyist's budget as an "off the shelf" package.

21 Jan 2008 (updated 21 Jan 2008 at 23:12 UTC) »

I've added a second rearward-looking stereo camera to the robot. This isn't yet calibrated, but that's a fairly straightforward procedure which I always intended to do with the cameras in situ. I'll probably also need a separate calibration procedure to characterise the pan and tilt behaviour as accurately as possible. This will allow the grid maps to be updated properly, taking the head pose uncertainty into account.
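The reason head pose matters is that every stereo range measurement is made in the head's frame of reference, and has to be rotated into the robot body frame by the current pan and tilt angles before it can update the grid map, so any uncertainty in those angles propagates into the map. A rough sketch of that transform follows; the axis conventions here are assumptions for illustration, not necessarily those used in Sentience:

using System;

// Rotate a point measured in the stereo head frame into the robot body frame
// using the current pan and tilt angles (illustrative axis conventions).
static class HeadPose
{
    public static void HeadToBody(double panRadians, double tiltRadians,
                                  double hx, double hy, double hz,
                                  out double bx, out double by, out double bz)
    {
        // tilt: rotation about the head's sideways (y) axis
        double x1 = hx * Math.Cos(tiltRadians) + hz * Math.Sin(tiltRadians);
        double y1 = hy;
        double z1 = -hx * Math.Sin(tiltRadians) + hz * Math.Cos(tiltRadians);

        // pan: rotation about the vertical (z) axis
        bx = x1 * Math.Cos(panRadians) - y1 * Math.Sin(panRadians);
        by = x1 * Math.Sin(panRadians) + y1 * Math.Cos(panRadians);
        bz = z1;
    }
}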

http://farm3.static.flickr.com/2099/2205545549_e6bea41a36.jpg

One consideration when building the stereo head was whether or not the cameras should be rolled to a 45 degree angle. For simple types of stereo correspondence, which detect vertically oriented features along each row of the image, rolling the cameras can be a good strategy, since it allows a mix of both vertically and horizontally aligned features in the environment to be ranged. However, for the stereo correspondence algorithm which I'm using, rolling the cameras doesn't make all that much difference. This is mainly because I'm using simulated multi-scale centre/surround fields, which turn each image into a kind of contour map, and also explicitly taking vertical context into account. In this case vertical context extends significantly beyond the usual patch matching windows, so features which may appear identical when narrowly viewed along each image row can be disambiguated from their wider surrounding context. There are still classic dilemmas, such as the "picket fence" effect, but vertical context tends to rule out all but the most mesmerising situations.
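For illustration, a centre/surround response can be approximated as the difference between a small-scale and a large-scale blur of the image, computed at several scales; the result is the signed "contour map" which then gets matched between the left and right views. The sketch below uses naive box blurs purely for clarity, and the actual Sentience implementation differs in detail:

// Sketch of a centre/surround response: small-scale blur minus large-scale blur.
static class CentreSurround
{
    // Naive box blur, ignoring efficiency and edge handling subtleties
    static float[,] BoxBlur(float[,] img, int radius)
    {
        int w = img.GetLength(0), h = img.GetLength(1);
        float[,] result = new float[w, h];
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
            {
                float sum = 0;
                int n = 0;
                for (int dx = -radius; dx <= radius; dx++)
                    for (int dy = -radius; dy <= radius; dy++)
                    {
                        int xx = x + dx, yy = y + dy;
                        if (xx >= 0 && xx < w && yy >= 0 && yy < h)
                        {
                            sum += img[xx, yy];
                            n++;
                        }
                    }
                result[x, y] = sum / n;
            }
        return result;
    }

    // Response at one scale; in practice this is computed at several scales
    public static float[,] Response(float[,] grey, int centreRadius, int surroundRadius)
    {
        float[,] centre = BoxBlur(grey, centreRadius);
        float[,] surround = BoxBlur(grey, surroundRadius);
        int w = grey.GetLength(0), h = grey.GetLength(1);
        float[,] response = new float[w, h];
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                response[x, y] = centre[x, y] - surround[x, y];
        return response;
    }
}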

13 Jan 2008 (updated 13 Jan 2008 at 17:50 UTC) »

The new robot is now being constructed. I scavenged an ASC-16 servo controller from an earlier project and am using it to control the pan and tilt of the stereo cameras. Although it's quite old, the ASC-16 (sold by www.medonis.com) allows good control of speed and acceleration and has quite a few features. I also have five ultrasonic sensors networked and connected to an I2C-USB converter. These have been tested out and are working well.

The robot currently looks like this: http://farm3.static.flickr.com/2110/2169598662_0fce766497.jpg

Ultrasonics being tested: http://farm3.static.flickr.com/2182/2141668593_14624ef85e.jpg

The main purpose of this robot will be to test and develop the Sentience system (http://code.google.com/p/sentience/), and I'll also be using a bridgeware framework (http://code.google.com/p/robotbridgeware/) so that the various pieces of hardware are all treated as network devices by the higher level software. This idea was inspired by a talk last year by Matt Trossen, and in another recent talk Rod Brooks mentioned that all the devices on the PackBots are treated as network devices.
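As a toy illustration of the "every device is a network device" idea, a sensor can be wrapped in a small TCP service which the higher level software queries in the same way whether the hardware is local or remote. The port number and the one-line text protocol below are invented for this example, not robotbridgeware's actual conventions:

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

// A sensor exposed as a tiny TCP service (invented protocol, for illustration only)
class UltrasonicService
{
    static void Main()
    {
        TcpListener listener = new TcpListener(IPAddress.Any, 9500);
        listener.Start();
        while (true)
        {
            using (TcpClient client = listener.AcceptTcpClient())
            {
                double rangeMm = ReadUltrasonicRange();
                byte[] reply = Encoding.ASCII.GetBytes(rangeMm.ToString() + "\n");
                client.GetStream().Write(reply, 0, reply.Length);
            }
        }
    }

    // Placeholder for the real I2C/USB read
    static double ReadUltrasonicRange()
    {
        return 1234.0;
    }
}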

Added multi-threading to the stereo correspondence part of the software, so that when multiple stereo cameras are used together with a multi-core CPU the processing load is more evenly distributed. The processing time for dense stereo correspondence on a 2GHz processor is now about 50 milliseconds, since I've introduced a fast binarisation step and am making greater use of vertical context. The bulk of the time is now not actually in the matching but in the integral image calculation, which is very unlikely to be significantly improved upon.
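For reference, the integral image is the standard summed-area table: each entry holds the sum of all pixels above and to the left, so the sum over any rectangular patch afterwards costs only four lookups, which is what makes the patch comparisons cheap. A minimal version of the calculation, illustrative rather than the exact code in Sentience:

// Standard summed-area table with an extra row/column of zeros for convenience
static class SummedAreaTable
{
    public static long[,] Compute(byte[,] img)
    {
        int w = img.GetLength(0), h = img.GetLength(1);
        long[,] integral = new long[w + 1, h + 1];
        for (int x = 1; x <= w; x++)
            for (int y = 1; y <= h; y++)
                integral[x, y] = img[x - 1, y - 1]
                               + integral[x - 1, y]
                               + integral[x, y - 1]
                               - integral[x - 1, y - 1];
        return integral;
    }

    // Sum of pixels in the rectangle [x0,x1) x [y0,y1)
    public static long RectSum(long[,] integral, int x0, int y0, int x1, int y1)
    {
        return integral[x1, y1] - integral[x0, y1] - integral[x1, y0] + integral[x0, y0];
    }
}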

I've got some electronics on order, and mobile robot testing will hopefully commence in January, after a lot of delay and dithering (all of which is hardware-related - the software is pretty much ready to run). Once I have something proven to work I'll release an initial alpha version of the software and a detailed description of how to independently reproduce the result (at last, robotics may yet become a proper science!).

Have been experimenting with simple mirrored stereo vision, as described on the Mirage Robotics site. It's fairly trivial to adjust the stereo sensor models to handle this type of setup. The good news is that the effect certainly works, and it would simplify the camera calibration procedure. However, it does look as if you need quite a large mirror to get a good stereo effect, which limits the practicality of the design. That would be fine if you needed stereo vision on a large robot, but I think it wouldn't be suitable for most smaller robots.

So for the present it still looks like conventional twin camera stereo is the preferable option.

Found an interesting looking web site selling robot bases, which might be ideal for further development of the stereo vision system.

https://www.zagrosrobotics.com/

I could buy a cheap laptop with a dual core CPU which would probably provide more than enough computing power to do the job, and the whole robot could be constructed reasonably cheaply.

I've added a program to visualise the depth maps resulting from stereo correspondence. This is mainly so that I can tweak the algorithm and view the results. It has user interfaces for VS2005 and also for MonoDevelop.

http://sentience.googlecode.com/svn/trunk/applications/stereocorrespondence/

Also I've added a roadmap with a list of things remaining to be done. Mostly it's just down to integration testing.

http://code.google.com/p/sentience/wiki/ProjectRoadmap

I could use the humanoids for some testing, but the way that their vision is physically constructed isn't ideal and could introduce significant errors. Over the years I've gone from trying to implement biological-type stereo vision with eyes which can verge to a more engineering approach where the cameras are rigidly fixed in place and calibrated.

A bit more weekend roboting. I've now made the SLAM mapping multi-threaded so that the system will scale nicely over multiple CPU cores. Testing on a dual core system shows that I'm getting a good distribution of the processing load. This means that the number of particles used to model the robot's pose uncertainty at any point in time is scalable, so as more cores become available the mapping will become more robust and accurate.
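The threading itself is straightforward because each pose particle can be updated independently: the particle set is simply split into slices, one per worker thread. A rough sketch of the idea, with a stand-in motion model (the real update also weights each particle against the occupancy grid):

using System;
using System.Threading;

// Split the pose particles across worker threads, one slice per thread,
// so the particle count can scale with the number of cores.
class ParticleUpdate
{
    public struct Particle { public double X, Y, Theta; }

    static readonly Random rnd = new Random();

    public static void UpdateAll(Particle[] particles, int threadCount,
                                 double forwardMm, double turnRadians)
    {
        Thread[] workers = new Thread[threadCount];
        int slice = particles.Length / threadCount;
        for (int t = 0; t < threadCount; t++)
        {
            int start = t * slice;
            int end = (t == threadCount - 1) ? particles.Length : start + slice;
            workers[t] = new Thread(() =>
            {
                for (int i = start; i < end; i++)
                {
                    // add a little noise so the particle cloud represents pose uncertainty
                    particles[i].Theta += turnRadians + Noise(0.01);
                    double d = forwardMm + Noise(2.0);
                    particles[i].X += d * Math.Cos(particles[i].Theta);
                    particles[i].Y += d * Math.Sin(particles[i].Theta);
                }
            });
            workers[t].Start();
        }
        foreach (Thread w in workers) w.Join();
    }

    static double Noise(double scale)
    {
        lock (rnd) return (rnd.NextDouble() - 0.5) * 2.0 * scale;
    }
}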

Have done a little tidying up on the Sentience code and made a few speed improvements. My financial situation is now recovering after a couple of years of economic delinquency, and it looks like I'll soon be in a position to perhaps order an off-the-shelf robot such as the Corobot or 914 and begin integration testing of the stereo vision and mapping software.

The main aim here is to develop visual perception software which will enable useful and economical robotics in a home or office environment, using cameras rather than laser scanners.

