Older blog entries for motters (starting at number 83)

13 Jun 2011 (updated 13 Jun 2011 at 11:59 UTC) »

After a day of hacking, bashing and drilling I've slimmed down the GROK2 robot, reducing its width by 40mm on either side. This robot has an AL-101 chassis (Zagros Robotics) and fortunately it's made from 3mm aluminium, which is just about sawable with some exertion.

This should give the robot more clearance when passing through doorways. It's still wide enough for a netbook, mini-itx or even a full sized motherboard but it's no longer wide enough to carry the laptop - at least not in the usual orientation.

It has always been part of the plan to eventually have a permanent onboard PC, and at present it looks as if netbooks are just not quite up to the job unless they're the latest and most powerful devices (which are expensive). So I might have a go at installing a mini-itx board, which is much cheaper than a high end netbook. I could then use a laptop or netbook to ssh into the robot. I have a couple of spare SATA hard drives which could be used, and also a couple of USB wireless adaptors. Another advantage of the mini-itx boards is that they can be run off a 12 volt supply, which avoids the wasteful DC->AC->DC conversion.

All in all the future for robotics is looking very good, particularly for low hanging fruit applications such as fetch and carry, or just hauling stuff around. I think it would be quite feasible to build a prototype shop/supermarket shelf stacker robot, and also to add an autopilot feature to mobility scooters or wheelchairs.

9 Jun 2011 (updated 9 Jun 2011 at 23:07 UTC) »

It has been a while since my last blog entry here. As far as ambient events are concerned I continue to be an unemployed software engineer, with the prospects of re-employment looking increasingly remote, but in terms of robotics projects things are going very well indeed. In the last six months using ROS and the Kinect sensor I've made more progress than I'd made over the previous five years of SLAM and stereo vision development.

The GROK2 robot is now navigating well from one room to another. Tuning the localisation parameters took a while, but now the movement looks quite smooth and decisive. I've been able to have the robot navigate reliably to various locations in the kitchen, such as sink, kettle and table. It doesn't have any arms presently, but if I can get some object recognition going then adding an arm would be the next logical step. It's easy to become complacent, but the current level of navigation performance was, until only a few months ago, merely a vague ambition somewhere in the future.
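For anyone wondering what "navigate to the sink" amounts to in ROS terms, it's just a goal sent to the navigation stack's move_base interface. Something along these lines, where the coordinates are placeholder values rather than my real kitchen map:

#include <ros/ros.h>
#include <move_base_msgs/MoveBaseAction.h>
#include <actionlib/client/simple_action_client.h>

typedef actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> MoveBaseClient;

int main(int argc, char** argv)
{
  ros::init(argc, argv, "kitchen_goals");

  // Wait for the navigation stack's action server to come up
  MoveBaseClient ac("move_base", true);
  ac.waitForServer();

  // Placeholder map coordinates for the sink, recorded beforehand
  move_base_msgs::MoveBaseGoal goal;
  goal.target_pose.header.frame_id = "map";
  goal.target_pose.header.stamp = ros::Time::now();
  goal.target_pose.pose.position.x = 2.5;
  goal.target_pose.pose.position.y = -1.0;
  goal.target_pose.pose.orientation.w = 1.0;

  ac.sendGoal(goal);
  ac.waitForResult();

  if (ac.getState() == actionlib::SimpleClientGoalState::SUCCEEDED)
    ROS_INFO("Arrived at the sink");
  else
    ROS_WARN("Failed to reach the sink");

  return 0;
}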

One problem is that it looks as if the robot in its current form is just too wide to get through one particular doorway. This might mean that I need to do some mechanical hacking to thin it down a little and provide more clearance. The small amount of clearance currently available is just too narrow to realistically expect the localisation to be able to handle it reliably. As part of the redesign I may also add a dedicated onboard PC, rather than using a laptop.

With this sort of system, a PC of some description plus an RGBD sensor, the prospects for robotics over the next decade look far better than at any previous time. 2011 is probably going to be a watershed year, in which both the software and the sensor technology become good enough for break-even navigation at a reasonable cost.

I've now added a Kinect sensor to the GROK2 robot, which is described here:

http://streebgreebling.blogspot.com/2011/01/grok2-kinect.html

I think that 2011 could be quite an exciting year for robotics, with some real progress being made on age-old problems.

I haven't done very much by way of a write up on the GROK2 robot so far, so here is some explanation of the story to date.

http://sluggish.homelinux.net/wiki/GROK2

The robot is still quite static, although it can be driven by joystick. The next stage is to create a URDF model and try out some of the ROS mapping/localisation to see whether it's suitable for use with stereo vision. From the navigation stack's point of view it shouldn't care what sensors are being used, since all it will be seeing is point cloud data.
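To illustrate that point, a stereo node only needs to publish something like the following (a bare-bones sketch, with the topic name and the single dummy point standing in for real disparity results):

#include <ros/ros.h>
#include <sensor_msgs/PointCloud.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "stereo_cloud_publisher");
  ros::NodeHandle n;
  ros::Publisher pub = n.advertise<sensor_msgs::PointCloud>("stereo_points", 1);

  ros::Rate rate(10);
  while (ros::ok())
  {
    sensor_msgs::PointCloud cloud;
    cloud.header.stamp = ros::Time::now();
    cloud.header.frame_id = "base_link";

    // A single dummy point; in practice the points come from
    // the stereo correspondence results
    geometry_msgs::Point32 p;
    p.x = 1.0f; p.y = 0.0f; p.z = 0.5f;
    cloud.points.push_back(p);

    pub.publish(cloud);
    rate.sleep();
  }
  return 0;
}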

Another point cloud model, with better registration than some of the previous ones.

http://streebgreebling.blogspot.com/2010/12/point-cloud-model-of-author.html

I may revert to the initial design of the GROK2 head, with forward and reverse facing stereo cameras. That way I can grab twice the amount of range data in a similar amount of time.

6 Dec 2010 (updated 6 Dec 2010 at 23:27 UTC) »

The first dense composite point cloud model has been generated from the GROK2 robot. Whilst the depth resolution might not be as good as a Kinect, and the registration of glimpses is not perfect, I think this proves - at least to my own satisfaction, if nobody else's - that stereo vision can be used as a practical depth sensing method.

http://sluggish.homelinux.net/wiki/3D_Models

There's still a fair amount of work to be done to improve on these results, but recognition of sizable objects such as chairs or desk surfaces is certainly looking achievable. An obvious quick heuristic would simply be to run an elevation histogram and search for peaks, which could indicate horizontally oriented surfaces.
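As a sketch of that heuristic (the Point type here is just a stand-in for whatever the real point cloud structure is):

#include <cstdio>
#include <vector>

// Stand-in point type: x,y,z in metres, z being elevation
struct Point { float x, y, z; };

// Histogram the elevations and report bins which peak well above
// their neighbours, suggesting horizontally oriented surfaces
void elevation_peaks(const std::vector<Point>& points,
                     float min_z, float max_z, int bins)
{
  std::vector<int> hist(bins, 0);
  for (size_t i = 0; i < points.size(); i++) {
    int b = (int)((points[i].z - min_z) / (max_z - min_z) * bins);
    if ((b >= 0) && (b < bins)) hist[b]++;
  }
  int average = (int)points.size() / bins;
  for (int b = 1; b < bins - 1; b++) {
    if ((hist[b] > hist[b-1]) && (hist[b] > hist[b+1]) &&
        (hist[b] > average * 2)) {
      float z = min_z + (b + 0.5f) * (max_z - min_z) / bins;
      printf("Possible horizontal surface at %.2f metres\n", z);
    }
  }
}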

No doubt the points could also be condensed into voxels to increase the efficiency of subsequent higher level processing.
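A crude way of doing that, using the same stand-in Point type and no extra libraries, is to quantise the coordinates and keep one representative point per occupied cell:

#include <cmath>
#include <map>
#include <vector>

struct Point { float x, y, z; };

// Collapse the cloud into occupied voxels of the given size,
// keeping one representative point per cell
std::vector<Point> voxelise(const std::vector<Point>& points, float voxel_size)
{
  std::map<long long, Point> voxels;
  for (size_t i = 0; i < points.size(); i++) {
    long long ix = (long long)floorf(points[i].x / voxel_size);
    long long iy = (long long)floorf(points[i].y / voxel_size);
    long long iz = (long long)floorf(points[i].z / voxel_size);
    // Pack the three indices into a single key
    // (assumes each index fits in 20 bits)
    long long key = (ix & 0xFFFFF) | ((iy & 0xFFFFF) << 20) | ((iz & 0xFFFFF) << 40);
    voxels[key] = points[i];
  }
  std::vector<Point> out;
  std::map<long long, Point>::const_iterator it;
  for (it = voxels.begin(); it != voxels.end(); ++it)
    out.push_back(it->second);
  return out;
}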

Whilst there is a big song and dance about the Kinect, progress on Sentience continues. Since they're not prohibitively expensive and I have plenty of time on my hands I'll try to acquire a Kinect and evaluate how suitable it is for robotics uses. Willow Garage already seem to be doing something Kinect-related.

A new dense stereo algorithm called ELAS, developed by Andreas Geiger, has been added to the v4l2stereo utility. This works well and at a reasonable frame rate on the Minoru. It's probably the best dense stereo method that I've tried to date.

http://code.google.com/p/sentience/wiki/MinoruWebcam
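For anyone wanting to call ELAS directly rather than via v4l2stereo, the libelas interface is pleasantly small. Roughly as follows, going from my reading of the library's header, so check against the real thing:

#include <stdint.h>
#include "elas.h"

// left and right must be 8 bit greyscale images of equal size
void dense_stereo(uint8_t* left, uint8_t* right,
                  int32_t width, int32_t height,
                  float* disp_left, float* disp_right)
{
  Elas::parameters param(Elas::ROBOTICS);  // preset tuned for robot imagery
  Elas elas(param);

  // dims = width, height, bytes per line
  int32_t dims[3] = { width, height, width };
  elas.process(left, right, disp_left, disp_right, dims);
}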

It may turn out that the structured light method which the Kinect uses isn't very useful outdoors, or suffers from interference when multiple units are used in close proximity, so there may still be a place for stereo vision as a depth sensing method.

8 Apr 2010 (updated 8 Apr 2010 at 20:56 UTC) »

Whilst testing out omnidirectional stereo vision I thought it would be a good idea to try to apply a dense stereo method to images like the ones used to produce this anaglyph.

http://www.youtube.com/watch?v=3L-gJhATQOg

Here the disparity is vertical, rather than horizontal as is usually the case for stereo cameras.

However, it became apparent that I don't yet have a dense stereo algorithm for v4l2stereo, so I decided to take some time out to develop one, with the hope being that whatever is developed in a conventional stereo vision setup can be similarly applied to the omnidirectional case.

The stereo correspondence method which I've used for dense stereo is a fairly conventional one, and I've made extensive use of OpenMP to make it as multi-core scalable as possible. It uses the simple "patch matching" approach which is commonly described in the literature, and works reasonably well on the Minoru provided that some initial correction is done to make the colour mean and variance in the left and right images as similar as possible, so that comparing pixels becomes a less haphazard affair.
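The correction step is nothing fancy, just remapping one image so that its per-channel statistics match the other. A minimal sketch, with the images assumed to be interleaved RGB byte arrays:

#include <cmath>
#include <stdint.h>
#include <vector>

// Remap the right image so that its per-channel mean and variance
// match the left image, making pixel comparisons more meaningful
void match_mean_variance(const std::vector<uint8_t>& left,
                         std::vector<uint8_t>& right,
                         int channels)
{
  size_t n = left.size() / channels;
  for (int c = 0; c < channels; c++) {
    double sum_l = 0, sq_l = 0, sum_r = 0, sq_r = 0;
    for (size_t i = c; i < left.size(); i += channels) {
      sum_l += left[i];  sq_l += (double)left[i] * left[i];
      sum_r += right[i]; sq_r += (double)right[i] * right[i];
    }
    double mean_l = sum_l / n, mean_r = sum_r / n;
    double sd_l = sqrt(sq_l / n - mean_l * mean_l);
    double sd_r = sqrt(sq_r / n - mean_r * mean_r);
    double gain = (sd_r > 0) ? sd_l / sd_r : 1.0;
    for (size_t i = c; i < right.size(); i += channels) {
      double v = (right[i] - mean_r) * gain + mean_l;
      right[i] = (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
    }
  }
}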

An example of the end result, a "big blob detection" reminiscent of what I had running on the Rodney humanoid over five years ago, appears in the following video.

http://www.youtube.com/watch?v=ZKnWJTOzyk4

The depth resolution isn't fantastic, but it's functional and may be of use for obstacle detection or just detecting people nearby.

Also I wanted to experiment with integrating this code with the Willow Garage ROS system. This would potentially enable the very expensive stereo cameras traditionally used in academic research to be replaced by something like a Minoru webcam, or a pair of webcams, which would be affordable to the hobbyist. The current source code release for v4l2stereo includes an example ROS publisher and subscriber, which should make integration with ROS based robots a fairly straightforward process.

http://code.google.com/p/sentience/wiki/MinoruWebcam

http://code.google.com/p/libv4l2cam/
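On the robot side a subscriber doesn't amount to much more than this (the topic name here is just a placeholder, not necessarily what the v4l2stereo examples use):

#include <ros/ros.h>
#include <sensor_msgs/PointCloud.h>

// Called whenever the stereo node publishes a new set of points
void cloudCallback(const sensor_msgs::PointCloud::ConstPtr& cloud)
{
  ROS_INFO("Received %d stereo points", (int)cloud->points.size());
  // Obstacle detection or mapping would go here
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "stereo_listener");
  ros::NodeHandle n;
  ros::Subscriber sub = n.subscribe("stereo_points", 1, cloudCallback);
  ros::spin();
  return 0;
}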

The current plan is to attempt to construct a 2D map based upon the features from omnidirectional stereo vision. I can locate edge features close to the ground plane quite well, but the trouble with edges is that they're not very unique. I could use the edge data, projected into cartesian coordinates, to begin building a local map, but after a short time the map would begin to degenerate.

So what's needed are more distinctive features rather than edges. These could be tracked between frames (data association), and I could then use an off-the-shelf graph based SLAM algorithm, such as TORO, to build a map. At first I thought of using SIFT, which would be the obvious choice if I were an academic researcher, but there are software patent issues associated with that method which I'd rather not have to deal with. FAST corners would be nice, but the relatively low resolution caused by the mirror distortion means that this algorithm doesn't work well. However, I can use the Harris corner features from "good features to track", which is already built into OpenCV. Having been an OpenCV refusenik for quite a number of years I'm now slowly growing to like it. Harris corners seem to work quite reliably, despite the low resolution.
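Using it is a one-liner, give or take some tuning parameters (the values below are generic starting points, not necessarily what I'll settle on):

#include <opencv2/opencv.hpp>
#include <vector>

// Find corners suitable for frame to frame tracking
std::vector<cv::Point2f> find_corners(const cv::Mat& grey)
{
  std::vector<cv::Point2f> corners;
  cv::goodFeaturesToTrack(grey, corners,
                          200,        // maximum number of corners
                          0.01,       // quality relative to the best corner
                          10,         // minimum distance between corners, pixels
                          cv::Mat(),  // no mask
                          3,          // block size
                          true,       // use the Harris measure
                          0.04);      // Harris free parameter
  return corners;
}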

Detecting the ground using an omnidirectional stereo vision system.

http://www.youtube.com/watch?v=JhNotSaBmnA

The green features in the centre mirror have been identified as being close to the ground plane.

Edge features close to the ground plane are detected by projecting all features from the centre mirror to the ground plane (the height of the camera is known), then reprojecting the ground features back into the image plane of the four peripheral mirrors. The reverse operation is then applied, and features within the centre mirror are compared. Features with small reprojection error must belong somewhere close to the ground plane.

This provides a convenient and general way of locating the ground, which does not depend upon unreliable texture, colour histogram or image segmentation methods.
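The projection step itself is just a ray/plane intersection. In robot coordinates, with the mirror calibration supplying the ray directions, it boils down to something like this (reprojecting back into the peripheral mirrors is then the inverse operation, from ground point to pixel):

// Intersect a viewing ray with the ground plane, given the height of
// the camera above the floor. The ray is a unit vector in robot
// coordinates, obtained from the mirror/camera calibration.
bool project_to_ground(float camera_height,
                       float ray_x, float ray_y, float ray_z,
                       float& ground_x, float& ground_y)
{
  if (ray_z >= 0) return false;  // a ray pointing upwards never hits the floor
  float t = camera_height / -ray_z;
  ground_x = t * ray_x;
  ground_y = t * ray_y;
  return true;
}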

