Recent blog entries for motters

Fitted a D525MW mini-ITX motherboard to the robot, and installed Linux Mint 11 and ROS onto a 16GB USB flash drive.

https://sluggish.homelinux.net/wiki/File:Mini-itx1.jpg

This makes a good minimalist onboard computer, and was considerably cheaper than buying an equivalent netbook. To set everything up I connected a keyboard, mouse and monitor as usual, but once the motherboard was installed on the robot it only requires the wifi adaptor and USB drive to be connected. I deliberately avoided using a hard drive (although I have a couple of old ones available), based upon bad experiences with mobile robots and hard drives in the past. I also reused some old PC speakers which haven't seen the light of day for probably more than a decade. You never know when such things may come in useful.

https://sluggish.homelinux.net/images/f/f2/Grok2_electrical2.jpg

One trick with running the OS from a flash disk is to delete the existing casper-rw file, then create a partition labelled casper-rw. This enables the full USB drive to be used, rather than being limited to 4GB of persistent storage.
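For anyone wanting to repeat this, the procedure is roughly as follows. This is only a sketch: the device names (/dev/sdb and its partition numbers) are examples and will differ from machine to machine, so check carefully before formatting anything.

    # mount the live USB stick and delete the persistence file
    sudo mount /dev/sdb1 /mnt
    sudo rm /mnt/casper-rw
    sudo umount /mnt

    # create a new partition in the remaining free space (fdisk or gparted),
    # then format it as ext3 with the volume label casper-rw
    sudo mke2fs -t ext3 -L casper-rw /dev/sdb2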

The user interface of the robot currently consists of buttons and audio. When you press buttons the robot says something appropriate, so it's not so much a graphical user interface as an audio user interface. For the sorts of tasks I envisage the robot doing this is quite adequate, although if more elaborate instructions were needed I could add a small screen of some sort (finances permitting).
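As a rough illustration of how simple this kind of interface can be, something along the following lines would do it. This is only a sketch, not the robot's actual code: the button numbering and responses are made up, the function which reads the buttons is left out, and it assumes a speech synthesiser such as espeak is installed.

    import subprocess

    # map each button to a spoken response
    responses = {
        1: "Going to the kitchen",
        2: "Returning to the charging point",
        3: "Stopping",
    }

    def say(text):
        # speak the given text aloud using the espeak synthesiser
        subprocess.call(["espeak", text])

    def on_button_pressed(button_id):
        # announce the action associated with the pressed button
        if button_id in responses:
            say(responses[button_id])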

With the robot running, I can then use either VNC or ssh to debug code or run different programs.

There's some tidying up remaining to be done on the head of the robot, since the Kinect sensor's circuit boards are exposed and vulnerable to collisions. I'll devise some sort of covering to go over them.

An initial localisation test run earlier today indicates that everything is working as expected, and that the new computer can handle the processing demand. Even for a relatively simple differential drive robot like this there are a considerable number of electrical connections, and there's always some degree of trepidation over whether I've connected them all back in the right order. Labelling everything helps a lot.

13 Jun 2011 (updated 13 Jun 2011 at 11:59 UTC)

After a day of hacking, bashing and drilling I've slimmed down the GROK2 robot, reducing its width by 40mm on either side. This robot has an AL-101 chassis (Zagros Robotics), and fortunately it's made from 3mm aluminium, which is just about sawable with some exertion.

This should give the robot more clearance when passing through doorways. It's still wide enough for a netbook, mini-ITX or even a full-sized motherboard, but it's no longer wide enough to carry the laptop - at least not in the usual orientation.

It has always been the plan to eventually have some permanent onboard PC, and at present it looks as if netbooks are just not quite up to the job unless they're the latest and most powerful devices (which are expensive). So I might have a go at installing a mini-ITX board, which is much cheaper than a high end netbook. I could then use a laptop or netbook to ssh into the robot. I have a couple of spare SATA hard drives which could be used, and also a couple of USB wireless adaptors. Another advantage of mini-ITX boards is that they can be run from a 12 volt supply, which avoids the wasteful DC->AC->DC conversion.

All in all the future for robotics is looking very good, particularly for low hanging fruit applications such as fetch and carry, or just hauling stuff around. I think it would be quite feasible to build a prototype shop/supermarket shelf stacker robot, and also to add an autopilot feature to mobility scooters or wheelchairs.

9 Jun 2011 (updated 9 Jun 2011 at 23:07 UTC)

It has been a while since my last blog entry here. As far as ambient events are concerned I continue to be an unemployed software engineer, with the prospects of re-employment looking increasingly remote, but in terms of robotics projects things are going very well indeed. In the last six months using ROS and the Kinect sensor I've made more progress than I'd made over the previous five years of SLAM and stereo vision development.

The GROK2 robot is now navigating well from one room to another. Tuning the localisation parameters took a while, but now the movement looks quite smooth and decisive. I've been able to have the robot navigate reliably to various locations in the kitchen, such as the sink, kettle and table. It doesn't have any arms presently, but if I can get some object recognition going then adding an arm would be the next logical step. It's easy to become complacent, but the current level of navigation performance was, until only a few months ago, merely a vague ambition somewhere in the future.

One problem is that it looks as if the robot in its current form is just too wide to get through one particular doorway. This might mean that I need to do some mechanical hacking to thin it down a little and provide more clearance. The small amount of clearance currently available is just too narrow to realistically expect the localisation to be able to handle it reliably. As part of the redesign I may also add a dedicated onboard PC, rather than using a laptop.

With this sort of system - a PC of some description plus an RGBD sensor - the prospects for robotics over the next decade look far better than at any previous time. 2011 is probably going to be a watershed year, in which both the software and the sensor technology become good enough for break-even navigation at a reasonable cost.

I've now added a Kinect sensor to the GROK2 robot, which is described here:

http://streebgreebling.blogspot.com/2011/01/grok2-kinect.html

I think that 2011 could be quite an exciting year for robotics, with some real progress being made on age-old problems.

I haven't done very much by way of a write-up on the GROK2 robot so far, so here is some explanation of the story to date.

http://sluggish.homelinux.net/wiki/GROK2

The robot is still quite static, although it can be driven by joystick. The next stage is to create a URDF model and try out some of the ROS mapping/localisation to see whether it's suitable for use with stereo vision. From the navigation stack's point of view it shouldn't care what sensors are being used, since all it will be seeing is point cloud data.
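To illustrate that point, feeding the navigation stack from stereo vision is conceptually just a matter of publishing point cloud messages. The following is a minimal rospy sketch, not the robot's actual code: the topic name, frame id and the get_stereo_points() function are placeholders.

    import rospy
    from sensor_msgs.msg import PointCloud
    from geometry_msgs.msg import Point32

    rospy.init_node('stereo_cloud_publisher')
    pub = rospy.Publisher('stereo_points', PointCloud)
    rate = rospy.Rate(10)

    while not rospy.is_shutdown():
        msg = PointCloud()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = 'base_link'
        # get_stereo_points() stands in for the stereo correspondence
        # step, returning a list of (x, y, z) tuples in metres
        msg.points = [Point32(x, y, z) for (x, y, z) in get_stereo_points()]
        pub.publish(msg)
        rate.sleep()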

Another point cloud model, with better registration than some of the previous ones.

http://streebgreebling.blogspot.com/2010/12/point-cloud-model-of-author.html

I may revert to the initial design of the GROK2 head, with forward and reverse facing stereo cameras. That way I can grab twice the amount of range data in a similar amount of time.

6 Dec 2010 (updated 6 Dec 2010 at 23:27 UTC)

The first dense composite point cloud model has been generated from the GROK2 robot. Whilst the depth resolution might not be as good as a Kinect, and the registration of glimpses is not perfect, I think this proves - at least to my own satisfaction, if nobody else's - that stereo vision can be used as a practical depth sensing method.

http://sluggish.homelinux.net/wiki/3D_Models

There's still a fair amount of work to be done to improve on these results, but it's certainly looking feasible that recognition of sizable objects such as chairs or desk surfaces may be achievable. An obvious quick heuristic would simply be to run an elevation histogram and search for peaks which could indicate horizontally oriented surfaces.
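As a sketch of that heuristic (the bin size and peak threshold here are arbitrary starting values, not tuned figures):

    import numpy as np

    def horizontal_surface_heights(points, bin_size=0.02):
        # points is an Nx3 array of (x, y, z) positions in metres,
        # with z as elevation above the ground plane
        z = points[:, 2]
        bins = np.arange(z.min(), z.max() + bin_size, bin_size)
        hist, edges = np.histogram(z, bins=bins)
        # a local maximum containing a reasonable fraction of all points
        # suggests a horizontal surface (floor, desk top, chair seat)
        return [edges[i] for i in range(1, len(hist) - 1)
                if hist[i] > hist[i - 1] and hist[i] > hist[i + 1]
                and hist[i] > 0.05 * len(z)]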

No doubt the points could also be condensed into voxels to increase the efficiency of subsequent higher level processing.
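The voxel condensation is also straightforward. A minimal sketch (the 5cm voxel size is an arbitrary choice):

    import numpy as np

    def voxelise(points, voxel_size=0.05):
        # quantise each point to the voxel containing it and keep only
        # the first point seen per voxel, discarding the rest
        seen = {}
        for p in points:
            key = tuple(np.floor(p / voxel_size).astype(int))
            if key not in seen:
                seen[key] = p
        return np.array(list(seen.values()))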

Whilst there is a big song and dance about the Kinect, progress on Sentience continues. Since they're not prohibitively expensive and I have plenty of time on my hands, I'll try to acquire a Kinect and evaluate how suitable it is for robotics use. Willow Garage already seem to be doing something Kinect-related.

A new dense stereo algorithm called ELAS, developed by Andreas Geiger, has been added to the v4l2stereo utility. This works well and at a reasonable frame rate on the Minoru. It's probably the best dense stereo method that I've tried to date.

http://code.google.com/p/sentience/wiki/MinoruWebcam

It may turn out that the structured light method which the Kinect uses isn't very useful outdoors, or suffers from interference when multiple units are used in close proximity, so there may still be a place for stereo vision as a depth sensing method.

8 Apr 2010 (updated 8 Apr 2010 at 20:56 UTC)

Whilst testing out omnidirectional stereo vision I thought it would be a good idea to try to apply a dense stereo method to images like the ones used to produce this anaglyph.

http://www.youtube.com/watch?v=3L-gJhATQOg

Here the disparity is vertical, rather than horizontal as is usually the case for stereo cameras.

However, it became apparent that I didn't yet have a dense stereo algorithm for v4l2stereo, so I decided to take some time out to develop one, in the hope that whatever is developed for a conventional stereo vision setup can be similarly applied to the omnidirectional case.

The stereo correspondence method which I've used for dense stereo is a fairly conventional one, and I've made extensive use of OpenMP to make it as multi-core scalable as possible. It uses the simple "patch matching" approach commonly described in the literature, but works reasonably well on the Minoru provided that some initial correction is done to make the colour mean and variance in the left and right images as similar as possible, so that comparing pixels becomes a less haphazard affair.
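In outline, the correction and matching steps look something like the following. This is a simplified greyscale Python sketch of the idea, not the actual implementation (which works on colour and is OpenMP-parallelised); the patch size and disparity range are arbitrary.

    import numpy as np

    def match_statistics(left, right):
        # shift and scale the right image (as float arrays) so that its
        # mean and variance match those of the left image, making patch
        # comparison less haphazard
        return (right - right.mean()) * (left.std() / right.std()) + left.mean()

    def disparity_at(left, right, x, y, patch=4, max_disp=40):
        # find the horizontal offset whose patch in the right image has
        # the smallest sum of absolute differences to the left patch
        template = left[y - patch:y + patch, x - patch:x + patch]
        best_d, best_sad = 0, float('inf')
        for d in range(min(max_disp, x - patch)):
            candidate = right[y - patch:y + patch, x - d - patch:x - d + patch]
            sad = np.abs(template - candidate).sum()
            if sad < best_sad:
                best_sad, best_d = sad, d
        return best_d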

An example of the end result, with "big blob detection" reminiscent of what I had running on the Rodney humanoid over five years ago, appears in the following video.

http://www.youtube.com/watch?v=ZKnWJTOzyk4

The depth resolution isn't fantastic, but it's functional and may be of use for obstacle detection or just detecting people nearby.

I also wanted to experiment with integrating this code with the Willow Garage ROS system. This would potentially enable the very expensive stereo cameras traditionally used in academic research to be replaced by something like a Minoru webcam, or a pair of webcams, which would be affordable to the hobbyist. The current source code release for v4l2stereo includes an example ROS publisher and subscriber, which should make integration with ROS based robots a fairly straightforward process.

http://code.google.com/p/sentience/wiki/MinoruWebcam

http://code.google.com/p/libv4l2cam/
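On the consuming side, a robot subscribes to the published images. The following is a minimal rospy sketch of a subscriber, not the example actually shipped with v4l2stereo, and the topic name is a placeholder.

    import rospy
    from sensor_msgs.msg import Image

    def on_disparity(msg):
        # msg.data holds the raw disparity image bytes,
        # with dimensions msg.width x msg.height
        rospy.loginfo("received %dx%d disparity image", msg.width, msg.height)

    rospy.init_node('disparity_listener')
    rospy.Subscriber('stereo/disparity', Image, on_disparity)
    rospy.spin()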

The current plan is to attempt to construct a 2D map based upon the features from omnidirectional stereo vision. I can locate edge features close to the ground plane quite well, but the trouble with edges is that they're not very distinctive. I could use the edge data, projected into cartesian coordinates, to begin building a local map, but after a short time the map would begin to degenerate.
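Projecting an edge feature into cartesian coordinates given the robot's pose is simple enough. A sketch (the argument conventions here are illustrative, not the actual code):

    import math

    def edge_to_cartesian(bearing, range_m, robot_x, robot_y, robot_theta):
        # project an edge feature, observed at a given bearing (radians)
        # and range (metres) relative to the robot, into map coordinates
        angle = robot_theta + bearing
        return (robot_x + range_m * math.cos(angle),
                robot_y + range_m * math.sin(angle))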

So what's needed are more distinctive features rather than edges. These could be tracked between frames (data association), and I could then use an off-the-shelf graph based SLAM algorithm, such as TORO, to build a map. At first I thought of using SIFT, which would be the obvious choice if I were an academic researcher, but there are software patent issues associated with that method which I'd rather not have to deal with. FAST corners would be nice, but the relatively low resolution caused by the mirror distortion means that this algorithm doesn't work well. However, I can use the Harris corner features from "good features to track", which is already built into OpenCV. Having been an OpenCV refusenik for quite a number of years I'm now slowly growing to like it. Harris corners seem to work quite reliably, despite the low resolution.
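Getting these corners out of OpenCV only takes a couple of lines. A Python sketch (the quality and distance thresholds are arbitrary starting points, and the frame filename is a placeholder):

    import cv2

    # load a frame from the omnidirectional camera as greyscale
    gray = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)

    # Harris corners via OpenCV's "good features to track"
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=8,
                                      useHarrisDetector=True)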

