Sensors

Are Sensors the key to Grand Challenge?

Posted 11 May 2004 at 16:06 UTC by steve

David Duke of RobotCorps has written a new Robotics Trends article proposing that the DARPA Grand Challenge is less about robots than about sensor technology. David offers an overview of the types of sensors used by robots entered in this year's DARPA contest and suggests sensor technology we may see in the next contest, such as SEEGRID's vision technology.

I agree, and I don't..., posted 11 May 2004 at 17:08 UTC by tim.holt » (Journeyer)

I was talking with someone about the Grand Challenge results after watching the videos of the racers starting (or trying to). The comment I remember was, "You know, a 4-year-old could figure out where to go in some of these cases," and I thought that was pretty telling. A four-year-old wouldn't walk into the ditch, or into a barbed wire fence, etc. And her senses aren't THAT good, in that she's not thinking, "Undetermined obstacle at bearing 035, range of 15.2 meters." More like "Something over there." If a toddler on a trike (one with a lot of endurance) could do better than some of those vehicles, I question whether more sensing really is better.

I also think the vehicles tried to go too fast. I'd have been much more impressed with a vehicle that went 20 miles at 5 MPH than with one that went 7 miles at 35 MPH. Why didn't anyone go for the "slow but sure" approach? The problem with going fast is that by the time you see a rock or a barbed wire fence or a ditch, it's too late, as it were.

And on a lighter note, as far as video recognition of roads goes, I couldn't help thinking of the old Road Runner cartoons. I could imagine someone painting a fake road that led to a cliff. Or just make a fake bridge out of cardboard. Once they get up and working well, playing around with how to fake these things out could be a lot of fun!

One more analogy..., posted 11 May 2004 at 17:11 UTC by tim.holt » (Journeyer)

A blind person could probably have dealt with a few of the challenges better than some of the vehicles did...

Interesting, posted 11 May 2004 at 17:12 UTC by ROB.T. » (Master)

So is that a 4 year old going 50 MPH?

re: Interesting, posted 11 May 2004 at 17:33 UTC by tim.holt » (Journeyer)

So is that a 4 year old going 50 MPH?

Well that's the thing - if you go 50, you're going to have problems. If you go 5 you won't have as many.

intelligence+sensors vs better sensors, posted 11 May 2004 at 18:53 UTC by steve » (Master)

I agree with Tim. The 4-year-old has much more intelligence than any of the Grand Challenge robots. I don't really see a need for more sensors so much as more intelligent use of the sensor data they've already got. While 4-year-old-child-level intelligence is probably beyond our capability at present, good insect-level or maybe reptile-level intelligence is probably attainable. I bet your average dragonfly, or one of those threatened desert tortoises DARPA officials worried about, could have made it out of the starting area and quite a few miles down the course without any problem at all.

The hardware needed to equal the processing power of a dragonfly or (maybe) a turtle is affordable. The question is whether any of us are clever enough to use that processing power to match the intelligence of a dragonfly or turtle.

Read the rules, posted 11 May 2004 at 19:42 UTC by ROB.T. » (Master)

People, it isn't possible to go 5 MPH and win the race, so your 4-year-old is going to have to pick up the pace.

winning the race, posted 11 May 2004 at 20:27 UTC by steve » (Master)

In terms of completing the course in time to win the prize, you'd have to average 25 miles per hour (250 miles in 10 hours). But if winning simply means doing better than everyone else, maintaining 5 miles an hour for the full 10 hours would have done it this year. My guess is that once you get the intelligence problem solved there won't be a big difference between 5 MPH and 25 MPH. The big hurdle is making the robot intelligent enough to work at all.
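
For anyone who wants to play with the pace numbers, a trivial sketch (course length per the figure above, which was the announced distance rather than the actual 2004 route):

```python
# Required average speed to finish within the prize time limit.
course_miles = 250.0      # announced course length
time_limit_hours = 10.0   # prize window

required_mph = course_miles / time_limit_hours
print(required_mph)  # 25.0 MPH average, with zero time to spare
```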

Was thinking the same, posted 11 May 2004 at 21:26 UTC by tim.holt » (Journeyer)

I know I'm playing "armchair challenger" here, but when I watched the results videos of some of those vehicles, I just shook my head a bit. I had a lot of questions like, "Did you ever actually test your vehicle on a real, extensive trip?" or "Did you honestly expect to finish, or even go more than 5 miles?" And if the answer was "we consider 5 miles a success" (even though that's clearly failure, since you didn't finish the course), then I'd still have been much more impressed with 20 miles at a slow pace.

intelligence, not sensors, is the answer, posted 12 May 2004 at 01:55 UTC by Nic » (Master)

I agree with Steve.

A dumb robot will not be improved by adding sensors to it; the key to building smart robots is the artificial intelligence. One 320 x 240 camera provides sufficient input to a robot. Additional sensors could, obviously, help the robot become more efficient, but it needs the intelligence to deal with the sensors before they will be beneficial.

Also, the exact race length was 142.23 miles, so an average of just over 14 MPH would have done.

The Prodigies, posted 12 May 2004 at 02:01 UTC by Nic » (Master)

By the way, I'm the team leader of The Prodigies, a team that nearly made it into the 2004 Grand Challenge and will be competing in the 2005 Grand Challenge. You can see our website at

http://dnps-brad.home.comcast.net

New sensors are required, posted 12 May 2004 at 03:04 UTC by WhoPhlungPoo » (Journeyer)

IMHO it's not about robot intelligence; it's more about sensors. Sure, a 4-year-old would avoid running into a barbed wire fence, because the 4-year-old can detect the existence of the fence; I doubt that any of the robots fielded could see or sense the thin barbed wire. As long as the robot can detect the existence of the fence or other obstacles, it doesn't require any super spiffy AI to say, "Oh! There's something in front of me; stop and go around." However, it does require a sensor capable of detecting something like a barbed wire fence or a ditch in the road at X MPH in order to avoid it.

The other statement I found interesting was the one about a blind man doing a better job; you could have your robot do the same thing: drive along real slow and "feel" its way around. This also would not require supreme machine intelligence; in fact, it wouldn't require more than a small embedded processor.

When you think about it, the task of traversing the desert with a robot isn't really that hard. Attempting to do it at high speed is difficult only because we lack sensors capable of seeing something as fine as a piece of wire strung between two posts at a great enough distance to allow enough time to stop.

The current vision systems are light-years behind the capabilities of a 4 year old riding a big wheel, so for now we need to come up with some other form of sensing apparatus that will do the job.

Actually,, posted 12 May 2004 at 14:09 UTC by earlwb » (Master)

Actually, it was "time." Everyone ran out of time trying to get it all to work, so they all had various software problems that caused hardware problems. If they had been able to get their funding together earlier, they would have had more time to get things working. Plus, I am sure human nature had something to do with it too: procrastination and all the rest. Frantically working on things for 20 hours a day up until "show time" that last week always leads to bugs and problems.

Good point..., posted 12 May 2004 at 15:21 UTC by steve » (Master)

Development time is often a limiting factor even on little robots for local contests. I bet the phrase "it worked last night back at the lab..." was heard a few times out there on the starting line too. ;-)

Wired says we're all wrong... ;-), posted 12 May 2004 at 17:12 UTC by steve » (Master)

According to this new Wired article, what's needed for the Grand Challenge is not sensors, or intelligence, or more development time, but robots that learn!

http://www.wired.com/news/technology/0,1282,63421,00.html

wrong..., posted 12 May 2004 at 19:55 UTC by while_true » (Observer)

It is ridiculous to say the race can be done with even a single 640 x 480 camera.

What evidence is there?

Humans have FAR better cameras than any electronic one out there. The resolution is enormous and the onboard parallel processing is amazing. I've read in multiple places that 50-70% of our brain is devoted to vision. That suggests you would need ~10 Tflops of computation, which that 4-year-old has.

Simply put, saying it's a sensors issue neglects the need to process the input to get something usable. Saying it's an "intelligence" issue is almost meaningless; it's like saying "these robots need to be smarter."

You need to do something with that computation, and currently I haven't seen any usable computer vision algorithms that are up to the task, even with plenty of computation.

So what is needed? Humans have (at least) the following:

- Cameras with super resolution, robustness to lighting conditions/saturation, directed attention, and a wide-angle field of view
- Massive amounts of computation
- Excellent stabilization (the Red Team gimbal is perfect)
- Quality inertial sensors

Rather than saying a "4-year-old" could do it, you first need to appreciate how much kick-ass hardware there is in that little girl.

hi-res cameras vs intelligence, posted 12 May 2004 at 21:18 UTC by steve » (Master)

I'm not sure that saying they need to be "more intelligent" is different from saying they need "better computational algorithms." Whatever label you want to use, they boil down to the same thing: software improvements. The raw computational power and sensor accuracy available inexpensively today make it hard to justify saying the problem is hardware.

The 4-year-old human might have hi-res vision, but there's plenty of research showing that children under five have fairly primitive vision capabilities and navigate primarily on geometric rather than contextual cues about their environment. An older child or adult has lots of contextual knowledge that comes into play with vision processing, but small children don't.

And what about a dragonfly? It can fly rings around any of the Grand Challenge robots using only simple optical-flow-based vision processing and minimal computing hardware. It can navigate so quickly and accurately that, as it approaches prey in flight, it matches the prey's movements in a way that makes the approaching dragonfly appear motionless to its target. Some of the GC robots probably had more sensory and raw computational power than a dragonfly.

What's different? If not hardware, maybe better software: the dragonfly is (more intelligent / has better computational algorithms) and makes better use of the sensor data it can get. And to some extent the dragonfly may support Earl's argument about time as well; the dragonfly has been under development for quite a few million years. But I expect the GC robots will catch up with it before long! :-)

Oh, and those dragonfly eyes are around 30k pixels each. Combined, they'd be roughly equivalent to a single 346 x 173 camera!

Neat debate!, posted 12 May 2004 at 23:09 UTC by tim.holt » (Journeyer)

Glad to see this actually debated this much.

I like the dragonfly analogy there; much closer to the computation and sensor capacity required than mine, I must say :^)

There's an interesting description of how routes are defined for the vehicles. I imagine some of you already know it by heart, but it's sorta new to me.

Basically you've got two waypoints that define a leg. Then you've got a lateral boundary offset that says how far off the line between the two points you can go. So, given the two waypoints and the offset, your main job is just to get from point A to point B and not stray too far off the line. And given that going too far off disqualifies you, one can assume that whatever the potential solution is from point A to B, it's somewhere inside that boundary.

So I guess the short version is, you only need to figure out how to navigate a series of quarter-mile distances over and over, and not screw up. You don't need to know how to go 200 miles or get from LA to Vegas. Just figure out how to go a couple of city blocks at a time; you're basically repeating the same "small" problem over and over, and the sum of those solutions gets you from LA to Vegas.
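
To make the corridor idea concrete, here's a minimal sketch of the "stay inside the lateral boundary" check (the names are made up, and positions are assumed to already be projected into a local flat frame in meters):

```python
import math

def cross_track_distance(p, a, b):
    # Distance from position p to the leg segment a-b.
    # p, a, b are (x, y) tuples in a local planar frame, in meters.
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:
        return math.hypot(px - ax, py - ay)  # degenerate leg
    # Projection of p onto the line through a and b, clamped to the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy  # closest point on the leg
    return math.hypot(px - cx, py - cy)

def inside_corridor(p, a, b, lateral_offset):
    # True while the vehicle stays within the leg's lateral boundary.
    return cross_track_distance(p, a, b) <= lateral_offset

# Example: a 400 m leg with a 10 m lateral boundary offset.
print(inside_corridor((150.0, 4.0), (0.0, 0.0), (400.0, 0.0), 10.0))   # True
print(inside_corridor((150.0, 12.0), (0.0, 0.0), (400.0, 0.0), 10.0))  # False
```

The planner's job then reduces to steering toward the next waypoint while keeping that cross-track distance comfortably below the offset.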

I'd love to see someone set up a threaded discussion some time and let us armchair challengers design one of these on paper. It would be interesting to see what people would suggest or come up with.

320 x 240 camera, posted 12 May 2004 at 23:14 UTC by Nic » (Master)

Imagine this situation:

You are in a car and you are told to drive on a desert road for 150 miles at an average of 15 MPH; however, the car has no windows and no windshield. There is a small TV monitor connected directly to a 320 x 240 camera that is mounted on the roof of the car facing forward. You also have a map of the course and a GPS.

Could you do it? I wouldn't guarantee that I could finish the entire 2004 GC course in that situation, but from what I saw of the course (just the beginning) I'm sure I could make it a lot farther than some of the AGVs.

Nic, The Prodigies

Simulator?, posted 12 May 2004 at 23:42 UTC by steve » (Master)

Has anyone written any Open Source simulation software for the Grand Challenge rule set - something with the actual waypoints and terrain maps from the first event? That would be a great way to work out some of the basic problems.

well..., posted 13 May 2004 at 00:59 UTC by c6jones720 » (Master)

To quote the immortal Cylon from The Return of Starbuck, a four year old what???

Re: Simulator?, posted 13 May 2004 at 01:34 UTC by tim.holt » (Journeyer)

steve wrote...
Has anyone written any Open Source simulation software for the Grand Challenge rule set - something with the actual waypoints and terrain maps from the first event? That would be a great way to work out some of the basic problems.

Interesting idea. Dunno how familiar people are with it, but there's an interesting robot development and simulation testbed at http://playerstage.sourceforge.net/

FlightGear project, posted 13 May 2004 at 13:24 UTC by aplumb » (Journeyer)

There's also FlightGear, but it's more flight-centric. Related projects have hooks into GPS hardware and the like.

That is not a bad idea, posted 13 May 2004 at 13:49 UTC by earlwb » (Master)

Actually, that isn't a bad idea. Set up a vehicle to have everything a robot would: no windows, etc., just the sensors and GPS. Then have a person attempt to follow the course and record everything possible about what that person does and how. It should then be a lot easier to program a robot to perform the same task, if you can determine what the person did to follow the course, of course.

From The Author, posted 13 May 2004 at 17:21 UTC by ddukemail » (Observer)

Hi there. Steve was kind enough to adjust my permissions so that I could post.

I fully agree with prior posts that more sensors are not the answer. More sensors would only increase the designer's workload to perform sensor fusion and would likely result in an even noisier data set. The idea is to simplify the designer's job by driving up the signal-to-noise ratio and by extracting useful information up front, before the data even reaches the main processor.

The article was meant to open up the idea that the GC (and robotics in general) could be tackled from an economic perspective rather than a purely technical one. So the question I'm raising is: how can robotics challenges be addressed in the marketplace to drive investment, open standards, and generate economies of scale? Computational sensors are one way to distribute the R&D and computational workload in a way that unbundles the problem and makes it more manageable. I think this will be a natural progression for advanced robotics as the industry matures.

Any thoughts?

R / David

marketplace and standards, posted 13 May 2004 at 19:07 UTC by steve » (Master)

The example of SEEGRID from your article is one good idea. Another would be a ready-to-use optical flow vision sensor. I've thought out a really cool product based on the idea but have never had the spare time or money to pursue it. But I think in the end, what's really needed is an open framework for a moderately intelligent AI that can do the sensor fusion. We're just beginning to see the building blocks needed for this: on the software side, with things like Orocos and Player/Stage, and on the standards side, with things like the SensorML, RoboML, and XRCL markup languages.

Computational sensors == win, posted 13 May 2004 at 19:27 UTC by tim.holt » (Journeyer)

I definitely like the idea of computational sensors. You could think of running an autonomous vehicle based on those like a well-run team with excellent management: good managers hire people who know more than they do, listen to them, and then make decisions as necessary.

A more concrete analogy is the use of a good programming API for developing software. An API shields the coder from the complex and sometimes ugly details of the actual implementation and lets them concentrate on the higher-level task at hand.

For economics, I'd think computationally smart sensors are far easier to integrate. I think the same concept goes the other way too, in that computationally smart actuators are also useful. I like the idea of an architecture where I could just tell a motor to go at a certain speed and direction, and not have to worry about complex command codes to a motor controller, for example.

As Steve mentions, standards are useful here. Imagine some standardized format for near-environment scanning: one format where LIDAR, radar, and ultrasonic systems from different vendors all report the same thing, "object at range R and heading T" (or whatever). The cool thing is when a higher-level vision-based system starts reporting the same results, and the underlying application really doesn't care one bit where they come from (as long as they're accurate). And for anyone creating a simulation testbed, the simulator can emit the same format, which means the same code used in the vehicle can be used in a testing/simulation environment.
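
As a toy illustration of that idea (the field names here are made up, not any real standard):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    # One hypothetical vendor-neutral record: LIDAR, radar, ultrasonic,
    # or a vision system all report the same fields.
    range_m: float       # distance to the object, meters
    bearing_deg: float   # heading relative to the vehicle centerline
    source: str          # "lidar", "radar", "ultrasonic", "vision", "sim"
    confidence: float = 1.0

def nearest(detections):
    # The navigation code never needs to know which sensor (or simulator)
    # produced a detection.
    return min(detections, key=lambda d: d.range_m, default=None)

obstacles = [
    Detection(15.2, 35.0, "lidar"),
    Detection(8.7, -10.0, "vision", 0.6),
]
print(nearest(obstacles))  # the vision hit at 8.7 m
```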

A good example of such a standard is NMEA 0183. Imagine if every GPS had some wacky proprietary format; it would make vital modern tools like ECDIS extremely difficult to build.
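
To show why that standard matters, here's a minimal sketch of a parser for the NMEA 0183 GGA sentence. Because the sentence format is standardized, the same few lines work with the position output of any receiver:

```python
def parse_gga(sentence):
    # Extract decimal-degree latitude/longitude from a GGA fix sentence.
    fields = sentence.split('*')[0].split(',')
    if not fields[0].endswith('GGA'):
        raise ValueError('not a GGA sentence')

    def to_decimal(value, hemisphere, degree_digits):
        # NMEA encodes angles as (d)ddmm.mmmm.
        degrees = float(value[:degree_digits])
        minutes = float(value[degree_digits:])
        decimal = degrees + minutes / 60.0
        return -decimal if hemisphere in ('S', 'W') else decimal

    lat = to_decimal(fields[2], fields[3], 2)  # ddmm.mmmm
    lon = to_decimal(fields[4], fields[5], 3)  # dddmm.mmmm
    return lat, lon

# A typical GGA sentence:
print(parse_gga('$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47'))
# -> (48.1173, 11.516666666666667)
```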

er, PS... Has the actual waypoint dataset from the Grand Challenge ever been published and made available?

Exactly, posted 13 May 2004 at 20:59 UTC by ddukemail » (Observer)

Yes, absolutely. We are definitely talking about the same thing here. How could standards and architectural decisions be made to accelerate development? For instance, the Internet has been built on the dumb-network, smart-node theory: keep the network simple, and let the nodes carry the bulk of the processing.

Similarly, I could imagine an embedded network for robotics that would have some of the features of a CAN or FireWire network at low cost. Luxury vehicles already have this, and it allows vehicle designers to generate very complex designs while integrating multimedia, navigation, drive, and control systems with a whole plethora of OEM components. In this case, a simple network decision helps drive vendors to standardize communications protocols and keeps the designer's interface work to a minimum. The designer can then focus on using the capability instead of integrating it, something like plug and play. The net result is that the designer becomes able to work at a higher level of abstraction.

Tim, your point about actuators is right on the mark. Advanced actuators not only have the ability to actuate; they have the ability to capture a tremendous wealth of data as well. Torque or force, position, velocity, acceleration, vibration, and even temperature are some parameters that let the robot know that much more about its environment. Distributed industrial control systems now operate on networks like FireWire to help capture that data and provide low-latency, high-bandwidth control. Why couldn't an actuator have a simple plug-and-play interface plus digital signal processing to assist the AI? DSPs and PICs are at price points now where the economics aren't the problem.
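
A minimal sketch of what such a plug-and-play actuator interface might look like to the designer (everything here is hypothetical, not any real product's API):

```python
from dataclasses import dataclass

@dataclass
class ActuatorTelemetry:
    # Data a smart actuator could report while doing its job,
    # letting the AI treat it as a sensor too.
    position_rad: float
    velocity_rad_s: float
    torque_nm: float
    vibration_rms: float
    temperature_c: float

class SmartMotor:
    def __init__(self):
        self._setpoint = 0.0

    def set_velocity(self, rad_per_s):
        # The designer states intent; in a real device the onboard
        # DSP would close the control loop locally.
        self._setpoint = rad_per_s

    def read_telemetry(self):
        # Low-latency status back over the shared bus (stubbed here).
        return ActuatorTelemetry(0.0, self._setpoint, 0.0, 0.0, 25.0)

motor = SmartMotor()
motor.set_velocity(2.5)
print(motor.read_telemetry())
```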

Not sure about the GPS waypoints. I'd have to look that up.

Any GC participants who would care to weigh in?

optical-flow from optical mouse assemblies, posted 14 May 2004 at 00:31 UTC by aplumb » (Journeyer)

Riffing off the dragonfly optical-flow concept...

Sounds a lot like the way an optical mouse works. Assemble an array of those sensors, with modified lens assemblies of course, and work with the resulting X/Y motion data. Agilent makes some optical mouse sensors if you want to go straight to the source, browse the spec sheets, etc.
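
A toy sketch of the idea (the names and wiring are made up; a real part in this family reports signed delta-X/delta-Y counts over a serial bus each frame):

```python
def read_deltas(sensor_id):
    # Stand-in for reading one mouse sensor's motion registers.
    # Returns (dx, dy) counts accumulated since the last read.
    return (0, 0)

def flow_field(sensor_ids):
    # An array of sensors aimed in different directions gives a crude,
    # dragonfly-style sparse optical flow field.
    return {sid: read_deltas(sid) for sid in sensor_ids}

def expansion(left_dx, right_dx):
    # If flow on the right drifts right while flow on the left drifts
    # left, the scene ahead is expanding: something is getting closer.
    return right_dx - left_dx

field = flow_field(["left", "center", "right"])
if expansion(field["left"][0], field["right"][0]) > 0:
    print("obstacle looming ahead; slow down")
```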

Bits and Pieces, posted 14 May 2004 at 03:21 UTC by ddukemail » (Observer)

While doing research for the article, I recall finding work from Fraunhofer AIS that combined optical flow and omnidirectional vision. The website provides some insight into why this was done:

One of the problems of motion detection using traditional dynamical image processing is the aperture problem, which is caused by a narrow field of view. Limited field of view makes the tracking of the surface variations in the scene difficult. Also, sudden appearance and disappearance of the objects in an image scene violate the total intensity conservation constraint, which is the basic assumption for optical flow calculations.

I also found some info on a technology called terahertz imaging. Terahertz frequencies have some interesting properties, not the least of which is the ability to penetrate smoke, fog, and even clothing! One of the most interesting sites on the web belongs to the European Space Agency's (ESA) StarTiger R&D team. They have an interesting video of a man imaged through his clothing, revealing a hidden weapon on his person. From the website:

Terahertz imaging can be achieved by observing the natural terahertz waves emitted by pretty much everything. Unlike light, terahertz waves are able to propagate through cloud and smoke providing a powerful advantage for certain remote sensing measurements. From a practical aspect they are also able to pass through windows, paper, clothing and in certain instances even walls.

This *seems* to be a dead thread now..but..., posted 2 Jun 2004 at 20:05 UTC by brybert » (Observer)

Hi! Ah! This is exactly what I've been looking for! OK, a couple of things have been asked but not completely answered, and I have some of those answers. First, a few things about me and why I know so much about this. I have lost *quite* a bit of sleep researching it. I learned about the race in March from a Pop Sci article and was disappointed there wasn't even time to find a team to help for free, much less come up with my own entry. Deep down, though, even then I had a feeling things would not go as DARPA had planned. They didn't. Thanks to the last few weeks of as little sleep as possible and as much thought and research as possible, I have come to several conclusions. I can find sponsors. I can build one entry by myself, perhaps even two. I can do it right. That said, let me share some things that will help you all understand more about this whole thing and why this contest idea is a very good thing.

First, you must understand why this contest came about. DARPA is concerned (and quite rightly so) that the defense contractors they have now are not going to have an autonomous vehicle in production anytime soon. The progress those contractors have made in the last few years has been little or nothing, and the Armed Forces are very serious about having one-third of all operational ground combat vehicles be unmanned by 2015. Hence this contest. According to Mr. Whittaker, we're looking at a race next year, sometime in September or October. I have also seen 2006 mentioned, although I do not think that is correct. The prize money will be doubled: $2 million to the winner. Oh, and by the way, if someone had finished the course but taken more than 10 hours to do it, no prize money would have been awarded.

OK. A couple of threads ago on this page, Steve asked if the actual waypoints of the race had been incorporated into a simulator. No. And if all the participants do as DARPA has asked, that will never happen: DARPA has asked that the actual course data not be released to the public. The area in which the race was held is a desert. At first look, the desert is a very dead place, but it's not at all; in fact, it is teeming with life, wildlife that would be put in danger if everybody and his brother decided to run that course. The environment would be damaged and altered beyond repair; it's just that simple. This is one of the reasons, if you have researched this as well as I have, that DARPA had to limit the number of participants allowed to run the race.

Now... to the main thread question, and my observations about it. Are sensors the key? Nope. Faster computers? Nope. The right mix of sensors is critical, true, but just having the biggest, baddest, or the most of them won't win this race. The true answer is probably not one you have heard before from a normal engineer... but then, I am not, nor will I ever be, a "normal" engineer. The answer is this: less of everything. Fewer bells and whistles in both hardware and software, and better, simpler software that does the job it needs to do, and does it well... day after day... week after week... year after year... every time you turn it on. No crashes, no excuses, the job done right the first time. Yes, and that means (sorry, Uncle Bill) nothing with Microsoft's name anywhere in the system. On a client, oh yes, sure... but nowhere near any core, vital processes. This also means staying away from some Linux distributions as well, the ones trying to be Windoze replacements. More complexity = more code = more bugs = more instability = less reliability. I think we, as engineers, need to ask ourselves more questions when we design things like this. Do I really need this feature/technology? Is it going to be easy to integrate with the other systems to make a better, more complete, and more stable primary system? One more thing: don't be afraid to say you were wrong, and don't waste time on an unworkable design or idea. I have a feeling we're going to be treated to several repeats of what happened this year. Why? Because these people cannot admit they were wrong or made a design mistake. That's it for now... sorry about getting so long-winded here. I really want to see somebody win next year, or whenever the next race is.

Brybert - no and yes, posted 14 Jun 2004 at 02:55 UTC by ROB.T. » (Master)

I don't think DARPA lost anything; it was a relatively cheap learning experience, they got gobs of media coverage, and now everyone has an idea of how hard some of the technical challenges they're attempting to solve really are.

As far as your "simple is better" concept goes, I would say "create the solution that fits the application," but I think we're talking the same language.
