
Depth Perception From One Camera

Posted 8 Dec 2005 at 18:30 UTC by The Swirling Brain

Stanford computer scientists have unveiled a machine vision algorithm that lets robots estimate distances from a single monocular image. Gauging depth with multiple cameras, and the computing power to match, can be expensive and time consuming, but the researchers have shown that several depth cues can be recovered from one image alone: variations in texture detail, converging lines, and haze (a hazy object is likely to be farther away). The algorithm's distance estimates are off by about 35 percent on average, but the team figures a robot processing 10 frames a second has plenty of time to correct for that error before it reaches an object 20 or 30 feet away.
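The Stanford work reportedly learns depth from many such patch features with a Markov random field; the sketch below is not that algorithm. It is only a rough Python/OpenCV illustration of two of the raw cues named above (texture detail and haze) computed as per-patch statistics. The patch size and the choice of Laplacian variance and local contrast as proxies are assumptions for illustration.

import cv2
import numpy as np

def depth_cues(image_path, patch=32):
    """Crude per-patch depth cues: texture energy and local contrast."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE).astype(np.float32)
    lap = cv2.Laplacian(gray, cv2.CV_32F)
    h, w = gray.shape
    rows, cols = h // patch, w // patch
    texture = np.zeros((rows, cols), np.float32)
    contrast = np.zeros((rows, cols), np.float32)
    for i in range(rows):
        for j in range(cols):
            sl = np.s_[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            texture[i, j] = lap[sl].var()    # fine texture fades with distance
            contrast[i, j] = gray[sl].std()  # haze washes out local contrast
    # Patches scoring low on both cues are more likely to be far away.
    return texture, contrast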


SFM, posted 8 Dec 2005 at 21:01 UTC by while_true » (Observer)

There are standard computer vision techniques for getting depth from a moving camera. They are very similar to traditional stereo vision, except that the camera's position across frames isn't controlled.

You should look at Structure From Motion (SFM). It's a well-established family of techniques. Here is one paper that doesn't require tracking features.
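For the curious, here is a minimal two-view SFM sketch in Python with OpenCV. Unlike the featureless paper mentioned above, this is the classic feature-based recipe: match ORB features between two frames, recover the relative camera pose from the essential matrix, then triangulate. The intrinsic matrix K is assumed known from calibration, and note that monocular SFM only recovers structure up to an unknown scale.

import cv2
import numpy as np

def two_view_sfm(img1, img2, K):
    """Triangulate 3D points (up to scale) from two frames of one camera."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # Essential matrix encodes the relative pose between the two views.
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4 = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
    return (pts4[:3] / pts4[3]).T  # Nx3 points, scale is unknown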

Further, getting stereo from two fixed cameras is not that hard. Depth resolution depends on your 'baseline', the distance between the cameras. The Grand Challenge team DAD did well with DSPs and two high-resolution cameras in a stereo configuration.
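To make the baseline point concrete: for a rectified stereo pair, depth follows Z = f * B / d, where f is the focal length in pixels, B the baseline, and d the disparity. A wider baseline yields larger disparities and therefore finer depth resolution at range. Below is a small sketch using OpenCV's block matcher; the focal length and baseline values are placeholders, not figures from the DAD setup.

import cv2
import numpy as np

def stereo_depth(left_gray, right_gray, focal_px=700.0, baseline_m=0.12):
    """Depth map in meters from a rectified grayscale stereo pair."""
    bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16.
    disp = bm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disp[disp <= 0] = np.nan             # mask invalid / occluded pixels
    return focal_px * baseline_m / disp  # Z = f * B / d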
