Depth Perception From One Camera

Posted 8 Dec 2005 at 18:30 UTC by The Swirling Brain

Stanford computer scientists have unveiled a machine vision algorithm that lets robots approximate distances from a single monocular image. Using multiple cameras and heavy computation to gauge depth can be expensive and time consuming, but the researchers found that several depth cues can be extracted from a single image: variations in texture detail, lines that appear to converge, and haze, since hazy objects are likely to be farther away. The Stanford algorithm has a 35 percent error rate on distance estimates, but the researchers figure a robot processing 10 frames per second will have plenty of time to correct for the error before it reaches an object 20 or 30 feet away.
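To make the texture cue concrete, here is a minimal sketch (not the Stanford algorithm, which combines many such features in a trained model) of one crude monocular cue: local texture variance, where flatter, lower-variance patches are weak evidence of greater distance. The function name and patch size are illustrative assumptions.

```python
import numpy as np

def texture_cue(image, patch=8):
    """Crude monocular depth cue: per-patch texture variance.

    Nearby surfaces tend to show more texture detail, so low-variance
    patches weakly suggest greater distance. Illustration only; a real
    system would combine this with converging-line and haze cues.
    """
    h, w = image.shape
    h, w = h - h % patch, w - w % patch          # trim to whole patches
    blocks = image[:h, :w].reshape(h // patch, patch, w // patch, patch)
    return blocks.var(axis=(1, 3))               # one score per patch

# toy example: a textured region (likely near) beside a flat one (likely far)
rng = np.random.default_rng(0)
img = np.hstack([rng.normal(0.5, 0.2, (32, 32)),  # textured half
                 np.full((32, 32), 0.5)])         # featureless half
cue = texture_cue(img)
```

Here the left (textured) patches score much higher than the right (flat) ones, so a robot would treat the flat region as the weaker bet for a nearby obstacle.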

SFM, posted 8 Dec 2005 at 21:01 UTC by while_true » (Observer)

There are standard computer vision techniques for getting depth from a moving camera. It is very similar to traditional stereo vision, but the location of the source camera in multiple frames isn't controlled.

You should look at Structure From Motion (SFM). It's a very common algorithm. Here is one paper that doesn't require tracking features.

Further, getting stereo from two fixed cameras is not that hard. Your depth resolution depends on your 'baseline', the distance between the cameras. The Grand Challenge team DAD did well with DSPs and two high-resolution cameras in a stereo configuration.
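The stereo geometry above reduces to a one-line formula: depth Z = f * B / d, where f is the focal length in pixels, B the baseline, and d the disparity (the horizontal pixel shift of a feature between the two images). A minimal sketch, with hypothetical example numbers:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from a calibrated stereo pair: Z = f * B / d.

    A wider baseline (or longer focal length) yields finer depth
    resolution at a given range, since the same depth change produces
    a larger disparity change.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# hypothetical: 800 px focal length, 0.5 m baseline, 20 px disparity
depth_m = stereo_depth(800, 0.5, 20)   # 20.0 m
```

This also shows why the baseline matters: at fixed f and Z, halving B halves the disparity, so distant objects quickly fall below the one-pixel matching resolution of the cameras.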
