Stanford computer scientists have unveiled a machine-vision algorithm that lets robots approximate distances from a single monocular image. Gauging depth with multiple cameras and the computing power to match can be expensive and time-consuming, but the researchers determined that several depth cues can be extracted from one image alone: variations in texture detail, lines that appear to converge, and haziness (hazy objects are likely to be farther away). The Stanford algorithm misjudges distance by roughly 35 percent, but the researchers figure a robot processing 10 frames per second will have plenty of time to correct for the error before it reaches an object 20 or 30 feet away.
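To illustrate just one of these cues (this is not the Stanford algorithm itself, which learns from many features at once), here is a minimal sketch of the texture-detail cue: nearby surfaces tend to show fine, high-variance texture, while distant ones look smoother. The function and patch size below are illustrative choices, not anything from the original work.

```python
import numpy as np

def texture_depth_cue(image, patch=8):
    """Rough relative-depth proxy from a single grayscale image:
    low local variance (little texture detail) suggests a surface
    is farther away, high variance suggests it is nearby."""
    h, w = image.shape
    rows, cols = h // patch, w // patch
    depth = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = image[i * patch:(i + 1) * patch,
                          j * patch:(j + 1) * patch]
            # Smoother patch -> larger depth value ("farther")
            depth[i, j] = 1.0 / (1.0 + block.var())
    return depth

# Synthetic example: a smooth "far" upper half, a textured "near" lower half
rng = np.random.default_rng(0)
img = np.zeros((16, 16))
img[8:, :] = rng.normal(0.0, 1.0, (8, 16))
cue = texture_depth_cue(img, patch=8)
# The smooth upper patches come out "farther" than the noisy lower ones
```

A real system would combine this with the other cues (edge convergence, haze) and learned weights, which is where the heavy lifting in the Stanford work lies.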