Widely Used Computer Vision Tests Flawed, Says MIT

Posted 26 Jan 2008 at 23:37 UTC by steve

Computer vision may not be making as much progress as recent research results seem to suggest, a new MIT study says. The study, "Why is Real-World Visual Object Recognition Hard?" (PDF format), was written by MIT researchers Nicolas Pinto, David D. Cox, and James J. DiCarlo of MIT's DiCarlo Lab. Commonly used computer vision testing methods may actually be making things worse, say the researchers: "these results demonstrate that tests based on uncontrolled natural images can be seriously misleading, potentially guiding progress in the wrong direction." The researchers offer suggestions for the types of images that should be used to test computer vision progress and call for "a renewed focus on the core problem of object recognition—real-world image variation." For more, see the MIT News summary of their paper.

In favour of naturalism, posted 27 Jan 2008 at 11:09 UTC by motters » (Master)

Synthetic images can be used for object recognition tests. However, you need to be extremely careful about how such images are produced, to ensure that they contain realistic variations and imperfections. The best rendering systems currently available may be capable of producing images of sufficient quality that they are hard to distinguish from photographs.

There are a few classic images often used by researchers to test the performance of stereo correspondence algorithms. However, these images are synthetic and as such contain easily identifiable edges and no significant noise or other artefacts of the kind you would typically find in real camera images. For example, I tried using Stan Birchfield's stereo algorithm some years ago, and while it worked well on idealised test images, it was almost totally useless on real camera images.
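One way to narrow this gap is to degrade the idealised renders before evaluation, so the test images at least carry camera-like noise. Below is a minimal sketch of that idea (my own illustration, not anything from the study): adding Gaussian sensor noise to a synthetic image and re-quantising to 8 bits, using only NumPy. The function name and noise level are assumptions chosen for illustration.

```python
import numpy as np

def degrade(image, noise_sigma=4.0, seed=0):
    """Roughly mimic a real camera by adding Gaussian sensor noise
    to an idealised synthetic image, then re-quantising to 8 bits."""
    rng = np.random.default_rng(seed)
    noisy = image.astype(np.float64) + rng.normal(0.0, noise_sigma, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# A perfectly flat synthetic patch: real cameras never return
# a perfectly uniform region, so we roughen it before testing.
patch = np.full((8, 8), 128, dtype=np.uint8)
noisy = degrade(patch)
```

A stereo algorithm that still matches correctly on the degraded pair is a far better candidate for real camera input than one tuned on the clean renders. A fuller model would also add blur, vignetting, and demosaicing artefacts.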

So generally I'm in favour of using real camera images wherever possible rather than synthetically generated ones. Real images force you to address the same problems which biological vision systems also have to deal with. However, as the article states, even with "natural" images you need to be careful to choose sets which are not statistically biased in one way or another.

Like, that's totally random, posted 3 Feb 2008 at 18:27 UTC by TheDuck » (Journeyer)

"In favour of naturalism" LOL

I have to agree. When I consider robot simulations I factor in random noise. Just like in the book Evolutionary Robotics, they determined that a "perfect" simulation led to significantly less than perfect real-world results. Adding randomness to the simulation, then fine-tuning the training in the real world, resulted in a much better outcome; almost ideal (or as ideal as trained control programs could be). They also developed models from real-world sensor data to use as inputs rather than what might be technically ideal computed sensor data. The echo a sensor array receives from a cylinder not only doesn't "look like" a perfect cylinder, but even changes while everything is stationary.
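The noise-injection idea above can be sketched in a few lines. This is my own hypothetical sonar model, not the one from the book: the ideal distance reading is jittered, and occasionally replaced with a spurious maximum-range echo, so an evolved controller cannot overfit to a perfect simulation. The parameter names and the 3.0 m range limit are assumptions for illustration.

```python
import random

def simulated_sonar(true_distance, noise_frac=0.05, dropout_prob=0.02,
                    max_range=3.0, rng=None):
    """Hypothetical noisy sonar model: jitter the ideal reading and
    occasionally return a spurious max-range echo, mimicking the
    unrepeatable readings a real sensor array produces."""
    if rng is None:
        rng = random.Random()
    if rng.random() < dropout_prob:
        return max_range  # missed echo: sensor reports its range limit
    jitter = rng.gauss(0.0, noise_frac * true_distance)
    return min(max_range, max(0.0, true_distance + jitter))

# Even with the robot and obstacle stationary, readings vary call to call.
rng = random.Random(42)
readings = [simulated_sonar(1.0, rng=rng) for _ in range(10)]
```

A controller trained against this kind of model, then fine-tuned on the real robot, tracks the approach the commenter describes: rough simulation first, reality last.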
