Comment by sumodds
13 years ago
I am not sure you can apply winner-takes-all to such a marginal difference in error. Give it a slightly different dataset and things go awry.
Check out: "Unbiased Look at Dataset Bias", A. Torralba and A. Efros, CVPR 2011.
The difference in error between the first and the rest is ENORMOUS.
Task 1: the first is way ahead of the rest. The difference between 1st and 2nd place is ~11 percentage points; between 2nd and 3rd, only ~1 point.
Task 2: same story.
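Just to spell out the arithmetic, here is a quick sketch; the error rates below are made-up placeholders chosen only to reproduce the ~11 and ~1 point gaps described above, not the official leaderboard numbers.

```python
# Placeholder error rates (fractions, not percentages), chosen only to
# match the ~11 and ~1 percentage-point gaps mentioned above.
first, second, third = 0.16, 0.27, 0.28

print(f"1st vs 2nd: {(second - first) * 100:.0f} percentage points")  # ~11
print(f"2nd vs 3rd: {(third - second) * 100:.0f} percentage points")  # ~1
```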
But the most exciting thing is that the results were obtained with a relatively general-purpose learning algorithm. No extraction of SIFT features, no "Hough circle transform to find eyes and noses".
The concerns raised in the paper you cite are important, but this result is still very exciting.
> the results were obtained with a relatively general-purpose learning algorithm. No extraction of SIFT features, no "Hough circle transform to find eyes and noses".
This deserves even more emphasis. All of the other teams were writing tons of domain-specific code to implement fancy feature detectors that are the result of years of in-depth research and the subject of many PhDs. The machine learning only comes into play after the manually coded feature detectors have preprocessed the data.
Meanwhile, the SuperVision team fed raw RGB pixel data directly into their machine learning system and got a much better result.
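To make that contrast concrete, here is a minimal sketch of what "raw pixels in, predictions out" looks like. It is my own toy illustration, not the SuperVision architecture; the TinyConvNet name, the layer sizes, and the 1000-class output are placeholders.

```python
# Minimal toy illustration (not the SuperVision/AlexNet architecture):
# a convnet that maps raw RGB pixels straight to class scores.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):                  # hypothetical name
    def __init__(self, num_classes=1000):     # placeholder class count
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2),  # raw RGB in
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        # x: (batch, 3, H, W) pixel tensor; no hand-coded feature extraction.
        return self.classifier(self.features(x).flatten(1))

model = TinyConvNet()
logits = model(torch.randn(4, 3, 224, 224))    # 4 fake RGB images
print(logits.shape)                            # torch.Size([4, 1000])
```

The point is simply that every stage between the RGB tensor and the class scores is learned; nothing like a SIFT extractor or a Hough transform runs on the images first.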
Lol, my bad. I did not pay attention: I thought the errors were given in percentages (I was comparing with MNIST and somehow assumed these were percentages too). Come to think of it, that is really dumb; read that way, the results would imply nearly error-free classifiers, which is absurd!
Thanks for the reference. It goes well with "Machine Learning that Matters", a paper cited by Terran Lane in his recent blog post "On Leaving Academia".
I worry you may have taken a biased look at "Unbiased Look at Dataset Bias".
Not only that, I also had high variance on my bias.. ;)