That's a bold claim. As far as I know there was one paper that reported a model beating human scores in a specific test (imagenet, I believe). Whether that translates to "superhuman" results in general is followed by a very big question mark.
In general I really struggle to see how any algorithm that learns from examples, especially one that minimises a measure of error against further examples, can ever have better performance than the entities that actually compiled those examples in the first place (in other words, humans).
I'm saying: how is it possible to learn superhuman performance in anything from examples of mere human performance at the same task? I don't believe in magic.
First of all, no one expected machines to beat humans at Imagenet, at least not this soon. It's an amazing accomplishment, because Imagenet consists of high-resolution pictures of many different types of objects, which is very different from tiny photos or images of handwritten digits.
Second, the examples were produced by scraping Flickr, and Mechanical Turk workers were then asked to confirm whether the object was in the image or not.
There are many images that are somewhat ambiguous, or contain multiple objects, so humans don't do perfectly. One researcher tried to estimate human performance and got an error rate of about 5%, which computers have now beaten, by a lot.
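For context, the metric being discussed here is "top-5 error": a prediction counts as correct if the true label appears anywhere in the model's five best guesses. A minimal sketch of how that's scored, using made-up labels purely for illustration:

```python
# Sketch of ImageNet-style top-5 error scoring (labels are illustrative).
def top5_error(true_labels, predictions):
    # predictions[i] is a list of the model's 5 best guesses for image i.
    misses = sum(t not in preds[:5] for t, preds in zip(true_labels, predictions))
    return misses / len(true_labels)

truth = ["cat", "dog", "ship"]
preds = [
    ["cat", "lynx", "fox", "dog", "wolf"],    # hit: "cat" is in the top 5
    ["wolf", "fox", "bear", "lynx", "coyote"],# miss: "dog" absent
    ["boat", "ship", "dock", "sea", "pier"],  # hit: "ship" is in the top 5
]
assert abs(top5_error(truth, preds) - 1/3) < 1e-9
```

The ~5% human figure and the model scores people compare it to are both top-5 error in this sense.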
> First of all no one ever expected machines to beat humans at Imagenet.
I'm not contesting the fact that it's surprising and overall a sign of progress. I'm contesting the claim that it demonstrates "superhuman" performance.
By analogy, a good student at a bad school is "superhuman" because he or she got a good mark in an exam that most other pupils _in that school_ failed. You gotta go a lot further than that before you put on the red cape.
> how is it possible to learn superhuman performance in anything from examples of mere human performance at the same task? I don't believe in magic.
Computers could be better at assigning probabilities to ambiguous examples. In particular, for an image that is very ambiguous for most humans, maybe a computer would assign 99% probability to it (hence it would be only a little bit ambiguous).
That's not how it works. Assigning a high probability to anything is trivial: just add 90% to any probability calculation. The important thing is how close your guess is to the right answer.
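The point about "how close your guess is to the right answer" is exactly what a proper scoring rule like log loss measures: blindly inflating every probability is punished whenever the inflated guess is wrong. A toy sketch, with hypothetical labels and probabilities:

```python
# Sketch: under log loss, "just add 90% to every probability" backfires.
# The labels and probabilities below are invented for illustration.
import math

def log_loss(y_true, p):
    # Average negative log-likelihood of the true labels; lower is better.
    eps = 1e-12
    return -sum(
        y * math.log(max(p_i, eps)) + (1 - y) * math.log(max(1 - p_i, eps))
        for y, p_i in zip(y_true, p)
    ) / len(y_true)

y = [1, 0, 1, 0]                      # ground truth
calibrated = [0.9, 0.1, 0.8, 0.2]     # honest, well-calibrated guesses
inflated = [min(x + 0.9, 1.0) for x in calibrated]  # "add 90%" strategy

# The inflated scores are confidently wrong on the negatives,
# so their loss blows up while the calibrated scores stay low.
assert log_loss(y, calibrated) < log_loss(y, inflated)
```

So confidence alone buys nothing; the scoring only rewards confidence that tracks the truth.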
Ensembles of humans can outperform the average human, and in the same way an algorithm trained on data labeled by an ensemble of humans can outperform the average human.
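The ensemble claim can be checked with a toy simulation: if each labeler is independently right 70% of the time, a majority vote of nine of them is right far more often than any one of them (the Condorcet jury theorem in miniature). The 70% figure and labeler count are assumptions for illustration, not measurements of actual Turk workers:

```python
# Sketch: majority vote of independent labelers vs. a single labeler.
# Assumes each labeler is independently correct with probability 0.7.
import random

random.seed(0)
p_correct = 0.7
n_labelers = 9
trials = 10_000

# Accuracy of one labeler acting alone.
single = sum(random.random() < p_correct for _ in range(trials)) / trials

# Accuracy of the majority vote among n_labelers independent labelers.
ensemble = sum(
    sum(random.random() < p_correct for _ in range(n_labelers)) > n_labelers // 2
    for _ in range(trials)
) / trials

assert ensemble > single  # the vote beats the individual
```

The caveat is the independence assumption: if labelers share the same blind spots (as humans looking at the same ambiguous photo might), the ensemble gain shrinks.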
Beating the average human does not make you "superhuman". Here's a quick proof: there exist mere humans with above-average performance who can outperform the "average human". Those people are human. Therefore, they're not superhuman.
Besides, I have no idea whether the people who tagged Imagenet are the "average human", nor whether an ensemble of them can outperform the "average human".
Also, I'm pretty sure that it doesn't necessarily follow that an algorithm trained by many X can outperform any X. Most humans are trained by an ensemble of humans and they don't necessarily outperform the "average human".
Mind you, I'm not saying I _know_ what "superhuman" is, but then again I'm not the one who claims to have created an example of it.
Then by implication this task does not require intelligence ;)
Computers are faster serial processors but brains do more in parallel.
Parallelism only really arrived for neural nets with GPUs, and the Imagenet convnet solvers like AlexNet were among the first parallel implementations. That gave a 30-300x speedup, but it's still tiny compared with squishy wetware.