ImageNet has been a benchmark dataset for deep learning since its creation. The competition built around it was where deep networks first showed they could outperform non-ML techniques, and it has served academics as a standard for testing new image-classification systems. A few days ago an exciting paper appeared on arXiv describing how to train ImageNet in four minutes. Not weeks, days or hours, but minutes. On the surface this is a great leap forward, but it’s important to dig beneath the surface. The Register’s subheadline says all you need to know:
This is your four-minute warning: Boffins train ImageNet-based AI classifier in just 240s https://t.co/j6Wu1yMMkM
— The Register (@TheRegister) August 1, 2018
So if you take a less relaxed view of accuracy, or don’t have thousands of GPUs lying around, what’s the point? Is there anything that can be taken from this paper?
Continue reading ImageNet in 4 Minutes? What the paper really shows us