Cool demo, but I still wonder if this is fundamentally just a brute-force approach. Wouldn't it be better to do some traditional preprocessing (e.g. recognizing rectangles, circles, etc.) and feed higher-level descriptors into the classifier?
If the net learns based on pixels, you still have to somehow solve rotation and scale invariance. Or is there something new in deep learning vs. old-school neural nets that fixes the issues that bedeviled neural nets the first time they were popular?
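To make the invariance concern concrete: one common pixel-level workaround is data augmentation, i.e. training on randomly rotated and scaled copies of each image so the net at least sees the variation. A minimal sketch of such a transform (nearest-neighbor, NumPy only; `augment` is a hypothetical helper name, not from the demo):

```python
import numpy as np

def augment(img, angle_deg, scale):
    """Rotate and scale a 2D image array about its center using
    inverse nearest-neighbor mapping -- the kind of synthetic
    variation used to give a pixel-based net some tolerance to
    rotation and scale."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    out = np.zeros_like(img)
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse transform: for each output pixel, find the source pixel.
    dx, dy = (xs - cx) / scale, (ys - cy) / scale
    src_x = np.rint(cos_t * dx + sin_t * dy + cx).astype(int)
    src_y = np.rint(-sin_t * dx + cos_t * dy + cy).astype(int)
    # Pixels that map outside the source image stay zero.
    valid = (src_x >= 0) & (src_x < w) & (src_y >= 0) & (src_y < h)
    out[valid] = img[src_y[valid], src_x[valid]]
    return out
```

Of course, this only bakes invariance into the training set rather than into the model itself, which is part of why it feels brute-force.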