The parent's thinking was probably: if you can achieve similar results with a fraction of the memory/compute, then capability at the same hardware level will increase even more.
My humble opinion is that this is obvious. Human-level intelligence requires at most 20 watts and a substrate no more complicated than one that can be assembled from simple organic molecules in a dirty environment.
What is possible with 20 kilowatts and wafer fabricators?
It's specifically the fact that the network is directing its own optimization. Yes, that could then be used to get more capability out of the hardware, but the same is true of manually optimized networks. Needing less human help is the... interesting part.
We're doing a double search: searching outside for experience, by collecting data, and searching inside for understanding, by compressing that data. Search and learn: they define both AI and us.
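If it helps to make that framing concrete, here is a toy sketch of the double loop, with entirely hypothetical names and a simple linear "world" standing in for reality: the outer step queries the world for experience, the inner step compresses those observations into a single parameter.

    import random

    # Toy "world": a hidden rule y = 3*x + noise that the agent can probe.
    def query_world(x):
        return 3 * x + random.gauss(0, 0.1)

    # Outer search: gather experience by sampling new points from the world.
    def collect(n):
        xs = [random.uniform(-1, 1) for _ in range(n)]
        return [(x, query_world(x)) for x in xs]

    # Inner search: "compress" the observations into one parameter w via
    # gradient descent on squared error, i.e. find a short description
    # that explains the data.
    def compress(data, steps=200, lr=0.1):
        w = 0.0
        for _ in range(steps):
            grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
            w -= lr * grad
        return w

    if __name__ == "__main__":
        data = collect(50)   # search outside: experience
        w = compress(data)   # search inside: understanding
        print(f"compressed model: y ~ {w:.2f} * x")

Obviously a caricature, but the shape is the same: one loop expands what you've seen, the other shrinks it into something you can carry around.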