An industry is old by how close its products are to physical limits.
For example, the newest gas turbine designs already exceed 50% of the maximum efficiency that thermodynamics allows. So it doesn't matter how many thousands or millions more years gas turbines are optimized, whether by us, aliens, future descendants, etc...
No future gas turbine industry in this universe can possibly double the thermodynamic efficiency of the finished product.
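A back-of-the-envelope sketch of what that ceiling looks like (Python; the temperatures and the ~60% figure are illustrative assumptions, not numbers for any specific turbine):

```python
# Carnot limit: the best efficiency any heat engine can reach is 1 - T_cold/T_hot.
# Both temperatures below are illustrative assumptions.
T_hot = 1873.0    # turbine inlet temperature, kelvin (~1600 C, assumed)
T_cold = 300.0    # heat-rejection (ambient) temperature, kelvin (assumed)

carnot_limit = 1.0 - T_cold / T_hot   # ~0.84
achieved = 0.60                       # assumed efficiency of a modern combined-cycle plant

print(f"Carnot ceiling:   {carnot_limit:.0%}")
print(f"Fraction reached: {achieved / carnot_limit:.0%}")   # already well past 50% of the ceiling
```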
>An industry is old by how close its products are to physical limits.
I was just listening to an AI podcast and they were discussing going from the 1-second, 1-minute, 1-day [unit of response - I can't recall the name of the measurement] -- but I assume that's the "Moore's law" of AI right now?
And as we get closer to the physical limits of chip production (scale/die/etc.), I assume chips will be scaled horizontally while GPT-X capabilities will scale volumetrically.
Not OP, but even GPT-X and ML reach limits due to lack of compute and/or datasets.
For example, CNNs were largely known and understood by the 1990s-2000s, but the compute simply didn't exist until GPU manufacturing became commoditized.
OpenAI's quantum leap is thanks to the massive corpora they were able to leverage, which, until the past 5 years, simply didn't exist.
This is why we've seen massive jumps in Mandarin Machine Translation and Computer Vision from the PRC, thanks to their massive corpora/datasets of English+Mandarin language news from CCTV/CGTN and local surveillance camera data, respectively.
This compute limit is a big reason why the US Federal Govt has been working on the Exascale Computing Project, for example.
Even for training on a massive dataset, you are still limited by compute and processing time (good ole Computational Complexity), which is why HPC projects like the Exascale Computing Project were created in 2015, along with additional funding+research into efficient and alternative data structures.
I highly recommend going down the rabbit hole of High Performance/Accelerated Machine Learning.
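To make the compute wall concrete, here's a minimal sketch using the common C ≈ 6·N·D approximation for transformer training FLOPs; the model size, token count, cluster size, and utilization are all assumptions for illustration:

```python
# Rough training-compute estimate using the common C ~= 6 * N * D approximation
# (N = parameter count, D = training tokens). All concrete numbers are assumptions.
N = 175e9            # parameters (assumed, GPT-3-scale)
D = 300e9            # training tokens (assumed)
flops = 6 * N * D    # ~3.2e23 FLOPs

gpu_flops = 312e12   # dense BF16 throughput of one A100, FLOP/s (peak spec)
utilization = 0.4    # assumed fraction of peak actually sustained
n_gpus = 1024        # assumed cluster size

seconds = flops / (gpu_flops * utilization * n_gpus)
print(f"Total training compute: {flops:.2e} FLOPs")
print(f"Wall-clock estimate: {seconds / 86400:.1f} days on {n_gpus} GPUs")
```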
The rise of AI is more 'volumetric' than 'out'.
'Out' scales up and out, like a hill or a lift.
Volumetric is spherical: it presses into the future AND the past (it has already been harvesting history, but it has created a fire-hosed spigot-interface for the future as well) and draws it all into its center for eval...
They are different.
'Out' stays on the positive ends of the XYZ axes but ignores the negative of each, whereas AGI will go in all dimensions...
I don't think this aligns with the Chinchilla scaling law. There is indeed a point at which you can oversaturate a model with data, as well as such a thing as not giving it enough. Compute is the constraining factor, and it scales more or less linearly in both directions.
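For concreteness, a rough sketch of that Chinchilla-style trade-off, using the ~20-tokens-per-parameter rule of thumb and C ≈ 6·N·D; the compute budgets are made-up examples:

```python
import math

# Chinchilla-style compute-optimal allocation (rough rule of thumb: ~20 tokens
# per parameter, with training compute C ~= 6 * N * D). Numbers are illustrative.
def chinchilla_optimal(compute_flops, tokens_per_param=20):
    # C = 6 * N * D and D = tokens_per_param * N  =>  N = sqrt(C / (6 * tokens_per_param))
    n_params = math.sqrt(compute_flops / (6 * tokens_per_param))
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

for budget in (1e21, 1e23, 1e25):   # assumed compute budgets in FLOPs
    n, d = chinchilla_optimal(budget)
    print(f"C={budget:.0e} FLOPs -> ~{n / 1e9:.1f}B params, ~{d / 1e9:.0f}B tokens")
```

Under that rule of thumb, both the optimal parameter count and the optimal token count grow roughly as the square root of the compute budget, which is why compute ends up being the binding constraint rather than data alone.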