
> It's not fair to compare them like this!

As someone who leans pro in this debate, I don't think I would make that statement. I would say the results are exactly as we expect.

Also, a highly verifiable task like this is well suited to LLMs, and I expect within the next ~2 years AI tools will produce a better compiler than gcc.




Don't forget that gcc is in the training set.

That's what always puts me off: when AI replaces artists, SO, and FOSS projects, it can only feed into itself and deteriorate.


The AlphaZero approach shows otherwise, as long as there is an automated way to generate new test cases and evaluate the outcomes.

We can't do it for all domains, but I believe we can for efficient code.

Today's models are probably already good enough to compose such tasks and evaluate the results.
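
A minimal sketch of what such a generate-and-evaluate loop might look like (the sorting task, `solve`, and all helper names here are hypothetical, just to illustrate the fully automated scoring):

    import random, subprocess, textwrap

    def generate_test_case(rng):
        # Hypothetical task: generate a random list to sort.
        return [rng.randint(-1000, 1000) for _ in range(rng.randint(1, 50))]

    def reference_solution(xs):
        # Trusted baseline we already know is correct.
        return sorted(xs)

    def run_candidate(candidate_src, xs):
        # Run a model-proposed `solve(xs)` in a fresh interpreter and capture stdout.
        prog = textwrap.dedent(candidate_src) + f"\nprint(solve({xs!r}))\n"
        out = subprocess.run(["python3", "-c", prog],
                             capture_output=True, text=True, timeout=5)
        return out.stdout.strip()

    def evaluate(candidate_src, n_cases=100, seed=0):
        # Fully automated scoring: fresh random cases, no human in the loop.
        # The same harness could also time candidates to reward efficiency.
        rng = random.Random(seed)
        passed = 0
        for _ in range(n_cases):
            xs = generate_test_case(rng)
            try:
                if run_candidate(candidate_src, xs) == str(reference_solution(xs)):
                    passed += 1
            except Exception:
                pass
        return passed / n_cases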


It can feed into itself and improve. The idea that self-training necessarily causes deterioration is fanfic. Remember that they spend massive amounts of compute on RL.

> I expect within the next ~2 years AI tools will produce a better compiler than gcc.

Building a "better compiler than gcc" is a matter of cutting-age scientific research, not of being able to write good code


Given that GCC is in the training data, it should not take much research to create an equally good compiler.

If reproducing data from the training set counts, then creating an equally good compiler is a matter of running `git clone` :D

The same two years as in "full self driving available in 2 years"?

Right.


These are different technologies with different rates of demonstrated growth. They have very little to do with each other.

Well let's check again in two years then.

> and I expect within the next ~2 years AI tools will produce a better compiler than gcc

and the "anti" crowd will point to some exotic architecture where it is worse


No, they will point out that the way to make GCC better is not really in the code itself. It's in writing scientific papers and finding new approaches. Implementation is really not where most of the work is.

But only if there is a competent compiler engineer running the AI, reviewing specs, and providing decent design goals.

Yes it will be far easier than if they did it without AI, but should we really call it “produced by AI” at that point?


Yes, we will certainly go that way; some code already added to gcc has probably been developed with collaborative AI tools. I agree we shouldn't call that "produced by AI".

I think compilers, though, are a rare case where large-scale automated verification is possible. My guess is that starting from gcc and all the existing documentation on compilers, and putting ridiculous amounts of compute into the problem, will yield a compiler that significantly improves on benchmarks.
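
To make "large-scale automated verification" concrete, here is a rough sketch of differential testing against gcc (the candidate compiler and benchmark file paths are placeholders; a real harness would also generate random source programs, Csmith-style):

    import subprocess, time

    def compile_and_run(compiler, src, exe):
        # Compile src with the given compiler at -O2, then run the binary and time it.
        subprocess.run([compiler, "-O2", src, "-o", exe], check=True)
        start = time.perf_counter()
        out = subprocess.run(["./" + exe], capture_output=True, text=True, timeout=60)
        return out.stdout, time.perf_counter() - start

    def differential_test(src, candidate_compiler):
        # Same program, two compilers: outputs must match (correctness),
        # then compare wall-clock time of the generated binaries (performance).
        ref_out, ref_time = compile_and_run("gcc", src, "ref_bin")
        cand_out, cand_time = compile_and_run(candidate_compiler, src, "cand_bin")
        assert cand_out == ref_out, "miscompilation: outputs differ"
        return ref_time / cand_time  # > 1.0 means the candidate's binary ran faster

    # e.g. differential_test("benchmark.c", "./candidate-cc")  # both paths hypothetical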




