
I've heard that LLVM uses neural networks to drive the register allocator. I don't know how well it works, but it's a pretty cool idea.


I listened to a talk about this at PLDI, with respect to the auto-vectorizer. Given a piece of sequential code, there are many ways to auto-vectorize it, and finding the fastest one is computationally expensive. The current auto-vectorizer uses a faster algorithm that won't always generate the fastest possible vectorization. When they threw a neural network at it, they found it sometimes generated faster code than the slow 'optimal' algorithm, because the neural net was able to take into account factors the humans hadn't thought to include in their model.
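To make the search space concrete, here's a toy sketch (not the actual LLVM algorithm): covering n independent operations with vector chunks of different widths is a combinatorial choice, and which packing wins depends entirely on the cost model. The widths and per-chunk costs below are invented for illustration.

```python
# Toy sketch of the vectorization-selection search problem: cover n
# independent ops with chunks drawn from the available vector widths,
# minimizing an estimated cost. Costs here are made up for illustration.
def packings(n, widths, prefix=()):
    """Enumerate every way to cover n ops with chunks from `widths`."""
    if n == 0:
        yield prefix
        return
    for w in widths:
        if w <= n:
            yield from packings(n - w, widths, prefix + (w,))

# Hypothetical per-chunk costs: wider vectors amortize better per element,
# but a width-4 op is assumed to cost more than one width-2 op.
COST = {1: 1.0, 2: 1.1, 4: 1.9}

def best_packing(n, widths=(1, 2, 4)):
    return min(packings(n, widths), key=lambda p: sum(COST[w] for w in p))

print(best_packing(6))  # a minimum-cost packing under this toy model
```

The point is that even this toy version is exponential in n; a production vectorizer uses heuristics or pruned search instead of full enumeration, and a learned cost model changes which candidate the search prefers.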


In the PGO category, there's also this recent proposal: http://lists.llvm.org/pipermail/llvm-dev/2020-April/140763.h...

At compile time only, people have also been using trained models to derive cost functions for sequences of instructions (as opposed to analytical models, which have become very difficult to derive given the complexity of modern CPU architectures).
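A minimal sketch of the idea: instead of hand-writing per-instruction latencies, fit them from measured block timings. Everything below (opcodes, counts, timings) is fabricated; the timings are chosen so the fit recovers add=1, mul=2, load=2, store=1 exactly.

```python
# Minimal sketch of a learned cost model: fit per-opcode costs from
# measured basic-block timings via least squares, rather than deriving
# them analytically. All data here is invented for illustration.
import numpy as np

OPS = ["add", "mul", "load", "store"]

# Each row: how many of each opcode a basic block contains.
blocks = np.array([
    [4, 0, 2, 1],
    [1, 3, 1, 0],
    [0, 2, 4, 2],
    [3, 1, 0, 1],
    [2, 2, 2, 2],
], dtype=float)
measured_cycles = np.array([9.0, 9.0, 14.0, 6.0, 12.0])

# Least-squares fit: measured_cycles ≈ blocks @ latencies
latencies, *_ = np.linalg.lstsq(blocks, measured_cycles, rcond=None)

def predict(counts):
    """Estimated cycles for a new block, given its opcode counts."""
    return float(np.dot(counts, latencies))

print({op: round(l, 2) for op, l in zip(OPS, latencies)})
print(predict([2, 1, 1, 0]))  # ~6 cycles under the fitted costs
```

Real systems model far more than opcode counts (dependencies, ports, memory behavior), which is exactly why learned models can beat hand-built analytical ones on modern out-of-order cores.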


This is interesting, as I have essentially given up on attempting to figure out which sequences are faster. Instead, I fall back on two approximations: fewer instructions are faster, and register operands are faster than memory operands.

It's hard to even figure out whether the scheduling algorithms that worked well on the Pentium and Pentium Pro are still worthwhile on x86-64.
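The two approximations above can be sketched as a crude cost function; the unit costs and the (mnemonic, operands) encoding are made up for illustration, not any real compiler's model.

```python
# Sketch of the "fewer instructions, prefer registers" heuristic:
# score candidate sequences by instruction count, with an extra
# penalty for memory operands. Costs are purely illustrative.
def heuristic_cost(seq):
    """seq: list of (mnemonic, operands), operands tagged 'reg' or 'mem'."""
    cost = 0
    for _, operands in seq:
        cost += 1                                        # one unit per instruction
        cost += sum(2 for o in operands if o == "mem")   # memory operands are pricier
    return cost

# Two hypothetical ways to add a memory value to a register:
a = [("mov", ["reg", "mem"]), ("add", ["reg", "reg"])]  # load, then add
b = [("add", ["reg", "mem"])]                           # fused memory operand
print(heuristic_cost(a), heuristic_cost(b))  # prints: 4 3
```

Of course, on a real out-of-order core this can be wrong in both directions, which is the commenter's point: static rules like these are guesses where a measured or learned model would be needed.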



