
> If the compiler is required to make all code paths result in the same uninitialized value, that can limit code generation options

Can you provide an example of this (on, say, x86_64), other than cases where the compiler prunes code paths by characterizing them as UB? In other words, a case where "an uninitialized value is well-defined but can be different on each read" allows more performance optimization than "the value will be the same on each read".

> Also, an uninitialized value might be in a memory page that gets reclaimed and then mapped in again, in which case (because it hasn’t been written to) the OS doesn’t guarantee it will have the same value the second time. There was recently a bug discovered in one of the few algorithms that uses uninitialized values, because of this effect.

This does not sound correct to me, at least for Linux (assuming one isn't directly requesting such behavior with madvise or something). Do you have more information?
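For reference, MADV_FREE is the kind of opt-in I mean. A minimal sketch (my own, not from any cited code; per madvise(2), since Linux 4.5, for anonymous private mappings):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 4096;
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return 1;
        memset(p, 0xAB, len);        /* dirty the page */
        madvise(p, len, MADV_FREE);  /* kernel may now discard it lazily */
        /* Until the page is written again, a read here may return the
         * old 0xAB bytes or zeros, depending on whether the kernel
         * reclaimed the page in the meantime. */
        printf("%02x\n", (unsigned char)p[0]);
        return 0;
    }

Without something like that (or MAP_UNINITIALIZED), I'd expect anonymous memory to keep its contents.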



The most obvious general case (to me) is reading an uninitialized local variable in a loop. If an uninitialized read has to produce the same value every time, you'd have to allocate a register or stack slot to keep that value stable across iterations. Instead, you don't have to allocate anything; just use whatever value is in any register that's handy. (By this logic you can also start pruning code, by picking the "most optimal" value for the uninitialized variable.)
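A minimal sketch of what I mean (hypothetical; reading x is undefined behavior in standard C, which is exactly the latitude being discussed):

    #include <stdio.h>

    int main(void) {
        int x;  /* deliberately uninitialized */
        for (int i = 0; i < 4; i++) {
            /* Under "frozen garbage" semantics, all four lines must
             * print the same number, so the compiler has to dedicate
             * a register or stack slot to x for the whole loop.
             * Under "each read may differ" semantics, it can emit
             * whatever register happens to be lying around, which may
             * hold different values on different iterations. */
            printf("%d\n", x);
        }
        return 0;
    }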


I can’t find a citation, but my recollection is that the problem happened with the Briggs-Torczon sparse set algorithm, which relies on uninitialized memory not changing. For performance, they were using MAP_UNINITIALIZED (which is only honored if the kernel was built with CONFIG_MMAP_ALLOW_UNINITIALIZED).
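For context, here's my reconstruction of the membership test at the heart of that algorithm (a sketch, not the affected code):

    /* Briggs-Torczon sparse set: O(1) clear, no zeroing. sparse[] is
     * deliberately never initialized. A garbage read cannot cause a
     * false positive: dense[slot] == v only holds for entries that
     * insert() actually wrote, and insert() also set sparse[v]. */
    typedef struct {
        unsigned *sparse;  /* uninitialized; sparse[v] -> index into dense */
        unsigned *dense;   /* dense[0..n-1] holds the current members */
        unsigned  n;       /* "clear" is just n = 0 */
    } sparse_set;

    static int contains(const sparse_set *s, unsigned v) {
        unsigned slot = s->sparse[v];            /* may read garbage */
        return slot < s->n && s->dense[slot] == v;
    }

    static void insert(sparse_set *s, unsigned v) {
        if (!contains(s, v)) {
            s->dense[s->n] = v;
            s->sparse[v]   = s->n++;
        }
    }

The subtle part is what "may read garbage" is allowed to mean: if the pages backing sparse can change contents between reads, any reasoning that treats the garbage as a fixed (if unknown) value no longer holds.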



