Hacker News

5σ seems like an unnecessarily high standard to this non-physicist; what's the rationale for that? At 5σ we could publish 1,000 major discoveries a year, and the expected time until a single false discovery gets accepted would still be about 1,740 years.
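The 1,740-year figure checks out as a back-of-the-envelope calculation. A quick stdlib-only sketch, taking the commenter's 1,000-discoveries-per-year rate as given and using the two-sided tail probability of a standard normal at 5σ:

```python
import math

SIGMA = 5.0
DISCOVERIES_PER_YEAR = 1000  # the commenter's assumed publication rate

# Two-sided tail probability of a standard normal at 5 sigma:
# p = P(|Z| > 5) = erfc(5 / sqrt(2)), roughly 5.7e-7
p = math.erfc(SIGMA / math.sqrt(2))

# With DISCOVERIES_PER_YEAR tests of true nulls per year, false discoveries
# arrive at rate DISCOVERIES_PER_YEAR * p, so the expected waiting time is:
years = 1 / (DISCOVERIES_PER_YEAR * p)
print(f"p(5 sigma, two-sided) = {p:.3e}")
print(f"expected years until one false discovery: {years:,.0f}")
```

This lands at roughly 1,700 years, matching the comment's number (the small difference depends on rounding and on whether the tail is taken one- or two-sided).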


The two main rationales are that systematic uncertainties are historically under-estimated and that we are looking in many channels (more than 1000), so it would not be too hard to find a 3 or 4 sigma anomaly. The second part is the so-called "Look Elsewhere Effect." If you hit 5 sigma, you are fairly safe from either of these effects ruining your "discovery."


> The second part is the so-called "Look Elsewhere Effect."

Also more generally known as the "multiple testing" problem, fwiw (not sure why it has a different name in physics, unless I'm missing a subtlety).

It's a major problem in "big data" as well, where people data-dredge thousands of possible parameter choices and pairwise correlations, then report whichever results came up with p<0.01 - even though that methodology would be expected to produce several false positives by chance alone.
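The data-dredging failure mode is easy to simulate. A minimal sketch: run 1,000 tests of hypotheses that are all false by construction (under the null, a p-value is uniform on [0, 1], so we can draw p-values directly) and count how many clear the p<0.01 bar anyway:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def null_p_value():
    # Under a true null hypothesis, the p-value is uniform on [0, 1],
    # so drawing a uniform random number simulates one null test.
    return random.random()

N_TESTS = 1000
hits = sum(1 for _ in range(N_TESTS) if null_p_value() < 0.01)
print(f"{hits} 'significant' (p < 0.01) results out of {N_TESTS} null tests")
```

On average this yields about ten "discoveries" that are pure noise - exactly the false positives a naive dredge-and-report workflow would publish.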


I think it's mainly so they can be as condescending and smug as possible at interdisciplinary conferences.


Actually, it's because the rest of you "scientists" end up publishing bullshit results that you got by chance.

I'm kind of joking - most other scientists don't collect enough data to have to worry about 1-in-10,000 events happening by chance. In medicine, though, I'm not joking at all: those guys publish absolute statistical garbage all the time; the data dredging is so bad that I hesitate to even call it a science. I can "prove" just about anything if the publication standard is 95% significance...





