
I would really love that to be possible. It is ultimately, I suspect, one of the Hard Problems of epistemology / epistemic systems.

Diverging slightly: truth is not a popularity contest. The "wisdom of crowds" concept argues that crowds are, on average, more intelligent than individuals, even expert individuals. In practice ... crowds are subject to their own biases and failures. While uninformed (or lightly-informed) opinion may be better than no opinion, expert opinion tends to be superior to both ... though of course it is also subject to biases (co-option of motives, ideological and academic conservatism, etc.). Still, there are times when the popular winner is quite evidently not the most informative or relevant winner. Reddit is especially subject to this (and more so in the past couple of years than previously, based on my very rare sojourns there).

Ultimately the question of a rating / moderation / ranking system is: what do you want to optimise for? I'd written on this about a decade back now:

<https://web.archive.org/web/20200629055317/https://www.reddi...>

LLM AI seems like it might offer either a way of weighting individual votes in their appropriate areas of expertise, or a way of offering its own assessment of relevance based on specific criteria (say: truth valence, significance, novelty). I still suspect it's not the sort of thing that's easily obtained. And it's probably beyond the scope of an HN search tool.

But I love the suggestion.



And so long as we're all divulging secrets here ...

I've hacked the HN CSS to my own liking, links in my profile. Most of that's styling and such.

What's not included there is something I find useful: some visual tweaks to note specific contexts (users/sites) of interest.

As examples, it might be handy to recognise admin comments and posts immediately. Or YC hiring notices. Or people or sites you find particularly clueful. Or perhaps not.
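As a minimal sketch of the idea: a user stylesheet (applied via a browser extension such as Stylus) can pick out specific users or sites by matching HN's link markup. The `.hnuser` and `.titleline` class names below reflect HN's markup as I understand it, and `dang` / `example.com` are stand-ins for whatever accounts or domains you care about; verify the selectors against the live page before relying on them.

```css
/* Highlight comments and posts by a specific account
   (here: the site's moderator account, as an example). */
a.hnuser[href="user?id=dang"] {
  background: #ffe9a8;
  border: 1px solid #e0a800;
  padding: 0 2px;
  border-radius: 2px;
}

/* Tint story titles linking to a site you find particularly
   clueful ("example.com" is a placeholder domain). */
span.titleline > a[href*="example.com"] {
  color: #227744;
  font-weight: bold;
}
```

The same attribute-selector pattern extends to de-emphasising contexts you'd rather skim past (e.g. lowering `opacity` instead of highlighting).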

I've found it useful, and a little classification goes a long way (long tails, Zipf functions, etc., etc.).




