
> While the professional looking text could have been already wrong, the likelihood was smaller...

I don't criticise you for it, because that strategy is both rational and popular. But you never checked the accuracy of your information before, so you have no way of telling whether it has gotten more or less accurate with the advent of AI. You were testing for whether someone of high social intelligence wanted you to believe what they said, rather than for whether what they said was true.



I guess the complaint is about losing this proxy, which gave some assurance for little cost. We humans are great at figuring out the least amount of work that's good enough.

Now we'll need to be fully diligent, which means more work, and also there'll be way more things to review.


There’s not enough time in the day to go on a full-bore research project about every sentence I read, so it’s not physically possible to be “fully diligent.”

The best we can hope for is prioritizing which things are worth checking. But even that gets harder because you go looking for sources and now those are increasingly likely to be LLM spam.


Traditionally, humans have addressed the imbalance between energy-to-generate and energy-to-validate by building another system on top, such as one which punishes fraudsters or at least allows other individuals to efficiently disassociate from them.

Unfortunately it's not clear how this could be adapted to the internet and international commerce without harming some of the openness we'd like to keep.


I'd argue people clearly don't care about the truth at all - they care about being part of a group, and that is where it ends. It shows up in things like critical thinking being a difficult skill acquired slowly vs social proof, which humans just do by reflex. Makes a lot of sense: if there are 10 of us and 1 of you, it doesn't matter how smartypants you may be when the mob forms.

AI does indeed threaten people's ability to identify whether they are reading work by a high status human and what the group consensus is - and that is a real problem for most people. But it has no bearing on how correct information was in the past vs will be in the future. Groups are smart but they get a lot of stuff wrong in strategic ways (it is almost a truism that no group ever identifies itself or its pursuit of its own interests as the problem).


> I'd argue people clearly don't care about the truth at all

Plenty of people care about the truth in order to get advantages over the ignorant. Beliefs aren't just about fitting into a group; they are about getting advantages and making your life better. If you know the truth, you can make much better decisions than those who are ignorant.

Similarly plenty of people try to hide the truth in order to keep people ignorant so they can be exploited.


> if you know the truth you can make much better decisions than those who are ignorant

There are some fallacious hidden assumptions there. One is that "knowing the truth" equates to better life outcomes. I'd argue that history shows, more often than not, that what one knows to be true had best align with the prevailing consensus if comfort, prosperity, and peace are one's goals, even if that consensus is flat-out wrong. The list is long of lone geniuses who challenged the consensus and suffered. Galileo, Turing, Einstein, Mendel, van Gogh, Darwin, Lovelace, Boltzmann, Gödel, Faraday, Kant, Poe, Thoreau, Bohr, Tesla, Kepler, Copernicus, et al. all suffered isolation and marginalization to some degree during their lifetimes, some unrecognized until after their deaths, many living in poverty, many actively tormented. I can't see how Turing, for instance, had a better life than the ignorant who persecuted him, despite his excellent grasp of truth.


You are thinking too big. Most of the time the truth is whether a piece of food is spoiled or not, and that greatly affects your quality of life. Companies would love to keep you ignorant here so they can sell you literal shit, and today those powerful forces have far stronger tools than ever before to keep you ignorant.


Socrates is also a big name. Never forget.


You're implying that there is an absolute Truth and that people only need to do [what?] to check if something is True. But that's not True. We only have models of how reality works, and every model is wrong - but some are useful.

When dealing with almost everything you do day by day, you have to rely on the credibility of the source of the information you have. Otherwise how could you know that the can of tuna you're going to eat is actually tuna and not some poisonous fish? How do you know that you should do what your doctor told you? Etc. etc.


> You're implying that there is an absolute Truth and that people only need to do [what?] to check if something is True. But that's not True. We only have models of how reality works, and every model is wrong - but some are useful.

But isn't your third sentence True?


I don't know it to be True, but I know it to be useful :)


I am not sure I am following - you don't know if there is anything that is really true, but you presume there isn't and that model of "the only truth is the absence of truth" is useful to you because it allows you to ... what exactly?


My new cheap proxy to save mental cost: pay for search on Kagi and sort results by tracker count. My hope is that fewer trackers correlates with lower incentive to produce SEO spam. This may change, but it seems to work decently for now.
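
Concretely, the heuristic amounts to something like this sketch (Python). The tracker-domain list and the scoring are my own illustrative assumptions, not Kagi's actual data or ranking:

    # Rank search results by how many known tracker domains their HTML
    # references, fewest first. The domain list below is a hypothetical
    # sample; real tracker detection is more involved than substring counting.
    import urllib.request

    TRACKER_DOMAINS = [
        "googletagmanager.com",
        "google-analytics.com",
        "doubleclick.net",
        "connect.facebook.net",
        "hotjar.com",
    ]

    def tracker_count(url: str) -> int:
        # Fetch the page and count references to known tracker domains.
        html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
        return sum(html.count(domain) for domain in TRACKER_DOMAINS)

    def rank_results(urls: list[str]) -> list[tuple[str, int]]:
        # Fewer trackers first, on the hope that a low tracker count
        # correlates with a low incentive to produce SEO spam.
        return sorted(((u, tracker_count(u)) for u in urls), key=lambda p: p[1])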


In the past, with a printed book or journal article, it was safe to assume that an editor had been involved, to some degree or another challenging claimed facts, and the publisher also had an interest in maintaining their reputation by not publishing poorly researched or outright false information. You would also have reviewers reading and reacting to the book in many cases.

All of that is gone now. You have LLMs spitting their excrement directly onto the web without so much as a human giving it a once-over.


I suggest you look into how many things were published without such scrutiny, because they sold.


How do you "check the accuracy of your information" if all the other reliable-sounding sources could also be AI generated junk? If it's something in computing, like whether something compiles, you can sometimes literally check for yourself, but most things you read about are not like that.


>But you never checked the accuracy of your information before so

They didn't say that and that's not a fair or warranted extrapolation.

They're talking about a heuristic that we all use as a shorthand proxy, one that doesn't replace but can help steer the initial navigation in the selection of reliable sources, and which can be complemented with fact checking (see the steelmanning I did there?). I don't think using that heuristic can be interpreted as tantamount to completely ignoring facts, which is a ridiculous extrapolation.

I also think it misrepresents the lay of the land, which is that in the universe of nonfiction writing, I don't think there's a fire hose of facts and falsehoods indistinguishable in tone. I think there's in fact a reasonably high correlation between an impersonal, professional tone and credible information, which, again (since this seems to be a difficult sticking point), doesn't mean that tone substitutes for the facts, which still need to be verified.

The idea that information and misinformation are tonally indistinguishable is, in my experience, only something believed by post-truth "do your own research" people who think there are equally valid facts in all directions.

There's not, for instance, a Science Daily of equally sciency sounding misinformation. There's not a second different IPCC that publishes a report with thousands of citations which are all wrong, etc. Misinformation is out there but it's not symmetrical, and understanding that it's not symmetrical is an important aspect of information literacy.

This is important because it goes to their point, which is that something has changed with the advent of LLMs. That symmetry may be coming, and it's precisely the fact that it wasn't there before that is pivotal.


Interesting points! It doesn't sound impossible with an AI that's wrong less often than the average human author (if the AI's training data was well curated).

I suppose a related problem is that we can't know if the human who posted the article actually agrees with it themselves.

(Or if they clicked "Generate" and don't actually care, or even have different opinions)



