What fascinates me about your comment is that you are saying you trusted what you read before. For me, LLMs don't change anything. I already questioned the information before and continue to do so.
Why do you think that you could trust what you read before?
Is it now harder for you to distinguish false information, and if so, why?
In the past, you had to put in a lot of effort to produce a text that seemed to be high quality, especially when you knew nothing about the subject. From the look of the text and the choice of words, you could tell how professional the writer was, and you had some confidence that the writer knew something about the subject. Now that is completely gone. There is no easy filter anymore.
While professional-looking text could already have been wrong, the likelihood was smaller, since you usually needed to know at least something in order to write convincing text.
Writing a text of decent quality used to constitute proof of work. This is now no longer the case, and we haven't adapted to this assumption becoming invalid.
For example, when applying for a job, your cover letter used to count as proof of work. The contents are less important than the fact that you put some amount of effort into it, enough to prove that you care about this specific vacancy. Now that basic assumption has evaporated, and job searching has become a meaningless two-way spam war, where having your AI-generated application selected from hundreds or thousands of other AI-generated applications is little more than a lottery.
This. I am still very picky about how I use ML, but it is unsurpassed as a virtual editor. It can clean up grammar and rephrase things in a very light way, and it gives my prose the polish I want. The thing is, I am a very decent writer. I wrote professionally for 18 years; delivering high-quality reports was part of my job. So it really helps that I know exactly what “good” looks like by my standards. ML can clean things up much faster than I can, and I am confident my writing is still organic: it fixes small issues, finds mistakes, etc. very quickly. A word change here or there, some punctuation: that is normal editing. It is genuinely good at light rephrasing as well, if you have some idea of the intent you want.
Where it becomes obvious, though, is when people let the LLM do the writing for them. The job search bit is definitely rough. Referrals, references, and actual accomplishments may become even more important.
As usual, LLMs are an excellent tool when you already have a decent understanding of the field you're using them in. Which is not the case for people posting on social media or creating their first programs. That's where the dullness and noise come from.
The noise floor has been raised 100x by LLMs. It was already bad before, but they have accelerated the trend.
So, yes, we never should have trusted anything online, but before LLMs we could rely on our brains to quickly identify the bad. Nowadays, it's exhausting. Maybe we need an LLM trained on spotting LLMs.
This month, I, with decades of experience, used Claude Dev as an experiment to create a small automation tool. After countless manual fixes, it finally worked and I was happy. Until I gave the whole thing a decent look again and realized what a piece of garbage I had created. It's exhausting to be on the lookout for these situations. I prefer to think things through myself; it's a more rewarding experience with better end results anyway.
Not to sound too dismissive, but there is a distinct learning curve when it comes to using models like Claude for code assistance: not just the intuition for when the model goes off the rails, but also what to provide in the context, how and what to ask for, etc. Trying it once and dismissing it is maybe not the best experimental setup.
I've been using Zed recently with its LLM integration to assist me in my development, and it's been absolutely wonderful, but one must tightly control what to present to the model, what to ask for, and how.
LLMs are a great onramp to filling in knowledge that may have been lost to age or updated to its modern classification. For example, I didn't know Hokkien and Hakka are distinct branches within the Sino-Tibetan language family, which warrants more (personal) research into the subject. And all this time, without the internet, we often just colloquially called it Taiwanese.
How often do you go back to your hard-copy encyclopedias only to find that whatever knowledge you absorbed has already been deprecated? Or that information on Wikipedia may have changed without notice, may never have been reviewed, or, dare I say, may carry a political bias?
Maybe I should have worded it better as a "beginner" or "intermediate" knowledge onramp and/or filler. For example, I have asked it on occasion to translate into traditional Mandarin in parallel for every English response. It helps tremendously in trying to rebuild that bridge that may have been burned long ago.
It is lost in the sense that you had no idea such a possibility existed and would not have known to search for it in the first place, while I believe that in this case the LLM brought it up as a side note.
Such fortuitous stumblings happen all the time without LLMs (and in regular libraries, for those brave enough to use them). It's just the natural byproduct of doing any kind of research.
Most of my knowledge comes from physical encyclopedias and a downloaded Wikipedia text dump (the internet was not readily available). You search for one thing and just explore by clicking.
This is my go-to process whenever I write anything now:
1. I use dictation software to get my thoughts out as a stream of consciousness.
2. Then, I have ChatGPT or Claude refine it into something coherent based on a prompt of what I'm aiming for (a rough sketch of this step is below).
3. Finally, I review the result and make edits where needed to ensure it matches what I want.
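For what it's worth, step 2 can be a single API call. Here is a minimal sketch of how I'd wire it up, assuming the OpenAI Python SDK and an OPENAI_API_KEY in the environment; the model name and prompt wording are placeholders, not a recommendation:

    # Step 2 sketch: refine dictated text with an LLM.
    # Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set.
    # The model name and prompt wording are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()

    def refine(dictated: str, goal: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat model works
            messages=[
                {"role": "system",
                 "content": "Rewrite the user's dictated notes into coherent prose. "
                            f"Goal: {goal}. Preserve meaning and voice; "
                            "fix grammar and flow only."},
                {"role": "user", "content": dictated},
            ],
        )
        return response.choices[0].message.content

    # Step 3 is still manual: print the draft, then review and edit by hand.
    print(refine("so we shipped friday and uh tests were green but docs lag behind",
                 "a short status update for the team"))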
This method has easily boosted my output by 10x, and I'd argue the quality is even better than before. As a non-native English speaker, this approach helps a lot with clarity and fluency. I'm not a great writer to begin with, so the improvement is noticeable. At the end of the day, I’m just a developer, so what can I say?
Yeah, this is how I use it too. I tend to be a very dry writer, which isn't unusual in science, but lately I've taken to writing, then asking an LLM to suggest improvements.
I know not to trust it to be as precise as good research papers need to be, so I don't take its output verbatim. It usually helps me reorder points or use different transitions, which make the material much more enjoyable to read. I also find it useful for coming up with an opening sentence from which to start writing a section.
A great opportunity to get ahead of all the lazy people who use AI for a cover letter: do a video! Sure, AI will be able to do that soon, but then those of us who aren't lazy and actually care will come up with something even more personal!
> While the professional looking text could have been already wrong, the likelihood was smaller...
I don't criticise you for it, because that strategy is both rational and popular. But you never checked the accuracy of your information before, so you have no way of telling whether it has gotten more or less accurate with the advent of AI. You were testing whether someone of high social intelligence wanted you to believe what they said, rather than whether what they said was true.
I guess the complaint is about losing this proxy to gain some assurance for little cost. We humans are great at figuring out the least amount of work that's good enough.
Now we'll need to be fully diligent, which means more work, and also there'll be way more things to review.
There’s not enough time in the day to run a full-bore research project on every sentence I read, so it’s not physically possible to be “fully diligent.”
The best we can hope for is prioritizing which things are worth checking. But even that gets harder because you go looking for sources and now those are increasingly likely to be LLM spam.
Traditionally, humans have addressed the imbalance between energy-to-generate and energy-to-validate by building another system on top, such as one which punishes fraudsters or at least allows other individuals to efficiently disassociate from them.
Unfortunately it's not clear how this could be adapted to the internet and international commerce without harming some of the openness we'd like to keep.
I'd argue people clearly don't care about the truth at all - they care about being part of a group, and that is where it ends. It shows up in things like critical thinking being a difficult skill acquired slowly vs. social proof, which humans do by reflex. It makes a lot of sense: if there are 10 of us and 1 of you, it doesn't matter how smartypants you may be when the mob forms.
AI does indeed threaten people's ability to identify whether they are reading work by a high status human and what the group consensus is - and that is a real problem for most people. But it has no bearing on how correct information was in the past vs will be in the future. Groups are smart but they get a lot of stuff wrong in strategic ways (it is almost a truism that no group ever identifies itself or its pursuit of its own interests as the problem).
> I'd argue people clearly don't care about the truth at all
Plenty of people care about the truth in order to get advantages over the ignorant. Beliefs aren't just about fitting into a group; they are about getting advantages and making your life better. If you know the truth, you can make much better decisions than those who are ignorant.
Similarly plenty of people try to hide the truth in order to keep people ignorant so they can be exploited.
> if you know the truth you can make much better decisions than those who are ignorant
There are some fallacious hidden assumptions there. One is that "knowing the truth" equates to better life outcomes. I'd argue that history shows, more often than not, that what one holds to be true had best align with the prevailing consensus if comfort, prosperity, and peace are one's goals, even if that consensus is flat-out wrong. The list is long of lone geniuses who challenged the consensus and suffered. Galileo, Turing, Einstein, Mendel, van Gogh, Darwin, Lovelace, Boltzmann, Gödel, Faraday, Kant, Poe, Thoreau, Bohr, Tesla, Kepler, Copernicus, et al. all suffered isolation and marginalization to some degree during their lifetimes, some unrecognized until after their deaths, many living in poverty, many actively tormented. I can't see how Turing, for instance, had a better life than the ignorant who persecuted him, despite his excellent grasp of truth.
You are thinking too big. Most of the time the truth is whether a piece of food is spoiled or not, and that greatly affects your quality of life. Companies would love to keep you ignorant here so they can sell you literal shit, so there are powerful forces wanting to keep you in the dark, and today those forces have far stronger tools than ever before.
You're implying that there is an absolute Truth and that people only need to do [what?] to check if something is True. But that's not True. We only have models of how reality works, and every model is wrong - but some are useful.
When dealing with almost everything you do day by day, you have to rely on the credibility of the source of the information you have. Otherwise how could you know that the can of tuna you're going to eat is actually tuna and not some venomous fish? How do you know that you should do what your doctor told you? Etc. etc.
> You're implying that there is an absolute Truth and that people only need to do [what?] to check if something is True. But that's not True. We only have models of how reality works, and every model is wrong - but some are useful.
I am not sure I am following - you don't know if there is anything that is really true, but you presume there isn't and that model of "the only truth is the absence of truth" is useful to you because it allows you to ... what exactly?
My new cheap proxy to save mental cost: pay for search on Kagi and sort results by tracker count. My hope is that fewer trackers correlates with lower incentive to SEO-spam. This may change, but it seems to work decently for now.
In the past, with a printed book or journal article, it was safe to assume that an editor had been involved, challenging claimed facts to some degree or another, and that the publisher had an interest in maintaining its reputation by not publishing poorly researched or outright false information. In many cases you would also have reviewers reading and reacting to the book.
All of that is gone now. You have LLMs spitting their excrement directly onto the web without so much as a human giving it a once-over.
How do you "check the accuracy of your information" if all the other reliable-sounding sources could also be AI generated junk? If it's something in computing, like whether something compiles, you can sometimes literally check for yourself, but most things you read about are not like that.
>But you never checked the accuracy of your information before so
They didn't say that and that's not a fair or warranted extrapolation.
They're talking about a heuristic that we all use, as a shorthand proxy that doesn't replace but can help steer the initial navigation in the selection of reliable sources, which can be complemented with fact checking (see the steelmanning I did there?). I don't think someone using that heuristic can be interpreted as tantamount to completely ignoring facts, which is a ridiculous extrapolation.
I also think it misrepresents the lay of the land, which is that in the universe of nonfiction writing, there isn't a fire hose of facts and falsehoods indistinguishable in tone. I think there is, in fact, a reasonably high correlation between an impersonal, professional, discernible tone and credible information, which, again (since this seems to be a difficult sticking point), doesn't mean that tone substitutes for the facts, which still need to be verified.
The idea that information and misinformation are tonally indistinguishable is, in my experience, only something believed by post-truth "do your own research" people who think there are equally valid facts in all directions.
There's not, for instance, a Science Daily of equally sciency sounding misinformation. There's not a second different IPCC that publishes a report with thousands of citations which are all wrong, etc. Misinformation is out there but it's not symmetrical, and understanding that it's not symmetrical is an important aspect of information literacy.
This is important because it goes to their point, which is that something has changed with the advent of LLMs. That symmetry may be coming, and it's precisely the fact that it wasn't there before that is pivotal.
Interesting points! It doesn't sound impossible with an AI that's wrong less often than the average human author (if the AI's training data was well curated).
I suppose a related problem is that we can't know if the human who posted the article, actually agrees with it themselves.
(Or if they clicked "Generate" and don't actually care, or even have different opinions)
I think you overestimate the value of things looking professional. The overwhelming majority of books published every year are trash, despite all the effort that went into researching, writing, and editing them. Most news is trash. Most of what humanity produces just isn't any good. A top expert in his field can dash off a typo-riddled comment that contains more valuable information than a shelf of books written on the subject by lesser minds.
AIs are good at writing professional looking text because it's a low bar to clear. It doesn't require much intelligence or expertise.
> AIs are good at writing professional looking text because it's a low bar to clear. It doesn't require much intelligence or expertise.
AIs are getting good at precisely imitating your voice from a single reference sample, generating original music, and creating video with all sorts of impossible physics and special effects. By your rationale, nothing “requires much intelligence or expertise,” which is patently false (even for writing text).
My point is that writing a good book is vastly more difficult than writing a mediocre book. The distance between incoherent babble and a mediocre book is smaller than the distance between a mediocre book and a great book. Most people can write professional looking text just by putting in a little bit of effort.
I think you underestimate how high that bar is, but I will grant that it isn’t that high. It can be a form of sophistry all of its own. Still, it is a difficult skill to write clearly, simply, and without a lot of extravagant words.
> While the professional looking text could have been already wrong, the likelihood was smaller, since you usually needed to know something at least in order to write convincing text.
Although there were already tons of "technical influencers" before who excelled at writing but didn't know deeply what they were writing about.
They give a superficially smart look, but really they regurgitate without deep understanding.
>In the past, you had to put a lot of effort to produce a text which seemed to be high quality, especially when you knew nothing about the subject. By the look of text and the usage of the words, you could tell how professional the writer was and you had some confidence that the writer knew something about the subject. Now, that is completely removed. There is no easy filter anymore.
That is pretty much true for other media as well, such as audio and video. Before digital tools became mainstream, photos were developed in the darkroom and film was actually cut with scissors. A lot of effort was put into producing the final product. AI has really commoditized many brain-related tasks. We must recognize the fragile nature of digital tech and still learn how to do these things ourselves.
It's obvious when text has been produced by ChatGPT with the default prompt, but there's probably loads of text on the internet that doesn't follow the AI's usual prose style and blends in well.
Even when I try other variations of prompts or writing styles, there's always this sense of "perfectness": all the paragraph lengths are too uniform, and the overall length and style have that same feel.
>> While the professional looking text could have been already wrong, the likelihood was smaller, since you usually needed to know something at least in order to write convincing text.
...or... the likelihood of text being really wrong pre-LLMs was worse, because you needed to be a well-capitalized player to push your thoughts into public discourse. Just look at our global conflicts and you see how much they are driven by well-planned lobbying, PR, and... money. That is not new.
> By the look of text and the usage of the words, you could tell how professional the writer was and you had some confidence that the writer knew something about the subject
How did you know this unless you also had the same or more knowledge than the author?
It would seem we are as clueless now as before about how to judge how skilled a writer is without already possessing that very skill ourselves.
Reading was a form of connecting with someone. Their opinions are bound to be flawed, everyone's are - but they're still the thoughts and words of a person.
This is no longer the case. The human factor is gone, and that diminishes the experience for some of us, me included.
This is exactly what’s at stake. I heard an artist say one time that he’d rather listen to Bob Dylan miss a note than listen to a song that had all the imperfections engineered out of it.
They're not connecting to the autotune, but to the artist. People have a lot of opinions about Taylor Swift's music but "not being personal enough" is definitely not a common one.
If you wanna advocate for unplugged music being more gratifying, I don't disagree, but acting like the autotune is what people are getting out of Taylor Swift songs is goofy.
I have no idea about Taylor Swift, so I'll ask in general: can't we have a human showing an autotuned personality? You are what you are in private, but in interviews you focus on things suggested by your AI counselor, and your lyrics are fine-tuned by AI, all to present a more marketable personality. Maybe that's the autotune we should worry about. Again, nothing new (looking at you, Village People), but nowadays the potential, powered by AI, is many orders of magnitude higher. You could say this works only until the fans catch wind of it, true, but by that time the next figure shows up, and so on. I'm not sure where this arms race can lead us, because acceptance levels are shifting too: what we reject today as unacceptable lies could be fine tomorrow. Look already at the AI influencers doing a decent job while being overtly fake.
I’m convinced it’s already being done, or at least played with. Lots of public figures only speak through a teleprompter. It would be easy to put a fine tuned LLM on the other side of that teleprompter where even unscripted questions can be met with scripted answers.
I think the key thing here is equating trust and truth. I trust my dog, a lot, more than most humans frankly. She has some of my highest levels of trust attainable, yet I don’t exactly equate her actions with truth. She often barks when there’s no one at the door or at false threats she doesn’t know aren’t real threats and so on. But I trust she believes it 100% and thinks she’s helping me 100%.
What I think the OP was saying, and I agree with, is that connection: knowing that no matter what was said, how flawed it was, or what motive someone had, I trusted there was a human producing the words. I could guess at and reason the other factors away. Now I don't always know if that is the case.
If you’ve ever played a multiplayer game, most of the enjoyable experience for me is playing other humans. We’ve had good game AIs in many domains for years, sometimes difficult to distinguish from humans, but I always lost interest if I didn’t know I was in fact playing and connecting with another human. If it’s just some automated system, I could do that any hour of the day as much as I want, but it lacks the human connection element: the flaws, the emotion, the connection. If you can reproduce that, then maybe it would be enjoyable, but that sort of substance has meaning to many.
It’s interesting to see a calculator quickly spit out correct complex arithmetic but when you see a human do it, it’s more impressive or at least interesting, because you know the natural capability is lower and that they’re flawed just like you are.
For me, the problem has gone from “figure out the author’s agenda” to “figure out whether this is a meaningful text at all,” because gibberish now looks a whole lot more like meaning than it used to.
This has been a problem on the internet for the past decade, if not longer, with all of the SEO nonsense. If anything, maybe it's going to be ever so slightly more readable now.
I don't know what you're talking about. Most people don't think of SEO, Search Engine Optimization, Search Performance, Search Engine Relevance, Search Rankings, Result Page Optimization, or Result Performance when writing their Article, Articles, Internet Articles, News Articles, Current News, Press Release, or News Updates...
Perhaps "trust" was a bit misplaced here, but I think we can all agree on the idea: Before LLMs, there was intelligence behind text, and now there's not. The I in LLM stands for intelligence, as written in one blog. Maybe the text never was true, but at least it made sense given some agenda. And as others have pointed out, the usual signals of text style and vocabulary that could be used to identify expertise or an agenda are gone.
> Perhaps "trust" was a bit misplaced here, but I think we can all agree on the idea: Before LLMs, there was intelligence behind text, and now there's not. The I in LLM stands for intelligence, as written in one blog. Maybe the text never was true, but at least it made sense given some agenda.
Nope. A lot of people just wrote stuff. There were always plenty of word salad blogs (and arguably entire philosophy journals) out there.
Scale makes all the difference. Society without trust falls apart. It's good if some people doubt some things, but if everyone necessarily must doubt everything, it's anarchy.
A good part of society, the foundational part, is trust. Trust between individuals, but also trust in the sense that we expect things to behave in a certain way. We trust things like currencies despite their flaws. Our world is too complex to reinvent the wheel whenever we need to do a transaction. We must believe enough in a make-believe system to avoid perpetual collapse.
Perhaps that anarchy is the exact thing we need to convince everyone to revolt against big tech firms like Google and OpenAI and take them down by mob rule.
Propaganda works by repeating the same thing in different forms. Now it is easier to produce different forms of the same thing; hence, more propaganda. Also, it is much easier to influence whatever people write by influencing the tool they use to write.
Imagine that AI tools sway generated sentences to be slightly closer, in summarisation space, to the phrase "eat dirt", or any phrase at all. What would happen?
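To make the hypothetical concrete, here is a toy sketch of one way such a bias could work, by reranking a generator's candidate outputs toward a target phrase in embedding space. This is purely my own illustration, not something any real tool is known to do; it assumes the sentence-transformers library, and the model, weight, candidates, and scores are all arbitrary:

    # Toy sketch: nudge output selection toward a target phrase in embedding space.
    # Assumes sentence-transformers (pip install sentence-transformers); the model,
    # candidates, base scores, and weight are all illustrative.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")
    target = model.encode("eat dirt")

    def steer(candidates, base_scores, weight=0.3):
        # Combine the generator's own preference with similarity to the target,
        # so the bias stays subtle rather than overt.
        embs = model.encode(candidates)
        sims = embs @ target / (np.linalg.norm(embs, axis=1) * np.linalg.norm(target))
        return candidates[int(np.argmax(np.asarray(base_scores) + weight * sims))]

    print(steer(["The soil here is rich and dark.", "Try our new gardening kit."],
                base_scores=[0.80, 0.82]))

A small weight like this would be invisible in any single output; it would only show up as a statistical drift across millions of generations, which is exactly what makes the scenario unsettling.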
I think it is a totally different threat. Excluding adversarial behavior, humans usually produce information with a quality level that is homogeneous (from homogeneously sloppy to homogeneously rigorous).
AI, on the other hand, can produce texts that are globally quite accurate, with some totally random hallucinations here and there. That makes them much harder to identify.
There are topics on which you should be somewhat suspicious of anything you read, but also many topics where it is simply improbable that anyone would spend time maliciously coming up with a lie. However, they may well have spicy autocomplete imagine something for them. An example from a few days ago: https://news.ycombinator.com/item?id=41645282
> For me, LLMs don't change anything. I already questioned the information before and continue to do so.
I also did, but LLMs have increased the volume of content, which forces my brain to first try to identify whether the content is LLM-generated. That consumes a lot of energy and leaves my brain even less focused, because its primary goal is now skimming quickly to identify, instead of absorbing first and then analyzing the information.
The web being polluted only makes me ignore more of it.
You already know some of the more trustworthy sources of information; you don't need to read a random blog, which would require a lot more effort to verify.
Even here on hackernews, I ignore like 90% of the spam people post. A lot of posts here are extremely low effort blogs adding zero value to anything, and I don't even want to think whether someone wasted their own time writing that or used some LLM, it's worthless in both cases.
There's a quantity argument to be made here: before, it used to be hard to generate large amounts of plausible but incorrect text. Now it is easy. Similar to surveillance before and after smartphones and the internet: you used to need a person following you, versus just soaking up all the data on the backbone.
Debunking bullshit inherently takes more effort than generating bullshit, so the human factor is normally your big force multiplier. Does this person seem trustworthy? What else have they done, who have they worked with, what hidden motivations or biases might they have, are their vibes /off/ to your acute social monkey senses?
However with AI anyone can generate absurd torrential flows of bullshit at a rate where, with your finite human time and energy, the only winning move is to reject out of hand any piece of media that you can sniff out as AI. It's a solution that's imperfect, but workable, when you're swimming through a sea of slop.
Debugging is harder than writing code. Once the code passes the linter, the compiler, and the tests, the remaining bugs tend to be subtle logic errors that require more effort and intelligence to find.
We are all becoming QA of this super automated world.
Maybe the debunking AIs can match the bullshit generating AIs, and we will have balance in the force. Everyone is focused on the generative AIs, it seems.
No, they can't. They'll still be randomly deciding if something is fake or not, so they'll only have a probability of being correct, like all nondeterministic AI.
There was a degree of proof of work involved. Text took human effort to create, and this roughly constrained the quantity and quality of misinforming text to the number of humans with motive to expend sufficient effort to misinform. Now superficially indistinguishable text can be created by an investment in flops, which are fungible. This means that the constraint on the amount of misinforming text instead scales with whatever money is resourced to the task of generating misinforming text. If misinforming text can generate value for someone that can be translated back into money, the generation of misinforming text can be scaled to saturation and full extraction of that value.
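A back-of-envelope calculation makes that scaling vivid. Every number below is a rough assumption of mine, not a sourced figure; real prices and writing rates vary widely:

    # Back-of-envelope: words of text per dollar, generated vs. human-written.
    # Every constant here is a rough, illustrative assumption; real prices vary.
    PRICE_PER_M_TOKENS = 2.00      # assumed USD per million LLM output tokens
    WORDS_PER_TOKEN = 0.75         # common rule of thumb for English text
    HUMAN_WORDS_PER_DAY = 2_000    # assumed output of one paid writer
    HUMAN_COST_PER_DAY = 200.00    # assumed daily wage

    budget = 10_000.00  # USD
    llm_words = budget / PRICE_PER_M_TOKENS * 1_000_000 * WORDS_PER_TOKEN
    human_words = budget / HUMAN_COST_PER_DAY * HUMAN_WORDS_PER_DAY
    print(f"LLM:   {llm_words:>15,.0f} words")   # 3,750,000,000
    print(f"Human: {human_words:>15,.0f} words") # 100,000

Under these assumptions the flop-buyer out-produces the hired human by four to five orders of magnitude, which is the saturation dynamic described above.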
How do you like questioning much more of it, much more frequently, from many more sources? And mistrusting it in new ways. AI and regular people are not wrong in the same ways, nor for the same reasons, and now you must track this too, increasingly.
It’s nothing to do with trusting in terms of being true or false. Whatever I read before, I felt like, well, it can be good or bad, I can judge it, but whatever it is, somebody wrote it. It’s their work. Now when I read something, I have absolutely no idea whether the person wrote it, what percentage of it they wrote, or how much they even had to think before publishing it. Anyone can publish a perfectly well-written piece of text about any topic whatsoever, and I can’t wrap my head around why, but it feels like a complete waste of time to read anything. Like… it’s all just garbage, I don’t know.
> you trusted what you read before. For me, LLMs don't change anything. I already questioned the information before and continue to do so. [...]
Why do you think that you could trust what you read before?
A human communicator is, in a sense, testifying when communicating. Humans have skin in the social game.
We try to educate people, we do want people to be well-informed and to think critically about what they read and hear. In the marketplace of information, we tend very strongly to trust non-delusional, non-hallucinating members of society. Human society is a social-confidence network.
In social media, where there is a cloak of anonymity (or obscurity), people may behave very badly. But they are usually full of excuses when the cloak is torn away; they are usually remarkably contrite before a judge.
A human communicator can face social, legal, and economic consequences for false testimony. Humans in a corporation, and the corporation itself, may be held accountable. They may allocate large sums of money to their defence, but reputation has value and their defence is not without social cost and monetary cost.
It is literally less effort at every scale to consult a trusted and trustworthy source of information.
It is literally more effort at every scale to feed oneself untrustworthy communication.
I read the original comment not as a lament about not being able to trust the content; rather, they are lamenting the fact that AI/LLM-generated content has no more thought or effort put into it than a cheap microwave dinner purchased from Walmart. Yes, it fills the gut with calories, but it lacks taste.
On second thought, perhaps AI/LLM generated content is better illustrated with it being like eating the regurgitated sludge called cud. Nothing new, but it fills the gut.
There were news reports that Russia spent less than a million dollars on a massive propaganda campaign targeting U.S. elections and the American population in general.
Do you think it would be possible before internet, before AI?
Bad actors, poorly written or poorly sourced information, sensationalism, etc. have always existed. None of this is new. What is new is the scale, speed, and low cost of making and spreading poor-quality stuff now.
All one needs today is a laptop and an internet connection and a few hours, they can wreak havoc. In the past, you'd need TV or newspapers to spread bad (and good) stuff - they were expensive, time consuming to produce and had limited reach.
Some woman’s cat was hiding in her basement. She automatically assumed her Haitian neighbors had stolen her cat and made a comment about it, which landed on Facebook and got morphed into an “immigrants eating pets” story; JD Vance picked it up, and Trump mentioned it in a national debate watched by 65 million people. All of this happened in a few days, and it resulted in violence in Springfield.
If you can place a rumor or lie in front of the right person/people to amplify, it will be amplified. It will spread like wildfire, and by the time it is fact checked, it will have done at least some damage.
These successful manipulation stories are extremely rare though. What usually happens is that you say your neighbour ate your cat, then everyone laughs at you.
Did the person who posted do the manipulation, or did JD Vance and Donald Trump do it?
It's that you trusted that what you read came from a human being. Back in the day I used to spend hours reading Evolution vs Creationism debates online. I didn't "trust" the veracity of half of what I read, but that didn't mean I didn't want to read it. I liked reading it because it came from people. I would never want to read AI regurgitation of these arguments.
> I already questioned the information before and continue to do so.
You might question new information, but you certainly do not actually verify it. So all you can hope to do is sense-checking - if something doesn't sound plausible, you assume it isn't true.
This depends on having two things: having trustworthy sources at all, and being able to relatively easily distinguish between junk info and real thorough research. AI is a very easy way for previously-trustworthy sources to sneak in utter disinformation without necessarily changing tone much. That makes it much easier for the info to sneak past your senses than previously.
If one spends many years reading a lot of material, they come to the conclusion that most of it cannot be trusted. But it takes many years and a lot of material to see it.