I'd rather say that LLMs live in a world that consists entirely of stories, nothing but words and their combinations. They have no other reality. So they are good at generating more stories that sit well with the stories they already know. But the stories are often imprecise, and sometimes contradictory, so they have to guess. Also, LLMs don't know how to count, but they know that two usually follows one, and that three is usually said to be larger than two, so they can speak in a way that mostly does not contradict this knowledge. And they can use tools to count, like a human who knows digits would use a calculator.
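To make the calculator analogy concrete, here's a minimal sketch of the tool-use pattern. The tool name, message shape, and dispatch function are all hypothetical, not any particular vendor's API; the point is only that the model emits a structured request and the host program computes the exact answer:

    # Sketch: the model asks for a tool instead of "guessing" the
    # arithmetic from word statistics. All names here are made up.

    def call_tool(name: str, args: dict) -> str:
        """Dispatch a tool call requested by the model."""
        if name == "calculator":
            # Exact arithmetic the model itself cannot do reliably.
            return str(eval(args["expression"], {"__builtins__": {}}))
        raise ValueError(f"unknown tool: {name}")

    # The model's reply is plain text until it requests a tool, e.g.:
    model_output = {"tool": "calculator", "args": {"expression": "1234 * 5678"}}
    result = call_tool(model_output["tool"], model_output["args"])
    print(result)  # 7006652 -- fed back into the conversation as context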
But much more than an arithmetic engine, the current crop of AI needs an epistemic engine: something that would help it follow logic, avoid contradictions, and determine what is a well-established fact and what is a shaky conjecture. Then we might start trusting the AI.
One night, I asked it to write me some stories, and it did seem happy doing that. I just kept saying "do what you want" whenever it asked me for a choice. It's a fun little way to spend a couple of hours.
This was true, but then it wasn't. Several years ago, the research world had a moment when the machinery could reliably solve multi-step problems (ones requiring intermediate results), and could solve problems in domains it was not specifically trained on. This caused a lot of excitement, and several hundred billion dollars in various investments. Since no one actually knows how all of it works, not even the builders, here we are.
This is a misconception: we absolutely do know how LLMs work; that's how we can write them and publish research papers about them.
The idea that we don't is tabloid journalism. It stems from the fact that the output is (usually) randomised, which is taken to mean, by those who lack the technical chops, that programmers "don't know how it works" because the output is nondeterministic.
This is notwithstanding that we can absolutely reproduce the output by turning off the randomisation (temperature 0).
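For the curious, here's a rough sketch of what temperature does during sampling. This is a toy illustration, not any particular model's implementation: the logits are divided by the temperature before the softmax, and temperature 0 is treated as greedy decoding, which always picks the single most likely token, so the same prompt yields the same output:

    import numpy as np

    def sample_token(logits: np.ndarray, temperature: float,
                     rng: np.random.Generator) -> int:
        """Pick the next token: random at high temperature, deterministic at 0."""
        if temperature == 0:
            # Greedy decoding: always the single most likely token,
            # so the same prompt gives the same output every time.
            return int(np.argmax(logits))
        # Softmax over temperature-scaled logits, then sample.
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())  # subtract max for stability
        probs /= probs.sum()
        return int(rng.choice(len(logits), p=probs))

    logits = np.array([2.0, 1.0, 0.5])     # toy next-token scores
    rng = np.random.default_rng(0)
    print(sample_token(logits, 0.0, rng))  # always index 0
    print(sample_token(logits, 1.0, rng))  # varies run to run without a fixed seed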