> who will have all their questions answered, correctly and in the way they can best understand
Highly unlikely, as the feedback cycle used to train LLMs will choke off all future learning.
In other words, if AI bots consume and regurgitate everything you publish on the internet, what is the incentive to publish anything? No one will read it except the bots. The training datasets will either become stale (no longer learning anything new, because nothing new and useful is being published) or actively poisoned (because only bad actors will bother to publish).
And the generation constantly fed mostly-correct information by AI will implicitly trust it, making poisoning of the models a high-value target.
Very few people will be left who understand how to think for themselves and have the motivation to do so. Even fewer will have both the motivation and the means to publish for others.