But nothing has changed there. People were posting intelligent-sounding gibberish on social media and blogs for years before LLMs came along.
The problem with centralisation isn’t that it gobbles up data. It’s that it allows those weights to be dictated by a small few who might choose to skew the model towards whatever messaging they want to promote.
And this is a genuine concern. But it’s not a new problem either. We already have it with news broadcasters, newspaper publications, social media ethics teams, and so on and so forth.
The new problem LLMs bring to human interaction isn’t any of the issues described above. It’s with LLMs replacing human contact in situations where you need something with a conscience to step in.
For example, long conversations can end up with the AI reinforcing negative thoughts in people with mental health problems: as the chat history starts to overwhelm the context window, the system prompt does a progressively poorer job of steering the conversation away from dangerous topics like suicide.
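To make that failure mode concrete, here’s a rough sketch of why a fixed system prompt gets diluted as the conversation grows. It’s purely illustrative, assuming a naive token-budget assembler with made-up numbers and a fake tokenizer, not any vendor’s actual implementation:

    # Purely illustrative: a naive context assembler with a fixed token
    # budget (MAX_TOKENS and the whitespace "tokenizer" are made up).
    MAX_TOKENS = 8000

    def count_tokens(text: str) -> int:
        return len(text.split())  # crude stand-in for a real tokenizer

    def build_context(system_prompt: str, history: list[str]) -> list[str]:
        # Always keep the system prompt, then fill the remaining budget
        # with the most recent turns; older turns are silently dropped.
        budget = MAX_TOKENS - count_tokens(system_prompt)
        kept = []
        for turn in reversed(history):
            cost = count_tokens(turn)
            if cost > budget:
                break
            kept.append(turn)
            budget -= cost
        return [system_prompt] + list(reversed(kept))

    # After enough turns the safety instructions are one short block buried
    # under thousands of tokens of user-steered conversation.

In this toy version the system prompt is never lost outright, but it becomes a tiny fraction of what the model actually attends to, which is roughly the dilution effect described above.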
This isn’t to say that the points you’ve raised aren’t real problems. They definitely are. But they’ve also always existed, even before GPT was invented. We’ve just never properly addressed them, because:
either there’s no incentive to: if you’re powerful enough to control the narrative, why would you use that power to turn the narrative against you?
…or there simply isn’t a good way of solving it. For example, I might hate stupid conspiracy theories, but censoring research is a much worse alternative. So we just have to let nutters share their dumb ideas in the hope that enough legitimate research gets published, and enough people are sensible enough to read it, that the nutters don’t have any meaningful impact on society.