The practical concern of Linux developers regarding responsibility is not being able to ban the author; it's that the author should take ongoing care of his contribution.
In a court case the responsible party could very well be the Linux Foundation, because this is a foreseeable consequence of allowing AI contributions. There's no reasonable way for a human to make that kind of guarantee while using AI-generated code.
It’s not about the mechanism: responsibility is a social construct, it works the way people say that it works. If we all agree that a human can agree to bear the responsibility for AI outputs, and face any consequences resulting from those outputs, then that’s the whole shebang.
Sure we could change the law. It would be a stupid change to allow individuals, organizations, and companies to completely shield themselves from the consequences of risky behaviors (more than we already do) simply by assigning all liability to a fall guy.
Right now it's very easy not to infringe on copyrighted code if you write the code yourself. In the vast majority of cases, if you infringed it's because you did something wrong that you could have prevented (and in the case where you didn't do anything wrong, independent creation is an affirmative defense against copyright infringement).
That is not the case when using AI generated code. There is no way to use it without the chance of introducing infringing code.
Because of that, if you tell a user they can use AI-generated code and they introduce infringing code, that was a foreseeable outcome of your action. In the case where you are the owner of a company, or the head of an organization that benefits from contributors using AI code, your company or organization could be liable.
LLMs are not persons, not even legal ones (which itself is a massive hack causing massive issues such as using corporate finances for political gain).
A human has moral value; a text model does not. A human is limited in both available time and memory; a model of text is not. I don't see why comparisons to humans have any relevance. Just because a human can do something does not mean machines run by corporations should be able to do it en masse.
The rules of copyright allow humans to do certain things because:
- Learning enriches the human.
- Once a human consumes information, he can't willingly forget it.
- It is impossible to prove how much a human-created intellectual work is based on others.
With LLMs:
- Training (let's not anthropomorphize: lossily-compressing input data by detecting and extracting patterns) enriches only the corporation which owns it.
- It's perfectly possible to create a model based only on content with specific licenses or only public domain.
- It's possible to trace every single output byte to quantifiable influences from every single input byte. It's just not an interesting line of inquiry for the corporations benefiting from the legal gray area.
Pricing for Mythos Preview is $25/$125 per million input/output tokens. This makes it 5X more expensive than Opus but actually cheaper than GPT 5.4 Pro.
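The "5X" claim is easy to sanity-check. A minimal sketch, assuming Opus pricing of $5/$25 per million input/output tokens (that figure is not stated above; it's an assumption back-derived from the 5X claim):

```python
# Per-million-token prices; the Opus figures are assumed, not quoted above.
mythos = {"input": 25.0, "output": 125.0}
opus = {"input": 5.0, "output": 25.0}

for kind in ("input", "output"):
    ratio = mythos[kind] / opus[kind]
    print(f"{kind}: {ratio:.0f}x")  # 5x on both input and output
```

Under that assumption the 5X multiple holds for both directions of traffic, which matters because agentic workloads are usually output-heavy.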
I use Claude Code extensively and haven't noticed this. But I don't have it doing long-running complex work like OP. My team always breaks things down in a very structured way, with a human reviewing each step along the way. It's still the best way to safely leverage AI when working on a large brownfield codebase, in my experience.
Edit: the main issues being called out are the lack of thinking and the tendency to edit without researching first. Both are counteracted by the explicit research and plan steps we use, which explains why we haven't noticed this.
This is a really good idea, especially if the chunks are marked as read as you're guided through them, so you can validate you've seen all the code by the end.
Not that I'm entirely onboard with it, but often you don't have a channel to communicate with "the people who can change the machine", only the cogs in the machine.
It gives you satisfaction. That's the whole value and it can be worth a lot to not hold bitterness long after the problem has passed. I agree with your parent. The cogs are part of the machine, they don't deserve any sympathy just because they chose to do bad things for money any more than a robber deserves sympathy because he's poor.
> The cogs are part of the machine, they don't deserve any sympathy just because they chose to do bad things for money
It's a bit of a stretch to say that someone who enforces the rules around disability benefits for a job is doing bad things for money. Those same rules filter out a lot of scammers who, if not stopped, would mean less money going to the right people.
It's also a low-skill, low-pay job, probably worked largely by people who are close to the poverty line and just trying to make ends meet to support a family.
Depends on your goal. If you want a better machine maybe hating the cogs doesn't help.
If your goal is to not have a machine at all for some particular thing, then no one being willing to work a job that does that thing might be an effective way of stopping the machine from doing it.
Although inconveniencing bureaucrats handling disability benefits is probably a poor starting point no matter what your opinion is.
I vibe coded a saas and it went nowhere because it wasn't a good enough idea to begin with. I consulted with multiple varied models along the way for competitive analysis, pricing structure etc.
AI doesn't solve for ideas and product market fit. But it did allow me to fail pretty fast before I sunk too much time into it. But also, I should have spoken to potential users earlier rather than vibe coding.
Why not just make some AI-generated user personas to talk to? Whatever their opinion is, it's already been captured and is in the training data. You don't need to talk to users.
Because AI is way too sycophantic still. The real audio engineers I spoke to said they run into this problem maybe once every few months and even then only when working for particular clients. Too much friction to purchase one offs, not enough pain for a monthly subscription.
Good idea, and an improvement, but you still have the fundamental issue: you don't really know what code has been written. You don't know that the refactors are right, in alignment with existing patterns, etc.