afro88's comments | Hacker News

Same as if a regular person did it. They are responsible for it. If you're using AI, check that the code doesn't violate licenses.

How could you do that, though? You can't guarantee that there aren't chunks of copied code that infringe.

Let me introduce you to the concept of submarine patents...

But the responsible party is still the human who added the code. Not the tool that helped do so.

The practical concern of Linux developers regarding responsibility is not being able to ban the author; it's that the author should take ongoing care of their contribution.

That's not going to shield the Linux organization.

In a court case the responsible party could very well be the Linux Foundation, because this is a foreseeable consequence of allowing AI contributions. There's no reasonable way for a human to make such a guarantee while using AI-generated code.

It’s not about the mechanism: responsibility is a social construct, it works the way people say that it works. If we all agree that a human can agree to bear the responsibility for AI outputs, and face any consequences resulting from those outputs, then that’s the whole shebang.

Sure we could change the law. It would be a stupid change to allow individuals, organizations, and companies to completely shield themselves from the consequences of risky behaviors (more than we already do) simply by assigning all liability to a fall guy.

What law exactly are you suggesting needs to be changed? How is this any different from what already happens right now, today?

Right now it's very easy not to infringe on copyrighted code if you write the code yourself. In the vast majority of cases, if you infringed it's because you did something wrong that you could have prevented (and in the case where you didn't do anything wrong, independent creation is an affirmative defense against copyright infringement).

That is not the case when using AI generated code. There is no way to use it without the chance of introducing infringing code.

Because of that if you tell a user they can use AI generated code, and they introduce infringing code, that was a foreseeable outcome of your action. In the case where you are the owner of a company, or the head of an organization that benefits from contributors using AI code, your company or organization could be liable.


In this case, the "fall guy" is the person who actually introduced the code in question into the codebase.

They wouldn't be some patsy that is around just to take blame, but the actual responsible party for the issue.


As opposed to an irregular person?

LLMs are not persons, not even legal ones (legal personhood itself being a massive hack that causes massive issues, such as using corporate finances for political gain).

A human has moral value; a text model does not. A human has limitations in both time and memory; a model of text does not. I don't see why comparisons to humans have any relevance. Just because a human can do something does not mean machines run by corporations should be able to do it en masse.

The rules of copyright allow humans to do certain things because:

- Learning enriches the human.

- Once a human consumes information, he can't willingly forget it.

- It is impossible to prove how much a human-created intellectual work is based on others.

With LLMs:

- Training (let's not anthropomorphize: lossily-compressing input data by detecting and extracting patterns) enriches only the corporation which owns it.

- It's perfectly possible to create a model based only on content with specific licenses or only public domain.

- It's possible to trace every single output byte to quantifiable influences from every single input byte. It's just not an interesting line of inquiry for the corporations benefiting from the legal gray area.
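To make the tracing point concrete, here is a minimal sketch of one crude form of provenance checking: flagging verbatim overlap between a model's output and a licensed training corpus via shared n-grams. The corpus, the n-gram length, and the threshold interpretation are all hypothetical choices for illustration; real attribution would need far more sophistication.

```python
def ngrams(text, n=8):
    """Set of word n-grams in a text (n=8 is an arbitrary example length)."""
    tokens = text.split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def verbatim_overlap(output, corpus_docs, n=8):
    """Fraction of the output's n-grams that appear verbatim in any
    corpus document. A high value suggests copied chunks."""
    out = ngrams(output, n)
    if not out:
        return 0.0
    seen = set()
    for doc in corpus_docs:
        seen |= ngrams(doc, n) & out
    return len(seen) / len(out)
```

This only catches literal copying, not paraphrased or structurally similar code, but it shows the kind of quantifiable check that is technically feasible.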


Yep, that is definitely a step change. Pricing is going to be wild until another lab matches it.

Pricing for Mythos Preview is $25/$125 per million input/output tokens. That makes it 5x more expensive than Opus, but actually cheaper than GPT 5.4 Pro.

I'm just curious, where did you find this? (My memory says the leaked blog post, but I don't trust it.)


Duh, thanks :)

Important to note it's only for participants, not the general public.

I use Claude Code extensively and haven't noticed this. But I don't have it doing long-running complex work like OP. My team always breaks things down in a very structured way, with a human reviewing each step along the way. In my experience it's still the best way to safely leverage AI when working on a large brownfield codebase.

Edit: the main issues being called out are the lack of thinking and the tendency to edit without researching first. Both are counteracted by the explicit research and planning steps we use, which explains why we haven't noticed this.


IIUC this doesn't make the LLM think in caveman (thinking tokens). It just makes the final output appear in caveman.

As a kid who spent many hours on Vista on the Amiga 500, this has blown my mind

This is a really good idea. Especially if the chunks are marked as read as you're guided through them, so you can verify you've seen all the code by the end.

What good does hating the cogs do though? Make noise to the people who can change the machine.


Not that I'm entirely onboard with it, but often you don't have a channel to communicate with "the people who can change the machine", only the cogs in the machine.


When you hate the machine as a whole, the cogs are also in scope.


It gives you satisfaction. That's the whole value, and it can be worth a lot not to hold bitterness long after the problem has passed. I agree with your parent. The cogs are part of the machine; they don't deserve any sympathy just because they chose to do bad things for money, any more than a robber deserves sympathy because he's poor.


> The cogs are part of the machine, they don't deserve any sympathy just because they chose to do bad things for money

It's a bit of a stretch to say that someone whose job is enforcing the rules around disability is doing bad things for money. Those same rules filter out a lot of scammers who, if not stopped, would mean less money going to the right people.

It's also a low-skill, low-pay job, probably worked by a large percentage of people close to the poverty line who are just trying to make ends meet and support a family.


Depends on your goal. If you want a better machine maybe hating the cogs doesn't help.

If your goal is to not have a machine at all for some particular thing, then making sure no one wants to work a job that does that thing might be an effective way of stopping the machine from doing it.

Although inconveniencing bureaucrats handling disability benefits is probably a poor starting point no matter what your opinion is.


It increases costs for the machine, and eventually it realizes that cogs are cheaper when they're not getting yelled at all day.


Sounds like me with listening to AI covers. After a couple of weeks I couldn't care less, but I was so stoked on them at the start.


I vibe-coded a SaaS and it went nowhere because it wasn't a good enough idea to begin with. I consulted multiple varied models along the way for competitive analysis, pricing structure, etc.

AI doesn't solve for ideas and product-market fit. But it did allow me to fail fast before I sank too much time into it. Then again, I should have spoken to potential users earlier rather than vibe coding.


Why wouldn't you just make some AI-generated user personas to talk to? Whatever their opinion is, it's already been captured in the training data. You don't need to talk to users.


Because AI is still way too sycophantic. The real audio engineers I spoke to said they run into this problem maybe once every few months, and even then only when working for particular clients. Too much friction to purchase one-offs, not enough pain for a monthly subscription.

https://claude.ai/share/be61468f-9f38-4dc0-aae3-5d758bf0f200


Hot take. Possibly true, depending on the service you use and the software space being evaluated.


Good idea, and an improvement, but you still have the fundamental issue: you don't really know what code has been written. You don't know the refactors are right, in alignment with existing patterns, etc.

