Hacker News

Except your team is full of occasionally insane "people" who hallucinate, lie, and cover things up.

We are trading the long-term benefits of truth and correctness for the short-term benefits of immediate productivity and money. This is like how some cultures have valued cheating and quick fixes because it's "not worth it" to do things correctly. The damage from this will continue to compound and bubble up.

I agree. The further I have progressed in my career, the more I have focused on the stability, maintainability, and "supportability" of the products I work on: going slower in order to progress faster in the long run. I feel like everyone is disregarding the importance of that at the moment, and I feel quite sad about it.

Not only that, there’s this immense drive for “productivity” so they have more time to… do more work. It’s insanity.

This is a fair argument but it’s rapidly becoming a non-argument.

LLMs have come a long way since GPT-4.

The idea that they’ll always value quick answers, and always be prone to hallucination seems short-sighted, given how much the technology has advanced.

I’ve seen Claude do iterative problem solving, spot bad architectural patterns in human written code, and solve very complex challenges across multiple services.

All of this capability emerging from a company (Anthropic) that’s just five years old. Imagine what Claude will be capable of in 2030.


> The idea that they’ll always value quick answers, and always be prone to hallucination seems short-sighted, given how much the technology has advanced.

It’s not short-sighted; hallucinations still happen all the time with the current models. Maybe not as much if you’re only asking for the umpteenth React template or whatever that should’ve already been a snippet, but if you’re doing anything interesting with low-level APIs, they still make shit up constantly.
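(One cheap guard against hallucinated Python APIs is to mechanically check that a suggested dotted name actually resolves before trusting it. A minimal sketch; the helper `symbol_exists` is my own illustration, not anything from this thread:)

```python
import importlib

def symbol_exists(dotted: str) -> bool:
    """Return True if a dotted path like 'os.path.join' resolves to a real attribute."""
    module_path, _, attr = dotted.rpartition(".")
    obj = None
    while module_path:
        try:
            # Try importing the longest module prefix first.
            obj = importlib.import_module(module_path)
            break
        except ImportError:
            # Shrink the module prefix; the remainder becomes attribute lookups.
            module_path, _, extra = module_path.rpartition(".")
            attr = extra + "." + attr
    else:
        # No importable module prefix found at all.
        return False
    for part in attr.split("."):
        try:
            obj = getattr(obj, part)
        except AttributeError:
            return False
    return True
```

(Obviously this only catches symbols that don't exist at all, not plausible-looking calls with the wrong semantics.)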


> All of this capability emerging from a company (Anthropic) that’s just five years old. Imagine what Claude will be capable of in 2030.

I don't believe VC-backed companies deliver monotonic user-facing improvement as a general rule. The nature of VC means you have to do a lot of unmaintainable cool things for cheap, and then slowly heat the water to a boil. See Google, Reddit, Facebook, etc.

For all we know, Claude today is the best it will ever be.


The current models had lots and lots of hand-written code to train on. Now Stack Overflow is dead and GitHub is filling up with AI-generated slop, so one begins to wonder whether further training will start to show diminishing returns or perhaps even regressions. I am at least a little skeptical of any claim that AI will continue to improve at the rate it has thus far.

If you don't really understand how today's LLMs are made possible, it is easy to fall into the trap of thinking that perpetual progress is just a matter of time and compute.

I have not found that to be true on a personal level, but in fairness it does seem to be a widely reported problem. At its core, I think it is an issue of alignment, which is something different from skill.

I agree with you, but considering the state of modern software, I think most developers abandoned the values of "truth and correctness" a long time ago.

Be that as it may, we shouldn’t be striving to accelerate the decline, nor recruiting even more people who never learned those values.

It’s the Eternal September of software (lack of) quality.


> Except your team is full of occasionally insane "people" who hallucinate, lie, and cover things up.

Wait… are we talking about LLMs or humans here?


Humans are accountable; an LLM subscription is not.

The humans operating the LLM are accountable.

That is the point. It is nonsense to delegate your responsibility to something that is neither accountable nor reliable, if you care about not tanking your reputation.


