Hacker News | gchamonlive's comments

I think code was always expensive. If it seemed cheap, the cost was hidden somewhere else.

When I started coding professionally, I joined a team of only interns in a startup, hacking together a SaaS platform that had relative financial success. While we were very cheap, being paid below minimum wage, we had outages, data corruption, db wipes, server terminations, unresolved conflicts making their way to production and killing features, tons of tech debt and even more makeshift code we weren't aware of...

So yeah, while writing code was cheap, the result had a latent cost that would only show itself on occasion.

So code was always expensive; the challenge was to become aware of how expensive sooner rather than later.

The thing with coding agents is that it now seems you can eat your cake and have it too. We are all still adapting, but results indicate that, given the right prompts and processes for harnessing LLMs, quality code can be had on the cheap.


> The thing with coding agents is that it now seems you can eat your cake and have it too. We are all still adapting, but results indicate that, given the right prompts and processes for harnessing LLMs, quality code can be had on the cheap.

It's cheaper but not cheap

If you're building a variation of a CRUD web app, or aggregating data from some data source(s) into a chart or table, you're right. It's like magic. I never thought this type of work was particularly hard or expensive though.

I'm using frontier models and I've found if you're working on something that hasn't been done by 100,000 developers before you and published to stackoverflow and/or open source, the LLM becomes a helpful tool but requires a ton of guidance. Even the tests LLMs will write seem biased to pass rather than stress its code and find bugs.
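To make the "tests biased to pass" point concrete, here is a minimal, invented sketch (the `slugify` function and its tests are hypothetical, not from any project discussed above): a happy-path test of the kind agents tend to emit, next to an edge-case test that actually stresses the code.

```python
def slugify(title: str) -> str:
    """Toy function under test: lowercase a title and join words with hyphens."""
    return "-".join(title.lower().split())

def test_slugify_happy_path():
    # The kind of test an LLM often writes: one friendly input,
    # almost guaranteed to pass, proving very little.
    assert slugify("Hello World") == "hello-world"

def test_slugify_edge_cases():
    # A stress test probes the inputs that actually break code:
    # empty strings, repeated whitespace, pre-existing punctuation.
    assert slugify("") == ""
    assert slugify("  many   spaces  ") == "many-spaces"
    assert slugify("Already-hyphenated title") == "already-hyphenated-title"

test_slugify_happy_path()
test_slugify_edge_cases()
```

The guidance the commenter describes often amounts to forcing the second kind of test to exist at all.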


> It's cheaper but not cheap

It's quite cheap if you consider developer time. But it's only as cheap as you can effectively drive the model, otherwise you are just wasting tokens on garbage code.

> LLM becomes a helpful tool but requires a ton of guidance

I think this is always going to be the case. You are driving the agent like you drive a bike, it'll get you there but you need to be mindful of the clueless kid crossing your path.

For some projects I had good results just letting the agent loose. For others I'd have to make the tasks more specific and granular before offloading to the LLM. I see nothing wrong with it.


> I never thought this type of work was particularly hard or expensive though.

Maybe not intrinsically hard, but hard because it's so boring you can't concentrate.

> the LLM becomes a helpful tool but requires a ton of guidance. Even the tests LLMs will write seem biased to pass rather than stress its code and find bugs.

ISTR some have had success by taking responsibility for the tests and only having the LLM work on the main code. But since I only seem to recall it, that was probably a while ago, so who knows if it's still valid.


So code was apparently cheap, but in fact it was expensive because it was low quality.

Now with LLMs, code is cheap and it also has quality, therefore "quality code can be had on the cheap".

Do you really believe this is the case? Why don't companies fire all their developers if they can have an algorithm that can output cheap and quality code?


Because cheap and quality code is only part of the story. The code needs to solve the right problem, and that is a domain in which only a human can operate, at least for now. Back when I was inexperienced I couldn't write good code, but I could sit with the company's CTO while he explained the domain, the challenges and the goal of the project. I could talk with domain experts and understand what the common solutions to the problems were. These are things that, for an LLM to do, would require untold amounts of context or a specialized model that understands the domain.

But the thing is, there are many unknowns. We humans are very capable of adapting as we go. LLMs are trained on fixed data, and prompt engineering can only get you so far.

I think anyone asking this with the intention of actually replacing humans with LLMs doesn't really understand either humans or LLMs. They are just talking money.


We didn't fire all our developers when we invented compilers either, and for much the same reason we didn't stop hiring laborers when we first built ships and established overseas trade routes: business will always expand to meet its reach.

Many enterprises are currently exploring whether they can invite developers to leverage AI tools, like they leveraged the compiler, to be more productive. To operate on a higher plane of agency, collaborating on what we should be building and not just on technical execution. Those actively hostile to the idea of relearning skills, or just checked out, are being laid off. (Some unprofitable business sections are being swept up opportunistically too.) The idea that all developers would be fired if AI tools could write good code doesn't square with the lessons of history.


> Many enterprises are currently exploring whether they can invite developers to leverage AI tools, like they leveraged the compiler, to be more productive. To operate on a higher plane of agency, collaborating on what we should be building and not just on technical execution.

The thing is, developers have been hired to automate processes, and as for any professional doing a good job, that means the output should perform reliably. But now they are forcing us to use a tool that everyone knows is not reliable, while the onus is still on us to maintain the same reliability. So do you see why we are not thrilled?

It’s like providing a faulty piano (that shuffles the notes when a key is pressed) and expecting a good rendition of the Moonlight Sonata.

Or a crane that will stall and drop its load randomly. It would have been sent to the scrapyard on the first day.


> "Or a crane that will stall and drop its load randomly. It would have been sent to the scrapyard on the first day."

The only reason you have the concept that engines can "stall" is because people have bought engines that can stall by the hundreds of millions, instead of the earliest buyers refusing them outright and everyone waiting for the perfect engine.

Container ships can sink with all the containers lost at sea. Still used.

Steam train engines could explode, derailing the train and killing some passengers and employees. Still used.

Buildings can collapse. Still used.

Pneumatic tyres can burst. Still used.

Here[1] is Tom Scott using a recreation of a walking crane from the 13th century, a technology going back to Roman times, for which there is no evidence that it ever had brakes historically. Look at that and tell me you think the rope never snapped, the wood never broke, the walker never tripped and the thing never unreeled the load back to the ground with the walker severely injured, because if it went wrong builders would refuse to use it? No chance.

Nothing functions like you're claiming; that's where we get the saying "don't let perfect be the enemy of good enough", as soon as stuff is better than not having it, people want to make use of it.

[1] https://www.youtube.com/watch?v=pk9v3m7Slv8


You forgot to address the random aspect of the failure cases.

The real world is chaotic; technology was always first about controlling it, then improving that control. A lot of the risks in the situations you described have been brought down so far that the savings (time, money, ...) are orders of magnitude greater than the cost of failure.

I'm not asking for perfection, but for something good enough that we can demonstrate the savings outweigh the costs. So far there's no such demonstration. In fact, we are increasing the costs. And fast.


> But now they are forcing us to use a tool that everyone knows is not reliable, while the onus is still on us to maintain the same reliability. So do you see why we are not thrilled?

Why generalize your own experience to others'?


This is what I really wonder: what is even the cost of code? Or what is real code quality?

I know that things like "clean code" exist, but I always felt that actual code quality only shows when you try adding to or changing existing code, not by looking at it.

And the ability to judge code quality on a system scale is something I don’t think LLMs can do. But they may support developers in their judgment.


I don't know why people think SWEs are aesthetic snobs when we talk about "clean code": the point of code is not to be pretty, it's to be understandable and predictable.

Quality doesn't matter if you're writing throwaway code or you need your startup to find a market before you run out of cash.

But once it matters, it matters a lot.


> Why don't companies fire all their developers if they can have an algorithm that can output cheap and quality code?

Because it takes an experienced developer to get the machine to output cheap and quality code well enough to be useful.

That developer is just a whole lot more valuable now, because they can do more work at a higher quality.


I don't know if you've heard, but there have been a large number of layoffs in the tech sector recently. Whether they're actually related to AI as executives claim, and not section 174 of the US IRS tax code in the BBB, is known only to them, but if your argument hinges on people having not been fired when there have been layoffs, you may need a different one.

I think a major contributor to the layoffs is companies hiring too many people around covid[1]. I can't find good stats for the years 2019-2026 beyond looking at now and the past directly. There is some data from the Ukrainian site Djinni[1][2] and for US IT job postings[3].

I don't think AI is the reason for the layoffs. It's just easier to say "we are firing because of AI" than to say "we overhired and it's actually our fault".

[1] https://djinni.substack.com/p/2021-in-review
[2] https://blog.djinni.co/post/q1-analytics-en
[3] https://fred.stlouisfed.org/series/IHLIDXUSTPSOFTDEVE


As you said, it's impossible to determine how many of the current layoffs are caused by AI; they probably also have a lot to do with the broader economic downturn. But you're still missing the point: if companies truly have a black box that can produce cheap, high-quality code, as the GP put it, why don't they just fire 95% of their developers and keep only a small core of AI orchestrators?

Who's missing whose point? You're asking why they haven't fired 95% of their people. I'm pointing at tech sector layoffs and saying people are being laid off. It's not 95%, which is a number you totally made up, but in the broader picture, I wouldn't say it isn't happening.

I think this was answered before, with the constraints of the model's architecture. You can't expect something fundamentally different from an LLM, because that's how they work. It's different from other models because they were not designed for this. Maybe you were expecting more, but that's not OP's fault or demerit.

What you're saying fits my understanding/expectations. However the post and the user I am replying to seem to imply different. This makes me wonder, is my understanding incomplete or is this post marketing hype dressed up as insight? So I am asking for transparency.

It is not hype. You can try the model on huggingface yourself to see its capabilities. My reply here was clarifying that the examples we showed were ones where the model didn't make a mistake. This is intentional, because over the next few weeks we will show how the concepts and attribution we enable can allow you to fix these mistakes more easily. All the claims in the post are supported by evidence, no marketing here.

We are probably at the point where hype and insight aren't that distinguishable, other than by what bears fruit in the future, but I agree with you.

Have you played The Talos Principle 2? Yep, games are toys! It's nothing more than that. What we fail to realise in our industrial society is that toys are a fundamental piece of our culture, they enable learning lots of different skills that wouldn't be possible in the "real world", they foster creativity, problem solving, bonding and cooperation...

Toys are just toys, and yet they are the most important things we have. I honestly think the technological progress catalyzed by games is a byproduct, a huge one, but not central to the industry. We only think technology is the most important thing because we live in a world that overvalues technical prowess at the expense of culture.


I agree with most of what you said, but describing video games as nothing more than toys does a disservice to the medium.

Yes, video games can be educational and entertaining, just like real world toys, but they can also be artistic and communicate stories. They're the most expressive and engaging storytelling device we have ever invented.

Not all games are all of these things, and there's nothing wrong with games that only focus on entertainment, but those that combine all of these aspects successfully are far more impactful and memorable than any other piece of media.


> Yes, video games can be educational and entertaining, just like real world toys, but they can also be artistic and communicate stories.

Storytelling and art aren't exclusive to video games, though. Board games, for instance, have tons of storytelling and are very rich in art. They are, however, nothing more than toys, and they don't need to be. That's my whole point. Being "just a toy" is pejorative only in an industrial, productive society.


I suppose it's a matter of semantics and perspective. The definition of "toy" seems too narrow to me to properly encompass the complexities of board and video games. A ball is a toy, but clearly it's unable to provide the same experience as a board or video game. At a certain point these experiences can be deeply engaging in ways that simpler toys can't provide. Not necessarily better, but certainly different. Maybe it has to do with the amount of play rules, engaged senses, or brain activity... I'm not sure. But at some point a toy stops being a toy to me. :)

Though I do agree with your point. Games/toys are unfairly criticized in our society.


I stand corrected: "an object that is used by an adult for pleasure rather than for serious use". Video games, board games, etc. can very well be put to serious use, so they don't fit the definition of a toy.

Maybe Mahjunk, am I right?

slowly lowers right hand in awkward silence


It's nostalgic, but good lord does it need a bit of contrast...

Seems pretty readable to me. The information density is high, there are slight box shadows in interactive elements. We need more like this.

A dark red LIVE against a military green background... Again, it's nostalgic, I loved steam back then, but this isn't winning any design award.

> this isn't winning any design award

Good. I prefer interfaces that care more about being concise and usable than about winning design awards.


You can be concise and have good contrast. "Winning any awards" is a figure of speech.

Try another theme; it looks great, I think. Black or dark blue is crisp.

I had to scroll all the way down for the bottom panel to appear. Then I was able to change the theme. Good that it has themes, but this just highlighted another problem in the design of the website.

The dark blue theme is indeed neat.


Because it's never forever. It's only until the corporation supplants the market, at which point you are at their mercy.

You're always at someone's mercy in any industry with significant barriers to entry, you might as well pick a low-cost supplier.

That is such a defeatist position... How about regulation?

Ask the EU how that regulate-everything policy is going for local manufacturing.

Is the answer to bad regulation not to regulate at all? How the EU regulates isn't the only way to do it. And is bad regulation that much worse than monopoly by the billionaires? There is no distinction. At least with bad regulation you always have the chance to vote better next time. Good luck dealing with oligarchs.

I wish we could regulate the oligarchs away, but it seems to me that precisely the right amount of regulation is what allows oligarchs to thrive.

If you regulate to protect IP owners, and basically make them rentiers, you create IP-based monopolies and oligarchs. If you also regulate to prevent consumer, worker and industry-sector abuse, you end up with a very stagnant economy à la Europe.

If you don't regulate at all... I don't know what would happen, but it sure seems interesting to me.


I have the opposite impression: they thrive where regulation is lacking. And I'm not in the least interested in no regulation, because we know precisely what happens. It's another 1929, dot-com bubble, or housing crash waiting to happen.

Edit: of course regulation isn't a panacea. If the government is already run by an oligarchy, chances are laws will favor them. I'm talking about the kind of laws produced by functional democracies. So we also need to talk about how to make democratic institutions stronger first; then we can rely on regulation.


Is China doing to DRAM what Amazon did to bookstores?

History is littered with corpses. For those willing to see them.

The HN comment section's new favourite sport: trying to guess whether an article was generated by an LLM. It's completely pointless. Why not focus on what's being said instead?

I thought the same thing. With the rate LLMs are improving, it's not going to be too much longer before no one can tell.

I also enjoy all the "vibes" people list out for why they can tell, as though there was any rhyme or reason to what they're saying. Models change and adapt daily so the "heading structure" or "numbered list" ideas become outdated as you're typing them.


Because I find LLM-generated content very annoying to read. It's a slog, bloated, and the speaker always has this cringe way of trying to connect to the audience.

I don't believe the story itself is made up by an LLM but I'd argue that if you have an LLM write your story then it's no problem for you to have it add a TL;DR at the top so we can skip the slop.


[flagged]


> This is an LLM-generated article, for anyone who might wish to save the "15 min read" labelled at the top. Recounts an entirely plausible but possibly completely made up narrative of incompetent IT, and contains no real substance.

Nothing in the original message refers to it being clickbait; the core complaint is the LLM-like tone and the lack of substance, which, ironically, you also just threw out there without references.

> What, exactly, is the problem with disclosing the nature of the article for people who wish to avoid spending their time in that way?

It's alright as long as it's not based on faith or guesswork.


It is not based on guesswork. For whatever it's worth, I have gotten 7 LLM accounts banned from HN in the past week based on accurately detecting and reporting them to moderation[1]. Many of these accounts had between dozens and 100 upvotes, some with posts rated to the top of their threads that escaped detection by others. I have not once misidentified and reported an account that was genuinely human. I am aware that other people have poorly-tuned heuristics and make false accusations, but it is possible to build the skill to detect LLM output reliably, and I have done so. In the end, it is up to you whether you believe me, but I am simply trying to offer a warning for people who dislike reading generated material, nothing more.

[1] Unlike LLM-generated articles, posting LLM-generated comments is actually against the rules.


Congrats, and thanks for your work, but you should be aware that HN comments are completely different from articles. What makes you think the skills/automations required to identify LLM-generated HN comments will work seamlessly on submitted articles? You have to do a statistical analysis of this; otherwise it's just guesswork.

You also have to take into account that the medium is the message[1]. In a nutshell, the more people read LLM-generated posts and interact with chatbots, the higher the influence of LLM style on their own writing; the whole "delve" thing comes to mind, and double dashes. So even if you have a machine that correctly identifies LLM-generated posts, you can't be sure it'll keep working.

[1] https://web.mit.edu/allanmc/www/mcluhan.mediummessage.pdf


Those are a lot of words to say you guessed. And the banning comment is nice, I guess, but pretty meaningless. Does moderation really always report back to you when you make such an accusation? Who's to even say all the banned accounts were LLMs? You know what would happen if I got banned because someone accused me of being an LLM? Nothing. I'd take it as a sign to do other things.

Let's say you are the LLM-detecting genius you paint yourself to be. Well, guess what? You're human, and you're going to make mistakes, if you haven't made a bunch of them already. So if you have nothing better to add to a post than this guess, you probably shouldn't say anything at all. Like you said, it's not even against the rules.


This looks like complete fabrication by an AI agent.

[flagged]


@dang

Because you are hijacking a thread. Wanna trash the site's design, you should open a top level thread instead.

> Wanna trash the site's design, you should open a top level thread instead.

Or better, don't[1]:

Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.

[1]: https://news.ycombinator.com/newsguidelines.html


Exactly, thanks

Being impossible to read is not common

Get a better browser I'd say. Firefox Reader mode makes short work of such sites, including the submission. I use it very often, so I can enjoy the content rather than get frustrated over styling issues.

Ah, then I deserve it. I didn't notice from the app I was using that it wasn't all the way to the left.
