Yeah, the stock price is still higher than any price it reached before Oct 2024. I think people are just shocked that stock prices can go down as well as up, rather than only up.
For sure. I like having discussions with nuanced takes, these are tools with strengths and weaknesses and being a good tool user includes knowing when not to pick it up.
> This is a five-alarm fire if you're a SWE and not retiring in the next couple years.
I’m sorry, but this is such a hype-beast take. In my opinion it’s equivalent to telling people five years ago not to learn to drive because of Tesla’s self-driving. How is that going?
Every single line of code produced is a liability. This idea that you’re going to have “gas town” like agents running and building apps without humans in the loop at any point to generate liability free revenue is insane to me.
Are humans infallible? Obviously not. But if you’re telling me that ‘magic probability machines’ are creating safe, secure, and compliant software with no need for engineers to participate in the output, then first I’d like to see a citation, and second I have a bridge to sell you.
> In my opinion this is equivalent to telling people not to learn to drive five years ago because of self driving
Self-driving has different economics. We're reading tea leaves, true, but it's also true that software has zero marginal cost and that $20K pays for an engineer-month in SF.
> Every single line of code produced is a liability.
Do you have a hard spec and rock-solid test cases? If you do, you have two options to a working prototype: 2-6 engineer-years, or $20K. The second option will greatly increase in quality and likely decrease in price over the next few years.
What if the spec and the test cases are the new software? Assembly programmers used to make an argument against compiled code that's somewhat parallel to yours: every instruction is a (performance) liability.
> without humans in the loop
There will be humans, just fewer and fewer. The spec and test cases are AI-eligible too.
> safe, secure, and compliant software
I'm not sure humans' advantage here is safe, if it even exists still.
So let’s say you fund a single engineer on an open‑source project with $20k. The outcome will be a prototype with some interesting ideas. And yes, with a few hundred bucks’ worth of AI assistance that single engineer might get much further than without (though not by using any of the techniques presented in this blog). People can coalesce around the project as contributors. A seed was planted and watered a bit.
In this case, the $20k has been burned and produced zero value. Just look at the repo issues: looks like someone trying to get attention by spamming the issue tracker and opening hundreds of PRs. As an open source project, it’s a dead end.
So it doesn’t matter that this will “likely decrease in price over the next few years”? The value is zero, so even if superintelligence can produce this in an instant at zero cost six months from now, the outcome is still worth zero.
You’re assuming a kind of inverse relationship between production cost and value.
In terms of quality, to anyone using those coding agents, it should be clear by now that letting them run autonomously and in parallel is a bad idea. That’s not going to change unless you believe LLMs will turn into something entirely different over time.
Note that what works with humans (social interaction producing emergent properties like innovation) doesn’t translate to LLM agents, for a simple reason: they lack agency, shared goals, and accountability, so the social dynamics that generate innovation can’t form.
I agree that there's not a lot of value in your example, but it's the wrong example. AI writing code and humans refining it and maintaining it is probably an inferior proposition, more so if the project is FOSS.
The model I'm referring to is: "if it walks like software and quacks like software, it's software." Its writers and maintainers are AI. It has a commercial purpose. Its value comes from fulfilling its requirements.
There will be human handlers, including some who will occasionally have to dig through the dung and fix AI-idiosyncratic bugs. Fewer Ferrari designers, more Cuban 1956 Buick mechanics. It's an ugly approach, but the conjecture that, economically _or_ technically, there must be something fundamentally broken with it is very hand-wavy and dubious.
I agree that there will be less code-level innovation overall, just like artistic value production took a big hit when we went from portraits to photographs.
> its value comes from fulfilling its requirements.
The requirements will have to come from somewhere, and they will have to be quite precise although probably higher-level than code written today. You're talking about just a new kind of software engineer. The kind of stuff described at https://martin.kleppmann.com/2025/12/08/ai-formal-verificati... (note the "the challenge will move to correctly defining the specification")
Unless what you have in mind is some sort of Moltbook add-on that the bots would write for themselves.
Now more people will be using a combination of OpenTofu and Terraform, and there will probably be some tacit endorsement of OpenTofu from Hashicorp folks in their communication with those using both. Good to see!
You can just switch from `count = 1` to `enabled = true` (or vice-versa, works back-and-forth) for a resource and tofu will automatically move it next time you apply.
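To illustrate the switch, here’s a minimal before/after sketch. The resource type, names, and variable are made up for illustration; the exact `enabled` semantics are whatever the OpenTofu docs specify:

```hcl
# Before: conditional creation via count, which turns the
# resource into a list of length 0 or 1.
resource "aws_eip" "ingress" {
  count = var.create_eip ? 1 : 0
  # referenced elsewhere as aws_eip.ingress[0].public_ip
}

# After: enabled takes a bool directly, no list wrapper.
# Tofu moves the existing state for you on the next apply.
resource "aws_eip" "ingress" {
  enabled = var.create_eip
  # referenced elsewhere as aws_eip.ingress.public_ip
}
```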
I’m not sure I understand what you mean. You refer to the conditional resource’s fields normally, without list indices. You just have to make sure the object isn’t null.
There’s some samples in the docs[0] on safe access patterns!
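For what it’s worth, a sketch of the null-guard idea (resource and output names are hypothetical; check the docs for the exact recommended patterns):

```hcl
# With enabled = false, the resource object evaluates to null,
# so guard any attribute access explicitly:
output "eip_address" {
  value = aws_eip.ingress != null ? aws_eip.ingress.public_ip : null
}

# Or use try() to fall back to null when the object is null:
output "eip_address_alt" {
  value = try(aws_eip.ingress.public_ip, null)
}
```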
And you don't get the annoying array form for the resulting resource with the `enabled` syntax, right?
EDIT: Oh, just realized the sibling asked the same thing. The doc doesn’t state it clearly, though it does seem to imply that, yeah, the array form is gone.
Worth switching to OpenTofu for this alone, then! I fuckin hate the count pattern for conditional present/not-present resources that leads to an array of size == 1.