smithcoin's comments (Hacker News)

Evolution typically happens on the scale of a million years, not a couple generations of human behavior.

You can speed it up.

AI didn’t kill SaaS - the overinflated P/E values are just returning to something vaguely resembling sanity.


It’s not an AI bubble - it’s an inflated P/E bubble.


Yeah, the stock price is still higher than any price it reached prior to Oct 2024. I think people are just shocked that stock prices may go up and down rather than only up.


No, we just want to point out not everybody utilizing agents ends up like LeBron or Jordan - most are Brian Scalabrine.


For sure. I like having discussions with nuanced takes: these are tools with strengths and weaknesses, and being a good tool user includes knowing when not to pick one up.


> This is a five-alarm fire if you're a SWE and not retiring in the next couple years.

I’m sorry, but this is such a hype beast take. In my opinion this is equivalent to telling people not to learn to drive five years ago because of self driving from Tesla. How is that going?

Every single line of code produced is a liability. This idea that you’re going to have “gas town”-style agents running and building apps, without humans in the loop at any point, generating liability-free revenue is insane to me.

Are humans infallible? Obviously not. But if you are telling me that ‘magic probability machines’ are creating safe, secure, and compliant software with no need for engineers to participate in the output, then first I’d like to see a citation, and second I have a bridge to sell you.


> In my opinion this is equivalent to telling people not to learn to drive five years ago because of self driving

Self-driving has different economics. We're reading tea leaves, true, but it's also true that software has zero marginal cost and that $20K pays for an engineer-month in SF.

> Every single line of code produced is a liability.

Do you have a hard spec and rock-solid test cases? If you do, you have two paths to a working prototype: 2-6 engineer-years, or $20K. The second option will greatly increase in quality and likely decrease in price over the next few years.

What if the spec and the test cases are the new software? Assembly programmers used to make an argument against compiled code that's somewhat parallel to yours: every instruction is a (performance) liability.

> without humans in the loop

There will be humans, just fewer and fewer. The spec and test cases are AI-eligible too.

> safe, secure, and compliant software

I'm not sure humans' advantage here is safe, if it even exists still.


So let’s say you fund a single engineer for an open‑source project with $20k. The outcome will be a prototype with some interesting ideas. And yes, with a few hundred bucks' worth of AI assistance that single engineer might get much further than without (but not using any of the techniques presented in this blog). People can coalesce around the project as contributors. A seed was planted and watered a bit.

In this case, the $20k has been burned and produced zero value. Just look at the repo issues: looks like someone trying to get attention by spamming the issue tracker and opening hundreds of PRs. As an open source project, it’s a dead end.

So it doesn’t matter that this is “likely decrease in price over the next few years”? The value is zero, so even if superintelligence can produce this in an instant at zero cost in six months, the outcome is still worth zero.

You’re assuming a kind of inverse relationship between production cost and value.

In terms of quality, to anyone using those coding agents, it should be clear by now that letting them run autonomously and in parallel is a bad idea. That’s not going to change unless you believe LLMs will turn into something entirely different over time.

Note that what works with humans—social interaction creating some emergent properties like innovation—doesn’t translate to LLM agents for a simple reason: they don’t have agency, shared goals, or accountability, so the social dynamics that generate innovation can’t form.


I agree that there's not a lot of value in your example, but it's the wrong example. AI writing code and humans refining it and maintaining it is probably an inferior proposition, more so if the project is FOSS.

The model I'm referring to is: "if it walks like software and quacks like software, it's software." Its writers and maintainers are AI. It has a commercial purpose. Its value comes from fulfilling its requirements.

There will be human handlers, including some who will occasionally have to dig through the dung and fix AI-idiosyncratic bugs. Fewer Ferrari designers, more Cuban 1956 Buick mechanics. It's an ugly approach, but the conjecture that, economically _or_ technically, there must be something fundamentally broken with it is very hand-wavy and dubious.

I agree that there will be less code-level innovation overall, just like artistic value production took a big hit when we went from portraits to photographs.


> its value comes from fulfilling its requirements.

The requirements will have to come from somewhere, and they will have to be quite precise although probably higher-level than code written today. You're talking about just a new kind of software engineer. The kind of stuff described at https://martin.kleppmann.com/2025/12/08/ai-formal-verificati... (note the "the challenge will move to correctly defining the specification")

Unless what you have in mind is some sort of Moltbook add-on that the bots would write for themselves.

I'm talking software providing value to humans.


We use OpenTofu; it’s pretty seamless.


Now more will be using a combination of OpenTofu and Terraform, and there will probably be some tacit endorsement of OpenTofu by HashiCorp folks in their communication with those who are using both. Good to see!


Does it do ephemeral values yet?


Yep, as of yesterday’s 1.11 release it’s supported!

That also includes a new “enabled” meta-argument, so you don’t have to hack around conditional resources with `count = 0`.

[0]: https://opentofu.org/blog/opentofu-1-11-0/

Disclaimer: affiliated with the project
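
To illustrate the difference (a rough before/after sketch; the resource and variable names here are made up for the example, not taken from the release notes, and the two blocks aren't meant to coexist in one config):

```hcl
# Before 1.11: conditional resource via the count hack.
# This produces a one-element list you must index with [0].
resource "aws_s3_bucket" "logs" {
  count  = var.enable_logs ? 1 : 0
  bucket = "my-logs-bucket"
}

# OpenTofu 1.11+: the enabled meta-argument.
# This produces a plain object (null when disabled), no indexing needed.
resource "aws_s3_bucket" "logs" {
  enabled = var.enable_logs
  bucket  = "my-logs-bucket"
}
```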


How do you migrate from count/for_each to `enabled`?


You can just switch from `count = 1` to `enabled = true` (or vice-versa, works back-and-forth) for a resource and tofu will automatically move it next time you apply.

It's pretty seamless.


That's cool! We'll still need to change all of the references to `resource[0]`, right? Or does tofu obviate that need as well?


I’m not sure I understand. You refer to the conditional resource fields normally - without list indices. You just have to make sure the object isn’t null.

There are some samples in the docs[0] on safe access patterns!

[0]: https://opentofu.org/docs/language/meta-arguments/enabled/
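
For reference, the safe access pattern looks roughly like this (a sketch; the resource and attribute names are illustrative, not from the linked docs):

```hcl
# With enabled, the resource is a single object that may be null,
# so guard references with a null check instead of indexing with [0]:
output "bucket_name" {
  value = aws_s3_bucket.logs != null ? aws_s3_bucket.logs.bucket : null
}
```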


And you don't get the annoying array form for the resulting resource with the `enabled` syntax, right?

EDIT: Oh just realized the sibling asked the same, but the doc doesn't state that clearly, although it seems to me that the doc implies that yeah, it doesn't use the array form anymore.


Yes indeed! It does not use the annoying array form.


Worth switching to OpenTofu for this alone, then! I fuckin hate the count pattern for conditional present/not-present that leads to an array of size == 1.


Amazing. Good work!


Damn, might finally be able to use it. The lack of ephemeral values was a major blocker.


It doesn't work for me on Safari either.


Works fine on Safari desktop for me.


37signals?


Not to go full tin-foil hat - but how do we know it isn't?


The fact that the CIA hasn't black-hole classified it is a good start.


IMHO for a comment of this level of vitriol you should probably cite some sources rather than rely on anecdotal evidence.

