Hacker News | iainctduncan's comments

And this is better how exactly? If you're running a business, do you not want to catch employees' mistakes as early as possible? Most ideas are crap. I'd way rather they get eliminated after someone spent an hour making slides than a day vibecoding a prototype.

And then there is the problem that vibecoding is addictive, so the more of it one has done on the prototype, the worse one's judgement of whether it's actually something worth building...


he exited!!!

... "so far."

Programmers are kidding themselves if they think they are not susceptible to this. It may be more subtle, but interacting with a human-sounding echo chamber IS going to screw with your judgement.

This thread is entirely people bemoaning how this happens to other people and how the commenter has a workaround so it never happens to them. Unless your workaround is “don’t use LLMs you have to pay for,” it’s happening to you.

You could do the sane thing and not use AI, at all. That does the trick pretty well.

There are an awful lot of programmers here essentially mocking this person for being naive and gullible, and yet the things I read programmers who are all in on vibe coding say are not that different, just a little less extreme. I'm seeing cases online nearly daily of people thinking their app is groundbreaking or amazing when it's honestly a piece of barely thought out garbage, and if they hadn't made it in a rush of "OMG I'm a genius with this tool" they'd know it.

I think coders ignore the insidious mental effects of these things at their peril, and we would do well to ask ourselves if we are not likewise having our judgment altered by the intoxicating rush of LLM work and the subtle sycophancy of LLMs making them feel "insanely productive".

Cocaine and meth are also real productivity enhancers in the short term, but it doesn't mean they're a good fucking idea. There was a time when big companies were trying to convince everyone and their dog that life would be better, faster, and more productive with a little coke in the mix. Hell, I even saw more than a few people wreck themselves that way in the first dotcom era. :-/


HN has a 10X persona bias. (A bias. There are many personalities etc.) In turn one of the recurring memes is the AI-enabled Senior Developer who gets superpowers based on their experience. The junior developer, curiously, does not get superpowers, because they just lean on the machines and learn nothing. But the senior developer by the power of pre-AI experience (doing stuff) gets wings to fly with.

Regular people are just, I don’t know, I guess they are token whales waiting to get washed ashore.

Born just in the right time to both get experience doing stuff and also to experience wearing their wings. It’s that simple.

That’s the biggest thing for HN folks to at least be aware of.


ChatGPT is the worst for sycophancy, but even Claude responds to me thinking about or asking fairly obvious things with praise for how insightful I am to notice that, and how this pinpoints the very fundamental essence of asynchronous CRUD operations or whatever.

I'm subscribed through work and haven't used it to make a personal project, but I imagine being told every decision you make is brilliant and revolutionary has some effect over a long period of exposure, unless you're very deliberately skeptical about it. If you started out thinking you're an exceptionally smart and insightful person, you're probably doomed.


The difference between those developers and this man is 100k.

You know they are burning money dangerously when they decide to focus on the area in which they are getting their asses kicked...

Yeah, I thought it was strange too. I thought OpenAI could meaningfully differentiate by being something more like a “Social Media AI”.

I feel like they are sailing into a red ocean with what look more like copycat tactics than innovation (e.g., Codex v Claude Code; Astral v Bun)


To me he's just another example of a very smart programmer being really bad at seeing big pictures, imagining from other perspectives, and generally having people/systems/economics/philosophy wisdom.

Unfortunately it seems endemic in our field. So many great coders have this laughably naive belief that, because they are good at something that makes them feel like a genius when they solve problems, they are in fact geniuses at solving all problems.

Even more unfortunately, AI seems to ramp that up to 10x along with the code generation.

I'm willing to bet the public perception of programmers in general is going to be a lot worse five years from now...


There's a tangentially related concept to this where Nobel Prize winners have held, shall we say, rather questionable beliefs in areas outside their domain of expertise.

https://en.wikipedia.org/wiki/Nobel_disease


It's definitely a broader thing you see not infrequently in academia too (I'm a very mature PhD student right now, in fact!), but man, programmers seem to have it real bad.

Exactly. A big issue with "engineers" who are taught in school to be "problem-solvers" using a definite set of tools that they have to master, rather than "question-askers" with the mentality of poets and researchers, like we're taught in university.

And yet ... all the software I use online still sucks. Honestly, the only stuff in my daily drive that I truly admire are the desktop apps that were years or decades in the making. If everyone is hella productive and 10xing it why are the cloud offerings still such buggy pieces of shit?

Things are too weird now. Almost makes me want to just teach music lessons again for way less money. Sigh.


It’s incredible to come online every day and read breathless articles about how the future is here on the same 5 apps, using the same crappy sources, with the same horrible ad content. Say what you want about the dotcom bubble at least websites were new. Besides the chatbots nothing feels new or better at all

Every time I read articles here describing the LLM prompt engineering workflow, all I can think is, "This sounds like such a fucking awful job".

I imagine I will greatly reduce my job prospects as a holdout, but honestly, from what I've read I think I'd rather take a hefty pay hit and not go there. It sounds like a mental health disaster and a fast track to serious burnout.

YMMV, I realize I'm in the minority, this is unproductive ranting, yada yada yada


It'll be a pay hit initially, but eventually the bottom will fall out and those of us who actually know how to write a computer program will be like COBOL programmers, with salaries to match. Stay strong and keep your powder dry!


It seems to me like any other tech: how you use it is up to you. You don’t have to run 10 agents simultaneously, etc.

I use them when I find them helpful, and that’s the case in plenty of situations. Figuring out architecture and design, finding bugs, analyzing and explaining a codebase, writing little scripts and utilities (especially in areas where you lack familiarity), etc. are all pure wins, imo. They increase my productivity and quality of output without any real downside.

When it comes to writing the bulk of a codebase or doing ongoing maintenance on a nontrivial system, a lot of ymmv comes into play. There’s no real reason (yet!) to believe that if you’re not committing 10k lines of generated slop per day, you’re going to be left behind. People doing that are on a bleeding edge that may have already cut them deeper than they realize.

In short, there’s an enormous middle ground between Yegge’s Gas Town and “I refuse to use LLMs for development”. I’m enjoying working in that middle ground. It’s interesting and stimulating, it makes a lot of things easier and quicker, and I’m growing and learning. If that stops, I’ll just change what I’m doing.


And we haven't even started to see the security ramifications... my money is on the black hats in this race.


We are starting to see them, also the bugs too.

But to your point, I think this year it's quite likely we'll see at least 1 or 2 major AI-related security incidents.


I've been predicting a "challenger disaster" moment: https://simonwillison.net/2026/Jan/8/llm-predictions-for-202...


My money is on a lot more than 1 to 2!

