Do we even want a bunch of people contributing slop upstream when (assuming it does anything worthwhile in the first place) somebody has to actually review/correct/document that code?
A handful of well-intentioned slop piles might be manageable, but AI enables spewing garbage at an unprecedented scale. When there's a limited amount of resources to spend on discussing, reviewing, fixing, and finally accepting contributions, a flood of AI-generated contributions from random people could bring development to a halt.
Tailwind works for your team? Go for it.
Inline CSS for your solo project? By all means.
Still stuck on SASS? It'll keep working just fine.
All in on modern CSS? More power to ya!
Go for it if Tailwind works for your team.
Inline CSS for your solo project? Chase your dream.
Still stuck on SASS? It'll work just fine.
All in on modern CSS? Go ahead and shine.
> In a world where we can type anything into a text box and get the information back instantly we are circumventing the need to visit websites altogether.
This is purely anecdotal, but the only people in my extended circle making this transition (to any extent) are the technically savvy; everyone else is slowly realizing how awful AI tools and "AI-first experiences" can be and are actively trying to avoid them.
I've noticed this bimodal distribution of perception too, and my hypothesis is that it's hugely driven by the difference in "who is in the driver's seat".
Your tech-savvy AI early adopters are discerning between tools, deployments, and environments, and are willing and able to change things to extract the highest output from current capabilities. For instance, re-architecting a codebase to make it easier for agents to contribute to it.
The rest are having AI hypeware shoved upon them, often as a cost cutting measure, and lack the agency to influence outcomes. When agents misbehave, they only have the option to "Press 0 to speak with a Human" and hope that works.
I suspect this is a big factor in the divide we're seeing, and might result in your median adult being ambushed by recent gains in capabilities.
I'm technical, and I use AI tools, but only for basic technical tasks such as finding information, summarizing simple topics, etc. For everything else AI is too inconsistent/inconvenient/unnatural. While it works fine as a demo, real-world applications of AI are still far from anything useful in most areas.
I read the first line and thought - this guy gets it.
Then I read the second line and erm... maybe not. The whole agents thing has been pushed for almost a year now and it hasn't disrupted the engineering profession on any noticeable scale.
My favorite conspiracy theory is that these projects/blog posts are secretly backed by big-AI tech companies, to offset their staggering losses by convincing executives to shovel pools of money into AI tools.
They have to be. And the others writing this stuff likely do not deal with real systems with thousands of customers, a team who needs to get paid, and a reputation to uphold. Fatal errors that cause permanent damage to a business are unacceptable.
Designing reliable, stable, and correct systems is already a high-level task. When you actually need to write the code for it, there isn't much of it, and you should write it with precision. When creating novel or differently complex systems, you should (or need to) be doing it yourself anyway.
Is it really a secret, when Anthropic posted a project of building a C compiler totally from scratch for a $20k-equivalent token spend, as an official article on their own blog? $20k is quite insane for such a self-contained project; if that's genuinely the amount these tools require, that's literally the best possible argument for running something open and leveraging competitive third-party inference.
Provided the sponsored content is labelled "sponsored content" this is above board.
If it's not labelled it's in violation of FTC regulations, for both the companies and the individuals.
[ That said... I'm surprised at this example on LinkedIn that was linked to by the Washington Post - https://www.linkedin.com/posts/meganlieu_claudepartner-activ... - the only hint it's sponsored content is the #ClaudePartner hashtag at the end, is that enough? Oh wait! There's text under the profile that says "Brand partnership" which I missed, I guess that's the LinkedIn standard for this? Feels a bit weak to me! https://www.linkedin.com/help/linkedin/answer/a1627083 ]
The implication of "you have to have spent $1000 in tokens per engineer, or you have failed" is that you must fire any engineer who works fine on their own or with other people and who doesn't need an LLM crutch (at least if you don't want to be "failed" according to some random guy's opinion).
Getting rid of such naysayers is important for the industry.
Slop influencers like Peter Steinberger get paid to promote AI vibe-coding startups and the agentic token-burning hype. Ironically, they're so deep into the impulsivity of it all that they can't even hide it. The latest frontier models all continue to suffer from hallucinations and slop at scale.
- Factory, unconvinced. Their marketing videos are just too cringe, and any company that tries to get my attentions with free tokens in my DMs reduce my respect for them. If you're that good, you don't need to convince me by giving me free stuff. Additionally, some posts on Twitter about it have this paid influencer smell. If you use claude code tho, you'll feel right at home with the [signature flicker](https://x.com/badlogicgames/status/1977103325192667323).
+ Factory, unconvinced. Their videos are a bit cringe, I do hear good things in my timeline about it tho, even if images aren't supported (yet) and they have the [signature flicker](https://x.com/badlogicgames/status/1977103325192667323).
I don't think that's really a conspiracy theory lol. As long as you're playing Money Chicken, why not toss some at some influencers to keep driving up the FOMO?
In the last 6 months we've seen no fewer than a dozen vibe-coded/AI-assisted open source, self-hosted projects launch that compete against ours. So far all but one have fizzled out, with the same pattern each time: announcement, repo with one giant commit, 2-4 months of feature releases, loss of interest from the author, and finally abandonment.
I expect once users get burned enough times, they'll stop adopting the new cool thing until it's been out long enough with consistent releases.
> A coding agent allows one to feel the raw productive power a great programmer can tap into. It allows one to feel like the “10× programmers” they’ve sat next to in the open office for ten years, whose skills they never quite achieved themselves.
I fear this is the trap that most "new" developers will fall into over the next few years. I'm also worried the "great programmer" will cease to exist as the current greats retire, and the potential greats will never reach that level due to their reliance on LLMs.
I mean, I'm 10-ish years in, so I probably have another couple of decades at least, and I never use AI assistants for anything. I'm also one of the two highest-performing team members, and the other doesn't use it either.
It's a great time to be a non-AI user, and even better to have never been one, because it's easier now than ever to differentiate oneself from those who are reliant on it and, over the long run, much less effective because of it.
> Parlour, of Seacroft, Leeds, who called for an attack on a hotel housing refugees and asylum seekers on Facebook, became the first person to be jailed for stirring up racial hatred during the disorder.
> Kay was convicted after he used social media to call for hotels housing asylum seekers to be set alight.
It's fascinating - I seem to remember seeing this interaction happen time and time again with GP. I wonder why they keep leaving out the calls for arson.