I personally found that knowing how to use AI coding assistants productively is a skill like any other: a) it requires a significant investment of time, b) it can be quite rewarding to learn, just like any other skill, c) it might be useful now or in the future, and d) it doesn't negate the usefulness of any skills acquired in the past, nor diminish the usefulness of learning new skills in the future.
Agreed, my experience with Claude Code and agentic workflows, and the quality of the code I produce with them, have improved dramatically since I invested in learning how to properly use these tools. Ralph Wiggum-based approaches and HumanLayer's agents/commands (in their .claude/) have boosted my productivity the most. https://github.com/snwfdhmp/awesome-ralph https://github.com/humanlayer
As much as I loved this article's comparison of vibe coding to slot machines and their related flow states, I also think what you are stating is the exact reason these tools are not the same as slots: the skill gap is there, and it's massive.
I think there are a ton of people just pulling the lever over and over, instead of stepping back and considering how they should pull the lever. When you step back and consider this, you are for sure going to end up falling deeper into the engineering and architecture realm, ensuring that continually pulling the lever doesn't create future headaches.
I think a ton of people in this community are struggling with the loss of flow state, and attempting to somehow still enter it through prompting. The game, in my view, has simply changed: it's less about generating the code and more about being thoughtful about what comes next. It's rapid use of a junior to design your system, and if you overdo the rapidness, the junior will give you headaches.
> I think there are a ton of people just pulling the lever over and over, instead of stepping back and considering how they should pull the lever
There are deeper considerations, like why pull the lever at all, or is it even the correct lever? So much API usage is like watching someone use a forklift to go to the gym (bypassing the point), to lift a cereal box (overpowered), or to do watchmaking (very much the wrong tool).
Programming languages are languages, yes. But we only use them for two reasons: they can be mapped down to the hardware ISA, and they're human shaped. The computer doesn't care about a wrong formula as long as it can compute it. So it falls on us to ensure that the correct formula is being computed. And a lot of AI proponents are trying to get rid of that part.
On using AI assistants: I find that everything is moving so fast that I constantly feel like "I'm doing this wrong". Is the answer simply "dedicate time to experimenting"? I keep hearing "spec-driven design" or "Ralph"; maybe I should learn those? Genuine thoughts and questions, btw.
More specifically regarding spec-driven development:
There's a good reason that most successful examples from tools like openspec are to-do apps and the like. As soon as the project grows to a 'relevant' size and complexity, maintaining specs is just as hard as whatever any other methodology demands. Also, from my brief attempts: similar to human-based coding, we actually do quite well with incomplete specs. So do agents, but they'll shrug at all the implicit things much more than humans do. So you'll see more flip-flopping on things you did not specify, and if you nail everything down hard, the specs get unwieldy - large and overly detailed.
> if you nail everything down hard, the specs get unwieldy - large and overly detailed
That's a rather short-sighted way of putting it. There's no way that the spec is anywhere near as unwieldy as the actual code, and the more details, the better. If it gets too large, work on splitting a self-contained subset of it into a separate document.
> There's no way that the spec is anywhere near as unwieldy as the actual code, and the more details, the better.
I disagree - the spec is more unwieldy, simply because it's written in ambiguous language, without even the benefit of a type checker or compiler to verify that the language has no ambiguities.
People are keen to forget that programming languages are specs. And a good technique for coding is to build up your own set of symbols (variables, structs, and functions) so that the spec becomes easier to write and edit. Writing specs in natural language is playing Russian roulette with the goals of the system, using AI as the gun.
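To illustrate the "build up your own symbols" point: once you name the domain concepts, the top-level code reads almost like the spec sentence it implements. A minimal sketch (the order/discount domain here is a made-up example, not from the thread):

```python
from dataclasses import dataclass


@dataclass
class Order:
    """A named domain concept instead of an anonymous dict."""
    subtotal_cents: int
    is_member: bool


def member_discount(order: Order) -> int:
    """Members get 10% off; the rule has a greppable name."""
    return order.subtotal_cents // 10 if order.is_member else 0


def total_cents(order: Order) -> int:
    # Reads like the spec sentence:
    # "the total is the subtotal minus the member discount"
    return order.subtotal_cents - member_discount(order)


print(total_cents(Order(subtotal_cents=1000, is_member=True)))   # 900
print(total_cents(Order(subtotal_cents=1000, is_member=False)))  # 1000
```

The vocabulary (`Order`, `member_discount`) is the unambiguous part of the spec; the type checker can then catch misuse that a natural-language spec would let slide.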
Everybody feels like this, and I think nobody stays ahead of the curve for a prolonged time. There's just too many wrinkles.
But also, you don't have to upgrade every iteration. I think it's absolutely worthwhile to step off the hamster wheel every now and then, just work with your head down for a while, and come back after a few weeks. You'll notice that even though the world didn't stop spinning, you avoided the whiplash of every rotation.
I don’t think Ralph is worthwhile; at least, the few times I’ve tried to set it up, I spent more time fighting to get the configuration right than if I had simply run the prompt myself. Coworkers had similar experiences. It’s better to set a good allowlist for Claude.
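For reference, a Claude Code allowlist lives in the project's `.claude/settings.json` under `permissions`. A minimal sketch (the specific tool patterns below are made-up examples for a Node project; adjust to your own commands):

```json
{
  "permissions": {
    "allow": [
      "Read",
      "Bash(npm run lint)",
      "Bash(npm run test:*)",
      "Bash(git diff:*)"
    ],
    "deny": [
      "Bash(rm -rf:*)"
    ]
  }
}
```

Anything not matched by `allow` still prompts for confirmation, so a tight allowlist gets most of the hands-off benefit without a loop harness like Ralph.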
I think it comes down to finding what works for you; everything else is kind of noise.
At the end of the day, it doesn’t matter if a cat is black or white so long as it catches mice.
——
I've also found that picking something and learning about it helps me build mental models for picking up other paradigms later, similar to how learning Java doesn't actually prevent you from, say, picking up Python or JavaScript.
The addictive nature of the technology persists, though. So even if we say certain skills are required to use it, it must also come with a warning label and be avoided by people with addictive personalities, substance abuse issues, etc.
I have a hypothesis about why it's addictive. I have no data to back it up, other than knowing a lot of addicted people and having studied neuroscience, yet I still think there's something to it. It's definitely not fully true, though.
Addiction occurs because as humans we bond with people but we also bond with things. It could be an activity, a subject, anything. We get addicted because we're bonded to it. Usually this happens because we're not in fertile grounds to bond with what we need to bond with (usually a good group of friends).
When I look at addicted people, a lot of them bond with people who have not-so-great values (big house, fast cars, designer clothing, etc.), adopt those values themselves, and get addicted to drugs. These drugs are usually supplied by the people they bond with. However, they bond with those people in the first place because they were aimless and received little guidance in their upbringing.
I'm just saying all that to make it more concrete with what I mean about "good people".
Back to LLMs. A lot of us are bonding with them, even if we still perceive them as AI. We're bonding with them because certain emotional needs of ours are not being fulfilled. Enter a computer that will listen to you endlessly and is intellectually smarter than most humans, although it makes very, very dumb mistakes at times (like ordering 1,000+ drinks when you ask for a few).
That's where we're at right now.
I've noticed I'm bonded with it.
Oh, and to those who feel this opinion is a bit strong: it is. But consider that we used to joke that "Google is your best friend" when it first came out and long thereafter. I think there's something to this take, even if it's not fully in the right direction.
> knowing how to use AI coding assistants productively is a skill like any other
No, it's different from other skills in several ways.
For one, the difficulty of this skill is largely overstated. All it requires is basic natural language reading and writing, the ability to organize work and issue clear instructions, and some relatively simple technical knowledge about managing context effectively, knowing which tool to use for which task, and other minor details. This pales in comparison with the difficulty of learning a programming language and classical programming. After all, the entire point of these tools is to lower the skill required for tasks that were previously inaccessible to many people. The fact that millions of people are now using them, with varying degrees of success for various reasons, is a testament to this.
I would argue that the results depend far more on the user's familiarity with the domain than their skill level. Domain experts know how to ask the right questions, provide useful guidance, and can tell when the output is of poor quality or inaccurate. No amount of technical expertise will help you make these judgments if you're not familiar with the domain to begin with, which can only lead to poor results.
> might be useful now or in the future
How will this skill be useful in the future? Isn't the goal of the companies producing these tools to make them accessible to as many people as possible? If the technology continues to improve, won't it become easier to use, and be able to produce better output with less guidance?
It's amusing to me that people think this technology is another layer of abstraction, and that they can focus on "important" things while the machine works on the tedious details. Don't you see that this is simply a transition period, and that whatever work you're doing now, could eventually be done better/faster/cheaper by the same technology? The goal is to replace all cognitive work. Just because this is not entirely possible today, doesn't mean that it won't be tomorrow.
I'm of the opinion that this goal is unachievable with the current tech generation, and that the bubble will burst soon unless another breakthrough is reached. In the meantime, your own skills will continue to atrophy the more you rely on this tech, instead of on your own intellect.
> The fact that millions of people are now using them, with varying degrees of success for various reasons, is a testament to this.
I do agree with you that, by design, this new tool lowers the barrier to entry, etc.
But I just want to state the obvious: billions of kids are playing with a ball; it's not that hard. Yet far fewer people are good soccer players.
> The goal is to replace all cognitive work. Just because this is not entirely possible today, doesn't mean that it won't be tomorrow.
> [..]
> I'm of the opinion that this goal is unachievable with the current tech generation
> [..]
> In the meantime, your own skills will continue to atrophy the more you rely on this tech [..]
Here I don't quite follow.
I agree that if this tech is ready to completely replace you, you won't need to use your brain.
But provided it is not there yet (like, at all), your intellect is needed quite a lot to get out of it anything more than toys.
The question is: do you benefit from using it or not? Can you build faster or better by applying these tools in the appropriate way, or should you just ignore them and keep doing things the way they used to be done up until a few months ago?
This is a legit question.
My point is: in order to answer this question, I cannot base my intuition only on some vague first principles about what this tech stack ought to be able to do, what other people say it's able to do, or what I suspect it will never be able to do. I need to touch it, to learn how to use it, just like every other tool. That's the only way I can truly get a sensible answer. And like any other skill, I'm fully aware that I can't devote just a few minutes to trying it out and then reach any conclusion.
EDIT: I do share a general concern about how new generations are going to achieve full-picture understanding if they're exposed to these tools as the main approach to software production. I come at this after a long career in systems programming, so I don't personally see this as a threat of atrophying my own skills; but I do share a rather undefined sense of concern about where this is going.
> billions of kids are playing with a ball; it's not that hard. Yet far fewer people are good soccer players.
I agree, but I don't see how that negates what I said.
Following your analogy, what's currently happening is that kids playing with a ball are now allowed to play in the major leagues. Good soccer players still exist, and their performance has arguably improved as well, but kids are now entering spaces that were previously inaccessible to them. This can be seen as both a good or a bad thing, but I would argue that it will mostly have bad consequences for everyone involved, including the kids.
> The question is: do you benefit from using it or not? Can you build faster or better by applying these tools in the appropriate way, or should you just ignore them and keep doing things the way they used to be done up until a few months ago?
That's a false dichotomy. I would say that the answer is somewhere in the middle.
These new tools can assist with many tasks, but I'm still undecided whether they're a net long-term benefit. On one hand, sure, they enable me to get to the end result quicker. On the other, I have less understanding of the end result, hence I can't troubleshoot any issues, fix any bugs, or implement new features without also relying on the tool for this work. This ultimately leads to an atrophy of my skills, and a reliance on tools that are out of my control. Even worse: since the tools are far from reliable yet, they provide a false sense of security.
But I also don't think it's wise to completely ignore this technology and continue working as if it didn't exist.
So at this point, the smartest approach to me is conservative adoption. Use vibe coding for things that you don't care about, that won't be published, and will only be used by yourself. Use assisted coding in projects that might be published and have other users, but take time and effort to guide the tool, and understand and review the generated code. Use classical programming for projects you care about, critical software, or when you want to actually learn and improve your skills.
I doubt this approach will be adopted by many, and that's the concerning part, since the software they produce will inevitably be forced on the rest of us.
What's really surprising to me is how many experienced programmers are singing the praises of this new way of working; how what they really enjoy is "building", but they find the classical process of "building" tedious. This goes against most of the reasons I got into and enjoy working in this industry to begin with. Delivering working software is, of course, the end goal. But the process itself, pushing electrons to arrange bits in a useful configuration, in a way that is interesting, performant, elegant, or even poetic, learning new ways of doing that and collaborating with like-minded people... all of that is why I enjoy doing this. A tool that replaces it with natural language interactions, that produces the end result by regurgitating stolen data patterns in configurations that are sometimes useful, and that robs me of the process of learning, is far removed from what I enjoy doing.
I got your poor attempt at sarcasm. I just don't think it's a good argument.
The person who understands how lower levels of abstraction work, will always run circles technically around those who don't. Besides, "AI" tools are not a higher level of abstraction, and can't be compared to compilers. Their goal is to replace all cognitive work done by humans. If you think programming is obsolete, the same will eventually happen to whatever work you're doing today with agents. In the meantime, programmers will be in demand to fix issues caused by vibe coders.
And I got your cheeky, dismissive attitude which completely misses the forest for the trees.
> In the meantime, programmers will be in demand to fix issues caused by vibe coders.
Yes, I agree. They’ll be lower on the totem pole than the vibe coders, too. Because the best vibe coders have the same skill set as you: years of classical engineering background. So how can one differentiate themselves in the new world? I aspire to move up the totem pole, not down, and leaning into AI is the #1 way to do that. Staying only a “bug fixer” is what will push you out of employment.