I wrote a book about software engineering mental models that barely mentions AI, on purpose. Only in the appendix do I explain the rationale for not talking about AI; I'm pasting it here:
---
Thoughts on AI
I consciously decided not to talk about AI throughout this book. Not because I don't believe in the benefits of using AI... I do.
I believe AI will keep bringing a lot of value to society.
I believe AI will keep changing our profession in many profound ways.
But whatever happens, I believe the principles here are still going to be incredibly valuable, even if the software engineer profession ceases to exist with its current name.
We might be working at a totally different level of abstraction, but the values, principles, mental models, patterns of communication and behavior described in this book will still make a huge difference. I believe they are atemporal.
Having said that, here are my recommendations regarding AI:
- Think of AI as another tool you have to create value, just like your IDE, the code you write or the emails you send.
- Stay long enough in the problem space before throwing an AI API at a problem that might not exist.
- Unless you are working at the frontier of AI development, ignore the noise. AI is now the shiny object everyone wants to stay up to date with. JavaScript frameworks used to be a running joke among value-driven software engineers because a new one would appear every 6 months and many people would jump to it without a real need. AI changes and improves every day or week; staying on top of it is practically a full-time job. Instead of following the news daily, level up your AI game in batches every 3 or 6 months. It will be more than enough.
- The most interesting use of AI for me has been in uncovering my unknown unknowns. One question I frequently ask is "What are the building blocks of this problem/system/piece of knowledge?"
> It’s often a smart political tactic to make your work sound slightly more complicated than it really is. Otherwise you risk falling into the “you made it look easy, therefore we didn’t need to pay you so much” trap. But it’s foolish to actually do unnecessarily complicated work. Software is hard enough as it is.
I totally agree with the part about how "it’s foolish to actually do unnecessarily complicated work"!
But I'd say the ultimate job security is being known as someone who creates real value for the organization, and the way you do that is by putting extra effort into understanding and communicating the business value of your work.
For example: a latency reduction is not just a latency reduction; it's an x% increase in revenue.
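As a back-of-envelope sketch of that translation, with entirely hypothetical numbers (baseline revenue, latency win, and conversion lift are all made up for illustration):

```python
# Hypothetical translation of a latency win into a revenue estimate.
# Every number here is an assumption for illustration only.

baseline_revenue = 10_000_000        # annual revenue in dollars (assumed)
latency_reduction_ms = 200           # how much faster the page got (assumed)
lift_per_100ms = 0.01                # 1% conversion lift per 100 ms saved (assumed)

revenue_lift = baseline_revenue * lift_per_100ms * (latency_reduction_ms / 100)
print(f"Estimated revenue impact: ${revenue_lift:,.0f}/year")
```

The point isn't precision; even a rough, clearly labeled estimate like this reframes "we shaved 200 ms" into a number the business cares about.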
Programming with the use of AI can still be seen as programming but at a different level of abstraction.
Depending on your "job to be done", you will prefer one level of abstraction or another.
An example from before AI: I've always hated JavaScript frameworks like Nest.js because they do too much magic under the hood. But for a simple CRUD application in an MVP, I might use one.
From a purely utilitarian position, maybe you are right, but my point was about arts and entertainment. Content generation tools might be useful in certain situations, but it's not joyful when the robots do the job for you.
Imagine that you purchase a video game, but then use an LLM to play the game for you. Another example is chess: Stockfish is stronger than most chess players, but playing chess with an engine's assistance (even a little bit) is no longer a sporting competition.
I also agree that not everyone likes programming; some see it just as a job to be done.
Now I see the arts and entertainment point, and it makes sense!
But I'm now wondering: why doesn't programming with AI feel like art (which, to me, it currently doesn't)?
For me, I'd say the answer is related to feeling in control. Either:
- I don't know enough about using AI to code to feel in control, or
- the tools are not yet at the level I need to feel in control.
I wrote High Output Software Engineering, a book about the skills (decision-making and communication) engineers must focus on to create real value for organizations.
The title is a reference to High Output Management, by Andrew Grove, because the skills and mental models laid out in the book should enhance "Output" (value being created) not necessarily "activity" (code being produced).
My goal now is to spread the word about the book and put it in the hands of people and companies that will benefit from it. I've heard it's been especially helpful for startup teams.
---