> Look closely at every major breakthrough, even those in AI-driven medicine. It’s still humans pointing the AI down the right paths. Human creativity is the spark.
In the short term, yes, but we're already seeing nearly autonomous agents get impressive results. It won't be long before the average person can be that guiding hand, rather than a software engineer who knows how to write code and design software by hand. This is good for the world, terrible for the software developer.
I call bullshit. This is like saying an average person can be the guiding hand for legal documents or medical diagnoses.
The whole point is that as a specialist you vouch for what has been created. Yes, your time shifts from writing code to reviewing it, but it still requires competence to determine whether the code is doing exactly what it's supposed to be doing.
Software is unusual in that the vouching process is worth almost nothing. A licensed structural engineer, attorney, or doctor carries professional liability for acts of negligence and malfeasance. Most commercial software, by contrast, is expected to ship with large numbers of defects. There are some costly products I can think of that are barely fit for purpose, and yet somehow the bad actors responsible for them aren't sued out of existence or prohibited by law from practicing.
I think if the industry trend is toward paying developers to verify or certify that programs are logically sound and fit for purpose, then users will get a lot more value for the cost of developers' time.
Reliability is worth everything. You never want to work with unreliable people. If someone lies convincingly and without any incentive, you have to check every single thing they do, and that's even harder than doing the work yourself.
I agree here. The magic word you're looking for is liability. The world will always need people to hold accountable when things don't meet expectations… and while they seem pretty good at holding their own in court, I wouldn't count on OpenAI's or MSFT's legal reps stepping in on your behalf when your ChatGPT-food-critic startup, coded with Copilot, tells someone with a shellfish allergy to "try the shrimp".