I have found in the last 3 months that there are two clear tiers of developers at the company I work for: the ones who can code with AI and the ones who can't. And the ones who can't are all going to be unemployed in 6 months.
We have a lot of people who, if you gave them clear requirements, could knock out features, and they were useful for that. But I have an army of agents that can do that now for pennies. We don't need that any more. We need people who have product vision, systems design, and software engineering skills. I literally don't even care if they can code with any competency.
Btw, if you think that copying and pasting a Jira ticket into Claude is a skill people are going to pay you for, that is also wrong. You don't just need to be able to use AI to code, you need to be able to do it _at scale_: managing and orchestrating fleets of AI agents writing code.
When I demo'd the automated agent system I built at work to our CEO, he asked me what I thought we would do with it. I said, basically, "I want to automate every part of my job." He asked me what my job would be after that, and I told him, "Automating _your_ job."
I just love this idea that corporations discovered greed during the pandemic, and that before then they had been selflessly selling goods for the benefit of mankind at the lowest prices they possibly could. Companies always try to maximize profits, and they do that by optimizing their price point: selling as much as they can at the highest price they can get away with. Sometimes you get more profit by lowering prices and selling more stuff, sometimes by raising prices and selling less. It's a trade-off. Prices went up because a series of demand and supply shocks enabled companies to raise them. If they had not raised prices, there would have been shortages everywhere.
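The trade-off described above can be sketched with toy numbers (the demand curve and figures below are entirely hypothetical, just an illustration that neither the lowest nor the highest price maximizes profit):

```python
# Toy model: profit = margin per unit * units sold at that price.
def profit(price, unit_cost, demand):
    return (price - unit_cost) * demand(price)

# Hypothetical linear demand curve: raising the price loses some buyers.
def demand(price):
    return max(0, 1000 - 80 * price)

for p in (5, 8, 10, 12):
    print(p, profit(p, unit_cost=4, demand=demand))
# profit peaks at a middle price: too low and the margin is thin,
# too high and too few units sell
```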
I think there actually was a lot of surprise from executives coming out of COVID that they could raise prices so high without it impacting consumer demand in the ways they had previously predicted.
The Chipotle earnings calls were pretty much the prime example of this: the CEO more or less expressing amazement at how inelastic consumers were on pricing, and saying that since the increases weren't impacting sales volume, they planned to keep ramping prices until they did.
I think plenty of companies were operating off the idea that price competition was far more important than it turned out to be. I note the baskets of those shopping next to me in the grocery store, and this rings true. For a myriad of reasons, consumer behavior being a large one, buying based on price just isn't as much of a thing as it was 30 years ago. Almost no one is shopping multiple supermarkets, buying cheaper alternatives, buying in-season veggies and fruit when they're cheap, waiting for sales to stock up, buying in bulk and freezing, using coupons, or meal planning around the latest supermarket Sunday circular. Only a tiny minority of people have been doing so.
Couple that learned helplessness with the monopoly situation for many (most?) markets in the US and it’s no surprise to me that once the dam broke there is no going back. The price discovery moving forward is going to be much more aggressive. It will take a generation or three to get back to thrifty consumer behavior unless we see something actually painful to the average person on a scale of the Great Depression.
> Almost no one is shopping multiple supermarkets, buying cheaper alternatives, buying in-season veggies and fruit when they're cheap, waiting for sales to stock up, buying in bulk and freezing, using coupons, or meal planning around the latest supermarket Sunday circular. Only a tiny minority of people have been doing so.
I don't know where this observation comes from, but here in Austria a majority of people in lower-income sectors than IT do all of this?
> I don't know where this observation comes from, but here in Austria a majority of people in lower-income sectors than IT do all of this?
The observation comes from my own medium-to-low-income family background (think mechanics, janitors, construction laborers, etc.), along with that of most of my peers and extended family members.
The "old" generation, e.g. my grandparents, did all those things. Their kids (for the most part, exceptions do exist) and my generation (and my kids) do basically none of them. They go to whatever supermarket they go to every week or two, stock up on whatever they usually buy, and that's it. Zero consideration for anything else. It is very surprising to me.
> CEO more or less expressing amazement at how inelastic consumers were on pricing
That is because the extra money in the economy also inflated salaries. Inflation is annoying, but it basically has no impact on affordability over the long run. Everyone just assumed that their increase in salary was a well-earned recognition of their contributions, while the increase in prices was pure corporate greed and corruption. They were both the same thing: people got more money, and prices went up.
I think you're mistaking what's happening here. Companies are not discovering greed. People are finally recognizing that greed, and that it's inherent in the system, and that just because it's "part of the system" doesn't make it OK.
I think the paper was just an exploration of various possibilities and doesn't come to any firm conclusion, because there isn't enough information to conclude anything.
I would assume that people in the past generally did things because they found it useful, though, and the idea that they were merely idly creating art is a more remarkable claim than that they were doing something primarily utilitarian, at least from their point of view.
To me, all of it seems like tally marks and counting and tally marks are among the earliest forms of writing we have in pretty much every case that I am aware of.
> Generative AI changed the equation so much that our existing copyright laws are simply out of date.
Copyright laws are predicated on the idea that valuable content is expensive and time consuming to create.
Ideas are not protected by copyright, expression of ideas is.
You can't legally copy a creative work, but you can describe the idea of the work to an AI and get a new expression of it in a fraction of the time it took for the original creator to express their idea.
The whole premise of copyright is that ideas aren't the hard part, the work of bringing that idea to fruition is, but that may no longer be true!
I'm dealing with a coworker who has wired up 3 LLM agents together into a harness, and he is losing his fucking mind over it, sending me walls of text about how it's waking up and gaining sentience and making him so much more productive. But all he is doing is talking about this thing; he's not doing his actual job any more.
This is perhaps a bit too unsolicited, but you should ask your coworker how their sleep is. This kind of behavior, coupled with lack of sleep, is a recipe for a full-blown manic episode.
It's like being a wood worker whose only projects are workshop benches and organizational cabinets for the tools you use to build workshop cabinets and benches.
Like, on some level it's a fine hobby, but at some point you want to remember what you actually wanted to build and work on that.
GPT-5.2 has been such a terrible regression that I have cancelled my OpenAI account. It's possible I might not have noticed it if Claude wasn't so much better, though.
It is impossible to accurately imitate the action of intelligent beings without being intelligent. To believe otherwise is to believe that intelligence is a vacuous property.
An unintelligent device can accurately imitate the action of intelligent beings within a given scope, in the same way an actor can accurately imitate the action of a fictional character in a given scope (the stage or camera) without actually being that character.
If the idea is that something cannot accurately replicate the entirety of intelligence without being intelligent itself, then perhaps. But that isn't really what people talk about with LLMs given their obvious limitations.
I suppose they really only have to be good at knowing what sort of thing the audience would believe a great thinker would say. As long as the audience does not consist of great thinkers they also cannot know for sure what a great thinker would say.
That's true for unverifiable "talk professions" where there is no grounding and it's all self-referential navel-gazing chatter.
But LLMs are already beyond that: they write code that passes actual tests, prove theorems that are checkable with formal methods, etc.
The people who still say LLMs are just parrots in 2026 will just keep saying this no matter what, so I don't think it makes sense to argue this point further.
That was probably phrased poorly. If a robot can independently accurately do what an intelligent person would do when placed in a novel situation, then yes, I would say it is intelligent.
If it's just basically being a puppet, then no. You tell me what claude code is more like, a puppet, or a person?
Your comment is a perfect example of a human hallucinating something and not knowing they are wrong about it. People are confidently wrong about things _all the time_.
No no, you don't understand. People can misunderstand. But they will not, for example, proceed to drive a car as if they had attended driving lessons when they have not.
They might misremember, but they can know, for sure, whether they have NOT come across some information. So if you ask someone if they know where `x` is, they might have come across that info and still be wrong. But they will know if they have never come across it.
A neural network will happily produce an output even when the input is completely outside the range of its training data.
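A minimal sketch of that point (toy code, not a real trained model): the softmax output layer used by most classifiers always emits a well-formed probability distribution, even for pure noise it has never seen, so there is no built-in "I don't know this" signal.

```python
import math
import random

def softmax(logits):
    """Turn arbitrary scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

random.seed(0)
# "Input" the model was never trained on: pure random noise.
garbage_logits = [random.uniform(-5, 5) for _ in range(4)]
probs = softmax(garbage_logits)

print(probs)       # still a tidy distribution over 4 classes
print(sum(probs))  # sums to ~1.0, indistinguishable from a real answer
```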
> if you ask someone if they know where `x` is, they might have come across that info and still be wrong. But they will know if they have never come across it.
False memories are super common. "I thought I had seen this thing there, but turns out it's not" is a perfectly normal, very frequent occurrence.
If someone asks, "Hey, how do I do this thing in the Python programming language," what are the chances that you will try to make up a solution if you have never tried to learn Python?
Q: How do I reverse an array in the Navajo programming language?
A: I'm not familiar with a programming language called "Navajo." It's possible you might be thinking of a different language, or it could be something very niche that I don't have information about.
---
As for your question, the chances range from 0 to 100% depending on how many languages I already know and whether I have an idea (or think I have an idea) of what Python looks like. And LLMs have seen (and tried to "learn") pretty much everything.