> We're really getting close to the point where local models are good enough to handle practically every task that most people need to get done.
After trying to implement a simple assistant/helper with GPT-4.1 and getting some dumb behavior from it, I doubt even proprietary models are good enough for every task.
I vividly remember that GPT-4.1 was focused on speaking in a more humane, philosophical way, or something like that. That model is special and isn't meant to be the next generation of their other models like 4o and o3.