I've been able to get Gemini Flash nearly as good as Pro with the CC prompts. 1/10 the price, 1/10 the cycle time. I find waiting 30s for the next turn painful now.
Interesting. What exactly do you need to make this work? There seem to be a lot of prompts, and Gemini won't have the exact same tools, I guess? What's your setup?
Yeah, you do want to massage them a bit, and I'm on some older prompts from before they got so split up, but this is definitely the model for subagents and extra tools.
The market in the EU is strange; it doesn't matter where you live. Every role is advertised as remote, gets 200+ applicants, and it's virtually impossible to get noticed.
I blame this on people spamming fake AI CVs 24/7; no one is going to review hundreds of CVs.
Unfortunately... I want to mirror this sentiment. I interviewed a lot of candidates (and worked with many teammates) in my last few roles and I saw some pretty worrying trends...
The only way this happens is if models specifically built for certain kinds of coding start to exist. Then this would become an issue, yes, until those models get distilled into smaller ones.
You mean 35B A3B? If this is shit, it's some of the best shit I've seen yet. Never in a million years did I think I'd have an LLM running locally, actually writing code on my behalf. Accurately, too.
I'm using Qwen 3.5 27b on my 4090, and let me tell you: this is the first time I've been seriously blown away by coding performance on a local model. Local models are almost always unusable for this. Not this time, though...