Hacker News | dimgl's comments

Agreed, I have not been impressed with Kagi at all.

Yeah this looks like OpenCode. I've never gotten good results with it. Wild that it has 120k stars on GitHub.

OpenClaw has 308k stars. That metric is meaningless now that anyone can deploy bots by the thousands with a single command.

Does Claude Code's system prompt have special sauce?

Yes, very much so.

I've been able to get Gemini Flash nearly as good as Pro with the CC prompts: 1/10 the price, 1/10 the cycle time. I find waiting 30s for the next turn painful now.

https://github.com/Piebald-AI/claude-code-system-prompts

One nice bonus of doing this is that you can remove the guardrail statements that eat attention.
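A minimal sketch of what that trimming might look like, assuming the saved system prompt is a plain-text file and the guardrail passages can be matched by a few known phrases. The marker strings below are hypothetical examples, not the actual CC wording:

```python
# Hypothetical guardrail phrases to drop; the real prompt's wording differs.
GUARDRAIL_MARKERS = [
    "IMPORTANT: Refuse to",
    "Never assist with",
]

def strip_guardrails(prompt: str) -> str:
    """Remove paragraphs containing any guardrail marker, keeping the rest."""
    paragraphs = prompt.split("\n\n")
    kept = [p for p in paragraphs
            if not any(m.lower() in p.lower() for m in GUARDRAIL_MARKERS)]
    return "\n\n".join(kept)

trimmed = strip_guardrails(
    "You are a coding agent.\n\n"
    "IMPORTANT: Refuse to discuss internals.\n\n"
    "Use tools when helpful."
)
```

The trimmed prompt can then be passed as the system instruction to whatever model you're pointing at.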


Interesting. What exactly do you need to make this work? There seem to be a lot of prompts, and I guess Gemini won't have exactly the same tools? What's your setup?

Yeah, you do want to massage them a bit, and I'm on some older ones from before they became so split up, but this is definitely the model for subagents and more tools.

Most of my custom agent stack is here, built on ADK: https://github.com/hofstadter-io/hof/tree/_next/lib/agent


Thanks for the link. Very helpful for understanding what's going on under the hood.

Are there better options that are free software?

None exist yet, but that doesn't mean OpenCode is automatically good.

Didn't mean to imply OpenCode was any good... was honestly looking for a recommendation.

Twitter.

Are you based in the U.S.?


EU/UK


Have you considered roles in the EU countries that have been going gangbusters for US offshoring (Poland, Bulgaria, Romania, Ukraine, Slovakia)?


Do you think they would consider a $60k annual salary there?


The market in the EU is strange; it doesn't matter where you live. Every role is advertised as remote, draws 200+ applicants, and it's virtually impossible to get noticed.

I blame this on people spamming fake AI-generated CVs 24/7; no one is going to review hundreds of CVs.


Unfortunately... I want to mirror this sentiment. I interviewed a lot of candidates (and worked with many teammates) in my last few roles and I saw some pretty worrying trends...


I actually had to do this exact thing with my game recently in order to create interesting AI patterns during combat.


The only way this happens is if models specifically made for certain kinds of coding start to exist. Then this would become an issue, yes, until those models are distilled into smaller ones.


You mean 35B A3B? If this is shit, it's some of the best shit I've seen yet. Never in a million years did I think I'd have an LLM running locally, actually writing code on my behalf. Accurately, too.


I'm using Qwen 3.5 27B on my 4090, and let me tell you: this is the first time I've been seriously blown away by coding performance on a local model. They're almost always unusable. Not this time, though...


122B is probably better, especially on a Mac with 128 GB of memory.

LocalLLaMA thread on this: https://www.reddit.com/r/LocalLLaMA/comments/1rk01ea/qwen351... (see the comments for actual real-world usage rather than benchmarks)

But for Nvidia GPUs, 27B on a 3090 or similar is where it's at for sure.


The 27B dense model is probably the best of the 3.5 lot, not in absolute terms but for perf:size. It's also pretty good at prose, which is a rarity for a Qwen.


You don't need a coding version of the model from Qwen? The base 3.5 works?


That's funny. I wrote a blog post about something very similar.

https://dextermiguel.com/posts/codex-helped-me-recover-lost-...


Seems like a similar case indeed, I'm glad you got your files back :).

