

This sounds exactly like Claude wrote it. I've noticed Claude saying "genuinely" a lot lately, and the "real killer feature" segue just feels like Claude being asked to review something.

> The fact that you're getting 15-30 tok/s for text gen on phone hardware is wild — that's basically usable for real conversations.

Wild how bad it is compared to, say, Russet for iOS/iPadOS, which runs these same models at 110 tok/s.
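To put those numbers in perspective, here's a quick back-of-the-envelope calculation of how long a typical reply takes at each quoted decode speed. The 200-token reply length is an illustrative assumption, not a benchmark from either app:

```python
# Rough time to generate a reply at a given decode speed (tok/s).
# Speeds are the ones quoted in the thread; reply length is assumed.
def gen_time(tokens: int, tok_per_s: float) -> float:
    return tokens / tok_per_s

reply_tokens = 200
for speed in (15, 30, 110):
    print(f"{speed} tok/s -> {gen_time(reply_tokens, speed):.1f}s")
```

So the gap between 15 and 110 tok/s is the difference between waiting over ten seconds for a reply and getting one almost immediately.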


I've added a section for recommended models, so you can choose from there.

I'd recommend going for any quantized 1B-parameter model: Llama 3.2 1B, Gemma 3 1B, or Qwen3 VL 2B (if you'd like vision).

Appreciate the kind words!


> that's basically usable for real conversations.

That's using the word "real" very loosely.



