Hacker News

There are competent open source LLMs out today. They are not highly centralized.


There's one at the top of Hacker News right now, Qwen3-Coder-Next: https://news.ycombinator.com/item?id=46872706


An 80B MoE model with only 3B parameters active per token is not a competent model, regardless of what their cherry-picked benchmarks say. This reminds me of when every other llama-7b finetune was claiming to be "GPT-4 quality".
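For context on the "80B total / 3B active" framing: in a mixture-of-experts model, each token only passes through a few of the experts, so the active parameter count is far below the total. A rough back-of-the-envelope sketch (all of these numbers are illustrative assumptions, not Qwen3-Coder-Next's actual config):

```python
# Hedged sketch: why "80B total, ~3B active" is arithmetically plausible for an MoE.
# Expert count, top-k, and the shared/expert split below are assumptions for illustration.

def moe_active_params(expert_params_total, n_experts, top_k, shared_params):
    """Params touched per token: shared layers plus the top-k routed experts."""
    per_expert = expert_params_total / n_experts
    return shared_params + top_k * per_expert

# Assume ~78B of the 80B sits in 128 experts, ~2B is shared
# (attention, embeddings), and the router picks 2 experts per token.
active = moe_active_params(78e9, 128, 2, 2e9)
print(f"~{active / 1e9:.1f}B active params per token")
```

Whether routing through ~3B params per token can match a dense frontier model is exactly what the benchmark dispute above is about.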



