Hacker News
vFunct | 7 months ago | on: Qwen3-Coder: Agentic coding in the world
Do we know if the full model is FP8 or FP16/BF16? The Hugging Face page says BF16:
https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct
So it likely needs 2x the memory.
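The "2x the memory" point can be sanity-checked with a rough back-of-the-envelope: BF16 stores 2 bytes per parameter, FP8 stores 1. This sketch assumes weight memory dominates and ignores KV cache, activations, and the MoE detail that only ~35B params are active per token:

```python
# Rough weight-memory estimate for a 480B-parameter model.
# Assumption: weights only; KV cache, activations, and
# optimizer state are ignored.
PARAMS = 480e9

BYTES_PER_PARAM = {"BF16": 2, "FP8": 1}

for fmt, nbytes in BYTES_PER_PARAM.items():
    gb = PARAMS * nbytes / 1e9  # decimal GB
    print(f"{fmt}: ~{gb:.0f} GB of weights")
# BF16 comes out to ~960 GB, FP8 to ~480 GB -- hence the 2x.
```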
danielhanchen | 7 months ago
I think it's BF16-trained and then quantized to FP8, but I'm not fully sure - I was also trying to find out if they used FP8 for training natively!
jychang | 7 months ago
Qwen uses 16-bit; Kimi and DeepSeek use FP8.
danielhanchen | 7 months ago
Oh ok cool thanks!