Hacker News
misiti3780 | 29 days ago | on: Google releases Gemma 4 open models
What hardware are you running them on? Are you using Ollama?
vunderba | 29 days ago
I'm using the default llama-server that ships with Gerganov's llama.cpp inference framework, running on a headless machine with a 16 GB NVIDIA GPU. That said, Ollama is a bit easier to get started with, since it comes with a preset model library.
https://github.com/ggml-org/llama.cpp
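For anyone curious, the setup described above boils down to a llama-server invocation along these lines (the model path is a placeholder, and the GPU-offload and port values are just common choices, not what the commenter necessarily used):

```shell
# Sketch of serving a local GGUF model with llama.cpp's llama-server on a
# headless box. The model file here is a placeholder -- substitute whatever
# quantized GGUF you downloaded.
llama-server \
  -m ./models/model.gguf \
  -ngl 99 \            # offload all layers to the GPU (fits in 16 GB for small quants)
  --host 0.0.0.0 \     # listen on all interfaces so the headless machine is reachable
  --port 8080
```

This exposes an OpenAI-compatible HTTP endpoint plus a small built-in web UI on the chosen port.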