Hacker News | taf2's comments

I did this with qwen 3.5 - tool calling was the biggest issue, but for getting it to work with vllm and mlx I just asked codex to help. The bulk of my time was waiting on the download. For vllm it created a proxy service to translate some codex idioms to vllm and vice versa. In practice I got good results on my first prompt, but follow-up questions would usually fail due to the model's trouble with tool calling - I need to try again with gemma4
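To make the proxy idea concrete, here is a hedged sketch (not the actual generated proxy) of the kind of payload translation such a service might perform before forwarding a request to a vLLM OpenAI-compatible endpoint; the field names dropped and the served-model name `"qwen"` are assumptions for illustration:

```python
# Hypothetical sketch of a codex -> vLLM request translator.
# It pins the model name, strips client-only fields the server may
# reject, and normalizes a legacy "functions" list into "tools".
def translate_request(payload: dict) -> dict:
    out = dict(payload)
    # vLLM typically serves one fixed model; pin the name rather than
    # trusting whatever the client sent (assumed served name: "qwen")
    out["model"] = "qwen"
    # drop fields some clients send that the server may not accept
    for key in ("store", "metadata", "reasoning_effort"):
        out.pop(key, None)
    # normalize the older "functions" schema into the newer "tools" one
    if "functions" in out and "tools" not in out:
        out["tools"] = [
            {"type": "function", "function": f} for f in out.pop("functions")
        ]
    return out
```

A real proxy would wrap this in an HTTP server and do the reverse translation on responses, but the core job is just this kind of dict rewriting in both directions.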

I switched off claude when they nerfed opus 4.5 in August 2025; since then codex has clearly produced better code with fewer bugs. Opus 4.6 was more a temporary de-nerf of 4.5 than a material improvement. Codex now has a proven track record of producing stable results while introducing far fewer bugs.

I don't understand who's still using anthropic? The model produces more bugs and agrees to solutions that are clearly wrong at a much higher rate than codex. Codex produces significantly better code with fewer bugs and far less oversight. With /fast on, codex isn't even slower than claude, and considering it implements working code more reliably you have to use it less anyway. Besides, anthropic appears to be more focused on fear mongering and other types of FUD, and is a more closed solution. I do not understand why so many people still appear to care what anthropic does and have not already moved on? </rant>

Bro Linux gaming is where it’s at - windoze is cooked


Avoid bun is my takeaway... if anthropic decides you're a competitor (and with the way AI is evolving, you will be a competitor soon), don't rely on any anthropic tools or models.


Why should anybody avoid bun? Just fork it if it ever changes its license. In fact, I'm 100% sure it would be instantly forked if Anthropic ever tried anything


I hope you are right


I forked it and added tool calling by running another LLM in parallel to infer when to call tools. It works well for me for toggling lights on and off.

Code updates here https://github.com/taf2/personaplex


Cool approach. So basically the part that needs to be realtime - the voice that speaks back to you - can be a bit dumb so long as the slower-moving genius behind the curtain is making the right things happen.


Yes, exactly. One part I did not like is that we also have to transcribe separately, because it only provides what the AI said, not what the person said


What do you mean by "infer"? How does the LLM get anything out of this as input?


Considering these max out at 128GB of unified RAM, my guess is the hope of an M5 Ultra with 1TB of unified RAM is unlikely to come true... Super disappointing.


Am I reading this right - it was nearly impossible to get an API key for Gemini, but actually I could have just grabbed an API key from someone's Google Maps site and gotten started right away?


Great question. Rather than having to push a change to GitHub to see the formatting changes, you can just mdvi it now. Iterating locally is nice

