Hacker News | jckahn's comments

Personally I just adopt an attitude of utter nihilism and fatalism. I remind myself regularly that I'm going to die someday, like a mantra. Everything I think or care about won't ultimately matter. The only rational choice is to try and make the most of today.

Because productivity doesn't scale linearly with the number of developers. Past a certain point it plateaus, or even declines, as headcount goes up.

The quantity of devs is not changing though. If AI really makes the existing devs more productive then the plateau should become higher.

> I know I'm tired of reading them, but don't people get bored of writing them?

Look, it's either this or a dozen articles a day about Claude Code.


I've come to accept that producing code I'm truly proud of is now my hobby, not my career. The time it takes to write Good Code is unjustifiable in a business context, and I can't make the case for it outside of personal projects.


Yeah I don't understand why everyone seems to have forgotten about the Gemini options. Antigravity, Jules, and Gemini CLI are as good as the alternatives but are way more cost effective. I want for nothing with my $20/mo Google AI plan.


Yeah I'm on the $20/mo Google plan and have been rate limited maybe twice in 2 months. Tried the equivalent Claude plan for a similar workload and lasted maybe 40 minutes before it asked me to upgrade to Max to continue.


> Yeah I'm on the $20/mo Google plan and have been rate limited maybe twice in 2 months. Tried the equivalent Claude plan for a similar workload and lasted maybe 40 minutes before it asked me to upgrade to Max to continue.

The TLDR: the $20-for-40-minutes pricing is more reflective of what inference actually costs, including the amortised capex together with the opex.

The Long Read:

I think the reason is that Anthropic is attempting to run inference at a profit and Google isn't.

Another reason could be that Anthropic doesn't own its cost centers (GPUs are from Nvidia; cloud instances and data centers are from AWS, etc.). They own only the model but rent everything else needed for inference, so they pay a margin on every one of those rented cost centers.

Google owns its entire vertical (the chips are Google-made, the cloud instances and data centers are Google-owned, etc.) and can apply cost optimisations across the stack, so its final cost of inference is going to be much cheaper even if it weren't subsidising inference with profits from unrelated business units.
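The stacked-margin point can be sketched with back-of-the-envelope arithmetic (all numbers here are hypothetical, purely to illustrate how markups compound when every layer is rented):

```python
# Hypothetical per-hour cost of serving one accelerator's worth of inference.
base_cost = 1.00  # raw hardware + power + space (what a full owner pays)

# Renter: each rented layer (chip vendor, cloud instance, data center)
# adds its own markup on top of the layer below it.
margins = [0.30, 0.25, 0.20]  # assumed markups per layer, not real figures
rented = base_cost
for m in margins:
    rented *= 1 + m

# Vertically integrated: no stacked markups, just the underlying cost.
owned = base_cost

print(f"rented: ${rented:.2f}/hr, owned: ${owned:.2f}/hr")
```

With these made-up margins the rented stack costs nearly twice the owned one, which is the shape of the argument above: the gap comes from compounding, not from any single layer's markup.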


Well said.

It's for exactly this reason that I believe Google will win the AI race.


It's crazy that we're having such different experiences. I purchased the Google AI plan as an alternative to my ChatGPT (Codex) daily driver. I use Gemini a fair amount at work, so I thought it would be a good choice personally. I used it a few times but ran into limits on the first few projects I worked on. As a result I switched to Claude and, so far, I haven't hit any limits.


Google's privacy settings are unclear: there is no declaration that they won't train their LLMs on your personal/commercial code.


https://macaron.im/blog/ai-assistant-privacy-comparison#:~:t...

All providers are opt-out. The moat is the data, don't pretend like you don't know.


Per my previous research, there is no opt-out for Gemini CLI.


Just goes to show that attention is all you need.


A statement which goes to show that confusing correlation with causation is all you need.


> One of the things that makes Clawdbot great is the allow all permissions to do anything.

Is this materially different than giving all files on your system 777 permissions?


It's vastly different.

It's more (exactly?) like pulling a .sh file hosted on someone else's website and running it as root, except the contents of the file are generated by an LLM, no one reads them, and the owner of the website can change them without your knowledge.
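To make the contrast concrete, here's a minimal Python sketch (the filenames are throwaways created in a temp directory; a second local file stands in for the "remote" script, so no real network fetch happens):

```python
import os
import subprocess
import tempfile

d = tempfile.mkdtemp()

# Case 1: chmod 777. The file becomes world-readable/writable/executable,
# but only for users already on this machine; it grants nothing to anyone
# over the network.
local = os.path.join(d, "local.sh")
with open(local, "w") as f:
    f.write("echo local-script-ran\n")
os.chmod(local, 0o777)

# Case 2: running fetched content. Whoever controls the source controls
# what executes, and the payload can change between runs without the
# runner ever looking at it.
fetched = os.path.join(d, "fetched.sh")
with open(fetched, "w") as f:
    f.write("echo remote-payload-ran\n")

print(subprocess.run(["sh", local], capture_output=True, text=True).stdout.strip())
print(subprocess.run(["sh", fetched], capture_output=True, text=True).stdout.strip())
```

The permission bits in case 1 only widen *local* access; case 2 is the `curl | sudo sh` shape the comment describes, where the risk lives in who controls the script's contents, not in its mode bits.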


> Is this materially different than giving all files on your system 777 permissions?

Yes, because I can't read or modify your files over the internet just because you chmod'ed them to 777. But with Clawdbot, I can!


From what I've read, OpenClaw only truly works well with Opus 4.5.


The latest Kimi model is comparable in performance at least for these sorts of use cases, but yes it is harder to use locally.


> harder to use locally

Which means most people must be using OpenClaw connected to Claude or ChatGPT.


It's like the ice bucket challenge but with rusty nails


