skybrian's comments

I see it has a figure for the median ChatGPT query, but for those of us using coding agents that isn't so relevant.

Here's a post that makes an estimate:

https://www.simonpcouch.com/blog/2026-01-20-cc-impact/

> So, if I wanted to analogize the energy usage of my use of coding agents, it’s something like running the dishwasher an extra time each day, keeping an extra refrigerator, or skipping one drive to the grocery store in favor of biking there. To me, this is very different than, in Benjamin Todd’s words, “a terrible reason to avoid” this level of AI use. These are the sorts of things that would make me think twice.


I end up shrugging. For a Claude Code power user, today, a day's use consumes less electricity than a morning commute in an electric car. To say nothing of the costs of keeping your workstation running, your building heated or cooled, etc. Not quite a rounding error, but a relatively minor component of overall usage.
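The comparison above can be sketched as a back-of-envelope calculation. All of the figures below are illustrative assumptions (the per-request energy and daily request count are guesses, not measurements from the linked post):

```python
# Back-of-envelope: a heavy Claude Code day vs. one EV morning commute.
# Every number here is an assumption for illustration, not a measurement.

ev_kwh_per_mile = 0.30        # assumed typical EV consumption
commute_miles = 10            # assumed one-way commute
commute_kwh = ev_kwh_per_mile * commute_miles

wh_per_agent_request = 2.0    # assumed; long-context requests cost more than chat queries
requests_per_day = 500        # assumed heavy-use day
agent_day_kwh = wh_per_agent_request * requests_per_day / 1000

print(f"EV morning commute: {commute_kwh:.1f} kWh")
print(f"Heavy agent day:    {agent_day_kwh:.1f} kWh")
```

Under these assumptions, a full day of heavy agent use comes out to a fraction of a single commute's worth of electricity, which is the shape of the claim being made.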


At least for programming, the power usage seems worth it. For spinning up a million bots to argue with each other on Facebook, it's obviously a total waste.

At any rate, the power usage will become more apparent when these products stop being subsidised and the cost is charged to the end user.


Thanks for sharing this.

I was also under the impression that per-query costs were mostly negligible, but it seems that's only true for fresh sessions and short queries. I have to say, the result is less dramatic than I expected, but still significant for heavy users (such as myself).


I'm not sure it's even a particularly relevant comparison to an hour of use of various other electronic devices. I'm sure the median user is running a lot fewer queries than a Claude Code power-user, but I would guess it's still more than one in a typical session.


The database actually is named "Dolt." It's their own take on "git."

https://www.dolthub.com/


Nit: Dolt is more a relational database with some git add-ons, i.e. it started from SQL and made it better, rather than starting from git and adding SQL.


Yeah, I meant the name is a play on 'git'.


How so? They only share a final letter.



hmm, perhaps this is the underpinning of why I stopped using Dolt (trying to be too clever makes things harder in the long run)


another winner from the school of naming that brought us "The GIMP", then


I don't know what apps you run, but I'm typing this from an M2 Mac with 8 GB, running Tahoe. Performance is fine. It's always been fine.


If you do decide to get in touch again, you can ask the agent for assistance understanding the code and you'll get a lot of help. Seems like it's easier than ever?

It's not like working with legacy codebases used to be.


It all depends. Yes, something like that happened with Uber, but computers and consumer electronics have Moore's law working for them, so prices usually go down. (With occasional shortages like we see now with RAM - not for the first time, but it's usually temporary.)

My guess is that AI will be more like consumer electronics than like Uber.


I agree that consumer goods normally get cheaper over time. Software that becomes commercialized, or sees a surge in enterprise demand, tends to go the other way. Splunk, Elasticsearch, and Slack for example.


I've already switched to Sonnet 2.6 by default. It seems okay for the coding I do (working on a personal website) and it's 40% cheaper.

Businesses will pay more since they can justify the cost. That seems fine?


Maybe we should think of each study as a data point? With enough studies, perhaps we'll get an idea of how much it varies.

Does anyone collect them?


To what end? It varies a lot. Between 0% and 100%+ of the cost could be passed on to the consumer (100%+ when every tier in the distribution chain passes along what it paid and then marks it up). Maybe you can create a statistical distribution with mean/median and standard deviation, but that tells you nothing about what might happen the next time you institute a tariff.
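The aggregation being described is straightforward to do; the harder question is what it tells you. Here's a minimal sketch, where the pass-through rates are made-up placeholder values, not findings from any real study:

```python
# Summarizing hypothetical tariff pass-through estimates from several studies.
# The rates below are placeholders to show the aggregation, not real data.
from statistics import mean, median, stdev

# Fraction of the tariff passed on to consumers, one estimate per study.
pass_through_rates = [0.4, 0.6, 0.75, 0.9, 1.1]

print(f"mean:   {mean(pass_through_rates):.2f}")
print(f"median: {median(pass_through_rates):.2f}")
print(f"stdev:  {stdev(pass_through_rates):.2f}")
```

A wide standard deviation relative to the mean would illustrate the point: the summary statistics describe the historical spread without predicting what the next tariff will do.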


Most of economics is educated guessing, and having more data (hopefully) leads to better guesses.

With this study there are plenty of leads to follow. Does the ABV have an impact? What about the base price? France vs. Italy?


I start with a conversation and then ask the coding agent to write a design doc. It might go through several revisions. The implementation might also be a bit different if something unexpected is found, so afterwards I ask the agent to update it to explain what was implemented.

This naturally happens over several commits. I suppose I could squash them, but I haven't bothered.


I use the same approach, derived from https://boristane.com/blog/how-i-use-claude-code/.


Doesn't Go qualify? It has a fast compiler and it was designed that way.


Go is pretty minimalistic as a language though. At least I don't feel that it has the same expressiveness as either Zig or Rust.


The iPad mini is great for reading books (what I bought it for) and if you don't have an iPhone, any other iOS apps you want to run. I also use Chrome a lot for general web browsing.

Also, I inherited an older, full-size iPad that I plan to leave on my piano for sheet music.


For reading I've honestly found e-ink way better. Not just easier on the eyes (it neither blinds me at night nor makes me squint outside or by a window) but also the UI. The iPad gets the job done, but it feels like Apple doesn't want people using it that way, and God forbid you do something the non-Apple way.

For browsing the web, yeah I think my comment reflects that experience. But I won't go to places like HN because typing is just a shitshow on iOS. Don't get me started on swipe... and how is it 2026 and there isn't a universal gesture for back?

Idk, does anyone with a reMarkable browse the web on it? How is it? I'd get one in a heartbeat but that price is outrageous.

