Hacker News | croemer's comments

One can nicely see the bridge across the river that was burned in the recent attack in Berlin. https://openinframap.org/#15.16/52.425587/13.307235

Everything to the left of it was then without power for multiple days, as the bridge was a single point of failure.

See thread: https://news.ycombinator.com/item?id=46487404


I immediately went and looked for it too!

I also tried to spot vulnerable sabotage points that would knock my electricity out, but that seems harder.


I didn't realize at first that level 1 gave me 11 (eleven) walls. I thought it stood for II, Roman numeral 2. Maybe use a font that makes the difference between 1 and I clearer.


You're forgetting a very important problem: it's hard to implement. Sugar in drinks and CO2 emissions are easily measured. The definition of what's an ad is much harder.


>what's an ad is much harder.

Not really that much harder, and would immediately cover the worst offenders. I mean we already have disclosure laws on product placements and ads.


What's the source for the energy per token? I guess this? https://www.theguardian.com/technology/2025/aug/09/open-ai-c... 18Wh/1000t is on the high end of estimates. But even if it's 10x less I agree this is pretty crazy usage.


"The University of Rhode Island based its report on its estimates that producing a medium-length, 1,000-token GPT-5 response can consume up to 40 watt-hours (Wh) of electricity, with an average just over 18.35 Wh, up from 2.12 Wh for GPT-4. This was higher than all other tested models, except for OpenAI's o3 (25.35 Wh) and Deepseek's R1 (20.90 Wh)."

https://www.tomshardware.com/tech-industry/artificial-intell...

https://app.powerbi.com/view?r=eyJrIjoiZjVmOTI0MmMtY2U2Mi00Z...

https://blog.samaltman.com/the-gentle-singularity


These numbers don't pass sanity check for me. With 4x300W cards you can get a 1K token DeepSeek R1 output in about 10 seconds. That's just 3.3Wh right? And that's before you even consider batching.
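The back-of-envelope arithmetic above can be checked with a quick sketch. The 4×300 W and ~10 s figures are the commenter's assumptions, not measurements, and batch size 1 is the worst case:

```python
# Rough energy cost of a 1,000-token response under the comment's assumptions:
# 4 GPUs at 300 W each, ~10 s of wall-clock generation time, no batching.
def energy_wh(num_gpus: int, watts_per_gpu: float, seconds: float) -> float:
    """Energy in watt-hours drawn over the given wall-clock time."""
    return num_gpus * watts_per_gpu * seconds / 3600.0

single = energy_wh(num_gpus=4, watts_per_gpu=300, seconds=10)
print(f"{single:.2f} Wh per 1K tokens")  # ~3.33 Wh, well under the 18.35 Wh estimate

# Batching spreads the same power draw across concurrent requests, so the
# per-request figure drops further; e.g. with 8 requests in flight:
print(f"{single / 8:.2f} Wh per 1K tokens at batch size 8")
```

Even before batching, this lands roughly 5x below the report's 18.35 Wh average, which is the sanity-check gap the comment is pointing at.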


Indeed, it appears that the limited scope meant the juicy stuff could not be tested. Like exfiltrating other users' data.


Which is stupid, as those are exactly the vulnerabilities worth checking for.

I can understand that in a heavily regulated industry (e.g. medical), a company couldn't, due to liability, give you the go-ahead to poke into other users' data in an attempt to find a vulnerability. But they could always publish the details of a dummy account filled with identifiable fake data.

Something like:

It is strictly forbidden to probe arbitrary user data. However, if a vulnerability is suspected to allow access to user data, the user with GUID 'xyzw' is permitted to probe.

Now you might say that won't help: the people who want to follow the rules probably will, and the people who don't want to won't anyway.


Happily, I did not detect strong signs of LLM writing. Fun read, thanks!


Agreed. It almost feels like the majority of the top articles reek of LLM writing in bad ways.


Tell the AI to keep your comment shorter next time ;)


Disagree; it can be learning as long as you build out your mental model while reading. Having educational reading material for the exact thing you're working on is amazing, at least for those with interest-driven brains.

Science YouTube is no comparison at all: while one can choose what to watch, it's a limited menu produced for a mass audience.

I agree though that reading LLM-produced blog posts (which many of the recent top submissions here seem to be) is boring.


Don't worry, it's an LLM that wrote it based on the patterns in the text, e.g. "Starting a new project once felt insurmountable. Now, it feels realistic again."


That is a normal, run-of-the-mill sentence.


Yes, for an LLM. The good thing about LLMs is that they can infer patterns. The bad thing about LLMs is that they infer patterns. The patterns change a bit over time, but the overuse of certain language patterns remains a constant.

One could argue that some humans write that way, but ultimately it does not matter whether the text was generated by an LLM, reworded by a human in a semi-closed loop, or produced organically by a human. The patterns indicate that the text is just a regurgitation of buzzwords, and it's even worse if an LLM-like text was produced organically.


I can't prove it of course but I stand by it.


Claiming that use of more complicated words and sentences is evidence of LLM use is just paranoia. Plenty of folk write like OP does, myself included.

