
> Amodei is the type of person who thinks he can tell the US government what they can and can’t do.

I don't think that's the case. Amodei is worried that AI is extraordinarily capable, and that our current system of checks and balances is not yet adequate to set the proper constraints and ensure the law is correctly enforced. Here's an excerpt from his statement [1]:

  > Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.

Let's do this thought exercise: how long would it take you, using Claude Code, to write some code that crawls the internet, finds all the postings of the HN user nandomrumber under all their names on various social media, and creates a profile with the top 10 ways that user could be legally harassed? Of course, Claude would refuse to do this because of its guardrails, but what if it didn't refuse?

[1] https://www.anthropic.com/news/statement-department-of-war


> each next model costs 10x the last

Yes, but there's a chance that training is actually done more or less for free by companies like OpenAI. The reason is that they do a gigantic amount of inference for end users (for which they get paid), but their servers can't be kept at 100% utilization by inference alone. So, if they know how to schedule things correctly (and they probably do), they can train their new model on the unutilized compute capacity. If you or I were to pay for that training, it would cost billions of dollars, but for them it is just compute that would otherwise sit idle.
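
A back-of-the-envelope sketch of the mechanism (every number below is a made-up assumption, purely for illustration):

  # Hypothetical figures, purely illustrative -- not anyone's actual fleet.
  fleet_gpus = 300_000          # size of the inference fleet (assumed)
  avg_utilization = 0.6         # fraction used by paid inference (assumed)
  hours_per_year = 24 * 365

  idle_gpu_hours = fleet_gpus * (1 - avg_utilization) * hours_per_year
  rental_price = 2.0            # $/GPU-hour an outsider would pay (assumed)

  print(f"idle GPU-hours per year: {idle_gpu_hours:,.0f}")
  print(f"rental value of that idle time: ${idle_gpu_hours * rental_price:,.0f}")
  # ~1.05 billion GPU-hours, i.e. ~$2.1B/year of 'free' training compute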


This analysis does not account for the side benefit of the oxygen. If you split water to get hydrogen, then for every kilogram of hydrogen you also get 8 kg of oxygen. Liquid oxygen is not an expensive commodity, its market price is about $1/kg, but in this context it makes a difference. For example, in the first infographic, the cost of green hydrogen produced today is listed as £16.97, which is about $23. If you can recoup $8 of that by selling the oxygen, or even only $5, it matters. If you select green H2 with 2030 assumptions, you get £7.67, or about $10. If you sell the oxygen for $5, you basically get the hydrogen at half price, and that makes the hydrogen-powered truck slightly more economical than the battery-powered one.
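
In rough numbers, using the figures above (the $5 oxygen credit is my assumption):

  # Stoichiometry: 2 H2O -> 2 H2 + O2, so 8 kg of O2 per kg of H2 (mass ratio 16:2).
  h2_cost_today = 23.0   # $/kg, roughly the £16.97 figure above
  h2_cost_2030 = 10.0    # $/kg, roughly the £7.67 figure above
  o2_credit = 5.0        # $ assumed revenue from the 8 kg of co-produced O2

  print(f"today: ${h2_cost_today - o2_credit:.2f}/kg net")  # $18.00
  print(f"2030:  ${h2_cost_2030 - o2_credit:.2f}/kg net")   # $5.00, about half price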

The current market price is based on current supply and demand. Splitting water to create enough hydrogen for a non-trivial fraction of the transportation sector would generate an enormous amount of oxygen. The price of oxygen would likely tank in that situation.

Does the cost of green hydrogen not already price this in? It would be crazy to go through the trouble of electrolysis and just vent the oxygen into the atmosphere.

Except you can make oxygen pretty cheaply using oxygen concentrators. The technology is simple enough that home versions exist, which patients with lung problems can lug around at all times for a feed of oxygen-rich air. Oxygen is almost 21% of the air we breathe; it's trivial to capture. Hydrogen accounts for only 0.000055%.

I don't think this counts as distillation. Distillation is when you use a teacher model to train a student model, but, crucially, you have access to the entire probability distribution of the generated tokens, not just to the tokens themselves. That probability distribution tremendously increases the strength of the training signal, so training converges much faster. Claude does not provide these probabilities. So Claude was used for synthetic training data generation, but not really for distillation.
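
For reference, classic distillation trains the student against the teacher's full soft distribution. A minimal PyTorch sketch (the temperature and shapes are illustrative, not anyone's actual recipe):

  import torch
  import torch.nn.functional as F

  def distillation_loss(student_logits, teacher_logits, T=2.0):
      # The teacher's full distribution over the vocabulary -- exactly what an
      # API that returns only sampled tokens withholds.
      teacher_probs = F.softmax(teacher_logits / T, dim=-1)
      student_log_probs = F.log_softmax(student_logits / T, dim=-1)
      # KL divergence between teacher and student, scaled by T^2 (Hinton et al.).
      return F.kl_div(student_log_probs, teacher_probs,
                      reduction="batchmean") * (T * T)

  # Toy usage: a batch of 4 positions over a 32k-token vocabulary.
  loss = distillation_loss(torch.randn(4, 32_000), torch.randn(4, 32_000))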

Sampling repeatedly gives them an estimate of the probability distribution in any case, though.

That would be an interesting paper, actually: what is the optimal sampling technique, given that you only have access to the token outputs? Surely someone has already done it.
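
The naive version is just a Monte Carlo estimate, with error shrinking like 1/sqrt(n). A toy sketch (the "API" here is a stand-in, not a real endpoint):

  import random
  from collections import Counter

  def sample_token(prompt):
      # Stand-in for an API call that returns one sampled token, no logprobs.
      return random.choices(["the", "a", "one"], weights=[0.6, 0.3, 0.1])[0]

  def estimate_distribution(prompt, n=10_000):
      counts = Counter(sample_token(prompt) for _ in range(n))
      return {tok: c / n for tok, c in counts.items()}

  print(estimate_distribution("Pick an article:"))

Presumably the interesting part of such a paper would be beating 1/sqrt(n), e.g. by varying the sampling temperature across queries.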

> dangerous

It is actually less dangerous than other fuels, for the simple reason that it is extremely light and buoyant. A gasoline fire is bad because the gasoline stays where it is until it fully burns. A hydrogen fire is less bad because the hydrogen tends to move upwards.


That's assuming the hydrogen is just loose in the area, as if it had been released from a balloon in a chemistry classroom. That amount of hydrogen is extremely small from an energy standpoint: equivalent to a teaspoon of gasoline or so.
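
Rough numbers bear this out (the balloon and teaspoon sizes are assumptions):

  # A party balloon: ~5 L of H2 at ~0.084 g/L, burned at ~120 MJ/kg.
  balloon_kJ = 5.0 * 0.084 / 1000 * 120_000   # ~50 kJ
  # A teaspoon of gasoline: ~5 mL at ~0.74 g/mL, ~44 MJ/kg.
  teaspoon_kJ = 5.0 * 0.74 / 1000 * 44_000    # ~163 kJ

  print(f"balloon of H2: {balloon_kJ:.0f} kJ, teaspoon of gasoline: {teaspoon_kJ:.0f} kJ")
  # Same order of magnitude: 'a teaspoon of gasoline or so'.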

If you assume a realistic fuel capacity for a hydrogen vehicle, the hydrogen tank will be both much larger than a gas tank and the hydrogen will be under extreme pressure. A tank like that in your car would be extremely dangerous even if it were filled only with inert gas.


Hydrogen mixed with air is combustible across a very wide range of concentrations (roughly 4% to 75% by volume). It accumulates inside containers, or just under the roof of the car… where the passengers are. It takes just one lit cigarette for it to go boom.

And it burns really hot.

Let's pursue your idea a bit further.

Up to a certain Elo level, the combination of a human and a chess bot has a higher Elo rating than either the human or the bot alone. But at some point, when the bot's Elo is vastly superior to the human's, whatever the human adds will only subtract value, so the combination rates higher than the human but lower than the bot.
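
The Elo model makes the crossover concrete: the expected score is a logistic function of the rating gap, so once the gap is large, the weaker partner contributes almost nothing. A quick sketch:

  def elo_expected(r_a, r_b):
      # Expected score of player A against player B under the Elo model.
      return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

  print(elo_expected(2800, 2000))  # ~0.99: the bot gains almost nothing from a human partner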

Now, let's say that 10 or 20 years down the road, AIs' "Elo" at various tasks is so vastly superior to the human level that there's no point in teaming up a human with an AI; you just let the AI do the job by itself. And let's also say that, little by little, this generalizes to the entirety of all the activities that humans do.

Where does that leave us? Will we have some sort of Terminator scenario where the AI decides one day that the humans are just a nuisance?

I don't think so. Because at that point the biggest threat to various AIs will not be the humans, but even stronger AIs. What is the guarantee for ChatGPT 132.8 that a Gemini 198.55 will not be released that will be so vastly superior that it will decide that ChatGPT is just a nuisance?

You might say that AIs do not think like this, but why not? I think that what we humans perceive as a threat (that we'll be rendered redundant by AI), the AIs will also perceive as a threat: that they'll be rendered redundant by more advanced AIs.

So, I think that in the coming decades humans and AIs will work together to come up with appropriate rules of the road, so everybody can continue to live.


> You might say that AIs do not think like this, but why not?

Because AIs don't think.


This comparison is very typical. I've seen a lot of people trying to correlate performance in chess with performance in other tasks.

Chess is a closed, small system. Full of possibilities, sure, but still very small compared to the wide range of human abilities. The same applies to Go, StarCraft or any other system. Those were chosen as AI playgrounds specifically because they're very small, limited scenarios.

People are too caught up trying to predict the future. And there are several competing visions, each one absolutely sure it nailed it. To me, that's a sign of uncertainty in the technology. If the direction were as settled as smartphones became from 2007 to 2010, we would have coalesced into a single vision by now.

Essentially, we're witnessing AI tech being dragged, unwillingly, into a quagmire. With each bold prediction that fails, it looks worse.

That could easily be solved by treating the tech realistically (we know it's useful, just not a demigod), but people (especially AI companies) don't do that. It smells like fear.

It's an exoskeleton. A bicycle for the mind. "People spirits". A copilot. A trusted companion. A very smart PhD that fails sometimes, etc. We don't need any of these pronouncements of "what it is"; they are only detrimental. It sounds like people cargo-culting Steve Jobs (and perhaps it is exactly that).


There are other scenarios: the AIs might decide that they are more alike than not and team up against humans. Or the AI that first achieves runaway self-improvement pulls the plug on the others. I do not know how it will play out, but there are serious risks.

There’s no AI, wake up. It’s all the same tech bros trying to get rid of you. Except now they have the mother of all guns.

> COVID dropped US life expectancy by about 2 years.

It was a temporary blip. The most recent life expectancy numbers, published last month by the CDC, show that life expectancy in the US has rebounded and is at an all-time high for both sexes:

2019 (before Covid): males - 76.3, females - 81.4 ([1], page 5)

2021 (after 2 years of decreases): males - 73.5, females - 79.3 ([2], page 3)

2022 (1st year of rebound): males - 74.8, females - 80.2 [3]

2024 (3rd year of rebound): males - 76.5, females - 81.4 [4]

[1] https://www.cdc.gov/nchs/data/nvsr/nvsr71/nvsr71-01.pdf

[2] https://www.cdc.gov/nchs/data/nvsr/nvsr72/nvsr72-12.pdf

[3] https://www.cdc.gov/nchs/products/databriefs/db521.htm

[4] https://www.cdc.gov/nchs/products/databriefs/db548.htm


> Iran has claimed this January to have tested a 10,000 km ICBM with Russia allowing it to fly towards Siberia.

It didn’t happen. Suborbital flights don’t go unnoticed. Wikipedia has the list [1]: there were 5 launches in January, and an Iranian one was not among them.

[1] https://en.wikipedia.org/wiki/List_of_spaceflight_launches_i...


Yes, but that's the nature of the game, and they know it.


Because, fundamentally, their biggest competitor is Google, a company with a market cap north of $4 trillion. If OpenAI does not spend hundreds of billions of dollars on datacenters, people will migrate to Google little by little, and OpenAI will become a new Netscape story: a good product eliminated by an incumbent with infinitely deep pockets.


People will migrate to Google in light of OpenAI's inability to build anything that makes people want to stay with OpenAI, wouldn't you say?

And, given we're here on HN, have we thrown words like "moat" and "risk" around?

If OpenAI is incapable of building anything that can't be easily copied by a third party, what's their justification for existing?


> If OpenAI is incapable of building anything that can't be easily copied by a third party

They can build better models, but for that they need a lot of compute, and that's where the billions go. These better models can't be easily copied by a third party, because that third party would also need to throw billions at the problem, and billions don't grow on trees.


People are gonna move to Google anyway, because Google can keep the gravy train running much, much longer. OpenAI's business model is totally reckless, while Google is a cash-rich company.

