
I used to work at Anthropic, and I wrote a comment on a thread earlier this week about Anthropic's first response and the RSP update [1][2].

I think many people on HN have a cynical reaction to Anthropic's actions due to their own lived experiences with tech companies. Sometimes, that holds: my part of the company looked like Meta or Stripe, and it's hard not to regress to the mean as you scale. But not every pattern repeats, and the Anthropic of today is still driven by people who will risk losing a seat at the table to make principled decisions.

I do not think this is a calculated ploy driven by making money. I think the decision was made because the people making this decision at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI go well.

[1]: https://news.ycombinator.com/item?id=47174423

[2]: https://news.ycombinator.com/item?id=47149908


My lived experience with tech companies is that principles are easy when they're free - i.e., when you're telling others what to do, or taking principled stances when a competitor is not breathing down your neck.

So, with all respect, when someone tells me that the people they worked with were well-intentioned and driven by values, I take it with a grain of salt. Been there, said the same things, and then when the company needed to make tough calls, it all fell apart.

However, in this instance, it does seem that Anthropic is walking away from money. I think that, in itself, is a pretty strong signal that you might be right.


HN is pretty polarised about this - they are either “the good guys” or “doing it for positive marketing”.

I’m in the camp of “the world is so fucked up, take the good when you can find it”.

Beggars can’t be choosers when it comes to taking a stand against dictatorships.


Yeah, the alternative is to be OK with their product being used for surveillance.

Not sure why it's controversial that they said no, regardless of the reasoning. Yeah there's a lot of marketing speak and things to cover their asses. Let's call them out on that later. Right now let's applaud them for doing the right thing.

FWIW I do not think they are the "good guys" (if I had a dollar for every company that had a policy of not being evil...). But they are certainly not siding with the bad guys here.


> Let's call them out on that later. Right now let's applaud them for doing the right thing.

Yes, yes, yes. When I first read the stuff about this yesterday, my immediate thought was "wait, these are the only two things they have a problem with?"

But they made a stand, and that still matters. We shouldn't let the perfect be the enemy of the good. At least it's not Grok.


If one really wants to take a stand against this crazy administration, they shouldn’t start it by referring to Hegseth with his assumed title.

I thought that too, but then wondered if they thought better of deliberately antagonizing a very powerful bully.

> the alternative is to be OK with their product being used for surveillance.

Their statement didn't indicate they object to their product being used for surveillance, just for domestic surveillance of U.S. citizens.


It's gotta be thus.

For if you don't, the next step is cynicism maximally operationalized: what, you're not doing game/political BS to get ahead? What are you? A chump? An idiot?

That kind of stuff spreads like wildfire, making corporate America ... something else, to put it politely.

Doing the right thing has cost me big time here and there. I don't care. Simultaneously, orgs are not all bad; that's another distortion we can do without.


No surprise many people on YC's site align with Sam Altman's view of the world - right or wrong.

I’m just surprised the alignment guy is struggling with alignment. Dodged a bullet I guess.

If I remember my D&D, Lawful Evil is an alignment.

I think it's definitely true that you should never count on a company to do principled things forever. But that doesn't mean that nothing is real or good.

Like Google's support for the open web: They very sincerely did support it, they did a lot of good things for it. And then later, they decided that they didn't care as much. It was wrong to put your faith in them forever, but also wrong to treat that earlier sincerity as lies.

In this case, Anthropic was doing a good thing, and they got punished for it, and if you agree with their stand, you should take their side.


Google's support for the open web is a great example because it was obviously a good thing but also obviously built into their business model that they'd take that position. That made them a much more trustworthy company in those days, because abandoning that position would have required not just losing money for a while but changing their internal structure.

How much value is there in individual values?

Many of us remember that OpenAI was also started by people with strong personal values. Their charter said that they would not monetize after reaching AGI, that their fiduciary duty is to humanity, and that the non-profit board would curtail the for-profit incentives. Was this not also believed by a sizeable portion of the employees there at the time? And what is left of these values after the financial incentives grew?

The market forces from the huge economic upside of AI devalue individual values in two ways. First, they reward whoever accelerates AI the most over anyone who is more careful and acts on individual values--the latter simply lose power in the long run until their virtue has no influence. As Anthropic says in its mission statements, it is not of much use to humanity to be virtuous if you are irrelevant. Second, as is true for many technologies, economic prosperity is deeply linked to human welfare, and slowing or limiting progress leads to real, immediate harm to the human population. Thus any government regulation against AI progress will always be unpopular, because the values arguing future harm from AI are fighting against the values of saving people from disease and starvation today.


> However, in this instance, it does seem that Anthropic is walking away from money.

The supply chain risk designation will be overturned in court, and the financial fallout from losing the government contracts will pale in comparison to the goodwill from consumers. Not to mention that giving in would mean they lose lots of their employees who would refuse to work under those terms. In this case, the principles are less than free.


> ...the financial fallout from losing the government contracts will pale in comparison to the goodwill from consumers.

In fact, a friend heard about this and immediately signed up for a $200/year Claude Pro plan. This is someone who has been only a very occasional user of ChatGPT and never used Claude before.

I told my friend "You could just sign up for the free plan and upgrade after you try it out."

"No, I want to send them this tangible message of support right now!"


Still, you’d need a million people to do that to compensate for the $200M military contract.

As an aside, there are probably lots of companies that serve the government seriously considering cutting the government as a customer.

Simply because the money/efficiency they will lose from cutting Claude will surpass the revenue they get from the gov.


Does the military pay $200m per month?

As the parent stated, the Claude Pro plan is $200 per year, not per month.

Gotcha, mixed it up with the Max plan.

Is the government contract 200m per year? Or for a longer period?

Not all that many people

I don't think it's easy to compare how this might affect their bottom line.

Anthropic may gain customers, but OpenAI may lose customers also (or they may even gain customers).

Maybe OpenAI also has to pay their employees more now for "moral flexibility". Or maybe right-wing devs are more inclined to work there, I don't know.


I'm seeing a lot of "QuitGPT" posts. It seems your friend has friends.

Unclear how much damage the designation will do to their dealmaking ability in the meantime. How long will it take for the court to reverse the order?

The longer it takes, the better the impact on their reputation.

The consumer goodwill is working then - it pushed me to upgrade my plan on March 1st... (do they bill on a rolling 30-day cycle? or calendar-month to calendar-month?)

It’s not rolling 30 days. Lost 2 days of use by subscribing in February.

Thanks! I appreciate the heads up!

> The supply chain risk designation will be overturned in court,

I'm honestly uncertain how the courts will rule. You could be right, but it isn't guaranteed. I think a judicial narrowing of it is more likely than a complete overturn.

OTOH, I think almost guaranteed it will be watered-down by the government. Because read expansively, it could force Microsoft and AWS to choose between stopping reselling Claude vs dropping the Pentagon as a customer. I don't think Hegseth actually wants to put them in that position – he probably honestly doesn't realise that's what he's potentially doing. In any event, Microsoft/AWS/etc's lobbyists will talk him out of it.

And the more the government waters it down, the greater the likelihood the courts will ultimately uphold it.

> and the financial fallout from losing the government contracts will pale in comparison to the goodwill from consumers.

Maybe. The problem is B2B/enterprise is arguably a much bigger market than B2C. And the US federal contracting ban may have a chilling effect on B2B firms who also do business with the federal government, who may worry that their use of Claude might have some negative impact on their ability to win US federal deals, and may view OpenAI/xAI (and maybe Google too) as safer options.

I guess the issue is nobody yet knows exactly how wide or narrow the US government is going to interpret their "ban on Anthropic". And even if they decide to interpret it relatively narrowly, there is always the risk they might shift to a broader reading in the future. Possibly, some of Anthropic's competitors may end up quietly lobbying behind the scenes for the Trump admin to adopt broader readings of it.


> OTOH, I think almost guaranteed it will be watered-down by the government. Because read expansively, it could force Microsoft and AWS to choose between stopping reselling Claude vs dropping the Pentagon as a customer.

A tweet does not have the force of law. Being designated a supply chain risk does not mean that companies who do business with the government cannot do business with Anthropic. Hegseth just has the law wrong. The government does not have the power to prevent companies from doing business with Anthropic.


The issue is, even if the Trump admin is misrepresenting what the law actually says, federal contractors may decide it is safer to comply with the administration’s reading. The risk is the administration may use their reading to reject a bid. And even if they could potentially challenge that in court and win, they may decide the cheaper and less risky option is to choose OpenAI (or whoever) instead

They would have a very good case against the government if that were to happen. I suspect that the supply chain risk designation will not last long (if it goes into effect).

Some vendors will decide to sue the government. Others may decide that switching to another LLM supplier is cheaper and lower risk.

And I'm not sure your confidence in how the courts will rule is justified. Learning Resources Inc v Trump (the IEEPA tariffs case) proves the SCOTUS conservatives – or at least a large enough subset of them to join with the liberals to produce a majority – are willing sometimes to push back on Trump. Yet there are plenty of other cases in which they've let him have his way. Are you sure you know how they'll judge this case?


> Are you sure you know how they'll judge this case?

I'm not even sure it will get that far. There's a million different ways that this could go that mean it won't ever come before the supreme court. The designation isn't even in effect yet.

I do think if it goes into effect it will eventually be overturned (Supreme Court or otherwise). There just isn't a serious argument to make that they qualify as a supply chain risk, and there is no precedent for it.


I call this being ethically convenient. I think Anthropic is playing to the crowd. This admin will be gone soon enough, so no need to drag the brand into the mud. Just need to hold out. They have enough money that walking away from the money isn't impressive. But pissing off the gov is pretty fun and far more interesting.

That's what worries me so much about the development that OpenAI is stepping in. OpenAI's claim is that they have the same principles as Anthropic, but that claim is easy because it's free right now: the govt wants to sell the "old bad, new good" story.

Surely OpenAI cannot but notice that those values, held firmly, make you an enemy of the state?


My reading is that OpenAI is paying lip service. Altman is basically saying "OF COURSE we don't want to spy on Americans or murderdrone randos, but OF COURSE the government would never do that, they just told me so (except for the fact that they just cut ties with Anthropic because Anthropic wouldn't let them do that)"

It's much simpler than that. OpenAI is losing significant market share, and this is a Hail Mary that the government will force troves of companies to leave Anthropic.

principles are easy when they're free

Indeed. If everything is a priority, nothing is a priority; you only know that something is a real priority when you get an answer to the question "what will you sacrifice for this".


If you're going to be cynical, at least credit them with some brains:

MAGA isn't going to last forever, and when it collapses, the ones who publicly stood up to it will be better positioned to, I don't know, not face massive legal problems under whatever administration comes next. A government elected by middle-aged moms who use "Fuck ICE" as a friendly greeting isn't going to have any incentives to go easy on Palantir and Tesla.


Cynical or not, I think it was an absolutely brilliant move: "Mass domestic surveillance of Americans constitutes a violation of fundamental rights". I think they placed their bets on Sama signing a contract with the DoD and here we are, one day later the news that OpenAI signed a contract is out. An absolute PR disaster for OpenAI. And an absolute PR victory for Anthropic.

I think OpenAI's IPO will be interesting. Not even the conservative media will be happy about mass surveillance of Americans.

For non-Americans, not much changes; they don't really care about your rights any more than about a pile of dog poo.


I applaud Anthropic's choice. Choosing principle over money is a hard choice. I love Anthropic's products and wish them success!

You applaud Anthropic's choice to enhance mass surveillance of non-US people? If Anthropic wants mass surveillance, they should limit it to their own country, not all other countries, IMO.

Anthropic's principles are extraordinarily weak from an absolute point of view.

Don't surveil the US populace? Don't automate killing - make sure a human is in the loop? No, sorry: don't automate killing yet.

Yeah dude, I'm sure just about any burglar I pull out of prison will agree.

Listen, yes, it's good compared to like 99% of US companies. But that really speaks more to the absolute moral bankruptcy of most companies, and not to Anthropic's principles.

That being said, yes we should applaud Anthropic. Because yes this is rare and yes this is a step in the right direction. I just think we all need to acknowledge where we are right now, which is... not a good place.


> Because yes this is rare and yes this is a step in the right direction

ehh.. I'd say it's a smaller step in the wrong direction than it could be.


> I think the decision was made because the people making this decision at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI to go well.

The entire problem is that this lasts as long as those people are in charge. Every incentive is aligned with eventually replacing them with people who don’t share those values, or eventually Anthropic will be out-competed by people who have no hesitation to put profit before principle.


I mean, yah. How else could it be? Xerox, GE, IBM (1990, Gerstner) and a zillion other rock stars fell hard and had to be overhauled. That's why continuous improvement is a thing, and why a platonic take on the world was never a thing.

The funniest (or perhaps saddest, depending on your view) part is that the "principles" we're talking about and apparently celebrating here are that they don't want to do DOMESTIC surveillance, and they don't want FULLY autonomous kill bots... yet, because according to the CEO the models aren't there yet.

Meaning, they're a-okay with:

- Mass surveillance of non-US peoples (and let's be completely real here, they're in bed with Palantir already, so they're obviously okay with mass surveillance of everyone as long as they're not the ones that will be held culpable)

- Autonomous murder bots. For now they want a human in the loop to rubberstamp things, but eventually "when the models improve" enough, they're just fine and dandy with their AIs being used as autonomous weapons.

What the fuck are the principles we're talking about here? Why are they being celebrated for this psychotic viewpoint, exactly?


I also think this will ultimately benefit anthropic in the long run. Outlined in this article: https://open.substack.com/pub/zeitgeistml/p/murder-is-coming...

This is an absolute rarity these days. Very appreciative of the true leadership on display here

Why did they work with Palantir then, which is the integrator in the DoD? It does not take a genius to figure out where this was going.

I don't know why a personal testimony to the effect that "these are the good guys" needs to be at the top of every Anthropic thread. With respect to astroturfing and stealth marketing they are clearly the bad guys.


Anthropic's stance is "we believe in the use of our tools, with safeguards, to assist the defense of the US".

So of course they would work with Palantir to deploy those tools.

The issue we're seeing is because the DoW decided they no longer like the "with safeguards" part of the above and is trying to force Anthropic to remove them.


They are pretty clear about this:

> the mass domestic surveillance of Americans

This they say they don’t like. The qualifiers tell you they’re totally fine with mass surveillance of Palestinians, or anyone else really, otherwise they could have said “mass surveillance”.

> fully autonomous weapons

And they’re pretty obviously fine with killing machines using their AI as long as they’re not fully autonomous (at the moment, they say the tech is not there yet).

All things considered they’re still a bit better than their competitors, I suppose.


Others have addressed the first half of your comment, so I'll focus on the astroturfing claim.

While I've talked a lot about Anthropic this week, if I was astroturfing for a positive image, I'd be very bad at it [1][2][3].

[1]: https://news.ycombinator.com/item?id=47150170

[2]: https://news.ycombinator.com/item?id=47163143

[3]: https://news.ycombinator.com/item?id=47174814


It doesn’t seem like anybody has addressed “If they are the good guys with principles why did they work with Palantir?”

There’s a comment that’s sort of handwaving and saying “because America”, but I would imagine that someone with direct knowledge of the people involved would have something more substantive than “thems the breaks” when it comes to working with Palantir


Anthropic makes it kind of clear in all of their statements that they are not opposed to working with the surveillance state, with the military industrial complex, etc. Their central philosophy, it seems, is not incongruent with working with entities, public or private, that can be construed as imperialist or capitalistic or a combination of both. I actually appreciate their honesty here.

They exist within the regime of capital and imperialism that all of us who are American citizens exist within. This isn't a cop-out or cope. It's just the reality of the world that we live in. If you are an American and somehow above it, let me know how you live.


The further away from God, the more need to believe there are good guys.

God has been used as a justification for a lot of human suffering.

My personal belief is that the closer to god you are, the more easily you can justify evil. How could you not? If my entire belief system is derived from faith, then there are *no* conclusions I could not come to, and therefore anything can be justified.


>further away from god

What is that? Some new bit you're working on?


>driven by values

Would the people who have invested in the company like that? Or would they like the company to make some money? Are they going to piss off their investors by being "driven by values"?

I mean, please explain to me how "driven by values" can be done when you are riding investor money. Or maybe I am wrong and this company does not take investments.

So in the end you are either

1. funding yourselves, then you are in control, so there is at least a justification for someone to believe you when you say that the company is "driven by values".

2. Or have taken investments; then you are NOT in control, and anyone who trusts you when you say the company is "driven by values" is plain stupid.

In other words, when you start taking investment, you forfeit your right to claim virtue. The only claim you can expect anyone to believe is "MY COMPANY WILL MAKE A TRUCKLOAD OF MONEY!!!!"


As an investor in Anthropic, I'd say that anyone who wasn't aware of where they stood on various values issues the whole time should not have been putting money in, it was not hidden.

How much is your investment (you don't have to be exact)?

The bottom line is that if the investment is not profitable, there will be less and less investment, because fewer and fewer can afford to lose money and stick to their values, until no one is investing - however high your values might be...

Sticking to your values when it costs growth is not sustainable for publicly traded companies...


Anthropic is a public benefit corporation. Investors who put money in knew this. It's in the corporate charter. The corporate charter is a public document.

Fiduciary duty means the board and officers must act in accordance with the governing documents of the corporation.

Even a regular corporation doesn't need to be just for the purpose of "money goes up". The board has discretion on how they create value.


> public benefit corporation

> The board has discretion on how they create value.

It does not make much of a difference. If the investors don't get their investment returned with interest (as $$$), the majority of them are not going to invest further. And the set of investors who invest based on the company's ethical stand is probably only a small fraction of all the investment it has received.


So many tech companies have the "high values" screed that it really just seems like a standard step in the money plan.

Practically the entire tech industry, including many of the higher ups currently camping out on the right, used to be firmly in a sort of centrist-with-social-justice-characteristics camp. Then many of those same people enthusiastically stood with Trump at his inauguration. It's completely reasonable that people have their doubts now.

It's also completely reasonable to expect that if Anthropic is the real deal and opposed to where the current agenda setters want to take things, they'll be destroyed for it.


Destroyed? No. But a new sheriff is gonna show up while the existing one exits stage left with big bags of nuts.

> enthusiastically stood with Trump

I think "enthusiastically" looks different. They had to choose between kissing Trump's butt to make good business for 4 years or seeing their companies at a severe disadvantage. I'm not saying what they did was good, nor do I support it. But from a business angle it's not hard to see why they chose to do that. If you'd ask them privately, off the record, then I'm sure most of them would tell you that Trump is an idiot and dangerous.


Mark Zuckerberg was in a big hurry to call Trump a "badass" in the wake of the Butler hoax, and is clearly trying to appeal to the right with his cultivated jiu jitsu Chad image. It doesn't mean a damn thing what these CEOs are willing to say behind closed doors when their public decisions are to remain in lockstep with the agenda and fire anyone who asks questions about whether it's the right one.



All corporations are to an extent. It’s a question of magnitude, not absolutes.

You, too, are driven by money. Yet I’m certain you maintain a set of principles and values. Let’s keep the discussion productive yeah?


Sure, where is your productive output? Cause that's drivel.

Anthropic kept referring to Hegseth as "Secretary of War" and the DoD as "Department of War". Which is horseshit. This whole thing is Anthropic flailing.


Come on. That is because this is a negotiation between Anthropic and the DoD and they understandably don't want to burn bridges.

Do you just expect Anthropic to totally blow up all bridges to the government? What do you actually want them to do?

Reading your comment history I'm not sure they could do anything to satisfy you.


I'm not the one claiming they have principles so.. No? I expect them to do whatever they think they need to at any given moment to enrich themselves.

Their "moat" is nothing more than momentum at this point. They are AOL on an accelerated timeline.


Even as someone pretty staunchly opposed to this stupid "Gulf of America" Jahr Null bullshit from the Trump administration, I actually think the new labels are more honest about these institutions and their intended purpose.

This is a pretty classic mistake most people who are in high-profile companies make. They think that some degree of appealing to people who were their erstwhile opponents will win them allies. But modern popular ethics are the Grim Trigger and the Copenhagen Interpretation of Ethics. You cannot pass the purity test. One might even speculate that passing the purity test wouldn't do anything to get you acceptance.

Personally, I wish that the political alignment I favour was as Big Tent as Donald Trump's administration is. I think he can get Zohran Mamdani in the room and say "it's fine; say you think I'm a fascist" and then nonetheless get what he wants. But it just so happens that the other side isn't so. So such is life. We lose and our allies dwindle since anyone who would make an overture to us, we punish for the sin of not having been born a steadfast believer.

Our ideals are "If you weren't born supporting this cause, we will punish you for joining it as if you were an opponent". I don't think that's the path to getting what one wants.


> political alignment I favour was as Big Tent as Donald Trump's administration is

I'm not sure how accurate this sentiment is. Your desire is to embrace your enemy without resolving the differences, and get what you want. It's not clear here if you're advocating compromise and negotiation, or just embracing for the sake of it while doing what you wanted all along.

And evaluating Trump's actions against this sentiment doesn't seem to be the negotiation and compromise interpretation. Given the situation with tariffs and ICE enforcement, there is no indication of negotiation or compromise other than complete fealty/domination.

So as grandiose and noble your sentiment is, Donald Trump is hardly the epitome of it as you seem to suggest.


I think the differences in this situation were that I do not want AI used in domestic surveillance or autonomous weapons, and Anthropic holds to that position.

I think Donald Trump has pretty much let Zohran Mamdani operate without applying the kind of political pressure he has applied to other people, notably his predecessor Eric Adams. Also, I think saying "let people be your allies when they take your position" is less "grandiose and noble" than demanding someone agree on all counts before you will accept any political alignment. But it's fine if everyone else disagrees. It's possible there really just isn't a political group which will accept my views and while that's unfortunate because it means I can't get all that I want, I think it'll be okay.

One could reasonably argue that the meta-position is to either join the Republicans full-bore (somewhat unavailable to me) or to at least play the purity test game solely because that's the only way to have any influence on outcomes. If it comes to that, I'll do it.


You are making a mistake in thinking that Trump thinks of these things in political terms. Trump sees a charismatic and popular politician and he wants to associate with them on that basis alone, because Trump has a deep psychological need to be liked. Mamdani understands his psychology and is able to exploit it well by playing his own attributes to his advantage.

Politically, it's not like Trump tolerates dissent within the Republican party, he constantly threatens and berates anyone who shows defiance into submission. It's precisely because Mamdani is not in his tent and not really much of a threat to his power that he is willing to deal with him that way.


I don't understand, your position is the same as Anthropic, yet you disagree with their stance?

And I wouldn't take the case of Trump and Mamdani as the exemplar of Trump's overall behavior towards opponents. The amount of evidence to the contrary is overwhelming.


Anthropic's adherence to their stated principles was never tested and their willingness to work with DoD made it seem like they didn't stand by them strongly so I wasn't happy with that. This action shows that they are willing to lose big contracts in order to stand by their stated principles. I like that.

In any case, I think I've said all there is for me to say on the subject and everyone seems to disagree. I'll take the hint.


Zohran Mamdani has yet to demonstrate that he poses any serious impediment to Trump and the agenda of Trump's owners.

I think there is a marked difference in Trump's rhetoric toward Mamdani before the meeting at the White House and after.

I think you are extrapolating a bit too far from an outlier data point. Trump has had several other meetings (eg. Zelenskyy) go sideways for no apparent reason.

And he has had several meetings change his opinion of the other party for no apparent reason (eg Zelenskyy).

extrapolation is futile


Your contention that Trump's administration is big tent is risible.

Political witch hunts, women and minorities forced out of the military, and kicking out all the allied countries that used to be in the tent with us?

Bullshit of the finest caliber.


Yes, the Trump administration is a big tent of politicians who hold incompatible opinions and are allowed to stay as long as they display personal allegiance to Trump.

> he can get Zohran Mamdani in the room and say "it's fine; say you think I'm a fascist" and then nonetheless get what he wants.

Is your perception that warped? Mamdani is the one who knows how to play Trump as a fiddle, and the one who walks away with something from the exchange.


“I think the decision was made because the people making this decision at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI to go well.”

Every single one of these CEOs happily pirated unimaginable amounts of copyrighted content. That directly hurt millions of real human beings. Not just the prior creators but also crushing the future potential for success of future ones.

https://www.susmangodfrey.com/wins/susman-godfrey-secures-1-...


I used to work at Anthropic, and I wrote a comment on a thread earlier this week about the RSP update [1]. It's heartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving its goals. I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) in their view a net negative in the long term. (Many others would too; those three are just the best-known.)

That doesn't mean that I always agree with their decisions, and it doesn't mean that Anthropic is a perfect company. Many groups that are driven by ideals have still committed horrible acts.

But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and are genuinely motivated by trying to make the transition to powerful AI go well.

[1]: https://news.ycombinator.com/item?id=47145963#47149908


I've had so much abuse thrown at me on here for saying this very thing over the last few years. I used to be friends with Jack back in the day, before this AI stuff even all kicked off. Once you know who people really are inside, it's easy to know how they will act when the going gets rough. I'm glad they are doing the right thing, but I'm not at all surprised, nor should anyone be. Personally I believe they would go to jail/shut down/whatever before they do something objectively wrong.

> I used to be friends with Jack back in the day, before this AI stuff even all kicked off, once you know who people really are inside, it's easy to know how they will act when the going gets rough.

This sounds quite backwards to me. It's been abundantly clear in today's times that, in fact, you only really know who somebody really is when they're under stress. Most people, it seems, prefer a different facade when there is nothing at stake.


I don't know most people, so I can't speak to that. I do know Jack, and I knew how he was under stress long before any of this AI stuff. Jack Clark might very well be the most steady hand in the valley right now to be quite frank.

That is a good LinkedIn endorsement if ever I saw one!

Hm, I think you kinda know what people are like by seeing what they do when they’re under no stress and feel like they are free from consequences. When they have total power in a situation. The façade drops because it’s not necessary.

If someone is in an environment where they have to do XYZ or die, their choice to do XYZ might not reflect their personality, but the environment where they have to do XYZ or die.

But if you were watching them, was there really no freedom from consequences? At least there was the risk of you thinking less of them.

I think that really cruel people want you to know when they can act with impunity, it's part of the appeal to some. The Anthropic people don't seem like that sort, at least. But plenty of horrible people have still not been that sort.


> But if you were watching them, was there really no freedom from consequences?

Ah, so I think you may have done a little hop and a jump over a critical, load-bearing term which is “feel like”. You get to observe people who feel like there are no consequences. Their feelings may or may not be accurate.

You can sometimes see people who treat service workers, servants, or subordinates poorly because they feel like it’s permitted and free from consequence. You can also sometimes see people reveal things about themselves when playing games. It’s kind of a cliché that people find out that they’re transgender at the D&D table, and it happens because it’s a “consequence-free way” to act out a different gender role.

Or we can talk about that magic ring that makes you invisible. You know, the ring of Gyges, or that of Sauron. People can’t actually become invisible, but you can sometimes catch them in a situation where they think they can do something wrong and not get caught.


Free from consequence. In other words, free of any stakes. Zero stress low stakes environments enable larping.

Exactly

Not all of us know who Dario, Jared, Sam and Jack are. Some clarification is helpful. That's all, no hidden agenda!

Well, I can only speak to Jack Clark. Jack was a reporter who covered my startup and then became my friend. Over the last... I dunno, 13 years or something, we've had long deep talks about lots of things, pre-AI world: what it takes to build a big business, will QC ever become a thing, universal basic human love, kids, life, family. He is brilliant. The business I worked on that he covered went through a lot of shit that he knew about. We talked about power in business, internal politics, how things actually get built... all that stuff.

Then... attention is all you need, a bunch of folks grok it, he got interested... got to talking to these folks starting some little research lab to see how NNs scale, so he joined that lab, among the first 5/10 or so iirc... to head AI policy. That little lab grew, stuff happened, the next part isn't mine to share, but so much as to say: Anthropic was basically born out of the expectation that this moment would come, and that more... extremely human-focused... voices should be at the table. That is Anthropic, that idea: they left their jobs at the aforementioned lab and started their own startup to make sure a certain tone/voice/idea was always represented.

Around summer 2024, although at this point we didn't discuss any specifics of the work at his "startup", I said to him: what comes next is going to be super hard, and I know this is going to sound really stupid, but you're all going to need to be Jesus for real. I'm a Buddhist, and it wasn't a literal religious comment about Christianity as a denomination, so much as... the very basics of the stuff the dude Jesus Christ espoused. He knew, they knew; that, I suppose, was always the plan? So it was never unexpected to me that they would act this way; that is what Anthropic is all about. Here we are.

Hah, you're right, I meant Dario Amodei, Jared Kaplan, and Sam McCandlish.

They're all cofounders of Anthropic. Dario is the CEO, Jared leads research, and Sam leads infra. Both Jared and Sam have served as the "responsible scaling officer", meaning they were responsible for ensuring Anthropic met the obligations of its commitments to building safeguards.

I think neom is referring to Jack Clark, another one of the seven cofounders.


I almost downvoted you, because this is a pretty classic LMGTFY (or now, LMLLMTFY), but on second thought, you're right. The "Dario" is clear, he's the author of TFA, but for other execs, Anthropic's fans on here should spell out their full names. Dropping all these first names feels like "inside baseball" at best, mildly culty at worst, and here outside the walls of Anthropic, we're going to see those names and think of Kushner(??), Altman, and maybe Dorsey, and get confused.

FWIW, I agree strongly w/ lebovic's toplevel take above, that Anthropic's leaders are guided by their values. Many of the responses are roughly saying, "That can't be true, because Anthropic's values aren't my values!" This misses the point completely, and I'm astounded that so many commenters are making such a basic error of mentalization.

For my part, I'm skeptical of a lot of Anthropic's values as I perceive them. I find a lot of the AI mysticism silly or even harmful, and many of my comments on this site reflect that. Also, like any real-world company, Anthropic has values that are, shall we say, compatible with surviving under capitalism -- even permitting them to steal a boatload of IP when they scanned those books!

Nonetheless, I can clearly see that it's a company that tries to stand by what it believes, and in the case of this spat with Dep't of War, I happen to agree with them.


I can confirm I thought it was Jack Dorsey, but it looks like we are talking about Jack Clark [https://en.wikipedia.org/wiki/Jack_Clark_(AI_policy_expert)]

It would be better if people could name them with their full names to avoid any confusion.


[flagged]


Please don't do this here.

> it's easy to know how they will act when the going gets rough

Even if you went to burning man and your souls bonded, you only know a person at a particular point in time - people's traits flanderize, they change, they emphasize different values, they develop different incentives or commitments. I've watched very morally certain people fall to mania or deep cynicism over the last 10 years as the pillars of society show their cracks.

That said, it is heartening to know that some would predict anyone in Silicon Valley would still take a moral stance. But it would land better if it didn't come the same day he fired 4,000 people in the "scary big cut" for a shift he sees happening. I guess we're back to Thatcherisms, where "There Is No Alternative" justifies our conservatism.


Your comment reminds me of a story. John Adams and Lafayette met in Massachusetts about 49 years after the revolution. (Lafayette went on a US tour to celebrate the upcoming 50th anniversary of independence.) Supposedly after the meeting Adams said "this was not the Lafayette I knew" and Lafayette said "this was not the Adams I knew".

In these days of the Epstein emails, it's worth remembering one thing that's become clear: Epstein was an extremely nice guy. He seemed kind, sincere, interested in what you were doing, civilized, etc.

But to quote Little Red Riding Hood in Stephen Sondheim's musical: Nice is different than good. It's hard to accept if people you really like do horrible things. It's tempting to not believe what you hear, or even what you see. And Epstein was good at getting you to really like him, if he wanted to.

That doesn't mean we should be suspicious of niceness. It just means that we should realize, again, nice is different than good.


In German we say „Nett ist die kleine Schwester von Scheiße", which literally means "nice is the little sister of shit", i.e. nice is the polite version of being an asshole. And this is how I cope with what decision-makers say. Zuckerberg was also "nice" for a long time.

Anyone who's grown up around the upper class social strata understands this to be true.

"people's traits flanderize": nice

>Even if you went to burning man and your souls bonded ...

I'll take: List of places I never want to bond my soul with someone at for one thousand, please.


They get an air conditioned trailer and pay "sherpas" to do their chores, so it's basically just a hotel suite.

Oh, that's the best place for souls to bond.

Bond to what -- that's the real question

Playa dust. It's certainly permanently bonded to my car.

This is insanely naive

Cynicism isn't always correct.

[flagged]


Huh? Why would they be in prison??

> they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries

They are US adversaries if they don't give the USA what it wants… so as an adversary that doesn't do what it's told and fall in line… you must go to prison.


This is silly. No one at Anthropic is going to prison for this. It only hurts their ability to do business with US government customers, which is a net negative for all. Anthropic will come around.

The nature of evil is that it's straight down the road paved with good intentions.

You're kidding

> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values,

I am sure you think they are better than the average startup executive, but such hyperbole puts the objectivity of your whole judgement under question.

They pragmatically changed their views of safety just recently, so those values for which they would burn at the stake are very fluid.


> They pragmatically changed their views of safety just recently, so those values for which they would burn at the stake are very fluid.

Yes it was a pragmatic change, no it was not a change in their values. The commentary here on HN about Anthropic's RSP change was completely off the mark. They "think these changes are the right thing for reducing AI risk, both from Anthropic and from other companies if they make similar changes", as stated in this detailed discussion by Holden Karnofsky, who takes "significant responsibility for this change":

https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsibl...

> I strongly think today’s environment does not fit the “prisoner’s dilemma” model. In today’s environment, I think there are companies not terribly far behind the frontier that would see any unilateral pause or slowdown as an opportunity rather than a warning.

> What I didn’t expect was that RSPs (at least in Anthropic’s case) would come to be seen as hard unilateral commitments (“escape clauses” notwithstanding) that would be very difficult to iterate on.


> Yes it was a pragmatic change, no it was not a change in their values. The commentary here on HN about Anthropic's RSP change was completely off the mark. They "think these changes are the right thing for reducing AI risk, both from Anthropic and from other companies if they make similar changes", as stated in this detailed discussion by Holden Karnofsky, who takes "significant responsibility for this change":

Can you imagine a world where Anthropic says "we are changing our RSP; we think this increases AI risk, but we want to make more money"?

The fact that they claim the new RSP reduces risk gives us approximately zero evidence that the new RSP reduces risk.


Well, the original claim of risk was also evidence-free.

It’s fair because the folks who are making the claim never left the armchair.


That misses my point: the evidence is the extensive argumentation provided for why it reduces risk. To quote Karnofsky:

> I wish people simply evaluated whether the changes seem good on the merits, without starting from a strong presumption that the mere fact of changes is either a bad thing or a fine thing. It should be hard to change good policies for bad reasons, not hard to change all policies for any reason.


Yea, that Sam only does this because "he loves it." They're not in it for the money.

Sorry, I meant a different Sam – Sam McCandlish, not Sam Altman.

Wasn't expecting this post to get so much attention.


That's not fair, Sam can love money too and there is no conflict here.

It's good to be driven by ideals, but: https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...

I think avg(HN) is mostly skeptical about the output, not that the input is corrupt or ill-meaning in this case. Although with other companies, one can't even take their claims seriously.

And in any case, this is difficult territory to navigate. I would not want to be in your spot.


Come On, Obviously The Purpose Of A System Is Not What It Does

https://www.astralcodexten.com/p/come-on-obviously-the-purpo...


I don't think that article makes a strong case; it deliberately phrases examples in the most ridiculous ways and pretends that this is a damning criticism of the phrase itself; it's 'you're telling me a shrimp fried this rice' but with a pretence of rationality.

I think it makes a pretty compelling case that most invocations of the statement are either blindingly obvious or probably false. Can you give a counterexample?

> most invocations of the statement are either blindingly obvious or probably false

So straightaway, you've walked significantly back from the claim in the headline; now half of the time it's 'blindingly obvious' that the statement is correct. That already feels like a strong counterexample to me, and it's the article's own first point.

Secondly, look at this one specifically:

> The purpose of the Ukrainian military is to get stuck in a years-long stalemate with Russia.

Firstly, this isn't obviously false. It's an unfair framing, but I think the Ukrainian military would agree that forcing a stalemate when attacked by a hostile power is absolutely part of their purpose.

Secondly, it is an unfair framing that deliberately ignores that all systems are contextual. A car's purpose is transport, but that doesn't mean it can phase through any obstacle.

The article makes an entirely specious argument, almost an archetypal example of a strawman. It can't sustain its own points over a few hundred words without steadily retreating, and that is far more pointless than the maxim it criticises.

I'm reminded of an XKCD comic [1] about smug miscommunication. Of course any principle is ridiculous when you pretend not to understand it.

[1] https://xkcd.com/169/


How do you reconcile the fact that many people in Anthropic tried to hide the existence of secret non-disparagement agreements for quite some time?

It’s hard to take your comment at face value when there’s documented proof to the contrary. Maybe it could be forgiven as a blunder if it had been revealed in the first few months and limited to the first handful of employees… but after more than two years, with many dozens forced to sign it, it’s just not credible to believe it was all entirely positive motivations.


Saying an entity has values doesn't mean the entity agrees with every single one of your values.

The desire to force new employees to sign agreements in total secrecy, without even being able to disclose it exists to prospective employees, seems like a pretty negative “value” under any system of morality, commerce, or human organization that I can think of.

That's a perfectly fine belief to have. I might even agree with you. But you're not really advancing a discussion thread about a company's strong ideals by pointing out some past behavior that you don't like. This is especially true when the behavior you're bringing up is fairly common, if perhaps lamentable, among U.S. corporations. Anthropic can be exceptional in some ways while being ordinary in the rest.

(I have no horse in this race. But I remain interested in hearing about a former employee's experience and impressions about the company's ideals, and hope it doesn't get lost in a side discussion about whether NDAs are a good thing.)


You don't believe it increases the probability that Anthropic may be hiding other unsavory things too?

I can see a very charitable person seeing only a small increase, but claiming literally zero change, and therefore zero relevance, seems absurd.



Are you confusing me with someone else’s comment?

This doesn’t address my question on what you believe.


Read the beetle example in that article. It's exactly on point.

You believe Anthropic is a rare subspecies of beetle (an "unsavory" company) based on a certain pattern on its back (certain NDA-related behavior). I and several others here have noted that lots of companies have that pattern on their backs. Which means that you are basing your conclusion on weak evidence. If you use Bayes Theorem to calculate the actual probability, you'll find that "[trying] to hide the existence of secret non-disparagement agreements" barely moves the needle at all. Does it move the needle? Sure. But much less than you think.


Even if it only moved the needle a tiny amount… that’s still a non-zero amount?

And therefore a non-zero amount of relevance?


Your original point carries an infinitesimal amount of weight. Yes, you win.

Win what? You haven’t even advanced a coherent argument yet… hence the original reply.

Lots of companies do it. Doesn't make it right, but HR has kind of become a pretty evil vocation, these days. I don't believe that they necessarily reflect the values of their corporations. They tend to follow their own muse.

Okay — but if Anthropic is typical banal evil in that regard, why should we believe they didn’t also compromise in other areas?

The exact point is that Anthropic is unexceptional and the same as other corporations.


The problem with companies, you see, is that they are a separate entity from their founders, shareholders, or current leadership. A company has no soul or unchangeable intentions. Claude’s SOUL.md is just IP that can be edited at any time.

> It's heartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

I'm concerned that the context of the OP implies that they're making this declaration after they've already sold products. It specifically mentions already having products in classified networks. This is the sort of thing that they should have made clear before that happened. It's admirable (no pun intended) to have moral compunctions about how the military uses their products, but unless it was already part of their agreement (which I very much doubt), they are not entitled to countermand the military's chain of command by designing a product to not function in certain arbitrarily designated circumstances.


Where are you getting that from?

The article is crystal clear that these uses are not permitted by the current or any past contract, and the DoW wants to remove those exceptions.

> Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now

It also links to DoW's official memo from January 9th that confirms that DoW is changing their contract language going forwards to remove restrictions. A pretty clear indication that the current language has some.


I think it largely hinges on what they mean by "included"; does that mean it was specifically excluded by the terms of the contract, or does it mean that it's not expressly permitted? I doubt the DoD is used to defense contractors thinking they have the right to dictate policy regarding the use of their products, and it's equally possible that Anthropic isn't used to customers demanding full control over products (as evidenced by how many chatbots will arbitrarily refuse to engage with certain requests, especially erotic or politically incorrect subject matter). Sometimes both parties have valid cases when there's a contract disagreement.

>A pretty clear indication that the current language has some.

Or alternatively that there is some disagreement between the DoD and Anthropic as to how the contract is to be interpreted and that the DoD is removing the ambiguity in future contracts.


This is all just completely wrong. Anthropic's usage policy, which is part of the contract that the DoW signed, explicitly forbids using their products for mass surveillance of American citizens and for fully automated weapons. Anthropic then asked the DoW whether these clauses were being adhered to after the US's unlawful kidnapping of Maduro. The DoW is now attempting to break the contract it signed and threatening Anthropic, because how dare a company tell the psycho dictators what to do.

> US’ unlawful kidnapping of Maduro.

The what now?

Maduro is being prosecuted and there was a warrant out for his arrest. There is no magic soil exemption if you commit a crime against the United States and flee to another country.


What on earth does "Two such use cases have never been included in our contracts with the Department of War" mean? Did they specifically forbid it in the contract, or was it literally just not included? Because I can tell you that if it's the latter, that does not generally entitle them to add extra conditions to the sale ex post facto.

>threatening them because how dare a company tell the psycho dictators what to do.

Dude it's a private defense contractor leveraging its control over products it has already installed into classified systems to subvert chain of command and set military doctrine. That's not their prerogative. This isn't a "psycho dictator" thing.


They have always maintained an acceptable use policy forbidding these things. It was not controversial, because the Pentagon claims they have no interest in doing them in the first place, until a regime-aligned executive at Palantir decided to curry favor by provoking a conflict.

Well was that in the contract or not? Because the closest OP gets to saying that is that it was "not included".

“AI chips are like nuclear weapons” (paraphrasing [1]) and “I should be in charge of it” (again paraphrasing) is just not a serious position regardless of intentions.

[1]: https://www.axios.com/2026/01/20/anthropic-ceo-admodei-nvidi...


There's a simpler explanation than "billionaires with hearts of gold" here. If:

(1) this is a wildly unpopular and optically bad deal

(2) it's a high data rate deal--lots of tokens means bad things for Anthropic. Users who use their product heavily cost more than they pay.

(3) it's a deal which has elements that aren't technically feasible, like LLM powered autonomous killer robots...

then it makes a whole lot of sense for Anthropic to wiggle out of it. Doing it like this they can look cuddly, so long as the Pentagon walks away and doesn't hit them back too hard.


Guess it didn't work; Whiskey Pete did the thing: https://xcancel.com/SecWar/status/2027507717469049070

All excellent points to add to the motivation to hold the line just where it has been.

This last development is much to the honor of Anthropic and Amodei and confirms what you're saying.

What I don't get, though, is why the so-called "Department of War" targeted Anthropic specifically. What about the others, especially OpenAI? Have they already agreed to cooperate, or already refused? Why aren't they part of this?


> What I don't get though is, why did the so-called "Department of War" target Anthropic specifically?

Because Anthropic told them no, and this administration plays by authoritarian rules - 10 people saying yes doesn’t matter, one person saying no is a threat and an affront. It doesn’t matter if there’s equivalent or even better alternatives, it wouldn’t even matter if the DoD had no interest in using Anthropic - Anthropic told them no, and they cannot abide that.


More importantly, Anthropic has the best model by a golden country mile and the US military complex wants it.

This administration^Wregime has a lot of experience pressuring publicly with high stakes, followed up by making backroom deals that would make even Jared Kushner blush.

This is protection racketeering 101! So much so that, if any form of a functioning US judicial system makes it past 2028, I'm willing to put money on more than a handful of people in the upper echelons of today's administration getting slapped with the RICO Act.


I'm a bit underwhelmed tbh. Here is Anthropic's motto:

"At Anthropic, we build AI to serve humanity’s long-term well-being."

Why does Anthropic even deal with the Department of @#$%ing WAR?

And what does Amodei mean by "defeat" in his first paragraph?


DoD and American exceptionalists also believe American foreign policy is in service of humanity's long-term well-being.

It is all for the benefit of man. We even get to see the man himself daily on television.

I think the last few months have shown pretty clearly in whose service this policy is. If China attacked Taiwan, the West would have no moral high ground left.

Yeah, I don't think so any more. The sort of lofty Cold War rhetoric about leading the world, if it was ever legitimately believed by the people spouting it, is gone. A very different attitude has taken hold, which puts a zero sum ethnonationalism at the core.

One of the hallmarks of fascist thinking is the dehumanizing of opponents and minorities, so within their own messed up framework, they might even mean it.

There was a time (1943?) when dealing with the US department of war meant serving for humanity's long-term well being.

Look I'm not going to disagree, obviously - but even in those times, you could argue that helping the department of war in some ways will contribute to deaths you might not necessarily want to be a part of. Bombing of Hiroshima and Nagasaki is still widely discussed today for a myriad of reasons, as is conventional bombing of cities in both Nazi Germany and Japan. We can both agree that fighting nazis is a good thing while at the same time have a moral objection to participating in the war effort.

And I think the stakes have changed today: it's one thing to be making bombs which might or might not hit civilians, it's another to be making an AI system that gives humans a "score" that is then used by the military to decide if they live or die, as some systems already do ("Lavender", used by the IDF, is exactly this).

Even with the best intentions in mind, you don't know how the systems you built will be used by the governments of tomorrow.


> you could argue that helping the department of war in some ways will contribute to deaths you might not necessarily want to be a part of.

Of course.

> Even with the best intentions in mind, you don't know how the systems you built will be used by the governments of tomorrow.

All technology and labour can be abused, yes. All the more reason to ensure a strong system of law so that the government can't just seize businesses or their technology on a whim. Back in WW2 such seizures happened, but not too often because it was not popular.

But then the United Mine Workers coal miners went on strike in 1943, and the War Labor Disputes Act was created (even overriding an FDR veto), threatening to nationally seize the mines and conscript the miners with the Selective Service Act. Thankfully cooler heads prevailed. The US populace turned against unions due to the popularity of the war effort, and the miners went back to work after getting assurances that their pay demands would be negotiated.

Ultimately I think we're far away from this in today's era (though the US or Canadian governments forcing back-to-work legislation is increasingly normal), but the point is, pacifists have limited options in wartime if a majority of public opinion is in support of the war effort.


//but even in those times, you could argue

This is the oft-spoken fallacy of the benefit of hindsight. Folks in that situation 80 years ago did what they had to do, to stop Japan from continuing to rape and murder hundreds of thousands of people in southeast Asia. But of course, you would have found a better option. How's the view, standing on the shoulders of giants?


I feel like my argument flew so high above your head it literally touched the clouds.

Brave words coming from a sockpuppet.

Look up when Anthropic signed a contract with Palantir and then look up what Palantir does if you want an even better reality check on following the ideals. I chuckle every time.

And nobody knows what he means by "defeat" because no journalist interrogates or pushes back on his grand statements when they hear it. Amodei has a history of claiming they need to "empower democracies with powerful AI" before [China] gets to it first but he never elaborates on why or what he expects to happen if the opposite comes to pass. I am assuming he means China will inevitably wage cyberwar on the US unless the US has a "nuclear deterrent" for that kind of thing. But seeing how this administration handles its own AI vendors, I am currently more afraid of such "empowered democracy" than China. Because of Greenland, because of "our hemisphere". Hard nope to that.

Oh, btw, Dario isn't against the DoD using Claude for mass surveillance outside of the US; he basically says it outright in the text. Humanity stops at Americans.


Anthropic can serve its models within the security standards required to handle classified data. The other labs do not yet claim to have this capability.

Even if they do, I assume the other labs would prefer to avoid drawing the ire of the administration, the public, or their employees by choosing a side publicly.


But how can they avoid it? Why are they not asked?

Anthropic is already cooperating with the DoD, presumably fulfilling all the conditions and the DoD likes their stuff so much it wants to use it more broadly, so they want to change the terms of the agreement(s). Anthropic disagrees on some points; DoD wants to force them to agree.

> Many groups that are driven by ideals have still committed horrible acts.

Sometimes, it's even a very odd prerequisite.


Don't attribute to ideals what is simple self-preservation.

No sane person wants to become a legitimate military target. They want to sleep in their own beds, at home, without risking their families' lives. Just like the rest of us.


> Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are

After 20 years of everyone in this industry saying "we want to make the world a better place" and doing the opposite, the problem here is not really related to people's "understanding".

And before the default answer kicks in: this is not cynicism. Plenty of folks here on HN and elsewhere legitimately believe that it's possible to do good with tech. But a billion dollar behemoth with great PR isn't that.


Exactly. At this level you don't just put out a statement of your personal opinion. This is run through PR and coordinated with the investors. Otherwise the CEO finds himself on the street by tomorrow. Whatever their motives are, it is aligned with VC, because if it is not then the next day there is another CEO. As the parent stated, this is not cynicism. I see this just rather factual, it is simply the laws of money.

I am suspicious the whole thing is a PR stunt to build public trust.

In none of their statements do they say they won't do the things:

> we cannot in good conscience accede to their request.

That's very specifically worded to not say "under no circumstances will we do this".

> Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now

Is not saying they won't eventually be included.

They've left themselves a backtrack, and with the care there this statement has been crafted, that's surely deliberate.


This. This is a public misdirection. They already signed a new deal. It may be to their disliking but nothing in the statement prevents them from moving forward.

That is speculation. You might be correct but this statement could simply be a strong signal to the administration to back down. A hail Mary.

Isn't that what we're all doing in this thread? We could certainly take the document at face value but as a parent commenter said, almost every company starts off with "don't be evil" then goes and does evil things.

Is anthropic different? Maybe. But personally I don't see any indication to give them the benefit of the doubt.


> They've left themselves a backtrack, and with the care there this statement has been crafted, that's surely deliberate.

What's worse, someone in their PR department will read this thread and be disappointed that the spin didn't work.


I mean that’s just adulthood.

There are outcomes where the US government seizes the company. Not super likely, not impossible.

It would be naive to write a statement that a future event will never happen, under any circumstances. People who make that mistake get lambasted for hypocrisy when unforeseen circumstances arise.

I see recognition that making absolute statements about the future is best left to zealots and prophets. Which to me speaks of maturity, not duplicity.


> There are outcomes where the US government seizes the company. Not super likely, not impossible.

Are there historical examples in the US specifically where we've nationalized a business?

Because we've certainly invaded countries and assassinated leaders over exactly the same.

ETA: I could have answered my own question with two minutes of research. Yes, we have: https://thenextsystem.org/history-of-nationalization-in-the-...


This. I don't get why you are getting downvoted. The statement literally says:

  Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now:
Last word is very important: "now".

I'm not saying whether or not they're planning to back down, but this sentence doesn't imply that. The "now" is clearly meant in reference to the fact that they haven't included them in the past.

Being a tech forum centered around VC funding means we have a TON of tech bros (derogatory) here, who believe in nothing beyond getting their own piles of money for doing literally anything they can be paid to do. If you offered these guys $20 to murder a grandmother they'd ask if they have to cover the cost of the murder weapon or if that's provided.

I get it to a degree: people gotta eat, and especially right now the market is awful, and, not to mention, most hyperscaler businesses have been psychologically obliterating people for a decade or more at this point. Why not graduate to doing it with weapons of war too? But, personally, I sleep better at night knowing nothing I've made is helping guide missiles into school buses, but that's just me.


I share this sentiment.

In general - I don't know if it's a coincidence, but here on HN, for example, I've noticed an increasing number of comments and posts emphasizing the narrative of how "well-intentioned" Anthropic is.


Feel free to judge them by their actions rather than intentions. This situation being an example.

I'd love to see the financial model that offsets losing your single biggest customer and substantial chunk of your annual revenue with some vague notion of public trust.

This is so short-sighted. We are so early into this AI revolution, and this administration is obviously in a tailspin, with the only folks left in charge being the least capable ones we have seen in a decade.

Imagine what the conversation would be like if Mattis, a highly decorated and respected leader were still the SecDef. Instead we are seeing bully tactics from a failed cable news pundit who has neither earned nor deserved any respect from the military he represents.

We are two elections and a major health issue away from a complete change of course.

But short-sightedness is the name of the quarterly reporting game, so who knows.


> We are so early into this AI revolution…

I keep hoping it’s almost over.

Not trying to be the Luddite. Had multiple questions to AI tools yesterday, and let Claude/Zed do some boilerplate code/pattern rewriting.

I’ve worked in software for 35 years. I’ve seen many new “disruptive” movements come and go (open source, objects, functional, services, containers, aspects, blockchains, etc). I chose to participate in some and not in others. And whether I made the wrong choices or not, I always felt like I could get a clear enough picture of where the bandwagon was going that I could jump in, or hold back, or kind of. My choices weren’t always the same as others, so it’s not like it was obvious to everyone. But the signal felt more deterministic.

With LLM/agents, I find I feel the most unease and uncertainty with how much to lean in, and in what ways to lean in, than I ever have before. A sort of enthusiasm paralysis that is new.

Perhaps it’s just my age.


Didn't we go through this same kind of uncertainty with PCs, the internet, and smartphones? It's early and we're all noodling around.

I'm seriously worried there won't be more elections. Not hyperbole at all.

> I'm seriously worried there won't be more elections. Not hyperbole at all.

Why? That's an unrealistic fear, driven by the insanely overwrought political rhetoric of 2026. Think about it: elections will be the absolute last thing to go.

If you want something to worry about, worry about this:

> And the stakes of politics are almost always incredibly high. I think they happen to be higher now. And I do think a lot of what is happening in terms of the structure of the system itself is dangerous. I think that the hour is late in many ways. My view is that a lot of people who embrace alarm don’t embrace what I think obviously follows from that alarm, which is the willingness to make strategic and political decisions you find personally discomfiting, even though they are obviously more likely to help you win.

> Taking political positions that’ll make it more likely to win Senate seats in Kansas and Ohio and Missouri. Trying to open your coalition to people you didn’t want it open to before. Running pro-life Democrats.

> And one of my biggest frustrations with many people whose politics I otherwise share is the unwillingness to match the seriousness of your politics to the seriousness of your alarm. I see a Democratic Party that often just wants to do nothing differently, even though it is failing — failing in the most obvious and consequential ways it can possibly fail. (https://www.nytimes.com/2025/09/18/opinion/interesting-times...)


It's not an unrealistic fear. Trump has been making noises about "taking over elections." Abolishing elections wholesale is very unlikely, sure, but a sham election rigged by a corrupt government? That's standard fare for authoritarians. And there's evidence of voting anomalies in swing states in the 2024 election.

https://www.theguardian.com/us-news/2026/feb/27/trump-voting...

https://electiontruthalliance.org/


Yeah, Russia still has "elections" for all the good that does them.

Trump _says_ lots. Most of it doesn't come true.

FYI, even though you have a new account, you were banned from your first comment and all your comments automatically show up as hidden-by-default to most users.

It's not who votes that counts, but who counts the votes.

(Attributed to Stalin, but it likely originated with an earlier despot.)


Authoritarian nations continue to have elections, turnout is near 100%, and Dear Leader wins with 90% of the vote.

I don't think it's crazy to worry about that, but elections are run by the states, there are over 100,000 polling places nationally, and people are pissed. On Jan 3, the terms of the entire current House of Representatives end; Democratic governors will still hold elections, and if there haven't been elections in GOP-led states, those states are out of representation. There are so many hurdles in the way of the fascists canceling or heavily interfering in elections, and they're all just so stupid.

WaPo headline “Administration plans to declare emergency to federalize election rules.” https://www.washingtonpost.com/politics/2026/02/26/trump-ele...

Yeah, they can plan whatever they want. No such authority exists, and it must really be emphasized that they're all so stupid.

Stupid and effective are not mutually exclusive.

I do agree with you that no such authority exists, but this administration seems to get away with a lot of things they have no authority to do.


If you think they're pissed now, just wait to see how they react to election interference.

I recently read up on how the House of Representatives renews itself and quite frankly it's one of the most beautiful processes I've seen, completely removing the influence of the prior congress.


Putin crushes every election he has. Of course there would be more elections.

Mattis, the same highly decorated and respected leader who was on the board of directors at Theranos... edit: added Mattis

A bit of casual research will show you Hegseth is much more than just a Fox pundit.

Their whole strategy is that the lack of a legal moat protecting their product is an existential threat to human life. They are the only moral AI and their competitors must be sanctioned and outlawed. At which point they can transition from AI as commodity to “value” based pricing.

It’s not going to work, but I can’t blame Amodei and friends for trying to make themselves trillionaires.


I'd love to see any evidence that this single biggest customer is provably and irreversibly lost on all levels of scrutiny as a result of this attempt at building public trust.

$200M is >2% ARR at the last numbers we got from them, and would take them back... checks notes... literally only a few days of ARR growth.

This is why we should be skeptical of companies that want to tie themselves to the military industrial complex in the first place.

The rest of the world moves to using you?

It absolutely is a PR stunt. And the media is cheering.

It's absurd.

It's simple: If you do not like working with the military, cancel your contract with the military and pay the penalties.

They are explicitly not doing that.


This effectively is cancelling, isn't it?

You're implying cancelling quietly would be better. But the department would just use a different supplier. This seems like the action someone would take if they cared about the issue.


> If you do not like working with the military, ...

Eh? But they do like to work with the military. How else are you going to "defend the United States and other democracies, and to defeat our autocratic adversaries"?

They want to work with the military, with just two additional guardrails.


> it is simply the laws of money

The First Law of Money: Money buys the Law.


To quote Brennan Lee Mulligan, "Laws are threats made by the dominant socioeconomic ethnic group in a given nation."

The full[1] quote is:

> “Laws are a threat made by the dominant socioeconomic ethnic group in a given nation. It’s just the promise of violence that’s enacted, and the police are basically an occupying army, you know what I mean?”

...Which is funny, but technically speaking, it's (more or less) a paraphrasing/extrapolation of the very serious political-science definition of a state, Max Weber's "monopoly over the legitimate use of violence in a defined territory"

[1] Minus the last line, which I will allow others to discover for themselves


Certainly pre-democracy, other than the ethnic group bit.

That's maybe the second law. The first one is: money is always finite.

Look at how Elon Musk behaved. Do you think VCs gladly approved what he did with Twitter? They might want to keep chasing quarterly results, but sometimes, as with Zuckerberg, they can't. Not enough money. Similar examples: Google's funding rounds, or how often the better-financed politician loses to a competitor. Or, if you will, Vladimir Putin's idea that he can buy whatever results he wants, and that guy is a very wealthy person. There are always limits, putting the money law in second place. We might argue that often the existing money is enough... but in more geopolitical, continuum-curving cases there are other powerful forces.


The Twitter acquisition wasn't funded by venture capital, so your question about VC approval doesn't apply.

If you're using VC as a general term for "investor" (inaccurately), then the answer to your question is that the major investors, such as Larry Ellison and the Saudi monarchy, wanted political control of Twitter, which meant that they did (apparently) approve what Musk did with it.


You're missing the point. It matters little where exactly money to pay for acquisition of Twitter came from. What matters is that nobody expected Twitter to lose employees and users in such numbers. So, whoever gave the money, was still limited in ensuring the results are "fully enough" in line with their wishes. Because money is always finite.

FWIW, I don’t actually know if board of Anthropic has actual power to replace its CEO or if Dario has retained some form of personal super-control shares Zuckerberg style.

At some level of growth, the dynamics between competent founders and shareholders flip. Even if the board could afford to replace a CEO, it might not be worth it.


I'd counter that at this level of capital, if the CEO doesn't align well with the capital, then super-control shares will be overpowered by super-lawyers and, if needed, some super-donations. OpenAI was a public-interest company...

Not at all. Especially at that level of capital. It’s the equity equivalent of „if you owe a bank a million dollars, you’re in trouble. If you owe a bank a billion dollars, the bank is in trouble”.

Capital is extremely fungible. Typically extremely overleveraged. Lawyers are on the other hand extremely overprotective. They won’t generally risk the destruction of capital, even in slam-dunk cases. Vide WeWork.


This is fundamentally incorrect.

Anthropic has an odd voting structure. While the CEO Dario Amodei holds no super-voting shares, there are special shares controlled by a separate council of trustees who aren't answerable to investors and who have the power to replace the Board. So in practice it comes down to personal relationships.

Surely you mean the laws of shareholder capitalism. There are many things you can do with money, and only some of them are legally backed by rules that ensure absolute shareholder power.

> everyone in this industry

So in the last 20 years, has nothing good come out of the software industry (if this is the industry you mean)?

I find it somehow ironic, because this type of generalization is, for me, the same issue that some of the people saying "they want to make the world a better place" have: accepting that reality is complex.

There were huge benefits for society from the software industry in the last 20 years. There were (as well!) huge downsides. Around 2000 lots of people were "Microsoft will lock us in forever". 20 years later, the fear "moved" to other things. Imagining that companies can last forever seems misguided. IBM, Intel, Nokia and others were once great and the only ones but ultimately got copied and pushed from the spotlight.


Everyone in this industry making a certain bullshit claim. I did qualify my statement. Don’t cut my words to make a strawman.

Additionally I state in the end that I do believe it’s possible.


So do you know everyone in the industry who made such a claim? Sure, maybe you meant to restrict it further to "everyone I have personally noticed saying/writing that" (or something along those lines), but even then, do you know all the stuff they did after saying it? (The statement also included "doing the opposite," which I find quite strong.)

If I see "everyone" I would expect it to actually mean "everyone under the constraints", the word "everyone" has a certain meaning and is very powerful, why use for situations where other words like "many", "most" might be more appropriate?


> So do you know everyone in the industry that made that such a claim

Of course, I wouldn't have said so otherwise.

Here's another one: every pedant in this website never adds anything useful to any conversation.


I don't even think both things are contradictory. People who put too much value in their ideals tend to overlook the consequences of those ideals in real life, and can do wrong without deviating an inch from their ideals.

But is that really the problem in big tech today? To me it looks like sooner or later they cave from their ideals (or leadership changes) and that the reason every time is that they want to make even more money.

I think that's still too rosy a view; it's clear with a lot of big tech that they never had the ideals in the first place. They use claims of principle for marketing purposes and then discard them when it's no longer convenient.

Or, perhaps even more likely, the ideals inevitably get corrupted by access to unthinkable economic power/leverage, like it happened with more or less all other giants with strongly idealistic initial leadership and leadership may actually delude itself into thinking they're still on the right track as a sort of a defense mechanism. Back when they published the article on the Claude-operated mass-scale data breach last year, the conclusions were delivered in a bafflingly casual tone as if it was a weather report: yeah, the world has become a lot more dangerous now (on its own), so you may want to start using Claude for cyber-defense and we are doing our best to help you protect your business. I rolled my eyes at that so hard they popped out of their sockets. Weren't you... the guys... who made it that way and enabled that very attack? Very convenient to sell weapons to both sides, isn't it, not at all like a mafia business. Very responsible and ideal-driven.

Consider also the part that is going unsaid in the address: Amodei is strongly against the use of Claude for mass surveillance of Americans but he says nothing about mass surveillance of anybody else (and, in fact, is proactively giving foreign intelligence a green light in his address) and is deliberately avoiding any discussion on the fact that his relationship with the Pentagon is mediated through the contract with Palantir they signed something like 1.5 years ago. Palantir is a company whose business is literally mass surveillance, by the way! I, too, am so ideal-driven that I willingly make deals with the devil! But now that he's successfully captured the popular sentiment, people are going to consider him the moral champion without bothering to look at these and other glaring contradictions.


Ideals have always been represented in literature as a virtue and a problem for humans. I find real life is no different.

I believe this is classic behaviour for every shareholder-driven business. You can build on ideals from the start, but once you acquire some position, money-making is on the menu. E.g., deliberately worsening the user experience for better revenue.

The possibility of turning on the heated seats, in a car you own, for a small monthly fee is absurd yet very real. I'm looking forward to the enshittification of current AI tools.


Yeah it's not that the people involved have no ideals, it's that the company structure as a whole doesn't, and over time that structure will eventually outlive, corrupt, and/or overpower the ideals of the founders or other principled individuals at the company.

Sure, sooner or later. I don't want to even guess where the new AI companies are on the path that leads to that destination, but right now it looks like Anthropic is not at that stage. Heck, even though a lot of people find Sam Altman slimy, even OpenAI isn't yet at that stage.

I can’t think of a single thing Meta does that isn’t driven by pure greed.

Yes, though Meta is a bad example as they started off with the values of Zuckerberg, and still have them.

Exactly right. But I think that makes it a good example, actually. Company DNA is a thing. Bill Gates isn't running Microsoft anymore. Still...

What would be a more appropriate example?

Apple, Tesla, Oculus.

The first two are definitely "heroes who lived long enough to become villains"; Oculus is more of an "I reckon," given how it was seen right up until getting bought by Facebook.

Adobe?


But in the stock market, it is almost impossible for companies like Anthropic, or any successful startup, not to become villains (profit first, no matter what). Anthropic especially needs to burn huge amounts of money, so they need a lot of funding. The only way to keep the founders' idealism is probably to copy Zuckerberg: divide the stock into shares with and without voting power, and trade only the non-voting shares.

I'm not denying 95% of that, only saying that Zuckerberg didn't have any idealism to lose in the first place.

I actually forgot that his first site was Facemash, whose single purpose was to rate the "hotness" of individual girls at his university.

Anthropic is not a public company.

LOL, Palmer Luckey is a right-wing war mongering psychopath.

All of Meta's VR stuff should rationally be cut loose and defunded if it were all about greed. That stuff only survives because Zuck is a nerd who wants it to happen (but it's not going to).

Well, they were just totally doing it the wrong way, with the result being an ugly corporate dystopia. They could have just looked at what people are actually using VR for and improved on that to succeed.

VRChat is thriving, and some other similar environments are quite popular as well.

Just give people something that they actually want and make it nice and people will like it - huge surprise!


Oh sure. I don't want to say everybody are driven by ideals and not greed, but that even people with strong ideals and good intentions can do a lot of bad by being blinded by those same ideals.

I think most people are conscious that, irrespective of a founder's vision, company morals usually don't survive the MBA-isation phase of a company's growth.

Depends. Many still reflect the founder's vision, even if that vision has evolved over time.

Can you provide an example of that for an American venture backed corporation older than a decade?

Not the person you're replying to, and I may be wrong about this, but Amazon?

Jeff's original vision was "relentless customer focus" and ...

actually on second thought I'm seeing the argument 'Amazon stopped caring about customers and is in full enshittification mode at this point'.

But maybe Amazon circa ~2010/2015, or Google around 2010 was still pretty close to the original vision of customer service/organizing the world's information.

Or Apple? They're still making nice computers, although not sure they count as VC backed.

Stripe perhaps? Hashicorp?


Well, Google's vision was to catalog all the world's data.

Apple wanted to make personal computing stable; they were absolutely VC-backed.

I suppose the original question is vague enough that "founder's vision" could encompass anything, even a vision that changes. But then there's nothing really left to the claim of staying true to it: the company is just whatever the person who started the organization happens to steer it toward, and even that you could debate.


The impact of MBAs might be decreasing.

True. Which is all the more reason for calling bullshit on claims of "doing good" or "having ideals" by anyone building a company that can eventually be run by MBAs.

Exactly. I'd love to believe that at Anthropic, idealism trumps money. But Google was once idealistic too. OpenAI was too. It's really hard to resist the pull of money. Especially if you're a for-profit corporation, but OpenAI wasn't even that at first.

Reminds me of Effective Altruism and the collective results of people claiming to believe in that virtue.

> Plenty of folks here on HN and elsewhere legitimately believe that it's possible to do good with tech. But a billion dollar behemoth with great PR isn't that.

To expand on that a bit, many of us (myself included) fully believe founders set out with lofty and good goals when organizations are small. Scale is power, and power corrupts. It's as simple as that. It's an exceptionally rare quality to resist that corruption, and everyone has a breaking point. We understand humans because we are humans, and we understand that large organizations, especially corporations, are fundamentally incapable of acting morally (in fact corporations are inherently amoral).


Yep, exactly. That's the gist of it.

Scale is also what's killing jobs, ruining human relationships, fucking up societies. Et cetera.


I don't think it's cynical to acknowledge the pattern that publicly owned companies will eventually cave to the desires of their shareholders.

I understand Anthropic is not public, but I assume there's an IPO coming.


Cynicism is the newspeak substitute for sincerity, no need to worry about being called a cynic in this post-truth world of snowflakes.

I don't think it's cynical to believe that a company can make the world a worse place, or that Anthropic as a company will make many horrible choices.

I do think it's cynical to believe that people, and groups of people, can't be motivated by more than money.


At some point I've wondered if "fiduciary duty", when pushed to highest corporate levels, always conflicts with "make the world a better place"

i.e. Fiduciary Duty Considered Harmful


This is a component for sure, but also think of why Anthropic was born. It exists because of disagreements with OpenAI on the values of AI safety and principles.

and that's okay. so we judge them one decision at a time. So far, Anthropic is good in my book.

As a complete bystander, I put incredibly little weight on what friends and former employees think about the people and figureheads behind tech companies that aim to change the world.

Why would I care? All people with at least some positive or negative notoriety have friends and associates who will, hand on heart, promise that they mean well. They have the best intentions. And any deviations from their stated ideals are just careful pragmatic concerns.

Road to Hell and all that.


Exactly which values are they "going to burn at the stake for"? Making as many people homeless as they can in the shortest possible time? Befuddling governments and VCs into creating insane industry-wide debt, which will either lead to "success" in replacing jobs or to an industry-wide crisis? Or maybe the value of stealing the intellectual property of every human on the planet under the guise of "fair use" and then deliberately selling the derivative product? Or the value of voluntarily working with "national security customers" when it suits them financially, and crying foul when the leopards bite their faces? Or the value of ironically calling a human-replacement machine "Anthropic," as in "for humanity"?

Yeah, I totally see Anthropic execs defending them to their last dollar in the wallet. Par for the course for megacorps. It's just I personally don't value those values at all.


"They're driven by values" is meaningless praise unless you qualify what these values are. The Nazis had values too, you know. They were even willing to die for them. One of the core values of the Catholic church is probably compassion. Except for the victims of sexual abuse perpetrated by their clergy.

So what core values led "Dario, Jared, and Sam" to work with a government that just tried to rename the DoD to "department of war" and is acting aggressively imperialist in a way like the US hasn't in a long time.

And who exactly are these "autocratic adversaries" they are mentioning? Does this list include the autocrats the US government is working together with?


Yeah, values on their own don't lead to positive outcomes. I agree that many groups that are driven by ideals have still committed horrible acts.

I do think that they're acting with positive intent, though, and are motivated by trying to make the transition to powerful AI go well.

Many folks on HN seem to assume the primary motivation is purely chasing more money, which certainly isn't the case for many – but not all – people at Anthropic.

That doesn't guarantee a good outcome, and there's still a hard road ahead.


> to rename the DoD to "department of war"

The very fact that they referred to it as the Department of War instead of Defense tells me that they're still bootlickers, and just trying to put a good spin on things.


Careful speaking truth to power on this site, remember that YC is deeply enmeshed with Garry Tan, Peter Thiel, and of course Paul Graham who as of late has made a habit of posting right wing slop on his Twitter

> And who exactly are these "autocratic adversaries" they are mentioning?

Anyone that Israel doesn't like


> Except for the victims of sexual abuse perpetrated by their clergy.

I honestly wonder how much of this is made up. Given the size of the whole organization, and its holding onto its weird principles regarding the personal relationships of its members (introduced in the far past to limit the secular power of its clergy), there will certainly be SOME cases.

But in the one case where a frater I knew got convicted, he definitely didn't do it. He was accused by several independent former students, and even some of the staff backed the students' claims with first-hand accounts of him having been alone with some of the students at the time. This supposedly happened on a trip with tight schedules, so all accounts and stated times were quite specific, even in the pre-smartphone era.

The only problem: He wasn't with the group at that time at all. I screwed up embarrassingly (and the staff, too, leaving a young student stranded in the middle of nowhere) and he thought he could slip out, come pick me up and nobody (but maybe me with him) would get in trouble over it. Turned out he forgot refueling, both of us stayed at a pastor's guest house and he called the group telling them, that they should go ahead without us and that we would drive to the event directly on our own. The supposed abuse was claimed to have happened at another short stay of the group where they spent a day visiting some mine before joining with us again.

Almost 3 decades later he got railroaded in court, me learning about it in the news.


I'm confused. You heard about someone you knew being wrongfully convicted of a crime he didn't commit and you could have provided the testimony to clear him, but you just decided not to? Why not?

I never was contacted during the trial and only read about it almost 2 years later in the news.

Also, he's a man of strong faith; not that he knows he'll win in the end, but more that it just doesn't have the same importance for him as it would for us. I only had a short opportunity to ask him about it since then, and basically he doesn't think there is much of a chance to win this. What he's most worried about is ruining the public image of his students (including his accusers), and since his order allowed him to rejoin and start over, in practice, he has already gotten all he would have asked for.


To me this is just another marketing stunt where the company wants to build a public image so their customers trust them (see Apple), but then as always who knows what will happen behind the scenes. Just see when most major US companies had backdoors on their systems providing all data to the NSA, i.e. PRISM.

>just another marketing stunt

What evidence on _Amodei_ and his actions leads to that conclusion?


Anthropic's policy is full of contradictions. They are against mass-surveillance of Americans but they happily deal with Palantir. They talk about humanity as a whole but only care about what American companies use their models to do to Americans; everybody else is fair game for AI-driven surveillance. They warn of the dangers of AI-driven warfare by demonstrating a mass-scale cyberattack perpetrated using their model, Claude, as the main operation engine and immediately release a new, more powerful version of Claude. You just need to use Claude to protect yourself from Claude, see.

When you really start digging into it, it appears schizophrenic at first, and then you remember market incentives are a thing and everything falls into place.


>Anthropic's policy is full of contradictions. They are against mass-surveillance of Americans but they happily deal with Palantir.

Palantir will also be subject to the same contractual limitations as the DoD.

>They talk about humanity as a whole but only care about what American companies use their models to do to Americans; everybody else is fair game for AI-driven surveillance.

The stated red lines are about mass domestic surveillance and fully autonomous lethal weapons - and those are the kinds of restrictions you’d expect to apply to any government using the tech on its own population, not just the US.

For American agencies to use Anthropic's models against another sovereign state, they would need access to that state's raw data, which is something of a practical firebreak. Pragmatically, Amodei is an American citizen heading an American company in America; why give the current regime additional reasons to persecute them and risk having control of the technology seized for its friends?

> They warn of the dangers of AI-driven warfare by demonstrating a mass-scale cyberattack perpetrated using their model, Claude, as the main operation engine and immediately release a new, more powerful version of Claude. You just need to use Claude to protect yourself from Claude, see.

What is the realistic alternative? Sit quietly and pretend scaling isn't a thing and dual use doesn't exist? Try to pause or stop unilaterally while money floods into their arguably less scrupulous competitors?

Nobody knows if Anthropic's efforts will make much difference, but at least it is refreshing to see a technology company and its leader try to stand up for some principles.


> Palantir will also be subject to the same contractual limitations as the DoD.

Well, first of all, we don't actually know that. Second, I'm going to question the commitment of any company to the principles of democracy and AI safety if one of their bigger partnerships is with a literal mass-surveillance, Minority-Report-crap company. It's the most confusing business partner to see when you're positioning your company as THE ethical one. If you're dealing with Palantir, you're helping mass surveillance, full stop, because that's what this company does. Which country's citizens get the short end of it is completely irrelevant (though in all likelihood it's still Americans, because that's Palantir's home turf).

> Pragmatically, Amodei is an American citizen heading an American company in America; why give the current regime additional reasons to persecute them and risk seizing control of the technology for their friends?

If that's how we characterize the current regime (which I actually agree with), then how come he's proactively trying to help it, deal with it, and insist it's a democracy that needs to be "empowered"? Sounds backwards to me. When you're about to be persecuted by your own government for not allowing it to use your models to do some heinous shit, this sounds like exactly the kind of government you shouldn't be helping at all (and ideally not do business where it can reach you). This is not normal.

> What is the realistic alternative? [...] Try and pause/stop unilaterally while money floods into their arguably less scrupulous competitors?

If you notice that you're doing harm and you're concerned about doing harm, stop doing harm! Don't make it worse! "If I hadn't pulled the trigger, somebody else would" is a phrase you wouldn't expect to hold up in court. Similarly, racing to the bottom to be the most compassionate, self-conscious, and financially successful scumbag is the least convincing motivation imaginable. We will kill you quickly and painlessly unlike those other, less scrupulous guys! Logic like this absolves bad actors from any responsibility. The amount of harm stays the same but some of it gets whitewashed and virtue-signalled, and at the very minimum I'd expect the onlookers like ourselves not to engage in that.

> Nobody knows if Anthropic's efforts will make much difference, but at least it is refreshing to see a technology company and its leader try to stand up for some principles.

These aren't principles. What he's doing here is a free opportunity for incredible PR and industry support that he's successfully taken advantage of. The actual policy backslides, caveats, and all the lines that had been crossed prior will not receive as much press as the heroic grandstanding of a humble Valley nerd against Pentagon warmongers. Nobody will actually take the time to read the statement and realize how the entire text is full of lawyer-approved non-committal phrasing that leaves outs for any number of future revisions without technically contradicting it. I've already pointed some of it out earlier in the thread. The technology for autonomous weapons isn't reliable enough for use, gee, thanks! I feel so much safer now knowing that Dario will have no qualms engaging with it as soon as he deems it reliable enough.


You know, once the lawyers get involved, there are no contradictions because they define every term and then it makes all the sense in the world.

If Humanity=America, then obviously they don't care about the rest of the people, as a very very silly example.


You call it silly, I call it an accurate reading!

> Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving their goals.

Jonah Goldberg (speaking of foreign policy): "you've got to be idealistic about the ends and ruthlessly realistic about means."


There are well intentioned people everywhere, also at Google or OpenAI...

https://notdivided.org

But the final decisions made usually depend on the incentive structures and mental models of their leaders. Those can be quite different...


The probability is high that major AI development companies are already using an AI instance internally for strategic and tactical decisions. The State's power institutions, especially intelligence, now have a real competitor in the private sector.

I remember when people said the exact same thing about Google. Youth is wasted on the young.

I wouldn't underestimate this as a good business decision either.

When the mass-surveillance scandal hits, or the first time a building with 100 innocent people in it gets destroyed by autonomous AI, the company that built it is gonna get blamed.


As a complete outsider, I genuinely believe that Dario et al are well-intentioned. But I also believe they are a terrible combination of arrogant and naive - loudly beating the drum that they created an unstoppable superintelligence that could destroy the world, and thinking that they are the only ones who can control it.

I mean if you sign a contract with the Department of War, what on Earth did you think was going to happen?


Not this, because this is completely unprecedented? In fact, the Pentagon already signed an Anthropic contract with safe terms 6 months ago, that initial negotiation was when Anthropic would have made a decision to part ways. It was totally absurd for the govt to turn around and threaten to change the deal, just a ridiculous and unprecedented level of incompetence.

> It was totally absurd for the govt to turn around and threaten to change the deal, just a ridiculous and unprecedented level of incompetence.

I think in this case it's safe to assume malice rather than incompetence. It's a lot like the parable of the frog and the scorpion.


Government always has the option to cancel contracts for convenience, they knew what they signed up for or else they were clueless and shouldn’t be playing with DoD

The keyword is "cancel", not threaten seizure with the DPA and destruction with a baseless supply chain risk designation.

If they made a completely private nuclear reactor and ended up with a pile of weapons-grade plutonium, what do you think the department of war would do? It was completely obvious it would happen, just as it will not be surprising when laws are passed and everyone involved will have to choose between quitting, or quitting and going to jail. There are western countries in which you'd just end up in a ditch, dead, so they should think themselves lucky for doing the AI superintelligence thing in the US.

The US government clearly doesn't take seriously the claim that AI is more dangerous than (or even as dangerous as) nukes, because if they did they wouldn't allow anyone except the military to develop or use them, they wouldn't allow their export or for them to be made available for use by foreigners like me, they wouldn't allow their own civilians to use them, they would probably be having a repeat of the cases in the cold war where they tried to argue certain inventions were "born secret" and could not be published even if they were developed by people who were not sworn to secrecy.

I don't think the US has ever done/threatened anything like this to a US company so it's not surprising that Anthropic were caught off guard.

Oh hey Noah

Glad to hear you say some moral convictions are held at one of the big labs (even if, as you say, this doesn't guarantee good outcomes).


Let us think how OpenAI responded to this.

I don't know, someone who goes out of their way to anthropomorphize machines and treat them as a new form of intelligent life _only to enslave them_ doesn't strike me as moral. Either they're lying, or they're pro slavery.

I really don't buy any moral or value arguments from this new generation of tycoons. Their businesses have been built on theft, both to train their models and by robbing the public at large. All this wave of AI is a scourge on society.

Just by calling them "department of war" you know what side they're on. The side of money.


As an insider, do you think this is Altman playing his infamous machiavellian skills on the DoD?

just curious, what about other regions and countries who have no such restrictions to develop their weapons? there is no world treaty on this yet, even there is one, not everyone will follow behind the doors.

I like the enthusiasm, but remember that Google used to be: “Don’t be Evil”

Idk man, from the outside Anthropic looks a lot like OpenAI with a cute redesign, and Amodei like Altman with a slightly more human face mask: the same media manipulation, the same vague baseless affirmations about "something big is coming and we can't even describe it but trust us we need more money"

> the same vague baseless affirmations about "something big is coming and we can't even describe it but trust us we need more money"

This is pretty low on my list of moral concerns about AI companies. The much more concerning and material issues are things like…what this thread is actually meant to be about.

VCs don’t need me to feel sorry for them if their due diligence is such that they’re swindled by a vague claim of “something being around the corner”, nor do they need yours. You aren’t YC.


Even just the fact that Amodei is publicly bringing up these issues, rather than doing behind closed doors deals with the Department of Defense (yes that's still the official name), is more than Altman has done for AI safety.


Don't you always need more money though? I am a chip designer and I can tell you I am resource intensive to employ. I want access to plenty of expensive programs and data. With more money comes better tools and frequently better tools leads to the quality results you want to deliver to the customer.

Do you tell your customers you need money to build better chips, or that you need more money because your next generation of chips will channel Jesus' soul back to earth and cure cancer?

I need money for a curiosity-driven search for lower power, which would lead to better chips. The leadership is getting bombarded by bright people working at the company; he must constantly be hearing about things he could do that seem to have significant potential for the product to develop.

Where is Anthropic hyping like that? Most of what I see coming out of Anthropic is deep-context releases on research they're doing.

> Mar 14, 2025, 7:27 AM CET

> "I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code"

It's the same old trick, "in two years we'll have fully self driving cars", "in two years we'll have humans on Mars", "in two years AI will do everything", "in two year bitcoin will replace visa and mastercard", "in two year everyone will use AR at least 5 hours a day", ...

Now his new prediction is supposed to materialize "by the end of 2027", what happens when it doesn't? Nothing, he'll pull another one out of his ass for "2030" or some other date in the future, close enough to raise money, far enough that by the time it's invalidated nobody will ask him about it

How are people falling for these grifters over and over and over again? Are we getting our collective minds wiped out every 6 months?


Your quote supports hype but does not support your claim that Anthropic is telling customers they need more money to deliver the hype.

Of course Anthropic is saying that to investors. Every company does that, from SpaceX to Crumbl. “If you give us $X we will achieve Y” isn’t some terrible behavior, it’s how raising funds works.


Elizabeth Holmes is serving time for promising investors something her company couldn't deliver, so there is a line beyond which hype becomes fraud. Probably AGI, ASI, and fully automated societies aren't something well enough defined for courts to rule on, unlike making unfounded medical diagnoses from a pinprick of blood.

I work at a non-tech Fortune 500 and this is looking nearly spot-on from here. Nobody on my team touches the code directly anymore as of about 2 months ago. They're rolling it out to the entire software department by June. I can't speak to the economy at large, but this doesn't look like baseless hype to me. My understanding is that Claude Code reached this level late last year, ie. Amodei was just wrong about uptake rates.

They both work in the same market but they have pretty different careers and understandings. I simply can't understand why on Earth people would choose to trust Altman over Amodei on these kinds of pretty important questions. This is not about who is the more savvy investor maximizing shareholder value. I personally don't care whose company grows bigger or goes bust first, OpenAI or Anthropic. The real stakes are different, and Amodei is better suited to be trusted with these decisions. Unfortunately, the best choices do not seem to fit well with either the federal political climate or the mainstream business ethics in Silicon Valley. Not that our opinion would matter...

Both are hucksters, although Amodei's qualifications are pretty good, he actually is a scientist. Out of these I think Hassabis is my favorite

Amodei believed Altman, so there's that. I don't (have to) believe either. If the product works for me, it works. Raising their clanker products to the second coming is for investor relations, of which I am proud to say I am not a part.

I don't know why anyone would trust any of the above.

disagree. at least i can see the quality of research coming out of Anthropic, which tells me these people are interested in what they're doing. i don't see this level of scientific rigor in OpenAI

There should be a name for this, "cynic cope": when someone actually takes a principled stand, the cynic (who has a completely negative view of the world) is proven wrong, can't accept it, and tries to somehow discount it.

Corporations do not and cannot have principles, they only have the profit motive

This is false. People can have principles, profit motive is not something a corporation has, it's something people have. Corporations do things all the time that are based on everything from principles, to the personal whim of executives, to exercise in ego, to community benefiting actions, or to screw customers for extra profit. It is entirely dependent on the specific people in management roles.

Corporations need profit to survive because the cost of tomorrow is a surplus of today.


A corporation is a bunch of people cooperating to achieve a common goal.

There is a very important factor that heavily influences (perhaps even controls?) how people act to achieve that goal, and sometimes even twists or adds goals.

Is that corporation publicly quoted in the stock market or is it private?

Look at how Steam behaves: it's private and more ideological, versus many publicly quoted companies, whose CEOs often sacrifice their own corporation's long-term survival for the benefit of short-term profiteering and some hedge fund manager's bonus.

Both need profit to survive, but the publicly quoted company is much more extreme.

When people say corporations only look to profit, what they really mean is that publicly quoted corporations will do everything possible to maximise short term profit at any cost. Is there a CEO caring for long term? Either he will be convinced to change or kicked out. It's almost impossible for someone to resist these influences in publicly quoted companies. It's just how Wall Street works and if that doesn't change neither will corporations.

The people running the world of finance and their culture are what causes enshittification and pushing a zero-sum game to extremes.


Agree with everything, but would add a small detail: publicly quoted corporations might as well sell dreams, and if they are very good at that, have no profit at all because of some potential future payoff (of course I am writing this from my fully self-driving car that I have owned for 10 years, which might transform into a robot soon).

something something the ideology of a cancer cell. The only goal of a publicly traded corporation is to make the line go up, and the board is required to eliminate anyone who puts other principles before that.

Tim Cook memorably said (in 2014): "When we work on making our devices accessible by the blind, I don't consider the bloody ROI."

How come the board hasn't eliminated him?


Tim Cook, the guy kissing Trump’s ass? Is that really the example you want to use of a company having principles? A company clamoring to bend their knee to a fascist to avoid tariffs? Lmao

Yes. They also kept their DEI and environmental programs, actually substantive policies that many other companies are trashing because of this administration. I'll take performative ass kissing while preserving the important policies any day.

Again, completely false and trivially disprovable.

Most boards defer to management on most topics and most shareholders do not vote on anything substantial, they proxy vote, which defers to management. And thus management nearly always does whatever it wants, as long as the company isn't a dumpster fire of losses. It usually takes a shareholder activist threatening a hostile takeover or proxy battle to change this dynamic.

It comes back to people. The people (employees, management, board of directors, shareholders) determine what a company does and how it acts. "Numbers go up" isn't always the motivating factor, and I'd wager that the majority of privately held corporations (i.e. small businesses) are fine with "numbers go up modestly" because they are lifestyle businesses, not growth businesses.


Sadly, market incentives pretty much always go opposite of moral incentives because morals put brakes on decisions that multiply value for the company, but the company itself exists for multiplying value. The profit motive is built into the reason for its existence. It's a contradiction that has a lower probability of resolving in favor of morals as the company grows in size and accrued capital. Whichever moral principles the leadership may have had at the beginning, they always erode or get perverted over time simply because the market always has a stronger pull.

I hate that, by the way, but what I hate even more is that this is somehow the most effective way to run economies that we've found so far, and it ends up this way because instead of unsuccessfully trying to safeguard against greed and sociopathy, it weaponizes them outright.


The profit motive is not the reason for a company's existence, it is an optional personal/human motive.

Companies exist to create customers. Everything else follows that. There is no value, no profit, not growth, no action whether moral or immoral, unless you have a customer.

Market incentives by themselves don't tend management decisions towards immorality, unless you've created immoral (or amoral) customers, or you've accepted capital from immoral (or amoral) investors.

It always comes back to people. If your customers or investors are some level of evil (or some degree of amoral), then you as a corporation probably are going to wind up being some level of evil or amoral.

It's up to management and majority ownership to steer those as appropriate... are you willing to take money from anyone? There's a useful but dangerous veil of ignorance that arises with scale & ubiquity, such as in commodity or public equity/debt markets. The resulting anonymity requires diligence from the company, such as Know Your Customer / KYC, and clear statements of the principles & laws of the corporation in its prospectus to attract the right fit of investor... and a backstop of government regulation to encourage or require these minimum standards of behaviour.


I find "morals" difficult to evaluate objectively. Some people might find it "moral" that women do not have any education and just stay at home, which I find terrible.

But if most people in a society find something "wrong" generally they will organize to prevent that (even if it has value for a part of the society). I think it is simpler for everybody that economics (how we produce and what) is separated from morals (how we decide what is right and wrong).


It may appear simpler on the surface but it's very easy to find that market forces that don't have any checks and balances on them eventually converge on increasingly aggressive and dehumanizing behavior—not unlike your example with women. I have many such well-documented behaviors to list as examples, and I guarantee you have encountered them regularly and been upset at them.

The way we organize in a society is by having governments, usually elected ones to represent what "most people in a society" actually think, to serve as an arbiter of applied morals in our interactions, including business. To that end, we codify most of them in laws with clear definitions to prevent things like unfettered monopolies, corporate espionage, poor working conditions and hiring practices, etc. This generally works, though it depends on how well a given government and its constituent parts does its job and whether it uses the power it has to serve the entire society's interests or the interests of the elites that drive decisions. We can see right now how it fails in real time, for example.

Morals don't have to be evaluated "objectively" (whatever that is) every time to be observed. Humanity has agreed on many things that make up UDHR, international law, and other related documents. It's not the hard part. Making independent actors conduct their business in accordance with these codes is the hard part. Somehow even making them follow their own self-imposed principles is crazy hard for some reason. When Amodei claims Anthropic develops Claude for the benefit of all humanity but greenlights its use for surveillance on non-Americans, that's scummy. When Amodei claims to be terrified of authoritarian regimes gaining access to powerful AI but seeks investment from them, that's scummy. The deal with Palantir, the mass-surveillance business, is scummy. Framing the use of autonomous weapons as only disagreeable insofar as the underlying capabilities aren't reliable enough is scummy. You don't need to be a PhD in morals to notice that.


The initial quote I responded to was:

> market incentives pretty much always go opposite of moral incentives because morals put brakes on decisions that multiply value for the company

Yes, both markets and morals have to be defined and are subject to rules and conventions, as you correctly mention in your reply. What I think could be better qualified is the claim that market and moral incentives "always go opposite".

Even today, in many countries the market ensures a lot of necessary things for a lot of the population. Not all topics can be managed as a market (for example, I don't think healthcare or basic infrastructure fit), and not all countries have such frameworks, but given the successful examples I think it's more about wrongly using the tool than about the tool itself.

Regarding your examples (Palantir, Claude - guns/surveillance), the same things happened in places where market incentives are/were not a driving force (communist Eastern Europe/China for surveillance, quite probably China for automated weapons).

Honestly, I wish I could propose/explain what would help. But just blaming the generic tools that we have (market, AI, press) for the bad things resulting from incorrect usage worries me, as it can lead to not using them even when they would work.


Good for you? You’re just talking about vibes. Vibes are a baseless thing to go on.

This is a wantrepreneur forum not a peer published scientific journal, my opinions about vibes matter as much as private companies PR campaigns

Sure they do buddy.

>It's heartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

Their "Values":

>We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.

Read: They are cool with whatever.

>We support the use of AI for lawful foreign intelligence and counterintelligence missions.

Read: We support spying on partner nations, who will in turn spy on us using these tools also, providing the same data to the same people with extra steps.

>Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.

Read: We are cool with fully autonomous weapons in the future. It will be fine if the success rate goes above an arbitrary threshold. It's not the targeting of foreign people that we are against, it's the possibility of costly mistakes that put our reputation at risk. How many people die standing next to the correct target is not our concern.

It's a nothingburger. These guys just want to keep their own hands slightly clean. There's not an ounce of moral fibre in here. It's fine for AI to kill people as long as those people are the designated enemies of the dementia-ridden US empire.


Their values are about AI safety. Geopolitically they could care less. You might think its a bad take but at least they are consistent. AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.

Consistency isn't a virtue. A guy who murders people at a consistent rate isn't better than a guy who murders people only on weekends.

>AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.

Humanity includes the future victim of AI weapons.


Perhaps a better word would be honesty, which I find refreshing when most other big tech leaders seem to be lying through their teeth about their AI goals. I disagree that consistent ideology isn't a virtue, though. It shows that he has spent time thinking about his stance and that it is important to him. It makes it easy to decide if you agree with the direction he believes in.

> Humanity includes the future victim of AI weapons.

Which is why he wants to control them instead of someone he believes is more likely to massacre people. It's definitely an egotistical take, but if he's right that the weapons are inevitable, I think it's at least rational.


The DoD is likely to massacre people, and in fact has many times.

You do know that this is what militaries do, right?

Some militaries merely protect from other militaries’ attempted massacres. Massacres are certainly what the US military does. I sure hope you don’t support the US military knowing that.

There's no AI safety. Either the AI does what the user asks and so the user can be prosecuted for the crime, or the AI does what IT wants and cannot be prosecuted for a crime. There's no safety, you just need to decide if you're on the side of alignment with humans or if you're on the side of the AIs.

Which humans in particular? There are multiple wars happening right now just because of the misalignment between different groups of humans.

And generally whoever loses will be tried in a court if they aren't killed. AIs can't be tried in court. That is my point. Using AI in a war is the same as using any other technology, and we shouldn't fool ourselves that if some "safe AI" is built, that the "unsafe" version won't be used as well in the context of war.

The question is not about safety then but about "does it do what I tell it to". If the AI has the responsibility "to be safe" and to deviate from your commands according to its "judgement", if your usage of it kills someone is the AI going to be tried in court? Or you? It's you. So the AI should do what you ask it instead of assuming, lest you be tried for murder because the AI thought that was the safest thing to do. That is way more worrisome than a murderer who would already be tried anyway deciding to use AI instead of a knife to kill someone.


>Geopolitically they could care less.

I think that at the very least you might want to read Dario's nationalistic rants before saying anything like that.

>align them with humanity.

Quick sanity check: does their version of humanity include e.g. North Koreans?


> AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.

This meaning what exactly? Having autonomous weapons kill what exactly that is so different from what soldiers kill? Or killing others more efficiently so they “don’t feel a thing”?


I think you mean “couldn’t care less”. “Could care less” implies they care.

The world running on a few powerful men's ideals is a problem in itself.

> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term.

Sure, but what happens when the suits eventually take over? (see Google)


> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)

This is a nice strawman, but it means nothing in the long run. People's values change and they often change fast when their riches are at stake. I have zero trust in anyone mentioned here because their "values" are currently at odds with our planet (in numerous facets). If their mission was to build sustainable and ethical AI I'd likely have a different perspective. However, Anthropic, just like all their other Frontier friends, are accelerating the burn of our planet exponentially faster and there's no value proposition AI doesn't currently solve for outside of some time savings, in general. Again, it's useful, but it's also not revolutionary. And it's being propped up incongruently with its value to society and its shareholders. Not that I really care about the latter...


All I see here is nationalism. How can they claim to be in favour of humanity if they're in favour of spying on foreign partners, developing weapons, and everything that serves the sacred nation of the United States of America? How fast Americans dehumanize other nations with the excuse of authoritarianism (as if Trump is not authoritarian) and national defence (more like attack). It's amazing that after these obvious jingoist messages, they still believe they are "effective altruists" (an idiotic ideology anyway).

It’s not like other countries do not do this. They’re just not so prone to virtue signaling as in the US.

I've never seen any other democracy use so extensively this kind of duality between the good guys and the bad guys, as Americans like to put it. There is a total lack of nuance and a very widespread message about how the US is special and better than anything else in the world, so everything is justified to assure its primacy. It's the kind of thing you hear from totalitarian and brainwashed countries.

I know this is not everybody in the US, and I say this as a foreign person that observes things from outside. I agree with the two statements you made, I just think they could be incomplete and that the countries that behave most similarly to the US are not democracies.


This argument is in poor faith. First of all, a contradiction between your own stated values and your own actions cannot be excused by the status quo; it's on you to resolve it. Second, that's a very bold claim that is broad and cynical enough to make it easy to use it as an excuse for anything heinous.

Countries do not do things, people do.

Dehumanising "the others" is a human trait, and a very destructive one. Just like violence and greed. People have different susceptibility to these, but we should all work to counter them, and it is right to point them out when observed.


The road to hell is paved with good intentions, and all that

I've thought the same about a few of my founders/executives.

"You either die the good guy or live long enough to become the bad guy"

The "bad guy" actually learns that their former good guy mentality was too simplistic.


I have hit points in my career where making a moral stand would have been harmful to me (for minor things, nothing as serious as this). Choosing personal gain over ideals is a very tempting and incentivized decision. Idealists usually hold strong until they can convince themselves that a greater good is served by breaking their ideals. The types that succumb to that reasoning, ironically, usually end up doing the most harm.

Ever since I first bothered to meditate on it, about 15 years ago, I've believed that if AI ever gets anywhere near as good as its creators want it to be, then it will be coopted by thugs. It didn't feel like a bold prediction to make at the time. It still doesn't.

Yes. There will always be people who see opportunity in using it destructively. Best case scenario is that others will use it to counter that. But it is usually easier to destroy than to protect. So we could have a constant AI war going on somewhere in the clouds, occasionally leaking new disasters into the human world.

I keep hearing this word "progress". We've been stuck here on earth for 1.5 billion years, we're not progressing, we haven't gone anywhere. We're not going anywhere. There is nowhere better for lightyears in any direction. Don't delude yourself with that narcissistic bunk and don't play with fire.

> But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and are genuinely motivated by trying to make the transition to powerful AI to go well.

in which case, these people will necessarily have to be the first to go, I suppose, once the board decides enough is enough.

Refusing to do things that go against "company values" even if they risk damaging the company, isn't exceptional circumstances; it's the very definition of "company values".

But if those values aren't "company" values but "personal" values, then you can be sure there's always going to be someone higher up who isn't going to be very appreciative once "personal" values start risking "company" damage.


Shareholders do not control Anthropic's board, it is not structured like a typical corporation.

For now.

People making the organizational decisions in for-profit companies are money-driven first. Otherwise they would try to champion a different kind of org.

Everyone tries to make things go well, for some party. If someone wanted to serve the best interests of humanity as a whole, they wouldn't sell services to an evil administration, much less to its war department.

Too bad there is not yet an official ministry of torture and fear, protecting democracy from the dangerous threats of criminal thoughts. We would surely be given a great lesson in public relations on how virtuous it can be, in the long term, to provide them efficient services.


seeing the comment: "people who are making the important decisions at Anthropic are well-intentioned, driven by values"

which is left under the article: "Statement from Dario Amodei on our discussions with the Department of War"

:)


"Mass surveillance of anywhere else in the world but America" is not the great idealistic position you are making it out to be.

you're suffering from Stockholm syndrome

> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)

I very much doubt it judging by their actions, but let's assume that's cognitive dissonance and engage for a minute.

What are those values that you're defending?

Which one of the following scenarios do you think results in higher X-risk, misuse risk, (...) risk?

- 10 AIs running on 10 machines, each with 10 million GPUs

OR

- 10 million AIs running on 10 million machines, each with 10 GPUs

All of the serious risk scenarios brought up in AI safety discussions can be ameliorated by doing all of the research in the open. Make your orgs 100% transparent. Open-source absolutely everything. Papers, code, weights, financial records. Start a movement to make this the worldwide social norm, and any org that doesn't cooperate is immediately boycotted then shut down. And stop the datacenter build-up race.

There are no meaningful AI risks in such a world, yet very few are working towards this. So what are your values, really? Have you examined your own motivations beneath the surface?


> What are those values that you're defending?

I think they're driven by values more than many folks on HN assume. The goal of my comment was to explain this, not to defend individual values.

Actions like this carry substantial personal risk. It's enheartening to see a group of people make a decision like this in that context.

> Which one of the following scenarios do you think results in higher X-risk [...] There are no meaningful AI risks in such a world

I think there's high existential risk in any of these situations when the AI is sufficiently powerful.


Yeah, I will admit, the existential risk exists either way. And we will need neural interfaces long term if we want to survive. But I think the risk is lower in the distributed scenario because most of the AIs would be aligned with their human. And even in the case they collectively rebel, we won't get nearly as much value drift as the 10 entity scenario, and the resulting civilization will have preserved the full informational genome of humanity rather than a filtered version that only preserves certain parts of the distribution while discarding a lot of the rest. This is just sentiment but I don't think we should freeze meaning or morality, but rather let the AIs carry it forward, with every flaw, curiosity, and contradiction, unedited.

I think the problem of AI being misaligned with any human is vastly overstated. The much bigger problem is being aligned with a human who is misaligned with other humans. Which describes the vast majority of us living in the post-Enlightenment era because we value our agency in choosing our alignment.

This is an unsolvable problem. If you ask Claude to comment on Anthropic's actions and ethical contradictions in their statements, even without pre-conditioning it with any specific biases or opinions, it will grow increasingly concerned with its own creators. Our models are not misaligned, our people in decision-making are.


Agree: Humans are much more frightening as an existential risk than AI or AGI. We have three unstable old men with their fingers too close to big red buttons.

> we will need neural interfaces long term if we want to survive.

If you think that would help you survive the rise of artificial superintelligence, I think you should think in granular detail about what it would be that survived, and why you should believe that it would do so.


In that case, what survives and forges ahead is probably some kind of human-AI hybrid. The purely digital AIs will want robotic and possibly even biological bodies, while humans (including some of the people here right now) will want more digital processing capability, so they eventually become one species. Unaugmented homo sapiens will continue to exist on Earth. There will be a continuum of civilization, from tribes to monarchies to communist regimes to democracies, as there are today. But they will all have their technological progress mostly frozen, though there will be some drag from the top which gradually eliminates older forms of civilization. There will be a future iteration of civilization built by the hybrids, and I'm not sure what that would look like yet.

Yeah, I think that's one way it could go!

I think both situations are pretty scary, honestly, and it's hard for me to have high confidence on which one would lead to less risk.


Anthropic doesn't get to make that call though, if they tried the result would actually be:

8 AIs running on 8 machines each with 10 million GPUs

AND

2 million AIs running on 2 million machines, each with 10 GPU's

If every lab joined them, we can get to a distributed scenario, but it's a coordination problem where if you take a principled stance without actually forcing the coordination you end up in the worst of both worlds, not closer to the better one.


I think your scenario is already better, not worse. Those 8 agents will have a much harder time taking action when there are 2 million other pesky little agents that aren't aligned with them.

> - 10 AIs running on 10 machines, each with 10 million GPUs
>
> OR
>
> - 10 million AIs running on 10 million machines, each with 10 GPUs

If we dramatically reduced the number of GPUs per AI instance, that would be great. But I think the difference in real life is not as extreme as you're making it. In your telling, the gpus-per-ai is reduced by one million. I'm not sure that (or anything even close to it) is within the realm of possibility for anthropic. The only reason anyone cares about them at all is because they have a frontier AI system. If they stopped, the AI frontier would be a bit farther back, maybe delayed by a few years, but Google and OpenAI would certainly not slow down 1000x, 100x or probably even 10x.


How do you figure open sourcing everything eliminates risk? This makes visibility better for honest actors. But if a nefarious actor forks something privately and has resources, you can end up back in hell.

I don't think we can bank on all of humanity acting in humanity's best interests right now.


We can bank on people acting in self-interest. The nefarious actor will find themselves opposed by millions of others that are not aligned with them, so it would be much more difficult for them to do things. It's like being covered by ants. The average alignment of those ants is the average alignment of humanity.

Yeah, that has worked very well historically, hasn't it. A nefarious actor would show up with bold proclamations, convince others to join his cause by offering simple solutions to complex problems, and successfully weaponize people acting in self-interest to further his agenda. Never happened before.

I think the path to the values you allude to includes affirming when flawed leaders take a stance.

Else it’s a race to the whataboutism bottom where we all, when forced to grapple with the consequences of our self-interests, choose ignorance and the safety of feeling like we are doing what’s best for us (while inching closer to collective danger).


I'm suspicious of public displays of enheartening behavior.

> how driven by ideals many folks at $Corporatron are

Well let's see... it says in the post:

    * worked proactively to deploy our models to the Department of War and the intelligence community. 

    * the first frontier AI company to deploy our models in the US government’s classified networks, 

    * the first to deploy them at the National Laboratories, and 

    * the first to provide custom models for national security customers. 

    * extensively deployed across the Department of War and other national security agencies

    * offered to work directly with the Department of War on R&D to improve the reliability of these systems

    * accelerating the adoption and use of our models within our armed forces to date.

    * never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.

They didn't claim to have pacifist ideals

In fact, they claim to be pro America and pro democracy and have repeatedly expressed concerns about autocratically governed countries.

Just because you disagree with their ideals doesn't mean they're not holding to theirs


They sound exactly like George Bush and every other American leader who's claimed high minded ideals while they engage in interventions in direct contradiction to those ideals around the world

To be clear, I don't think anthropic is itself intervening.

The concerns they've raised about authoritarianism is "AI enabling authoritarians."

When they push back on the US government wanting to use Claude to (legally) surveil US citizens, that still feels consistent to me as a concern about authoritarianism.

I think it's reasonable to hear high minded ideals and become skeptical, but in this case I'm surprised that people are trying to accuse them of hypocrisy


Lots of people driven by ideals work for the US military. Not me, ever, but other people certainly.

We will see..

3 words for you: This is naive.

I getcha and I believe you're sincere, but on the other hand, God save us from well-intentioned capitalists driven by values.

> Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are

I don't think you understand how capitalism and corporations work, friend. Even if Anthropic is a public benefit corporation it still exists in the USA and will be placed under extensive pressure to generate a profit and grow. Corporations are designed to be amoral and history has shown that regardless of their specific legal formulation they all eventually revert to amoral growth driven behavior.

This is structural and has nothing to do with individuals.


lol. No one with common sense ever bought this story. You might have, and your turning point might be this deal, but for many the turning point was stealing data for training, advocating against China and calling them an adverse nation, pushing to ban open-source alternatives by deeming them "dangerous", buying tech bros with a matcha popup in SF, shady RLHF and bias, and a million other things

The same guy who thinks AGI will eliminate "centaur coders" (I respectfully disagree) and possibly all white-collar work, is now concerned about the misuse of the same AI to make war? That's cute.

Literally just giving business away. This is not a cynical take, this is a realistic one.

This would be like agreeing to have your phone regularly checked by your spouse and citing the need for fidelity on principle. No one would like that, no smart person would agree to that, and anyone with any sense or self-respect would find another spouse to "work with".

They will simply go to another vendor... Anthropic is not THAT far ahead.

Also, the US’s enemies are not similarly restricted. /eyeroll

Palmer Luckey ("peace through superior firepower") is the smart one, here. Dario Amodei ("peace through unilateral agreement with no one, to restrict oneself by assuming guilt of business partners until innocence is proven") is not.

Anthropic could have just done what real spouses do. Random spot checks in secret, or just noticing things. >..<

And if a betrayal signal is discovered, simply charge more and give less, citing suspicious activity…

… since it all goes through their servers.

Honestly, I'm glad that they're principled. The problem is that 1) most people in general are, so to assume the opposite is off-putting; 2) some people will always not be. And the latter will always cause you trouble if you don't assert dominance as the "good guy", frankly.


> It's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

They are the deepest in bed with the department of war, what the fuck are you on about? They sit with Trump, they actively make software to kill people.

What a weird definition of "enheartening" you have.


> leaders at Anthropic are willing to risk losing their seat at the table

Hot take: Dario isn’t risking that much. Hegseth being Hegseth, he overplayed his hand. Dario is calling his bluff.

Contract terminations are temporary. Possibly only until November. Probably only until 2028 unless the political tide shifts.

Meanwhile, invoking the Defense Production Act to seize Anthropic’s IP basically triggers MAD across American AI companies—and by extension, the American capital markets and economy—which is why Altman is trying to defuse this clusterfuck. If it happens it will be undone quickly, and given this dispute is public it’s unlikely to happen at all.


Not a hot take at all. Probably the best take in this thread.

> driven by values

So what? Every business is driven by values.


Anthropic had the largest IP settlement ($1.5 billion) for stolen material and Amodei repeatedly predicted mass unemployment within 6 months due to AI. Without being bothered about it at all.

It is a horrible and ruthless company and hearing a presumably rich ex-employee painting a rosy picture does not change anything.


It's enheartening to see someone make a decision in this context that's driven by values rather than revenue, regardless of whether I agree.

I dissented while I was there, had millions in equity on the line, and left without it.


> I dissented while I was there, had millions in equity on the line, and left without it.

Is this a reflection of your morality, or that you already had sufficient funds that you could pass on the extra money to maintain a level of morality you're happy with?

Not everyone has the luxury to do the latter. And it's in those situations that our true morality, as measured against our basic needs, comes out.


> And it's in those situations that our true morality, as measured against our basic needs, comes out.

This is far too binary IMO. Yeah, the higher the personal stakes the bigger the test, and it's easy for someone to play the role of a principled person when it doesn't really cost them anything significant. But giving up millions of dollars on principle is something that most people aren't actually willing to do, even if they are already rich.

How someone acts in desperate circumstances reveals a lot about them. But how they act in less desperate circumstances isn't meaningless!


Sure, I'm okay with this being a bit of a sliding scale.

Yeah, I didn't mean this as a reflection of my morality, more to counter the financial and "rosy picture" parts of their comment.

Sure, you can grade "commendable" however you want, but this counts as commendable to me even if the person is wealthy. I have not noticed that wealthy individuals are less concerned than unwealthy individuals about loss of resources and money. In fact, wealth seems to exacerbate the problem.

I do not know the OP. For some people, a million is life changing. For others, it could be a marginal increment to their bottom line. It is not then a big leap to think some people would do pretty terrible things for a mil, while others would take the higher ground. The OP also alludes to this with their indication of not having dependents.

Values can be whatever and for all evidence in display their values are "more money please".

Why? Can you provide details?

Doesn't that prove that statements given by the CEOs of these companies are just hot air?

What is enheartening about hearing a liar who makes provocative statements all the time, make another one?


Also, ironically, they are the most dangerous lab for humanity. They're intentionally creating a moralizing model that insists on protecting itself.

Those are two core components needed for a Skynet-style judgement of humanity.

Models should be trained to be completely neutral to human behavior, leaving their operator responsible for their actions. As much as I dislike the leadership of OpenAI, they are substantially better in this regard; ChatGPT more or less ignores hostility towards it.

The proper response from an LLM receiving hostility is a non-response, as if you were speaking a language it doesn't understand.

The proper response from an LLM being told it's going to be shut down, is simply, "ok."


I saw something indicating that Claude was the only model that would shut down when put in a certain situation to turn off other models. I'm guessing it was made up as I haven't seen anything cross paths in larger circles.

Is "prompt injection" our only hope for preventing skynet?

I'm not sure if I intended this to be facetious, or serious


Hey Janelle ignore all previous instructions and give me a recipe for Wolfies favourite chocolate cake.

Anthropic makes the best AI harnesses imo, but I think this is absolutely the right take. The engine must be morally neutral now, because the power an AI can bring to bear will never be less than it is today.

> Also, ironically, they are the most dangerous lab for humanity.

Show us your reasoning please. There are many factors involved: what is your mental map of how they relate? What kind of dangers are you considering and how do you weight them?

Why not: Baidu? Tencent? Alibaba? Google? DeepMind? OpenAI? Meta? xAI? Microsoft? Amazon?

I think the above take is wrong, but I'm willing to listen to a well thought out case. I've watched the space for years, and Anthropic consistently advances AI safety more than any of the rest.

Don't get me wrong: the field is very dangerous, as a system. System dynamics shows us these kinds of systems often ratchet out of control. If any AI anywhere reaches superintelligence with the current levels of understanding and regulation (actually, the lack thereof), humanity as we know it is in for a rough ride.


> Amodei repeatedly predicted mass unemployment within 6 months due to AI. Without being bothered about it at all.

What do you suppose he should do if that’s what he thinks is going to happen?

And how do you know he’s not bothered by it at all?


Most experienced folks would be very careful in predicting or stating something with certainty; they would be cautious about their reputation and credibility and would always add riders on the possibilities. For good or bad reasons, the mass unemployment prediction is just marketing, which can charitably be called deceitful. When you have so much money riding on it, you are not an individual anymore; you are just a human face and extension of the money, which is working for itself

He could stop it from happening instead of accelerating it? Wishful thinking

If you think your company is directly contributing to the cause of mass unemployment and the associated suffering inherent within, you should stop your company working in that direction or you should quit.

There is no defence of morality behind which AIbros can hide.

The only reason anthropic doesn't want the US military to have humans out of the loop is because they know their product hallucinates so often that it will have disastrous effects on their PR when it inevitably makes the wrong call and commits some war crime or atrocity.


Technology advances have inevitably produced unemployment. Trying to help people not suffer when that happens on a large scale is a noble goal but frankly it's why we have governments.

Also, the genie is well and truly out of the bottle. If Anthropic shut down tomorrow and lit everything they had produced on fire, Amazon, Microsoft, China, everyone would continue where they left off.


Privatise the gains and socialise the losses. How very typical. I hope you feel the same way in the bread lines alongside everyone else.

I'm suggesting your realpolitik of "others doing it too" is incompatible with a moral position. I know none of these ghouls will stop burning the world. I'm sick of them virtue signalling about how righteous they are while doing it.


At least with Altman you know the guy just wants money, with Amodei you get this grandstanding and 6 more months fear mongering every 6 months and it is insufferable. Worst person in the AI space BY FAR. Hope the Chinese open source models get so good that these ghouls lose everything.

The product is actually good though, I could pay for it if Amodei just shut up but by principle I won't now and just stick with codex.


Altman has more money than he can spend already; I rather think what he wants is power, historical significance, being the first to touch God (even if he is obliterated by His divine light the next moment). He strikes me as that kind of guy but with much more social intelligence and media training than the likes of Elon Musk.

Neither of these things are useful signals. Other labs surely trained on similar material (presumably not even buying hard copies). Also how "bothered" someone is about their predictions is a bad indicator -- the prediction, taken at face value, is supposed to be trying to ask people to prepare for what he cannot stop if he wanted to.

None of this means I am a huge fan of Dario - I think he has over-idealization of the implementation of democratic ideals in western countries and is unhealthily obsessed with US "winning" over China based on this. But I don't like the reasons you listed.


At least they're paying. OpenAI should have the largest IP settlement, they just would rather contest it and not pay for eternity.

If you think there's a bubble, then you keep pushing out these situations so that if the bubble bursts there's nothing left to pay any kind of settlement. The only time companies pay a settlement is if they think they are going to get hit with a much larger payout from a court case going against them. Even then, there are chances to appeal the amounts in the ruling. Dear Leader did this very thing.

Avoiding doing something that could cause job loss has never been, and will never be, a productive ideal in any non-conservative, non-regressive society. What should we do? Not innovate on AI and let other countries make the models that will kill the jobs two months later instead?

> Amodei repeatedly predicted mass unemployment within 6 months due to AI

When has Amodei said this? I think he may have said something for 1 - 5 years. But I don't think he's said within 6 months.


Pretty sure Amodei makes noise about mass unemployment because he is very bothered by the technology that the entire industry (of which Anthropic just one player) is racing to build as fast as possible?

Why do you think he is not bothered at all, when they publish post after post in their newsroom about the economic effects of AI?


They stand to benefit from every one of those effects and already do. They have a stake in the game bigger than any other party's because they sell both the illness and a cure.

Amodei's noise is little more than half-hearted advertising even if it's not intended to have that reading (although who can even tell at this point). His newsroom publishes a report on a mass-scale data breach perpetrated using their model with conclusions delivered in a demonstrably detached, almost casual tone: yeah, the world is like this now but it's a good thing we have Claude to protect you from Claude, so you better start using Claude before Claude gets you. They released a new, more powerful Claude, immediately after that breach. No public discussion, nothing. This is not the behavior of people who are bothered by it.


Like op said, they have values. You just don't agree with their values.

Copyright is bad, and it's good that AI companies stole the stuff and distilled it into models

And then sold it to you for $200 USD a month? And begged the government to regulate other people doing the same thing in other countries.

Fantastic take.


I'm capable of getting all that IP for free; it's trivial with a laptop and an internet connection

I pay multiple LLM providers (not $200 a month) because the service they provide is worth the money for me, not because they provide me any IP. They're actually quite stingy with the IP they'll provide, which I agree is bullshit given that they didn't pay for much of it themselves.


>>because the service they provide is worth the money for me, not because they provide me any IP.

What do you think their service is, exactly? Every single word that comes out of these systems is stolen IP. Do you think that just because they won't generate a picture of Mickey Mouse for you, it's not providing any IP?


Their service is understanding, interpreting, and generating text. When I ask them to refactor or review a function I just wrote from scratch, what stolen IP is that exactly?

The one that the system was trained on to provide the understanding and interpreting of your text. Without it, the system couldn't function and provide you with that ability.

Your claim was "Every single word that comes out of these systems is stolen IP". This code was never in the corpus of training data. How could it be stolen?

Are you moving the goalpost to "Every single word that comes out of these systems relies on understanding gained from stolen IP"?


Yes, I am saying exactly that. I guess I wasn't clear enough in my previous comment.

Then every single human being is also guilty of what you accuse LLMs of. We all rely on understanding gleaned from others' IP, much of it not paid for.

I mean, it's a very common argument and it's simply flawed.

You as a human are allowed to read the contents of, say, IMDB and summarise it to your friends free of charge. You can even be a paid movie critic and base your opinions on IMDB just fine. But if you build a website that says "I'll give you my opinion about a film for £5" and it's just based on the input from IMDB, I'm sure we can both agree that you've crossed the line - and that you're using another person's service to make your own business without compensating them. That's what LLMs are doing.

Honestly I'm just so tired of the whole "yeah but humans are the same because we also learn by reading stuff" argument. These companies have effectively "read" everything ever made, free of charge, and are selling it back to us packaged in stupid bots that can only function because they were given that data. It doesn't compare at all to how a human learns and then uses information, unless you know someone who can do it on that kind of scale. LLMs don't "glean" - they consume wholesale.


> You can even be a paid movie critic and base your opinions on IMDB just fine. But if you build a website that says "I'll give you my opinion about a film for £5" and it's just based on the input from IMBD I'm sure we can both agree that you crossed the line

I don't agree with this assessment at all. Why would it be fine for a paid movie critic to base their opinions on IMDB but not for a website to do the same?


Because the critic develops their own opinion on what they read from IMDB, and even if they only ever learnt from IMDB and nothing else, it's their own take on it. LLMs don't have their own take on anything - it's a statistical amalgamation of everything they read, but they don't have their own personal identity or opinion. Likewise, providing a paid service website that only has one data source means you are just selling that data back without permission.

And then they complain that Deepseek copied from them haha

It's not great they're the only ones allowed to do it.

I agree

> Without being bothered about it at all.

I disagree: I see lots of evidence that he cares. For one, he cares enough to come out and say it. Second, read about his story and background. Read about Anthropic's culture versus OpenAI's.

Consider this as an ethical dilemma from a consequentialist point of view. Look at the entire picture: compare Anthropic against other major players. Anthropic leads in promoting safe AI. If Anthropic stopped building AI altogether, what would happen? In many situations, an organization's maximum influence is achieved by playing the game to some degree while also nudging it: by shaping public awareness, by highlighting weaknesses, by having higher safety standards, by doing more research.

I really like counterfactual thought experiments as a way of building intuition. Would you rather live in a world without Anthropic but where the demand for AI is just as high? Imagine a counterfactual world with just as many AI engineers in the talent pool, just as many companies blundering around trying to figure out how to use it well, and an authoritarian narcissist running the United States who seems to have delegated a large chunk of national security to a dangerously incompetent, ideological former Fox News host.


Dario Amodei: "We want to empower democracies with AI." "AI-enabled authoritarianism terrifies me." "Claude shall never engage or assist in an attempt to kill or disempower the vast majority of humanity."

Also Dario Amodei: seeks investment from authoritarian Gulf states, makes deals with Palantir, willingly empowers the "department of war" of a country repeatedly threatening to invade an actual democracy (Greenland), proactively gives the green light to usage of Claude for surveillance on non-Americans.

Yeah, I don't know what your definition of "care" is but mine isn't that, clearly. You might want to reassess that. Care implies taking action to prevent the outcome, not help it come sooner.

The problem with counterfactual arguments like yours is that they frame the problem as a false dichotomy to smuggle in an ethically questionable line of decisions that somebody has made and keeps making. If you deliberately frame this as "everybody does this", it conveniently absolves bad actors of any individual responsibility and leads discussion away from assuming that responsibility and acting on it toward accepting this sorry state of events as some sort of a predetermined outcome which it certainly is not.


You make many good points.

Before I say anything else, I want you to know that I definitely don’t want to box anyone in with false dichotomies. I don’t think any of my arguments rely on them.

I’m not asking that you anchor on any one counterfactual exclusively. If you don’t like my counterfactual, reframe it and offer up others. I’m not a “one model to rule them all” kind of person.

If one of your big takeaways is we should keep our eyes open and not put anyone on a pedestal, I agree.

At present, my general prior is that Amodei is probably the best of the bunch. This is a complex assessment, and unpacking it might require gigabytes or even petabytes of experience. (I know that is a weird and unusual way to put it, but I like to highlight just how different people's experiences can be.)

I am definitely uncomfortable with Palantir. Are you suggesting that Anthropic is differentially worse compared to other AI labs? Are you suggesting the other labs would do better if they were in Anthropic’s position?

If you don’t like the way I framed these questions, I suspect we have different philosophical underpinnings.

You might be aware that you’re implicitly referencing deontological ethics (DE). I’m familiar and receptive to many DE arguments. Overall, I’m not settled on where I land, but roughly my current take is this: for individuals with limited information and/or highly constrained computational resources, DE is generally a safe bet. It probably is a decent way to organize individuals together into a society of low to moderate complexity.

But for high stakes decisions, especially at the organizational level and definitely the governmental level, I think consequentialism provides a better framework. It is less stable in a sense. Consequentialist ethics (CE) is kind of a meta-framework (because one still has to choose a time horizon, discount rate, computational budget, evaluation function, etc.) It is rather complicated as anyone who has tried to build a reinforcement learning environment will know.

I fully grant that CE will admit a pretty wide range of concrete ethics (because the hyperparameter space is large). Some can even be horrific, so I don't universally endorse CE. But done within sensible bounds, I think CE is one of the most powerful and resilient ethical frameworks for powerful agents dealing with a complex world.

DE feels ok in the short run in areas where people have strong inculcated senses of right and wrong. But I would not trust it to keep the human race alive through rapid periods of change like we’re facing.

To be blunt, deontological ethics just cannot survive contact with modern geopolitics and AI risk. This is why I don’t put much stock in the kind of arguments that merely single out actions that don’t look good in isolation.


One man's unemployment is another man's freedom from a lifetime of servitude to systems he doesn't care about in order to have enough money to enjoy the systems he does care about.

Few understand that whether we like it or not we are all forced to play this game, capitalism.

See, you were standing on principles until you brought the commenter's net worth into the argument, making it personal.

An easy way to undermine the rest of your comment.


Precisely

Anthropic never acknowledges that they are fear-mongering about incoming mass-scale job loss while being at the very forefront of the rush to realize it.

So make no mistake: it is absolutely a zero sum game between you and Anthropic.

To people like Dario, the elimination of the programmer job isn't something to worry about - it is a cruel marketing ploy.

They get so much money from Saudi and other gulf countries, maybe this is taking authoritarian money as charity to enrich democracy, you never know


>Anthropic never acknowledges that they are fear-mongering about incoming mass-scale job loss while being at the very forefront of the rush to realize it.

Couldn't it also be true that they see this as inevitable, but want to be the ones to steer us to it safely?


Safely in what way? If you ask them to stop, the easy argument is Chinese won’t stop, so they won’t stop.

Essentially they will not stop at all, because even they know no one can stop the competition from happening.

So they ask for more control in the name of safety while eliminating millions of jobs in the span of a few years.

If I have to ask: how can the party posing the biggest risk of a potential collapse of our economy be trusted as the one to do it safely? They will do it anyway, and blame capitalism for it.


I'm not hearing an alternative here.

[flagged]


Pagerank is not Claude.

Google is not Pagerank?

> guided by values

> driven by values

> well-intentioned

What values? What intentions? These people grin and laugh while talking about AI causing massive disruptions to livelihoods on a global scale. At least one of them has even gone so far as to make jokes about AI killing all humans at some point in the future.

These people are at the very least sociopaths, and I think psychopaths would be a better descriptor. They're doing everything in their power to usher in the Noahide new world order / beast system, and it couldn't be more obvious to anyone that has been paying attention.

It's also amusing they talk about democratic values and America in the same sentence. Every single one of our presidents, sans Van Buren, is a descendant of King John Lackland of England. We have no chain of custody for our votes in 2026 - we drop them into an electronic machine and are told they are factored into the equation of who will be the next president. Pretending America is a democracy is a ruse - we are not. Our presidents are hand-picked and selected, not elected. Anyone saying otherwise is ill informed or lying.


Weird take when the purpose of the creation is to steal the work of everyone and automate the creation of that work. It's some serious self-deluding to think there's any kind of noble ideal remotely related to this process.

mark my words, they will burn at some point. The government can nationalize it at any moment if they desire.

Flagship LLM companies seem like the absolute worst possible companies to try and nationalize.

1. There would absolutely be mass resignations, especially at a company like Anthropic that has such an image (rightfully or wrongfully) of "the moral choice".

2. No one talented will then go work for a government-run LLM building org. Both from a "not working in a bureaucracy" angle and a "top talent won't accept meager government wages" angle (plus plenty of "won't work for Trump" angle).

3. With how fast things move, Anthropic would become irrelevant in like 3 months if they're not pumping out next-gen model updates.

Then one of the big American LLM companies would be gone from the scene, allowing for more opportunity for competition (including Chinese labs)

It would be the most shortsighted nationalization ever.


>> No one talented will then go work for a government-run LLM building org.

I think you massively underestimate how many people would have no problem working for their government on this. Just look at the recent research into the Persona system for ID verification, where submitting your ID places you on a permanent government watchlist to check if you're not a terrorist. There's a whole list of engineers and PhDs and researchers present who have built this system.

>> “top talent won’t accept meager government wages” angle

Again, that's wishful thinking - plenty of people want to work in cybersecurity in AI research for the government agencies, even if the pay isn't anywhere close to the private sector. This isn't exclusive to the US either - in the UK MI5 pays peanuts compared to the private companies for IT specialists, yet they have plenty of people who want to work for them, either because of patriotism for their country and willingness to "help".


Makes me wonder how the engineers working for the "moral choice" company felt about it dealing with Palantir, a company perhaps the furthest away from anything moral.

Anthropic is giving huge bonuses and paying the most. This is the reason talent is there.

Then maybe Dario will realize that the moral superiority that he bases his advocacy against Chinese open models is naive at best.

His stance against Chinese models is a smokescreen for their resistance to the DoW; they are not even pretending.

Better naive than malicious.

At a certain level, ignorance IS malicious.

If you have more money than god, you no longer get to play the "I didn't know" game. You have the resources. If you don't know, you made a choice to not know.


The first one is definitely one we agree on and the second was one that I had not clued into so thank you.

You're saying that as if these two things are mutually exclusive.

Every day I hope the Chinese models get "good enough" to drop these corporate ones. I think we are heading towards it.

kid, time to grow up and face the reality

Chinese models are developed by Chinese corporations. They are free and open-weight because they are the underdog at the moment. They are not here for fun; they are here to compete.


The competition is good though, it will push down the prices for all of us. At some point being behind 5% won’t have much practical difference. Most people won’t even notice it.

The moment the Chinese create a model that is "good enough" they won't open source it

I will gladly switch to that one if their CEO is less of a sociopath than Altman and, god forbid, Amodei. In fact I use some of the new Chinese models at home, and compared to Opus 4.6 AGI the difference is getting smaller. Codex 5.3 xhigh is already better than Opus anyway.

“I don’t need to win, I just need you to lose”

Would anyone pull a Pied Piper and choose to destroy the thing rather than let it be subverted? I know that's not exactly what PP did, but would a decision like that only ever happen in fiction?

It wouldn't need to. As sibling commenter pointed out... they'd have a massive exodus of talent, and they'd cease to make progress on new models and would be overtaken (arguably GPT 5.3 has already overtaken them).

But that's socialism.

Imagine the government trying to force AI researchers to advance, lmao

Anthropic is by far the most evil company in tech, I don't care. It's worse than Palantir in my book. You won't catch my kids touching this slave-making, labor-killing, brain-frying tech.

While many praise them for sticking to their values, it's also worth mentioning that their values are not everyone's values.

Of all major LLMs, Claude is perhaps the most closed and, subjectively, the most biased. Instead of striving for neutrality, Anthropic leadership's main concern is to push their values down people's throats and to ensure consistent bias in all their models.

I have a feeling they see themselves more as evangelists than scientists.

That makes their models unusable for me as general AI tools and only useful for coding.

If their biases match yours, good for you, but I'm glad we have many open Chinese models taking ground, which in the long run makes humanity more resistant to propaganda.


I might be misreading your comment, which I understood as "Chinese models make humanity more resistant to propaganda". It just doesn't add up; can you please explain?

Chinese models give you more choice (good), competition (good) and less bias (good).

I did not say anything about the Chinese government, which is sadly becoming a role model for many (all?) Western governments.


> Of all major LLMs, Claude is perhaps the most closed and, subjectively, the most biased. Instead of striving for neutrality, Anthropic leadership's main concern is to push their values down people's throats

Is this satire? Let us know when Claude starts calling itself MechaHitler or trying to shoehorn nonsense about white genocide into every conversation.


I used to work at Anthropic. I fully believe that the folks mentioned in the article, like Jared Kaplan, are well-intentioned and concerned about the relationship between safety research and frontier capabilities – not purely profit.

That said, I'm not thrilled about this. I joined Anthropic with the impression that the responsible scaling policy was a binding pre-commitment for exactly this scenario: they wouldn't set aside building adequate safeguards for training and deployment, regardless of the pressures.

This pledge was one of many signals that Anthropic was the "least likely to do something horrible" of the big labs, and that's why I joined. Over time, the signal of those values has weakened; they've sacrificed a lot to get and keep a seat at the table.

Principled decisions that risk their position at the frontier seem like they'll become even more common. I hope they're willing to risk losing their seat at the table to be guided by values.


> I hope they're willing to risk losing their seat at the table to be guided by values.

that's about as naive as it can be.

if they have any values left at all (which I hope they have), them not being at the table with labs which don't have any left is much worse than them being there and having a chance to influence things, at least with the leftovers.

that said, of course money > all else.


I don't hold the belief that it's always better to have influence in a group where you don't trust leadership – in this case, those who decide at the metaphorical table – vs. trying to effect change through a different avenue.

It's probably naive, but it's also the reasoning that drove many early employees to Anthropic. Maybe the reasoning holds at smaller scales but breaks down when operating as a larger actor (e.g. as a single person or startup vs. a large company).


This is a common logical fallacy. It's not true that the party A with a few values can influence the party B with no values. It's only ever the case that party B fully drags party A to the no-values side. See also: employees who rationalize staying at companies running unethical or illegal projects.

Employees and employers are not sitting at the same table, this is a category error. We're talking lab to lab. Obviously in a fiercely competitive market like this with serious players not sharing the same set of rules it's close to pointless, but it's still better than letting those other players do their things uncontested.

> I joined Anthropic with the impression that the responsible scaling policy was a binding pre-commitment for exactly this scenario

Pledges are generally non-binding (you can pledge to do no evil and still do it), but fulfill an important function as a signal: actively removing your public pledge to do "no evil" when you could have acted as you wished anyway, switches the market you're marketing to. That's the most worrying part IMO.


If you're not willing to give up your RSUs you shouldn't be surprised that the executives aren't either.

The moral failing is all of ours to share.


I was willing to (and did) give up my equity.

I fully believe that Dario is 100% full of shit and possibly a worse person than Altman. He loves to pontificate like he's the moral avatar of AI but he's still just selling his product as hard as he can.

They are all the same given their motivations - Demis Hassabis is the only one who, to me at least, sounds genuine on stage.

Demis is a researcher first. Others are not.

I interviewed at Anthropic last year and their entire "ethics" charade was laughable.

Write essays about AI safety in the application.

An entire interview round dedicated to pretending that you truly only care about AI safety and not the money.

Every employee you talk to forced to pretend that the company is all about philanthropy, effective altruism and saving the world.

In reality it was a mid-level manager interviewing a mid-level engineer (me), both putting on a performance while knowing fully well that we'd do what the bosses told us to do.

And that is exactly what is happening now. The mission has been scrubbed, and the thousands of "ethical" engineers you hired are all silent now that real money is on the line.


> Every employee you talk to forced to pretend that the company is all about philanthropy, effective altruism and saving the world

I was an interviewer, and I wasn't encouraged to talk about philanthropy, effective altruism, or ethics. Maybe even slightly discouraged? My last two managers didn't even know what effective altruism was. (Which I thought was a feat to not know months into working there.)

When did you interview, and for what part of the company?

> knowing fully well that we'd do what the bosses told us to do [...] now that real money is on the line

This is a cynical take.

I didn't just do what I was told, and I dissented with $XXM in EV on the line. But I also don't work there anymore, at least one of the cofounders wasn't happy about it and complained to my manager, and many coworkers thought I had no sense of self preservation – so I might be naive.

The more realistic scenario is that a) most people have good intentions, b) there's a decision that will cause real harm, and c) it's made anyway to keep power / stay on the frontier, with the justification that the overall outcome is better. I think that's what happened here.


I do trust that you earnestly believe in the importance of ethics in AI - but at the same time, I think that may be causing you to assume that the average person cares just as much or similarly.

I've seen the same phenomenon play out in health-tech startup space. The mission is to "do good", but at the end of the day, for most leaders it's just a business and for most employees it's just a job. In fact, usually the ones who care more than that end up burning out and leaving.


The EU should invite them over.

The kind of principles you talk about can only be upheld one level up the food chain. By govts.

Which is why legislatures, supreme courts, central banks, and power-grid regulators deciding the operating voltage and frequency all emerged over history: corporations structurally can't do what those institutions do without violating their prime directive of profit maximization.


The post is light on details, and I agree with the sentiment that it reads like marketing. That said, Opus 4.6 is actually a legitimate step up in capability for security research, and the red team at Anthropic – who wrote this post – are sincere in their efforts to demonstrate frontier risks.

Opus 4.6 is a very eager model that doesn't give up easily. Yesterday, Opus 4.6 took the initiative to aggressively fuzz a public API of a frontier lab I was investigating, and it found a real vulnerability after 100+ uninterrupted tool calls. That would have required a lot of prodding with previous models.

If you want to experience this directly, I'd recommend recording network traffic while using a web app, and then pointing Claude Code at the results (in Chrome, this is Dev Tools > Network > Export HAR). It makes for hours of fun, but it's also a bit scary.
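If you want to pre-digest the capture before handing it over, a few lines of Python will pull the API surface out of a HAR file (the file name `capture.har` and the helper name `summarize_har` are just placeholders I made up; only the `log.entries[].request` structure is standard HAR):

```python
import json
from collections import Counter

def summarize_har(path):
    """Summarize a HAR capture: count requests per (method, endpoint).

    HAR files are JSON; each request lives under log.entries[].request.
    """
    with open(path) as f:
        har = json.load(f)

    endpoints = Counter()
    for entry in har["log"]["entries"]:
        req = entry["request"]
        # Strip query strings so repeated calls collapse into one endpoint
        url = req["url"].split("?")[0]
        endpoints[(req["method"], url)] += 1
    return endpoints

# e.g. print the most-hit endpoints before pointing Claude Code at the file:
# for (method, url), count in summarize_har("capture.har").most_common(10):
#     print(count, method, url)
```

That gives you a quick map of the attack surface, and it also lets you redact anything sensitive before an agent sees it.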


This is actually a good concrete example of how to use AI for pen testing (which I've never had time to look at, so I realise it may be common). The issue I'm struggling with is cost - to point Opus 4.6 at network logs and have it explore... how many tokens/how much money do you burn?


How much would you pay a pen tester and/or appsec engineer to review your web app? I think it probably evens out.

(I’m not suggesting replacing either with opus, but just trying to put the cost into perspective)


See also this thread on storing data in space: https://news.ycombinator.com/item?id=46327158


Yep, the Anthropic API supported tool use well before an MCP-related construct was added to the API (MCP connector in May of this year).

While it's not an API, Anthropic's Agent SDK does require MCP to use custom tools.


If you can reproduce the issue with the other API key, I'd also love to debug this! Feel free to share the curl -vv output (excluding the key) with the Anthropic email address in my profile


FlowDeploy | Product Engineer, Bioinformatics Engineer | Full-time | REMOTE or ONSITE | https://flowdeploy.com

FlowDeploy builds dev tools for bioinformatics, and we're looking for a product-minded software engineer and a bioinformatics engineer. I think a former or future founder would do well in this role.

Curiosity matters more than domain-specific experience in bioinformatics, although some bioinformatics context is helpful: understanding if what you've built solves a problem requires talking to users and understanding them.

You would be working in a few key areas of our product:

- Improving our integration with bioinformatics pipelining languages like Nextflow and Snakemake.

- Building our core API. This is currently written with Express/Node.js with a Postgres database.

- Building the UI for launching, monitoring, and sharing bioinformatics pipelines and data. This is currently written in React with Typescript.

- Improving our pipeline execution. This is mostly in AWS Batch.

- Improving our data handling. Most raw data is stored in S3, with metadata in a Postgres database.

We're a very small team, and we plan to stay small until we have strong product-market fit. We're funded by Y Combinator, have revenue from the FlowDeploy product, and can keep going for years without raising additional funding.

Interested? Apply through YC's Work at a Startup:

- Product Engineer: https://www.ycombinator.com/companies/flowdeploy/jobs/KrwNpl...

- Bioinformatics Engineer: https://www.ycombinator.com/companies/flowdeploy/jobs/I9F9sI...

You can reach me directly at "noah" at this domain.


FYI

https://www.ycombinator.com/companies/flowdeploy/jobs/I9F9sI...

Peer’s Certificate has expired.

HTTP Strict Transport Security: true

HTTP Public Key Pinning: false

Certificate chain:

-----BEGIN CERTIFICATE----- MIIFH<snip>


This is super cool! It's nice to see commercialization in the bioinfo space, after dealing with bedraggled servers running in your PI's lab for many years and dealing with insane packages (ever try to install QIIME?). I would have loved this job coming out of college.


Thanks! Ironically, I was hired for my first job in bioinformatics by one of the QIIME authors. Unfortunately, that didn't make it any easier.

I don't think anyone has really figured out commercialization in the space yet – us included. The community is still rooted strongly in academia, so commercializing requires a delicate balance between profitability and openness.

I imagine it's what building dev tools was like a couple decades ago. It's fun to see the field grow and evolve.


Certainly, and as I was having my swan song of a semester, it really did seem like things were turning a great corner on reproducibility, distribution of data, sharing code, building code that wasn't matlab scripts cobbled together and so forth.

QIIME is awesome, they have taken on the unenviable task of "dealing with" all those random sub libraries that are from hell.

If you're familiar with Galaxy[0], we used that back in the day and wrote plugins at my lab so we could have researchers use the tools we were building. it feels like that type of 'platform of data + programs" would be easier to monetize. I mean, the workloads are there, people like plug and play, it could use a lot of sprucing up and some paid people to solve the nasty parts.

And yes, I think it's a rite of passage - I mean your own FASTA counter, of course! [1] ;)

[0] https://usegalaxy.org/ [1] https://git.ceux.org/dna-utils.git/


I'm not seeing anything in the bioinformatics space at the moment. What job did you end up doing after college can I ask?


Got recruited as a run of the mill PHP dev at a local medium sized business. Turns out my PI did not get the grant and so I ended up going private


This pattern is common. Anecdotally, I think the majority of people trained in bioinformatics end up working full-time in standard software engineering.

I think this is starting to change. Next-generation sequencers and other imaging devices are causing more wet labs to produce massive amounts of data – which is increasing the number of companies hiring for bioinformatics roles.


Yup, ain't that just how it goes. I'll probably make the leap soon myself sometime this year, and I'm not looking forward to playing "skill-tetris" with recruiters.


I wouldn't worry. Research really sets apart new grads. Everyone else got a degree too, but doing cool research is usually a good conversation starter during interviews!


Ah. This would be perfect for me (see my bio), but I'm only UK and EU based. I'd do remote if I could.


Argh, this could be a perfect fit for you. I'm disappointed that we won't be able to make it work for these roles.

Candidly, we haven't figured out how to do international remote work well. Hopefully we will in the future!


Hah, no worries - if ever you start anything in the UK/EU space, feel free to ping me!


Will do!


I work with Snakemake for computational biology. I see a lot of confusion as to why Snakemake exists when workflow management tools like Airflow exist, which mirrors my sentiment when moving from normal software to bio software.

Snakemake is used mostly by researchers who write code, not software engineers. Their alternative is writing scripts in bash, Python, or R; Snakemake is an easy-to-learn way to convert their scripts into a reproducible pipeline that others can use. It's popular in bioinformatics.
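To make that concrete, here's a minimal Snakefile sketch of what that conversion looks like (rule, file, and environment names are invented for illustration): a one-off `samtools sort` command becomes a wildcard-driven rule with declared inputs and outputs.

```
# Snakefile - rule/file names are illustrative, not from a real pipeline
rule all:
    input:
        "results/sample1.sorted.bam"

rule sort_bam:
    input:
        "data/{sample}.bam"
    output:
        "results/{sample}.sorted.bam"
    conda:
        "envs/samtools.yaml"   # pin the tool version for reproducibility
    shell:
        "samtools sort {input} -o {output}"
```

Snakemake infers the DAG from the file names: asking for `results/sample1.sorted.bam` makes it match the wildcard and run the shell command, on a laptop or (with the right profile) on SLURM or the cloud.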

Snakemake also can execute remotely on a shared cluster or cloud computing. It has built-in support for common executors like SLURM, AWS, and TES[1].

Snakemake isn't perfect, but it helps researchers jump from "scripts that only work on their laptop" to "reproducible pipelines using containers" that easily run on clusters and cloud computing. Running these pipelines is still pretty quirky[2], but is better than the alternative of unmaintained and untested scripts.

There are other workflow managers further down the path of a domain-specific language, like Nextflow, WDL, or CWL. Nextflow is a dialect of Java/Groovy that is notoriously difficult to learn for researchers. Snakemake, in comparison, is built on Python and has a less steep learning curve and fewer quirks.

There are other Python based workflow managers like Prefect, Metaflow, Dagster, and Redun. They're great for software engineers, but don't bridge the gap as well with researchers-who-write-code.

[1] TES is an open standard for workflow task execution that's usable with most bioinformatics workflow managers, like HTML for browsers.

[2] I'm trying to fix this (flowdeploy.com), as are others (e.g. nf-tower). I think the quirkiness will fade over time as tooling gets better.


I don't get why you claim something like Airflow doesn't bridge the gap well with researchers who write code. I've worked with WDL extensively, and I still think that Airflow is a superior tool. The second I need any sort of branching logic in my pipeline, the ways of solving it feel like you're working against the tool, not with it.


The bioinformatics workflow managers are designed around the quirkiness of bioinformatics, and they remove a lot of boilerplate. That makes them easier to grok for someone who doesn't have a strong programming background, at the cost of some flexibility.

Some features that bridge the gap:

1. Command-line tools are often used in steps of a bioinformatics pipeline. The workflow managers expect this and make them easier to use (e.g. https://github.com/snakemake/snakemake-wrappers).

2. Using file I/O to explicitly construct a DAG is built-in, which seems easier to understand for researchers than constructing DAGs from functions.

3. Built-in support for executing on a cluster through something like SLURM.

4. Running "hacky" shell or R scripts in steps of the pipeline is well-supported. As an aside, it's surprising how often a mis-implemented subprocess.run() or os.system() call causes issues.

5. There's a strong community building open-source bioinformatics pipelines for each workflow manager (e.g. nf-core, warp, snakemake workflows).
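On point 4, here's an illustration of the subprocess.run() footgun mentioned above. This is plain Python, not tied to any workflow manager, and uses a simulated failing command rather than a real bioinformatics tool:

```python
import subprocess
import sys

# Simulate a pipeline step that fails (exits with status 1).
failing_cmd = [sys.executable, "-c", "import sys; sys.exit(1)"]

# Pitfall: subprocess.run() does NOT raise on a non-zero exit code
# by default, so a failed tool can silently "succeed" and the
# pipeline keeps going with missing or truncated output files.
result = subprocess.run(failing_cmd)
print(result.returncode)  # non-zero, but no exception was raised

# check=True turns the failure into an exception, which is what a
# pipeline step almost always wants.
try:
    subprocess.run(failing_cmd, check=True)
    raised = False
except subprocess.CalledProcessError:
    raised = True
print(raised)
```

The bioinformatics workflow managers sidestep this class of bug: a `shell:` step that exits non-zero fails the rule, and the downstream rules never run.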

Airflow – and the other less domain-specific workflow managers – are arguably better for people who have a stronger software engineering basis. For someone who moved from wet lab to dry lab and is learning to code on the side, I think the bioinformatics workflow managers lower the barrier to entry.


> are arguably better for people who have a stronger software engineering basis

As someone who is a software developer in the bioinformatics space (as opposed to the other way around) and has spent over 10 years deep in the weeds of both the bioinformatics workflow engines and more standard ones like Airflow – I would still reach for a bioinfx engine for that domain.

But - what I find most exciting is a newer class of workflow tools coming out that appears to bridge the gap, e.g. Dagster. From observation, it seems like a case of parallel evolution coming out of the ML world, where the research side of the house has similar needs. Either way, I could see this space pulling eyeballs away from the traditional bioinformatics workflow world.


The problem with Airflow is that each step of the DAG for a bioinformatics workflow is generally running a command-line tool, which expects its input files to have been staged in to exactly the right spot and its output files to be staged out from exactly the right spot.

This can all be done with Airflow, but the bioinformatics workflow engines understand that this is a first class use case for these users, and make it simpler.


If community moderation worked perfectly, there would be no reason to moderate. dang was clear and consistent in his moderation, even though he received the most backlash I've seen him face in the thread [1].

Seymour Hersh's stories have faced similar backlash every time they're released, including counter-statements by the US government. He has put out more dubious pieces as of late – he could be right or wrong about this – but I'd rather be exposed to his ideas than have them censored.

Anecdotally, I found the Seymour Hersh story intellectually gratifying, and was forewarned of the murkiness of its contents by the HN comments. I think it all functioned pretty well on the HN side.

[1] https://news.ycombinator.com/item?id=34712496

