Hacker News

Idk man, from the outside Anthropic looks a lot like OpenAI with a cute redesign, and Amodei like Altman with a slightly more human face mask: the same media manipulation, the same vague baseless affirmations about "something big is coming and we can't even describe it but trust us we need more money"


> the same vague baseless affirmations about "something big is coming and we can't even describe it but trust us we need more money"

This is pretty low on my list of moral concerns about AI companies. The much more concerning and material issues are things like…what this thread is actually meant to be about.

VCs don’t need me to feel sorry for them if their due diligence is such that they’re swindled by a vague claim of “something being around the corner”, nor do they need yours. You aren’t YC.


Even just the fact that Amodei is publicly bringing up these issues, rather than doing behind closed doors deals with the Department of Defense (yes that's still the official name), is more than Altman has done for AI safety.


Don't you always need more money though? I am a chip designer and I can tell you I am resource intensive to employ. I want access to plenty of expensive programs and data. With more money come better tools, and frequently better tools lead to the quality of results you want to deliver to the customer.


Do you tell your customers you need money to build better chips, or that you need more money because your next generation of chips will channel Jesus' soul back to earth and cure cancer?


I need money out of a curiosity-driven search for lower power, which would lead to better chips. Leadership is getting bombarded by bright people working at the company; much of the time he must be hearing about things he could do that seem to have significant potential for the product to develop.


Where is Anthropic hyping like that? Most of what I see coming out of Anthropic is deep-context releases on research they're doing.


> Mar 14, 2025, 7:27 AM CET

> "I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code"

It's the same old trick: "in two years we'll have fully self driving cars", "in two years we'll have humans on Mars", "in two years AI will do everything", "in two years bitcoin will replace Visa and Mastercard", "in two years everyone will use AR at least 5 hours a day", ...

Now his new prediction is supposed to materialize "by the end of 2027", what happens when it doesn't? Nothing, he'll pull another one out of his ass for "2030" or some other date in the future, close enough to raise money, far enough that by the time it's invalidated nobody will ask him about it

How are people falling for these grifters over and over and over again? Are we getting our collective minds wiped every 6 months?


Your quote supports hype but does not support your claim that Anthropic is telling customers they need more money to deliver the hype.

Of course Anthropic is saying that to investors. Every company does that, from SpaceX to Crumbl. “If you give us $X we will achieve Y” isn’t some terrible behavior, it’s how raising funds works.


Elizabeth Holmes is serving time for promising investors something her company couldn't deliver, so there is a line beyond which hype becomes fraud. Probably AGI, ASI, and fully automated societies aren't something well enough defined for courts to rule on, unlike making unfounded medical diagnoses from a pinprick of blood.


I work at a non-tech Fortune 500 and this is looking nearly spot-on from here. Nobody on my team touches the code directly anymore as of about 2 months ago. They're rolling it out to the entire software department by June. I can't speak to the economy at large, but this doesn't look like baseless hype to me. My understanding is that Claude Code reached this level late last year, i.e. Amodei was just wrong about uptake rates.


They both work in the same market, but they have pretty different careers and understandings. I simply can't understand why on Earth people would choose Altman over Amodei to trust on these kinds of pretty important questions. This is not about who is the more savvy investor maximizing shareholder value. I personally don't care whose company grows bigger or goes bust first, OpenAI or Anthropic. The real stakes are different, and Amodei is better suited to be trusted with these decisions. Unfortunately, the best choices do not seem to fit well with either the federal political climate or the mainstream business ethics in Silicon Valley. Not that our opinion would matter...


Both are hucksters, although Amodei's qualifications are pretty good; he actually is a scientist. Of these, I think Hassabis is my favorite.


Amodei believed Altman, so there's that. I don't (have to) believe either. If the product works for me, it works. Raising their clanker products to the second coming is for investor relations, of which I am proud to say I am not.


I don't know why anyone would trust any of the above.


disagree. at least i can see the quality of research coming out of Anthropic, which tells me these people are interested in what they're doing. i don't see this level of scientific rigor in OpenAI


There should be a name for this, "cynic cope": when someone actually takes a principled view, the cynic, who has a completely negative view of the world, is proven wrong, can't accept it, and tries to somehow discount it.


Corporations do not and cannot have principles, they only have the profit motive


This is false. People can have principles; the profit motive is not something a corporation has, it's something people have. Corporations do things all the time based on everything from principles, to the personal whims of executives, to exercises in ego, to community-benefiting actions, to screwing customers for extra profit. It is entirely dependent on the specific people in management roles.

Corporations need profit to survive because the cost of tomorrow is a surplus of today.


A corporation is a bunch of people cooperating to achieve a common goal.

There is a very important factor that heavily influences (perhaps even controls?) how people act to achieve that goal, and sometimes even twists or adds goals.

Is that corporation publicly quoted in the stock market or is it private?

Look at how Steam behaves: it's private and more ideological, versus many publicly quoted companies, whose CEOs often sacrifice their own corporation's long-term survival for the benefit of short-term profiteering and some hedge fund manager's bonus.

Both need profit to survive, but the publicly quoted company is much more extreme.

When people say corporations only look to profit, what they really mean is that publicly quoted corporations will do everything possible to maximise short-term profit at any cost. Is there a CEO who cares about the long term? Either he will be convinced to change or kicked out. It's almost impossible for someone to resist these pressures in publicly quoted companies. It's just how Wall Street works, and if that doesn't change, neither will corporations.

The people running the world of finance and their culture are what causes enshittification and pushing a zero-sum game to extremes.


Agree with everything, but would add a small detail: publicly quoted corporations might as well sell dreams, and if they are very good at doing that, have no profit at all because of some future potential payoff (of course I am writing this from the fully self-driving car that I've owned for 10 years now, which might transform into a robot soon).


> corporations will do everything possible to maximise short term profit at any cost. Is there a CEO caring for long term? Either he will be convinced to change or kicked out.

While public companies are more likely to be short-term focused, even this is not true. There are plenty (i.e. thousands) of executives and public companies that are long-term focused and tell investors to pound sand and sell the stock (or mount a shareholder challenge) if they don't like it.

Elon Musk is the most extreme example of this. He wants to go to Mars. He is turning Tesla into a robot company and discontinuing or curtailing the growth of some of his most profitable products.

Mark Zuckerberg is another one. He is losing $20 billion a year on VR, and even with recent cuts, will still be doing that. He's spending $50 billion on AI. None of that has anything to do with short term profit. Don't like it? Sell the stock.

Wall Street doesn't necessarily force companies into short-term gains: it holds you to perform to what you say you will perform. This is often the trap that leads to poor management decisions, as they overpromise and underdeliver, leading to the enshittification spiral.

All of this depends on the governance structure and ownership structure, and how competitive the business is.

Many public companies for example have only common shares available while a family or an individual retains preferred shares with more voting power. This is how Zuck, or Larry Page or Larry Ellison etc can do whatever they do. Elon just has a reality distortion field so the board gives him a trillion dollar pay package.


something something the ideology of a cancer cell. The only goal of a publicly traded corporation is to make the line go up, and the board is required to eliminate anyone who puts other principles before that.


Tim Cook memorably said (in 2014): "When we work on making our devices accessible by the blind, I don't consider the bloody ROI."

How come the board hasn't eliminated him?


Tim Cook, the guy kissing Trump’s ass? Is that really the example you want to use of a company having principles? A company clamoring to bend their knee to a fascist to avoid tariffs? Lmao


I'm refuting your childish claim that the "only goal of a publicly traded corporation is to make the line go up".


Yes. They also kept their DEI and environmental programs, actually substantive policies that many other companies are trashing because of this administration. I'll take performative ass kissing while preserving the important policies any day.


Again, completely false and trivially disprovable.

Most boards defer to management on most topics and most shareholders do not vote on anything substantial, they proxy vote, which defers to management. And thus management nearly always does whatever it wants, as long as the company isn't a dumpster fire of losses. It usually takes a shareholder activist threatening a hostile takeover or proxy battle to change this dynamic.

It comes back to people. The people (employees, management, board of directors, shareholders) determine what a company does and how it acts. "Numbers go up" isn't always the motivating factor, and I'd wager that the majority of privately held corporations (i.e. small businesses) are fine with "numbers go up modestly" because they are lifestyle businesses, not growth businesses.


Sadly, market incentives pretty much always go opposite of moral incentives, because morals put brakes on decisions that multiply value for the company, but the company itself exists to multiply value. The profit motive is built into the reason for its existence. It's a contradiction that has a lower probability of resolving in favor of morals as the company grows in size and accrued capital. Whichever moral principles the leadership may have had at the beginning, they always erode or get perverted over time simply because the market always has a stronger pull.

I hate that, by the way, but what I hate even more is that this is somehow the most effective way to run economies that we've found so far, and it ends up this way because instead of unsuccessfully trying to safeguard against greed and sociopathy, it weaponizes them outright.


The profit motive is not the reason for a company's existence, it is an optional personal/human motive.

Companies exist to create customers. Everything else follows that. There is no value, no profit, no growth, no action whether moral or immoral, unless you have a customer.

Market incentives by themselves don't tend management decisions towards immorality, unless you've created immoral (or amoral) customers, or you've accepted capital from immoral (or amoral) investors.

It always comes back to people. If your customers or investors are some level of evil (or some degree of amoral), then you as a corporation probably are going to wind up being some level of evil or amoral.

It's up to management and majority ownership to steer those as appropriate... are you willing to take money from anyone? There's a useful but dangerous veil of ignorance that rises with scale and ubiquity, such as in commodity or public equity/debt markets. The resulting anonymity requires diligence from the company, such as Know Your Customer (KYC) checks, and clear statements of the principles and laws of the corporation in its prospectus to attract the right fit of investor... and a backstop of government regulation to encourage or require these minimum standards of behaviour.


I find "morals" difficult to evaluate objectively. Some people might find it "moral" that women do not have any education and just stay at home, which I find terrible.

But if most people in a society find something "wrong" generally they will organize to prevent that (even if it has value for a part of the society). I think it is simpler for everybody that economics (how we produce and what) is separated from morals (how we decide what is right and wrong).


It may appear simpler on the surface but it's very easy to find that market forces that don't have any checks and balances on them eventually converge on increasingly aggressive and dehumanizing behavior—not unlike your example with women. I have many such well-documented behaviors to list as examples, and I guarantee you have encountered them regularly and been upset at them.

The way we organize in a society is by having governments, usually elected ones to represent what "most people in a society" actually think, to serve as an arbiter of applied morals in our interactions, including business. To that end, we codify most of them in laws with clear definitions to prevent things like unfettered monopolies, corporate espionage, poor working conditions and hiring practices, etc. This generally works, though it depends on how well a given government and its constituent parts does its job and whether it uses the power it has to serve the entire society's interests or the interests of the elites that drive decisions. We can see right now how it fails in real time, for example.

Morals don't have to be evaluated "objectively" (whatever that is) every time to be observed. Humanity has agreed on many things that make up UDHR, international law, and other related documents. It's not the hard part. Making independent actors conduct their business in accordance with these codes is the hard part. Somehow even making them follow their own self-imposed principles is crazy hard for some reason. When Amodei claims Anthropic develops Claude for the benefit of all humanity but greenlights its use for surveillance on non-Americans, that's scummy. When Amodei claims to be terrified of authoritarian regimes gaining access to powerful AI but seeks investment from them, that's scummy. The deal with Palantir, the mass-surveillance business, is scummy. Framing the use of autonomous weapons as only disagreeable insofar as the underlying capabilities aren't reliable enough is scummy. You don't need to be a PhD in morals to notice that.


The initial quote I responded to was:

> market incentives pretty much always go opposite of moral incentives because morals put brakes on decisions that multiply value for the company

Yes, both markets and morals have to be defined and are subject to rules and conventions, as you correctly mention in your reply. What I think should be more qualified is the claim that market and moral incentives "always go opposite".

Even today, in many countries the market ensures a lot of necessary things for a lot of the population. Not all topics can be managed as a market (for example, I don't think healthcare or basic infrastructure fit), and not all countries have such frameworks, but given the successful examples I think it's more about wrongly using the tool than the tool itself.

Regarding your examples (Palantir, Claude, guns/surveillance), the same things happened in places where market incentives are or were not a driving force (communist Eastern Europe/China for surveillance, quite probably China for automated weapons).

Honestly, I wish I could propose or explain what would help. But just blaming the generic tools that we have (markets, AI, the press) for the bad things resulting from incorrect usage worries me, as it can lead to not using them even when they would work.


Good for you? You’re just talking about vibes. Vibes are a baseless thing to go on.


This is a wantrepreneur forum, not a peer-reviewed scientific journal; my opinions about vibes matter as much as private companies' PR campaigns.


Sure they do buddy.



