When did Hacker News become laggard-adopter/consumer news?
Cal is a consumer of AI - an interesting article for this community, but not of this community. I thought Hacker News was for builders and innovators - people who see the potential of a technology for solving problems big and small, and who go and tinker and build and explore with it, and sometimes eventually change the world (hopefully for the better) - instead of sitting on the sidelines grumbling that some particular tech hasn't yet changed the world or met some particular hype.
Incredibly naive to think AI isn't making a real difference already (even without/before replacing labor en masse).
Actually try to explore the impact a bit. It's not AGI, but it doesn't have to be to be transformative. It's everywhere and will only accelerate. Even better, be part of proving Cal wrong in 2026.
Hard disagree. You don't need AGI to transform countless workflows within companies; current LLMs can do it. A lot of the current investment is going toward meeting demand for current-generation LLMs (and for use cases we know will keep opening up with incremental improvements). Are you aware of how intensely all the main companies that host leading models (Azure, AWS, etc.) are throttling usage due to insufficient data center capacity? (E.g. at my company we have 100x more demand than we can get capacity for, and we're barely getting started. We have a roadmap with 1000x+ the current demand, and we're a relatively small company.)
AGI would be more impactful of course, and some use cases aren’t possible until we have it, but that doesn’t diminish the value of current AI.
> Eg. At my company we have 100x more demand than we can get capacity for, and we’re barely getting started. We have a roadmap with 1000x+ the current demand and we’re a relatively small company.
OpenAI's revenue is $13bn, with 70% of that coming from people just spending $20/mo to talk to ChatGPT. Anthropic is projecting $9bn in revenue in 2025. For a nice cold splash of reality, fucking Arizona Iced Tea has $3bn in revenue (also, that's actual revenue, not ARR).
You might have 100x more demand than you can get capacity for, but if that 100x still puts you at a number that in absolute terms is small, it's not very impressive. Similarly if you're already not profitable and achieving 100x growth requires 1,000x in spend, that's also not a recipe for success. In fact it's a recipe for going bankrupt in a hurry.
I have no idea if OpenAI’s valuation is reasonable. All I’m saying is I’m convinced the demand is there, even without AGI around the corner. You do not need AGI to transform countless industries.
And we are profitable on our AI efforts while adding massive value to our clients.
I know less about OpenAI’s economics, I know there are questions on whether their model is sustainable/for how long. I am guessing they are thinking about it and have a plan?
This is correct, it should burn the retinas of anyone thinking that OAI or Anthropic are in any way worth their multi-billion dollar valuations. I liked AK’s analysis of AI for coding here (it’s overly defensive, lacks style and functionality awareness, is a cargo cultist, and/or just does it wrong a lot) but autocomplete itself is super valuable, as is the ability to generate simple frontend code and let you solve the problem of making a user interface without needing a team of people with those in-house skills.
There are many more use cases that aren't fully realised yet. With regards to coding, LLMs have shortcomings. However, there's a lot of work that can be automated. Any work that requires interaction with a computer can eventually be automated to some extent. To what extent is something only time can tell.
Sure, but you don’t need AI to automate computer work. You can make a career out of formalizing the kinds of excel-jockeying that people do for reports or data entry
This is a relatively reasonable take. Unfortunately, that's not what most AI investors or non-technical punters think. Since GPT-1 it's been all about unlocking 100%+ annual GDP growth through wholesale white-collar automation. I agree with AK that the actual effect on GDP will be more or less negligible, which will be an unmitigated disaster for us economically given how much cash has already been incinerated.
We’re a regular old SaaS company that has figured out how to add massive value using AI. I am making no statements about valuations and bubbles. I’m actually guessing there is some bubble / overhype. That doesn’t mean it isn’t still incredibly valuable.
I think this perspective is probably only true for the 0.001% of people who actually follow Sam closely, are not optimistic about AGI, and like to throw their opinions around. The superficial stuff. The rest don't care to even know who Sam is and don't care to assume motive.
It’s very likely they’ll bounce back. I’d rather OpenAI continue to innovate and push the industry forward as they have been. Haven’t seen much of that from Microsoft, so heavily disagree with you there. Prefer to focus on the actual product of the company not the personalities of the people there or armchair assumptions on the vibes of the culture.
> this perspective is probably only true for 0.001% of people that actually follow Sam closely
It’s corroded his credibility in D.C. and Brussels for a generation. He raised his profile tremendously right before people credibly called him a liar. It’s like he lofted an adversary’s payload into orbit. He will still get an audience with anyone, as he deserves. But people fact check him in a way they didn’t before and don’t with others. Even those who support his policy priorities, and with whom he and his team talk frequently. (OpenAI’s GR is between incompetent and non-existent.)
Microsoft is the de facto controlling shareholder in OpenAI. They provide all the money, compute, and backing, and have full access to the models. If OpenAI collapsed tomorrow, Microsoft would absorb its key employees (as they almost did during the board debacle) and everything would continue under the Microsoft umbrella. “OpenAI” is just a shinier name for work that is being done under the near-total control of Microsoft.
The money and compute are not the innovation. The LLMs and associated tools are - and that is work by OpenAI employees and teams, not Microsoft employees/teams.
Confused here, is your argument here that OpenAI is not responsible for any innovation when it comes to LLM tech today? I’m curious about why you so strongly want to believe that?
Nobody knew that scaling transformer architecture would lead to the emergent intelligence we see today. Among other things, OpenAI did R&D for years on that. Also, the only situation where this could be true is if Google knew that LLMs could lead to this intelligence and decided not to make it happen (along with every other tech company now furiously trying to catch up to OpenAI), which is absurd.
You seem to be inflating the emotional importance of my comment. Google did an enormous amount of the research prior to scaling. I was merely pointing out that if there's credit to be given out, a bunch of it goes to Google.
Actually, Google were building and scaling transformers at the same time as OpenAI - BERT (following Allen AI's ELMo), T5, Meena, LaMDA (a chatbot preceding ChatGPT by a year or two), PaLM ...
It seems that Google really didn't know what to do with the tech, and hadn't figured out a way to control it (OpenAI's RLHF - critical for ChatGPT's success). It's a bit ironic that DeepMind were doing so much with RL, but Google Brain being separate at the time apparently were not consulting with them or tapping into their expertise.
It might’ve been easier to hire that talent under the shiny OpenAI umbrella, but as I said, Microsoft could absorb the entire thing overnight if it wanted to. And pay them enough to make them stay.
> Microsoft is the de facto controlling shareholder in OpenAI
No, they are not even a shareholder.
> Microsoft is entitled to up to 49 percent of the for-profit arm of OpenAI's profits, according to reports. But that's not the same as 49% ownership. That investment does not result in Microsoft owning part of OpenAI
You're right, this isn't a view that is likely to be shared by the general public.
However, I don't think the general public's view of OpenAI is much better at this point, given that their exposure is Hinton on 60 Minutes claiming that AI is going to imminently end civilization, creatives arguing that OpenAI has stolen their work, and students using their products to cheat.
The only people that I do know who have historically had a positive view of OpenAI has been people working in tech. And Sam seems to be doing everything he can to destroy that goodwill.
Among the people I've discussed recent AI with that aren't in tech, almost everyone is very uneasy about it. Some of them use it, and all of them recognize it as potentially useful, but almost everyone is more concerned than excited. Seems like surveys back my personal experience:
It doesn't matter if Twitter/X has been around since the dawn of time; some people don't want to make accounts, and it's none of our business why. In the same way, it's none of our business why someone might not want to subscribe to NYT or WaPo when their articles are posted here. It's a norm in this community to provide workarounds for walled content. The original poster in this subthread was observing that this is now necessary for Twitter/X as well - that shouldn't be controversial or objectionable to anyone.
1. “The heat death of the universe” is my favorite HN comment of the decade.
2. The heat death of the universe does not mean one gigantic black hole. I’m just a hobbyist but my understanding of the theory is that black holes will continue to form, but through Hawking radiation, they eventually radiate out all their energy until it is all dispersed, ultimately leading to uniformity across the entire universe, max entropy, where “work” can no longer take place.
(It is an interesting question then whether information is actually destroyed through Hawking radiation?)
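For reference, the standard semi-classical results behind this picture are that a black hole's Hawking temperature is inversely proportional to its mass (so it radiates faster as it shrinks), and its total evaporation time grows as the cube of its mass, which is why the process takes so absurdly long:

```latex
T_H = \frac{\hbar c^3}{8 \pi G M k_B},
\qquad
t_{\mathrm{evap}} \sim \frac{5120\, \pi\, G^2 M^3}{\hbar c^4}
```

For a stellar-mass black hole, the evaporation time works out to vastly longer than the current age of the universe.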
Curious what model the dentist bot is running on. Tried it out; it was surprisingly good, though eventually it contradicted itself (booked a slot it previously said was not available). (I get that's the programming, but I'm curious, especially given that the latency is really great.)
Completely disagree. You’re not making a phone call in most cases for entertainment purposes. If the options are wait in line for 20 minutes or speak to an actually useful bot, I would take the latter in 100% of cases.
They could have just said "5 times as much" to save us the bother of doing the calculation ... which needs a tiny bit of care because of the dual units.
The calculation involves dividing 5 by 1; most people should be able to handle it.
Note that they've correctly not bothered with providing equivalent quantities in each unit - 5 kilograms is 11 pounds, not 10 pounds. This doesn't matter, because the ratio 10 to 2 is equal to the ratio 5 to 1.
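A quick sanity check (using the article's figures and the standard conversion factor of about 2.2 lb/kg) shows the ratio is the same in either unit system, even though the pound figures aren't faithful conversions of the kilogram figures:

```python
# Dark-matter-to-regular-matter ratio quoted in the article.
KG_TO_LB = 2.20462  # standard kilogram-to-pound conversion factor

regular_kg, dark_kg = 1, 5    # article's SI figures
regular_lb, dark_lb = 2, 10   # article's pound figures

# The ratio is unit-independent: both unit systems give exactly 5.
assert dark_kg / regular_kg == dark_lb / regular_lb == 5

# But the pound figures are not exact conversions of the kg figures:
print(round(dark_kg * KG_TO_LB, 1))     # 5 kg is about 11.0 lb, not 10 lb
print(round(regular_kg * KG_TO_LB, 1))  # 1 kg is about 2.2 lb, not 2 lb
```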
But it does raise the question of why they provide pounds at all, and if they're going to, why not just say "for every pound (or kilogram) of matter, there are roughly 5 pounds (or kilograms) of dark matter".
> why not just say "for every pound (or kilogram) of matter, there are roughly 5 pounds (or kilograms) of dark matter".
Your suggestion is better writing. As for why? Likely the author and editors rushed the content or lacked strong skills in this particular style of writing. The author, Paul Sutter, seems to have a strong background in writing. The original sentence is awkward enough that it looks like it was written by one person and edited by another.
> Dark matter is the mysterious, unknown substance that seems to make up the bulk of all the mass in the universe; for every 2 pounds (1 kilogram) of regular matter, there's roughly 10 pounds (5 kg) of dark matter.
If I had to guess, I bet it only originally had one set of units and an editor added converted units to match some style guide. I doubt that Paul would have originally gone with
> for every 2 pounds of regular matter, there's roughly 10 pounds of dark matter.
Because a 2:10 ratio is not a natural thing to write. He's an astrophysicist who did post-doc fellowships in Paris and Italy, so most likely he submitted an article with SI units:
> for every 1 kilogram of regular matter, there's roughly 5 kilograms of dark matter.
And I bet a livescience.com editor changed that to pounds to match a US-centric style guide.
Have to consider cost for all of this. A big value of RAG, even given the size of GPT-4's largest context window, is that it decreases cost very significantly.
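As a back-of-the-envelope illustration of the cost argument (the per-token price below is a made-up placeholder; actual pricing varies by model and changes over time), the savings come from sending only a few retrieved chunks instead of the whole corpus on every query:

```python
# Hypothetical per-token pricing; real prices differ by model and provider.
PRICE_PER_1K_INPUT_TOKENS = 0.03  # assumed USD, illustrative only

full_context_tokens = 100_000  # stuffing the whole corpus into the prompt
rag_context_tokens = 4_000     # only the top retrieved chunks

full_cost = full_context_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
rag_cost = rag_context_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS

print(f"full context: ${full_cost:.2f}/query, RAG: ${rag_cost:.2f}/query")
# With these assumed numbers, RAG is 25x cheaper per query.
```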
Not entirely sure I believe everything here. Curious how it plays out. Lot of narrative-affirming content.
“In the Examiner article, O’Reilly made reference to a ‘scam bot [that] had a blue check mark, meaning that, unlike me, it pays money every month to Elon Musk’s vastly indebted and unprofitable platform, a situation which would greatly disincentive his company taking proactive measures to weed them out’.”
Is the argument here that someone is creating an army of paid accounts and spamming with them? And this is a major source of bots? Is this actually happening? I think that is probably unlikely given how expensive it would be?
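For a rough sense of that expense (using the widely reported $8/month Blue subscription price as an assumption; the actual price varies by region and plan):

```python
# Rough monthly cost of running an army of paid (blue-check) bot accounts.
MONTHLY_SUBSCRIPTION_USD = 8  # widely reported Twitter Blue price, assumed here

for n_bots in (1_000, 10_000, 100_000):
    monthly_cost = n_bots * MONTHLY_SUBSCRIPTION_USD
    print(f"{n_bots:>7} bots: ${monthly_cost:,}/month")
# 100,000 bots would run $800,000 per month before any other
# infrastructure costs - the expense the comment is pointing at.
```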