
Why is Meta doing it though? This is an astronomical investment. What do they gain from it?


They're commoditizing their complement [0][1], inasmuch as LLMs are a complement of social media and advertising (which I think they are).

They've made it harder for competitors like Google or TikTok to compete with Meta on the basis of "we have a super secret proprietary AI that no one else has that's leagues better than anything else". If everyone has access to a high quality AI (perhaps not the world's best, but competitive), then no one -- including their competitors -- has a competitive advantage from having exclusive access to high quality AI.

[0]: https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/

[1]: https://gwern.net/complement


Yes. And it could potentially diminish OpenAI/MS.

Once everyone can do it, OpenAI's value would evaporate.


Once every human has access to cutting edge AI, that ceases to be a differentiating factor, so the human talent will again be the determining factor.


And the content industry will grow ever more addictive and profitable, with content curated and customized specifically for your psyche. Of all the tech giants, Meta happens to be the one best positioned to benefit from that growth.


> Once everyone can do it, OpenAI's value would evaporate.

If you take OpenAI's charter statement seriously, the tech will make most humans' (economic) value evaporate for the same reason.

https://openai.com/charter


> will make most humans' (economic) value evaporate for the same reason

With one hand it takes, with the other it gives - AI will be in everyone's pocket, and super-human level capable of serving our needs; the thing is, you can't copy a billion dollars, but you can copy a LLaMA.


> OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.

No current LLM is that, and Transformers may always be too sample-expensive for that.

But if anyone does make such a thing, OpenAI won't mind… so long as the AI is "safe" (whatever that means).

OpenAI has been entirely consistent in saying that safety includes treating weights as harmful until proven safe, because you cannot un-release a harmful model; other researchers say the opposite, on the grounds that open weights make white-box safety research easier and more reproducible.

I lean towards the former, not because I fear LLMs specifically, but because the irreversibility, and the fact that we don't know how close or far we are, mean it's a habit we should turn into a norm before it's urgent.


Very similar to Tesla and EVs


...like open balloon.


He went into the details of how he thinks about open-sourcing weights for Llama in response to an analyst's question on one of the earnings calls last year, after the Llama release. I made a post on Reddit with some details.

https://www.reddit.com/r/MachineLearning/s/GK57eB2qiz

Some noteworthy quotes that signal the thought process at Meta FAIR and more broadly:

* We’re just playing a different game on the infrastructure than companies like Google or Microsoft or Amazon

* We would aspire to and hope to make even more open than that. So, we’ll need to figure out a way to do that.

* ...lead us to do more work in terms of open sourcing, some of the lower level models and tools

* Open sourcing low level tools make the way we run all this infrastructure more efficient over time.

* On PyTorch: It’s generally been very valuable for us to provide that because now all of the best developers across the industry are using tools that we’re also using internally.

* I would expect us to be pushing and helping to build out an open ecosystem.


"different game"

But what game? What is the AI play that makes giving it away a win for meta?


A lot of the other companies are selling AI as a service. Meta hasn't really been in the business of selling a raw service in that way. However, they sit at a center point of human interaction that few can match. Their win is in leveraging those models to enhance that experience: think of getting a summary of what you've missed in your groups, letting you join more of them and still know what's happening without needing to sift through it all, or surfacing events and activities you'd be interested in. All of this lowers the cost of being in a group, making it easier to join more of them and driving more engagement.

For Facebook, it isn't the technology but how it is applied that makes their game interesting.

When you give away the tooling and treat it as first class, the wider community improves it on top of your own efforts; cycle that back into how you apply it internally and you have a positive feedback loop that other, less open models lack.


Weaken the competition (Google and MS). Bing doesn't exist because it's a big money maker for MS; it exists to put a dent in Google's power. Same with Android vs. Apple. If you can't win, you try to make the others lose.


I think you really have to understand Zuckerberg's "origin story" to understand why he is doing this. He created a thing called Facebook that was wildly successful. Built it with his own two hands. We all know this.

But what is less understood is that, from his point of view, Facebook went through a near-death experience when mobile happened. Apple and Google nearly "stole" it from him by putting strict controls around the next platform. He lives every day knowing that Apple or Google could simply turn off his apps and the whole dream would come to an end.

So what do you do in that situation? You swear - never again. When the next revolution happens, I'm going to be there, owning it from the ground up myself. But more than that, he wants to fundamentally shift the world back to the premise that made him successful in the first place - open platforms. He thinks that when everyone is competing on a level playing field he'll win. He thinks he is at least as smart and as good as everyone else. The biggest threat to him is not that someone else is better, it's that the playing field is made arbitrarily uneven.

Of course, this is all either conjecture or pieced together from scraps of observations over time. But it is very consistent over many decisions and interactions he has made over many years and many different domains.


I think what Meta is doing is really smart.

We don't really know where AI will be useful in a business sense yet (the apps with users are losing money) but a good bet is that incumbent platforms stand to benefit the most once these uses are discovered. What Meta is doing is making it easier for other orgs to find those use-cases (and take on the risk) whilst keeping the ability to jump in and capitalize on it when it materializes.

As for X-risk? I don't think any of the big tech leadership actually believe in that. I also think that, deep down, a lot of the AI safety crowd love solving hard problems and collecting stock options.

On cost: the AI hype raises Meta's valuation by more than the cost of the engineers and server farms.


> I don't think any of the big tech leadership actually believe in that.

I think Altman actually believes that, but I'm not sure about any of the others.

Musk seems to flit between extremes; "summoning the demon" isn't really compatible with suing OpenAI for failing to publish Lemegeton Clavicula Samaltmanis.*

> I also think that deep down a lot of the AI safety crowd love solving hard problems and stock options.

Probably at least one of these for any given person.

But that's why capitalism was ever a thing: money does motivate people.

* https://en.wikipedia.org/wiki/The_Lesser_Key_of_Solomon


Zuck compared the current point in AI to iOS vs. Android and macOS vs. Windows. If I understood him correctly, he thinks an open ecosystem and a closed one will coexist, and that he can build the open one.


Meta is an advertising company that is primarily driven by user-generated content. If they can empower more people to create more content more quickly, they make more money. This applies particularly to the metaverse, if they ever get there, because making content for 3D VR is very resource intensive.

Making AI as open as possible, so that more people can use it, accelerates the rate of content creation.


You could say the same about Google, couldn't you?


Yeah, probably, but I don't think Google as a company is trying to do anything open with AI beyond raw research papers.

Also, Google makes most of its money from search, which is advertising driven by business intent, rather than ads shown between bites of user-generated content.


Mark probably figured Meta would gain knowledge and experience more rapidly if they threw Llama out in the wild while they caught up to the performance of the bigger & better closed source models. It helps that unlike their competition, these models aren't a threat to Meta's revenue streams and they don't have an existing enterprise software business that would seek to immediately monetize this work.


If they start selling AI on their platform, it's a really good option, because people know they can run it somewhere else if they ever have to. For example, you could build a PoC on their platform and then self-host when regulations require it; can you do that with the other offerings?


Zuck is pretty open about this in a recent earnings call:

https://twitter.com/soumithchintala/status/17531811200683049...


Besides everything said here in the comments, Zuck is actively looking to own the next platform (after desktop/laptop and mobile), and everyone is trying to figure out what that will be.

He knows well that if competitors have a cash cow, they have money to throw at hundreds of things. By releasing open source, he wins credibility, establishes Meta's as the most used LLM, and weakens competitors' ability to throw money at future initiatives.


They heavily use AI internally for their core Facebook business (analyzing and policing user content), and this is also great PR to rehabilitate their damaged image.

There is also an AI-vs-AI arms race in generating and detecting AI content (including deepfakes, election interference, etc.). In order not to deter advertisers and users, Facebook needs to keep up.


They will be able to integrate intelligence into all their product offerings without having to share the data with any outside organization. Tools that can help you create posts for social media (like an AI social media manager), or something that can help you create your listing to sell an item on Facebook Marketplace, tools that can help edit or translate your messages on Messenger/Whatsapp, etc. Also, it can allow them to create whole new product categories. There's a lot you can do with multimodal intelligent agents! Even if they share the models themselves, they will have insights into how to best use and serve those models efficiently and at scale. And it makes AI researchers more excited to work at Meta because then they can get credit for their discoveries instead of hoarding them in secret for the company.


The same thing he did with VR: probably got tipped off that Apple was working on the Vision Pro, and so ruthlessly started competing in that market ahead of time.

/tinfoil

Releasing Llama slows developers from becoming reliant on OpenAI/Google/Microsoft.

Strategically, it’s … meta.


Generative AI is a necessity for the metaverse to take off; creating metaverse content is too time consuming otherwise. Mark really wants to control a platform, so the company's whole strategy seems to be built around getting the Quest to take off.


I would assume it's related to fair use, and how OpenAI and Google have closed models built on copyrighted material. It's easier to make the case that it's for the public good if it's open and free than if it's not.


It's a shame it can't just be seen as giving back to the community without being questioned.

Why is selfishness from companies that have benefited from social resources the expected norm rather than something surprising?


Because they're a publicly traded company with a fiduciary duty to generate returns for shareholders.


The two are not mutually exclusive.


If it was Wikipedia doing this, sure, assume the best.



