I'm inclined to believe that they consolidated code across a lot of their microservices and simplified their architecture, since that was emphasized repeatedly from the moment Musk acquired Twitter. But we also can't really verify or disprove it.
I'm a bit confused about their so-called improvements to video recommendation quality and bot detection. I've seen a lot of sentiment from people that they see more bots, hate speech, and irrelevant content on their timelines. Maybe what I'm hearing is just anecdotal evidence or stories in a bubble?
The Sacramento data center migration to Portland is an entertaining story detailed here[1]. Here's the Hacker News thread on it[2].
They have a GPU supercompute cluster?? It seems like they have the capability to do training and inference with state-of-the-art algorithms at massive scales then. Why have Twitter's recommendations and ad revenue (even before the acquisition) been so poor then?
They're just running most of the bots in-house on the GPU supercompute cluster now, and billing Putin for it directly, now that the middleman Yevgeny Prigozhin is out of the picture.
> I've seen a lot of sentiment from people that they see more bots, hate speech, and irrelevant content on their timelines. Maybe what I'm hearing is just anecdotal evidence or stories in a bubble?
1. Note that they do not unambiguously define bots; it's "bots and content scrapers" lumped together:
> Blocked bots and content scrapers at a rate +37% greater than 2022. On average, we prevent more than 1M bots signup attacks each day and we’ve reduced DM spam by 95%.
2. I suspect it's a matter of signal-to-noise. Yes, the absolute bot count could be down, but how does it compare to the number of humans using X/Twitter and the volume of content they contribute each day? I've tried to find reliable statistics on this, but to no avail. My anecdotal experience is that fewer of the people I followed are still using X/Twitter regularly since the Musk acquisition. Some are over at Post.News, some at Threads, some have shifted to Mastodon. It's a very fragmented experience now.
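As a toy illustration of that signal-to-noise point (all numbers below are invented, not measurements): the absolute bot count can fall while the bot share of what you actually see rises, if human activity falls faster.

    def bot_share(bot_posts, human_posts):
        # Fraction of total posts that come from bots.
        return bot_posts / (bot_posts + human_posts)

    print(bot_share(10_000, 190_000))  # before: bots are 5% of content
    print(bot_share(8_000, 72_000))    # after: bots down 20%, humans down ~62%, bot share doubles to 10%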
Sometimes I click a link to Twitter and get login-walled, or I see the head of a tweet thread with the rest of the thread missing. I wonder if they count that as a successful "scrape bot blocked"?
My go-to examples here are the scams advertising support for MetaMask problems. Those still appear every day and are trivial to identify. I won't consider Twitter's bot detection a success until trivial cases like that are solved; they're still below the level of "text match and auto ban" solutions.
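For concreteness, here's roughly what a "text match and auto ban" baseline looks like. This is a minimal sketch with made-up patterns and function names, not anything Twitter actually runs; a real system would layer rate limits and review on top.

    import re

    # Hypothetical patterns: a common shape for these scams is "MetaMask support"
    # plus a request to move to DM/WhatsApp/Telegram. Tune to whatever actually shows up.
    SCAM_PATTERNS = [
        re.compile(r"metamask\s+(support|help\s*desk|customer\s*care)", re.IGNORECASE),
        re.compile(r"(wallet|seed\s*phrase).{0,40}\b(dm|whatsapp|telegram)\b", re.IGNORECASE),
    ]

    def looks_like_support_scam(text: str) -> bool:
        """Return True if the post matches any trivial scam pattern."""
        return any(p.search(text) for p in SCAM_PATTERNS)

    print(looks_like_support_scam("MetaMask support here! DM us on Telegram"))  # True

Anything that a dozen lines of regex can flag should not be surviving a serious bot-detection effort, which is the point.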
>They have a GPU supercompute cluster?? It seems like they have the capability to do training and inference with state-of-the-art algorithms at massive scales then. Why have Twitter's recommendations and ad revenue (even before the acquisition) been so poor then?
In general, assuming you are already in some sort of decent state, better ML doesn't make your revenue 50% better. It makes it 5% better per year.
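To put numbers on that (the 5% figure is the assumption above, not a measured one), incremental gains like this compound slowly:

    # Toy compounding: it takes roughly eight years of 5%-per-year uplift to
    # approach the kind of one-off 50% jump people imagine better ML buys you.
    annual_uplift = 0.05
    revenue = 1.0
    for year in range(1, 9):
        revenue *= 1 + annual_uplift
        print(f"year {year}: {revenue:.2f}x baseline")
    # prints 1.05x, 1.10x, ... up to ~1.48x in year 8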
> I've seen a lot of sentiment from people that they see more bots, hate speech, and irrelevant content on their timelines. Maybe what I'm hearing is just anecdotal evidence or stories in a bubble?
TBF, this is my year-over-year experience on all social media platforms.
"Musk turned to his security guard and asked to borrow his pocket knife. Using it, he was able to lift one of the air vents in the floor, which allowed him to pry open the floor panels. He then crawled under the server floor himself, used the knife to jimmy open an electrical cabinet, pulled the server plugs, and waited to see what happened. Nothing exploded. The server was ready to be moved."
This remains a fascinating, Howard Roark-like tale. I mean, it worked.
I don't use Twixter at all. But it seems like he got rid of most of the engineering org without getting rid of most of the technical aspects of Twitter. It's not super stable, but it's not a smoldering wreck. If he hadn't scared off the advertisers with his personality and social behaviors, it might be in a much better place financially. The human side of content moderation was definitely going to cost more, but I'm steering away from the content side's impact on Twixter's financials.
Roark dynamited his own creation because he had no other recourse and was prepared to face the consequences. Musk dynamited something he bought out of foolhardiness and is expending his energy avoiding consequences. Similar in some regard, but different.
Yes, and it would probably work much of the time. The times when it doesn't work, and you have to write off equipment worth more than the cost of a professional move (which also shifts liability to the mover), are why more risk-averse companies don't do it.
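A toy expected-cost calculation shows the shape of that trade-off. Every number here is invented purely for illustration:

    # DIY looks cheaper on average, but the tail risk of writing off racks of
    # servers is exactly what a professional mover's fee and liability buy off.
    pro_move_cost = 1_000_000      # hypothetical all-in quote, liability included
    diy_move_cost = 200_000        # hypothetical cost of doing it yourself
    failure_prob = 0.05            # hypothetical chance the DIY move goes badly
    write_off_value = 20_000_000   # hypothetical value of the damaged equipment

    diy_expected_cost = diy_move_cost + failure_prob * write_off_value
    print(diy_expected_cost)       # 1,200,000: already worse than the pro quote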
> Among the changes we made was a shift of all media/blob artifacts out of the cloud, which reduced our overall cloud data storage size by 60%, and separately, we succeeded in reducing cloud data processing costs by 75%.
Huh. I feel like that's the one place to not leave cloud. 90% of why I want to use AWS is S3.
There's a difference between "all images" and "one in 100 million images"
S3 durability is "for every ten million objects stored, you can expect to incur an average loss of a single object once every 10,000 years". They could drop the durability by probably 5 9's and nobody would even notice.
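A quick back-of-the-envelope check on that quoted figure, and on what giving up five nines would mean. The billion-object fleet size below is a made-up illustration, not a Twitter/X number:

    objects = 10_000_000
    annual_loss_prob = 1 - 0.99999999999           # eleven nines of durability
    expected_losses_per_year = objects * annual_loss_prob
    print(1 / expected_losses_per_year)             # ~10,000 years per lost object

    relaxed_loss_prob = 1 - 0.999999                # six nines, i.e. five fewer
    hypothetical_fleet = 1_000_000_000              # invented media-object count
    print(hypothetical_fleet * relaxed_loss_prob)   # ~1,000 lost objects per year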
Lol, 'forward thinking' by moving one of the most expensive cloud costs in-house, for an app that clearly doesn't need that storage near its other cloud assets and operates at a scale few other services do? Woooow, he's sooooo smart. Nobody else has thought of this.
I hope their Trust & Security team has a roadmap that prioritizes topic and thread-jacking. The advent of GPT and their recent changes to creator compensation has created a massive drive for engagement by any means; the results are fairly predictable. Clicking on any trending topic usually surfaces completely unrelated videos with that topic's hashtags (and all of the other trending topic hashtags) appended to the tweet.
Then you have the "content creators" who use GPT to summarize or add details to a post from a larger content aggregator in hopes of bandwagoning engagement. I see a lot of this type of behavior from popular History-focused accounts and the mega-accounts that post engagement bait content (think canned, "desert island"-type questions and polls). It's less malicious but certainly reinforces cynicism towards the state of Twitter/X and the broader social web.
It's garbage, yes, but all these companies overhired for years and it was a no-brainer that they all needed to fix that. Hard to criticize that decision in principle. Though the execution of it seemed like a clown show.
If your main focus is growth or potential growth, then inefficiencies are fine if they allow increased product velocity. A unified platform for N features is more efficient, but communication overheads will cut velocity on improvements for all those N features. Of course, if growth is no longer a focus, that's a different story.
I’m curious if you have a source for that? There has been lots of speculation on how much X traffic has reduced, but the best data I can find basically says that it might’ve dropped from the 2nd most visited website to the 5th most visited website in the world (according to SimilarWeb).
[1] https://www.cnbc.com/2023/09/11/elon-musk-moved-twitter-serv...
[2] https://news.ycombinator.com/item?id=37470110