The default, naive assumption should always have been that programs keep running indefinitely on their own. If that's not the goal of software then I don't know what is (might as well go back to switchboard operators). Real-world experience tells us that, to the contrary, all software goes down and requires specialist intervention eventually. I think a lot of people just jumped to the second level based on political motivations rather than deep knowledge of system failures.
> all software goes down and requires specialist intervention eventually
Well, that’s it, isn’t it? How many software systems need to keep running for Twitter to remain more or less functional?
If there are 10 critical systems each running at four 9's (99.99%), combined availability is 0.9999^10 ≈ 99.9%, so you'd expect roughly 8.8 hours of downtime a year across the assembly, if I have my math right.
If there are 100 critical systems running at three 9's (99.9%), combined availability drops to 0.999^100 ≈ 90.5%, which works out to roughly 2.3 hours of downtime per day.
So yeah, all software should keep running. But it doesn’t. And something like Twitter isn’t “a software”, it’s a very large assembly of lots of different software systems and the exponential math that dependencies create.
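The compounding above is easy to sanity-check. A minimal sketch, assuming the systems fail independently and are all in series (i.e. any one being down takes the whole thing down):

```python
# Independent systems in series multiply their availabilities,
# so per-system downtime compounds across the assembly.
def combined_downtime(n_systems: int, availability: float):
    combined = availability ** n_systems
    hours_per_year = (1 - combined) * 365 * 24
    return combined, hours_per_year

# 10 systems at four 9's: ~99.9% combined, ~8.8 hours/year
a10, h10 = combined_downtime(10, 0.9999)

# 100 systems at three 9's: ~90.5% combined, ~2.3 hours/day
a100, h100 = combined_downtime(100, 0.999)
print(round(h100 / 365, 1))
```

Real dependencies aren't fully independent or fully serial, so this is a worst-case-ish envelope, but it shows why "each service is fine" doesn't mean "the site is fine".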
Yep, and when one of the SEVs rolls around that would have been small (say 5 minutes of downtime, fixed with a flag flip), it instead has a nontrivial chance of escalating into a major multi-hour or multi-day outage without the right institutional knowledge.
I'd guesstimate that Twitter probably has dozens of services that are in the critical path of an average user interaction. It's hard to keep even logically optional dependencies truly optional in large scale systems involving many people.
However Twitter didn't die in the past when fail whales ruled its day, so they probably won't kill it now. It's just not that kind of business. (In contrast, a one hour outage had me directly apologizing to our largest customers on the phone). That said, Twitter can only be unstable and lack feature growth for so long before something else takes its place, so Musk is on a clock.
Right, but Twitter wasn't a healthy business (in the sense of being profitable most years) in the first place so it's not beyond the realm of possibility they took reliability further than made sense. Anyway they now have a huge debt load that changes the calculus regardless.
I had MySQL running on some bare metal for many years without a restart.
I was terrified to update the kernel at that point: knowing the system disk had been running continuously for many years, I had no faith it would restart successfully.
Finally got two new servers to replace these (with these new SSD things!) and after migration, sure enough, one of the old servers failed to boot.
Even if your MySQL instance and hardware ran indefinitely, a table that is being written to will eventually run out of disk space or key space and crash. How long that takes depends on the application, but it will happen eventually, and if no one is around to fix it...
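Back-of-the-envelope for the key-space point: assuming a signed 32-bit AUTO_INCREMENT column (max 2^31 - 1) and a hypothetical sustained write rate, the time to exhaustion is easy to estimate:

```python
# Hypothetical sketch: days until a signed 32-bit AUTO_INCREMENT
# key runs out at a given sustained insert rate.
MAX_INT32 = 2**31 - 1  # 2,147,483,647

def days_until_exhaustion(writes_per_second: float, start_id: int = 0) -> float:
    remaining = MAX_INT32 - start_id
    return remaining / writes_per_second / 86_400  # seconds per day

print(round(days_until_exhaustion(100)))  # ~249 days at 100 inserts/sec
```

At modest rates that's years, but a busy unattended system can hit the ceiling well within the "nobody's watching" window.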
> I think a lot of people just jumped to the second level based on political motivations rather than deep knowledge of system failures.
Anyone who has ever been oncall can intuit how often stuff breaks in big or little ways. Sometimes it's transient and goes away, sometimes it can be filed away to be fixed in the next year, but sometimes, it turns out to be an all-hands-on-deck crisis for a team, or 5.
> The default, naive assumption should always have been programs keep running indefinitely on their own.
...for people who understand software to some extent. I get the feeling a lot of people see it more like a hamster wheel: once the developers are gone, it immediately starts noticeably slowing down until it grinds to a halt (and they're confused when that doesn't happen).
Now, if your Rust code was a distributed system that handles spiky loads from ~330m users, and processes petabytes of data, then I'd consider your comparison relevant to Twitter.
But I'm going to assume it's not relevant.
P.S., I've written Java services that never went down, because they had a well defined domain and all potential errors were handled. But, I'm not about to compare that to all of frigging Twitter.
The infra usually matters way more than the code. RAM or a disk will typically fail before the Linux kernel, and it's written in the boogeyman language.