software, like everything, is subject to laws of physics
I disagree; math would be a closer analogy. And indeed, arithmetic still works like it did a millennium ago. Closer to the present, I have binaries from the late 80s that still work today (and I use them semi-regularly).
Indeed, much of the impetus of the software industry seems to be to propagate the illusion that software somehow needs constant "maintenance" and change. For the preservation of its own self-interest, of course; much like the company that makes physical objects too robust and runs out of customers, planned obsolescence and the urge to change things so as to justify being paid to do something are still there.
It's possible to make things which last. Unfortunately, much of the time, other economic considerations preclude that.
If software ran without side effects, perhaps. But it doesn't. Databases grow, files are uploaded, logs pile up, messages and events propagate, and filesystems fill up. This is why entropy matters.
Exactly. Tiny memory leaks in seldom-called functions can also cause slow degradation over time. People wonder why a simple restart seems to 'fix a boatload of problems', but this is often the reason why.
> I disagree; math would be a closer analogy. And indeed, arithmetic still works like it did a millennium ago. Closer to the present, I have binaries from the late 80s that still work today (and I use them semi-regularly).
Sure, those binaries might work the same when executed. The probability of that is never quite 100%, but as you point out, the rules of arithmetic aren't expected to change any time soon. Unfortunately, software does not exist in its own micro-verse: it's subject to the laws of physics acting on the machines it runs on. So while you might be able to write scripts that work decades later, it's much harder to ensure those scripts consistently run for decades. RAM chips, CPUs, and everything in between are guaranteed to eventually fail if left running unsupervised in perpetuity. Entropy rises with complexity.

At Twitter's scale, running a software service means globally distributed cloud infrastructure: likely hundreds of services, deployed to many hardware instances across the globe. Twitter isn't one script running once and producing a single result; it's hundreds if not thousands of systems interacting with one another across many physical machines. Layers of redundancy help, but ultimately cascading failures are a mathematical certainty. Many would argue the best strategy to reduce downtime on such systems is to optimize for low recovery time when you do fail.
Software is also bound to the world in other ways. Just as most business processes, products, and, more generally, tools change over time, so too do the requirements placed on the software systems built to facilitate or automate them.
Ultimately the only way to escape the maintenance cost of software is to stop running it. The longer you leave a software system running, the more likely it will eventually stop.
Even if the entirety of Twitter.com were mathematically proven correct, it still would run on servers that are made of physical bits that are subject to entropy.
It’s possible to make things that last if you are in total control of the whole stack, including hardware.
Embedded systems that still do their job after 30 years do exist but they live in isolation in a specific and controlled environment, and are built for a limited, unchanging task.
On the other hand, complex web software is built on layer upon layer of components that are not in Twitter's complete control.
Hardware changes regularly, requiring changes at the lower levels of the OS; those induce potential changes in behaviour and performance, which in turn require adaptation.
And that's before considering security, an eternally moving goalpost. Not just at the OS or network level, but also at the business level.
Twitter et al. are not living in a locked-down context; they live in the messy world of human interactions, and that alone requires constant tweaking.
So yes, a binary is more like a mathematical construct and by itself it won’t rot, but if the world around that binary changes, you need to change the binary as well, and for that you need maintenance. The amount required depends on the complexity, brittleness and how well your stack is engineered, but implying it’s a con is a bit extreme.
Computation is literally bound by entropy. Math has no such limitations unless you explicitly define them.
I thoroughly recommend researching entropy as it relates to, e.g., information theory, systems engineering, and even (perhaps especially) machine learning.
Computation is ultimately about what we can compute _in this universe_ and the forward flow of time is an emergent property from the universe’s innate entropic guarantees.
Time is “pre-sorted” for us thanks to entropy, enabling us to define algorithmic complexities over the time domain in the first place.
> we don’t think so. the prod incident we heard about involved someone making an ill-advised choice to reactivate a large account, causing a huge load on the social-graph system, on the night before a prolonged high-traffic event.
Spot on. Absolutely hate this attitude that software sitting there just gathers wear and tear as if it's a mechanical device. Software is written with a particular target platform in mind: x86, ARM, Nvidia GPUs, FPGA soft-processor etc. If the hardware you are running on doesn't change, your software should still function. If the specs of that target platform don't change, your software should still function. If the specs of the target platform change but a hard-working compiler engineer has done the work to make sure your software gracefully uses the new features (for example, a compiler optimizing using AVX instructions), your software should still function.
The fact that most software doesn't continue to function even on the same platform, and on the same hardware, is a massive indictment of the software industry's standard practices.
Complex software has complex failure modes though.
An application running on a single platform, self-contained and with some basic failovers such as redundancy (2+ machines running the same application), etc. should have ridiculous uptimes.
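The redundancy claim can be quantified with a back-of-envelope calculation (assuming independent failures, which real deployments only approximate): the service is down only when all replicas are down at once, so availability improves multiplicatively with each replica.

```python
# Back-of-envelope availability math, assuming independent failures:
# with n redundant machines each available with probability a, the
# service is unavailable only when every replica is down simultaneously.

def availability(a_single: float, n_replicas: int) -> float:
    return 1 - (1 - a_single) ** n_replicas

# A single 99% machine is down ~1% of the time (~3.65 days/year);
# two of them together are down only (1%)^2 = 0.01% of the time.
for n in (1, 2, 3):
    print(n, round(availability(0.99, n), 6))
```

This is why even basic redundancy buys "ridiculous uptimes" for a self-contained application, and also why the assumption of independence is the weak spot: correlated failures (shared power, shared bugs, shared deploys) are what the next paragraph's complex systems run into.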
A distributed and complex system with interdependent components, under variable load, with different capacities for subsystems running across some thousands of machines will, inevitably, encounter some unforeseen state that degrades the system as a whole. It can be a small component cascading a failure in an unexpected way, or a big component failing spectacularly due to a bug, a race condition, or a multitude of other issues that were not entirely predictable and guarded against at the time the software was written.
The latter is what exhibits "wear and tear": it's not one piece of software, it's a whole system of programs communicating with each other on hardware in varying states of decay. You can design and build it to be resilient against a multitude of predictable issues, but you can never expect it to run perfectly fine unattended.
Unfortunately it's not this simple, because most non-trivial software is written atop a dependency tree, in any node of which vulnerabilities (or performance problems) may be discovered; when those are patched, the fix triggers update cascades through the tree.
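The cascade is just a reverse-dependency traversal: patching one node forces a re-test or re-release of everything that transitively depends on it. A small sketch with hypothetical package names (the names and graph are invented for illustration):

```python
# Hypothetical dependency graph, expressed as package -> the set of
# packages that depend on it directly ("reverse dependencies").
reverse_deps = {
    "tlslib": {"httplib"},
    "httplib": {"webframework", "cli-tool"},
    "webframework": {"app"},
    "cli-tool": set(),
    "app": set(),
}

def update_cascade(patched: str) -> set:
    """Everything that must be re-tested/re-released when `patched` gets a fix."""
    affected, stack = set(), [patched]
    while stack:
        pkg = stack.pop()
        for dependent in reverse_deps.get(pkg, ()):
            if dependent not in affected:
                affected.add(dependent)
                stack.append(dependent)
    return affected

# A CVE fix in the TLS library at the bottom touches the whole tree.
print(sorted(update_cascade("tlslib")))
```

The deeper a library sits in the tree, the larger its cascade, which is why a single CVE in a foundational dependency generates "maintenance" work across thousands of otherwise-untouched projects.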
You're forgetting that software on its own is basically useless. In order for it to provide value, it has to be operated by a physical machine. All running software is physical, with spinning disks and mechanical relays, electrons being pushed back and forth, and photons flying around. Twitter is not a piece of software; it's a complicated physical system. Software is an abstraction.
Don't forget that users have been trained to be incredibly fault-tolerant as a result of how flaky general software can be. Now that cars are having BSODs, that tolerance may reach new levels, or just evaporate.