That definitely helps things work, but it makes it much harder to work out why things aren't working.
Not least because an unexpected cache can make things look like they're working when they're actually broken at the source, and look like they're still broken when you've already fixed them at the source.
"I didn't know that cache existed" isn't because of the difficulty of invalidating the right items, though.
And the occasional cache that keeps things forever is so extra broken that it's not doing that because cache invalidation is hard, it's either a supreme misunderstanding or it's incompetence.
> And the occasional cache that keeps things forever is so extra broken that it's not doing that because cache invalidation is hard, it's either a supreme misunderstanding or it's incompetence.
Working in phone technical support in the early 2000s, I encountered first in CF6 and then at least one J2EE implementation (Websphere, maybe?) where the $^&#ing default was to cache DNS results forever.
The behavior was borderline undocumented, and the setting to fix it was even less well documented. It's like they wanted DNS to not be a thing.
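For anyone hitting this on the JVM today: the knob is the `networkaddress.cache.ttl` security property (historically, with a SecurityManager installed, successful lookups were cached forever by default). A sketch of the fix, assuming the standard `java.security` file location for your JVM:

```properties
# $JAVA_HOME/conf/security/java.security (path varies by JVM version)
# Cache successful DNS lookups for 60 seconds instead of forever:
networkaddress.cache.ttl=60
# Cache failed (negative) lookups only briefly:
networkaddress.cache.negative.ttl=10
```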
You can sometimes perform an invalidation, but it's a manual process and you need to know who to ask. Slack did this when they botched their DNSSEC rollout[1]:
> Our team quickly started contacting major ISPs and operators that run public DNS resolvers, with the request to flush all cached records for slack.com.
DNSSEC is another part of DNS that is still hard to learn.
> It's not the hard kind of cache invalidation. You don't really have to do "invalidation" at all.
One of the points he brings up is negative caching: a failed lookup gets cached, so the resolver won't return an address even once it's available upstream, simply because the negative result is still cached.
Invalidation is definitely a part of it, mostly because you kind of can’t.
“Michael Barr, a well-respected embedded software specialist, spent more than 20 months reviewing Toyota’s source code at one of five cubicles in a hotel-sized room, supervised by security guards, who ensured that entrants brought no paper in or out, and wore no belts or watches. Barr testified about the specifics of Toyota’s source code, based on his 800-page report. Phillip Koopman, a Carnegie Mellon University professor in computer engineering and a safety-critical embedded systems specialist, authored a textbook, Better Embedded System Software, and performs private industry embedded software design reviews – including in the automotive industry. He testified about Toyota’s engineering safety process. Both used a programmer’s derisive term for what they saw: spaghetti code – badly written and badly structured source code.
“Barr testified:
“There are a large number of functions that are overly complex. By the standard industry metrics some of them are untestable, meaning that it is so complicated a recipe that there is no way to develop a reliable test suite or test methodology to test all the possible things that can happen in it. Some of them are even so complex that they are what is called unmaintainable, which means that if you go in to fix a bug or to make a change, you’re likely to create a new bug in the process. Just because your car has the latest version of the firmware — that is what we call embedded software — doesn’t mean it is safer necessarily than the older one….And that conclusion is that the failsafes are inadequate. […]”
It’s always easy to blame the driver. But perhaps they didn’t actually dig deep enough during the investigation. This seems like a pretty obscure fault that a manufacturer would not look to troubleshoot too deeply.
I use BunsenLabs, one of the successors of CrunchBang (there are a couple of others). You can use the vanilla version, built on top of Debian, or add the repositories to Devuan.
Are you asking to compare mutexes and channels?
The article goes through multiple reasons why the Go scheduler has more information about how something should execute than the Linux kernel does.
Reread and then maybe rephrase?
I think the key difference here is that the "init" is actually the process scheduler, and each of those managed processes is actually an "init" for its own children, and so on. Kind of like running a VM inside of a VM inside of a VM inside of a VM, but with less VM-specific baggage and more process-related baggage.