Thank you for sharing your story. I hope you don't mind if I turn it into a thought experiment, because I find it a good example of how tricky finding and fixing a problem can be. I know nothing about water quality standards, so they might be naive, but the following questions stick out to me:
* Is there a reasonable contamination hazard from water sitting in pipes that are flushed daily with chlorinated water? In other words, is it vital that there always be residual chlorine 100% of the time, or were the standards designed to accommodate this sort of periodic lapse?
* If the dips were harmless, would the inspectors be willing to accept that?
* Was the dump solenoid effective at flushing the entire network of pipes, or just the branch containing the sensor?
Depending on the answers, the end result could range all the way from "the new system let us eliminate a serious hazard", through "we were probably fine before but now we can be sure at a little extra cost", down to "now we have to waste water to avoid tripping a sensor so we don't get fined, with no actual improvement to the water quality". It's a great example of how what we want, what we test, and what we enforce can get just a little out of alignment, and make a hindrance out of what ought to be a definite improvement. Thanks again for sharing.
Caveat: I come from the monitoring and control system side, so I've only learned about the actual water treatment part as a side effect of building those systems.
1)
My understanding is there is very little actual hazard here, provided the pipes are in reasonable shape and there isn't a source of contamination.
One example of a source of contamination: dead ends in plumbing (e.g., an old branch that's been capped off). The stagnant water can grow bacterial colonies which can then contaminate everything downstream from where the dead end branches off. The residual chlorine can fight this, but it's of course better to not have the dead end at all.
2)
The problem with the dips is it's not possible to distinguish between the "expected" nightly dips and real problems.
For example: do you just ignore all alarms between 3am and 6am? Or should that be 7am? Is there another check (+alarm) to be sure the clock is correct (and do you now need a secondary clock source for that), and that DST is respected?
Or do you build something very complex that checks against recent flow rates before raising the alarm? In that case, how do you test that code and ensure it never breaks, keeping in mind it's safety-related and it's very hard to unit test real-world flow meters, chlorine sensors, etc.? This gets difficult because different plumbers / maintenance people may do things (adding branches, fixing leaks, etc.) that change the physical layout, without ever realizing there is control software that might be affected by the changes. You can add more sensors to try to check for some of these potential conditions, but each sensor costs even more money and adds more complexity.
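To make the first option concrete, the naive time-window suppression might be sketched like this (the hours are made up, and the hard-coding is exactly the fragile part: a DST shift, a schedule change, or a wrong clock silently turns real problems into "expected" dips):

```shell
# Hypothetical sketch: suppress low-chlorine alarms during the expected
# nightly flush window (03:00-05:59); everything outside the window alarms.
should_alarm() {
    hour=$1  # hour of day, 0-23
    if [ "$hour" -ge 3 ] && [ "$hour" -lt 6 ]; then
        echo "suppressed"
    else
        echo "alarm"
    fi
}
should_alarm 4    # prints "suppressed"
should_alarm 14   # prints "alarm"
```

Even this trivial version raises the questions above: who updates the window when the flush schedule moves, and what guarantees the clock feeding `hour` is right?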
A lot of this is really "CYA". Suppose something did happen and a resident got sick (or worse) from the water, and it came to light that not only were there alarms every night but they were specifically suppressed. Even if suppressing them was a rational decision given the facts, at best that decision would face a lot of scrutiny, and at worst it could be considered criminally negligent.
3)
It's basically not possible to flush the ENTIRE building, because you'd have to open every fixture (every sink and shower, and flush every toilet). In this case, the solenoid dumped water from what was effectively the main line through the building, which everything branched from, so from any apartment, running the water for maybe a minute would get you fresh water from that main line.
So I'd rank this in the middle of what you said: "quality was probably (usually) fine before, definitely more likely to be fine now, and if an alarm goes off at any time it's real".
> Not to mention the fact that Homebrew itself uses the system git to install itself.
To me, this is the biggest problem, and it's not just Homebrew. Any source package manager that uses Git will potentially have this problem. With a vulnerable Git on your system, you have to second-guess every build script you ever run that might make use of Git, to make sure it obeys the path you set instead of choosing its own.
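As a quick illustration of why "obeys the path you set" matters (the stub directory and version string here are invented for the demo), whichever `git` appears first in PATH is the one a plain `git` invocation runs:

```shell
# Hypothetical demo: a stub "git" placed earlier in PATH shadows the
# system one. Any build script that invokes plain "git" runs the stub.
mkdir -p /tmp/fakebin
printf '#!/bin/sh\necho "fake git 99.0"\n' > /tmp/fakebin/git
chmod +x /tmp/fakebin/git

# env rebuilds the environment, then looks "git" up via the new PATH
env PATH="/tmp/fakebin:$PATH" git --version   # prints "fake git 99.0"
```

The flip side is the problem described above: a build script that hard-codes `/usr/bin/git`, or re-derives its own PATH, bypasses your patched version entirely.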
A breaking example might be trying to find lines containing an "=" in a file:
grep = my_file
Also a problem is the syntax for running a program with environment assignments that apply only to the program:
env1=foo env2=bar env3= my_program
Note that under POSIX rules, "env3" here is assigned a zero-length string. Making these sorts of assignments work with spaces around the equal signs would open up a can of worms.
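The set-but-empty behavior can be checked directly; the `${var+word}` expansion distinguishes an empty-but-set variable from an unset one (a minimal sketch in standard POSIX shell):

```shell
# env3= assigns a zero-length string: the variable IS set, just empty.
# ${env3+set} expands to "set" when env3 is set (even to ""), and to
# nothing when it is unset.
result=$(env3= sh -c 'if [ "${env3+set}" = set ]; then echo "set but empty"; else echo "unset"; fi')
echo "$result"   # prints "set but empty"
```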
The problem with this is the limits of human attention. It's hard enough to maintain focus on long drives as it is; if the "enhanced cruise control" takes over the job entirely, the driver will have nothing to do and is likely to stop paying attention to the road at all. Then he'll either miss his chance to take manual control, or do so in a state of panic.
> is there a disadvantage to using a higher blocksize?
Maybe, depending on the details. Imagine reading 4 GB from one disk then writing it all to another, all at 1 MB/sec. If your block size is 4 GB, it'll take 4000 seconds to read, then another 4000 seconds to write... and it will also use 4 GB of memory.
If your block size is 1 MB instead, then the system has the opportunity to run things in parallel, so it'll take 4001 seconds, because every read beyond the first happens at the same time as a write.
And if your block size is 1 byte, then in theory the transfer would take almost exactly 4000 seconds... except that now the system is running in circles ferrying a single byte at a time, so your throughput drops to something much less than 1 MB/sec.
In practice, a 1 MB block size works fine on modern systems, and there's not much to be gained by fine-tuning.
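As a rough illustration (the file name and sizes are arbitrary), this is the kind of copy where, past a sane block size, there's little left to tune:

```shell
# Hypothetical sketch: write 64 MB with a 1 MB block size. The kernel's
# page cache overlaps reads and writes behind the scenes, which is why
# bs=1M is usually enough and fine-tuning beyond it buys little.
dd if=/dev/zero of=/tmp/bs_demo.img bs=1M count=64 2>/dev/null
wc -c < /tmp/bs_demo.img   # 67108864 bytes (64 * 1048576)
```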
This reminds me of assertions we used to take for granted about DRAM. We used to assume that the contents are lost when you cut the power, but then someone turned a can of cold air on a DIMM. We usually assume that bits are completely independent of each other, but then someone discovered the row hammer. The latter is especially interesting because it only works on newer DIMM technology. Technology details change, and it's hard to predict what the ramifications will be. A little extra caution isn't necessarily a bad thing.
I agree, but redoing a wipe isn't extra caution; it's just literally repeating the same thing. If that thing is wrong, you're not helping the situation, just wasting time and resources.
Extra caution would be shredding the drive or some other non-wipe method. At work for example, we zero out drives and then those drives get physically destroyed by a vendor.
I think you're overstating how bad things are. Dreamhost, for example, no longer requires a dedicated IP for SSL, though they do still recommend it for e-commerce. They are charging $15/year for a CA-signed certificate. Granted, that's for a single-site cert and they don't support wildcards under this scenario, but the vacation blogger isn't likely to need that anyway.
The main reason I would think it's a good choice is because if you decide to get a CA certificate later, you just drop it in and you're done; no additional configuration required.
If you don't have a CA certificate, you're probably not advertising your https:// URLs anyway, so unless search engines are aggressively looking for or prioritizing https transport, it wouldn't seem to hurt anything to run a self-signed certificate there.