I suspect most of HN hasn't ever worked in a compliance-driven, regulatory environment. They've never had to deal with the possibility of audits by government, insurance, or financial regulators.
This is the world I live in - building pipelines for financial and health care giants. I agree with this article 100%. Not only can continuous delivery provide regulatory compliance consistency, but also more traditional human-bureaucracy approaches cannot provide it. There are too many potential leaky spots, and far too much potential for human error - in particular, people signing off on things they don't understand. And this includes compliance auditors!
And when someone asks you for proof that some check that you happen to have automated was run, you get to ask "and how many reams of paper's worth of proof would you like?" instead of saying "umm, uhhh..." and rummaging around trying to figure out what you can send.
I love being able to show the lines of code that say "This code cannot advance to a prod-deployable state if the tests haven't passed, because here's where it gets rejected".
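To make that concrete, here's the shape of the thing as a toy sketch rather than anything from a real pipeline (the build dict, promote_build, and the "test_status" field are all made up for illustration):

    # Hypothetical promotion gate: a build can only move to the
    # prod-deployable stage if its recorded test run passed.
    def promote_build(build: dict) -> str:
        if build.get("test_status") != "passed":
            # This is the line you point the auditor at: a failed or missing
            # test run makes promotion impossible, not merely frowned upon.
            raise RuntimeError(f"build {build['id']} rejected: tests did not pass")
        return f"build {build['id']} marked prod-deployable"

    if __name__ == "__main__":
        print(promote_build({"id": "1234", "test_status": "passed"}))
        print(promote_build({"id": "1235", "test_status": "failed"}))  # raises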
The problem is that you're often showing those lines of code to someone who became an auditor because they're incapable of authoring or even understanding what those lines of code are doing.
Or you're showing them to technical management, architects, etc., who are quite capable of understanding their importance and can see how those lines benefit them, the ones on the front lines for compliance issues.
Your snarkiness aside, that’s why it’s important to find an auditor that understands your business. Traditional auditors understand traditional compliance regimes. If you want to use innovative compliance methodology, like relying on infrastructure as code for instance, then you need an auditor who understands those practices.
Take PCI for example. A couple of years ago they released a document called Cloud Computing Guidelines, which was basically guidance for compliance through software-defined networking and that sort of stuff. As a PCI QSA (auditor), you may not ever need to know about these guidelines, or that they even exist. But if you want to work with businesses that rely on them, then you'll need to, and if you want to implement them in your organisation, then you'll need to find a QSA that understands them.
Auditing is a partnership. Truth be told, as an engineer, you can probably get things past the auditor with some judicious lies and by socially engineering some confidence. I don't even think it would be particularly hard.
But I don't envy you when you get caught. The consequences basically start at losing your job and any good references from it, and get progressively worse from there depending on the kind of audit. I don't intend to become more informed about the consequences, personally.
(I've only waded into this world, but my most likely future trajectory is only up, up, up.)
>Not only can continuous delivery provide regulatory compliance consistency, but also more traditional human-bureaucracy approaches cannot provide it.
Ehhh... I work in a regulated environment, and think I disagree. The reason I don't know if I do is because it's really a question of definitions.
By this article's definition, developing with our super-manual bureaucratic compliance process of 15 years ago was CD (and probably further back too; I've never had to dig that far in the archives). The things we did in the name of disaster recovery, controllability, and traceability fit the provided definition of "deployment pipeline."
At the very same time, I think this article is trying to say our highly automated tool-assisted compliance processes of today are not "continuous compliance" because we have manual stakeholder synchronization gates and quality-enforcing review gates on the way to production.
>> What are the goals of Regulatory Compliance?
All of the regulatory regimes that I have seen are, in essence, focussed on two things: 1) Trying to encourage a professional, high-quality, safe approach to making change. 2) Providing an audit-trail to allow for some degree of oversight and problem-finding after a failure.
No and no. The goal of regulatory compliance is to avoid liability. You maintain professional standards because that helps keep everyone safe, but you obey the rules to avoid the punishment associated with not obeying them.
This really matters when regulatory compliance conflicts with good judgement. Sometimes you do the bad thing because the bad thing is mandated in the rules. If a regulation says that you have to have "antivirus software" on the machine then you have antivirus software on the machine.... even if no antivirus software exists for that machine. You shoehorn something because the rules say you need it. You don't do this to increase security. You do it because the lawyers tell you that not doing it will get you sued.
> No and no. The goal of regulatory compliance is to avoid liability
But the way you avoid liability in this scenario is by implementing controls that reduce risk. You can fudge the audit if you want to. But for most regulated industries, if you have a major incident, you’ll have an investigation. If that finds your controls actually weren’t in place effectively, you can lose that protection, regardless of the outcome of the audit. The PCI SSC will tell you that a merchant that has been breached has never been found to be compliant at the time of the breach, and that’s probably true.
Which leads to things like anti-virus software being installed and configured to not run. Or anti-virus software being selected based on the ability of the software to be so configured.
The most common issue is normally password management. Lots of little regs say things like "passwords must be changed every 30 days", which might have made sense decades ago but is very anachronistic today. (I have an alert at the bottom of my screen right now saying my password expires next week.)
In general I agree -- but it means your pipelines will have to support every customer you deliver to and be version-aware.
Part 11 of the FDA regulations that oversee software compliance enforces expensive re-validation for every major release of software. It's not something your customers are going to want to do very often. So set up a CD pipeline for each customer on version X.y.z where X is static, and make sure that you don't accidentally ship a backwards-incompatible major-version change on that pipeline.
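As a rough sketch of the sort of guard I mean (VALIDATED_MAJOR and check_release_version are hypothetical names, not anything FDA-specific):

    # Hypothetical guard for a per-customer pipeline pinned to major version X:
    # refuse to ship anything whose major version differs from the validated one.
    VALIDATED_MAJOR = 3  # the X that this customer's validation covered

    def check_release_version(version: str) -> None:
        major = int(version.split(".")[0])
        if major != VALIDATED_MAJOR:
            raise RuntimeError(
                f"version {version} is outside the validated major line "
                f"{VALIDATED_MAJOR}.y.z; a new validation cycle is required"
            )

    check_release_version("3.7.2")   # fine
    check_release_version("4.0.0")   # raises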
It's an interesting challenge for operations-focused teams but I agree that CD is a valuable tool to have.
This is a major problem and one of the artifacts of the reg not having been updated since 1997. The industry expectation for the validation process is for it to take four to six months, and it's specific to one version. We have 99.99% of our software requirements specification testing covered by several thousand automated tests that run in several minutes. Nonetheless, auditors have been wary of frequent updates, even though by their own admission our test strategy is far superior to any manual testing strategy and, critically, is the only realistic way to monitor for regressions specific to our usage of software that the vendor might not catch.
I don't disagree. The validation process is hilariously outdated as are the compliance requirements for software vendors. But it's ambiguous enough in places that we can adapt some modern operations tooling and practices to it.
In my experience, putting CD into place makes a significant improvement to the SDLC only when automated unit tests are a part of that life cycle. I have never seen the benefit of implementing a system like Jenkins if the only automated testing portion is 'does it compile without error? Yes? Then, good to go!'. Without the automated unit tests it just doesn't seem like a worthwhile endeavor to me.
Unit tests are hardly the only kind of testing needed. First, they're completely decontextualized (hopefully), so they don't show the code working in any sort of broader context. Second, they only show the code is consistent with itself, not consistent with requirements. More to the point, unit tests are tests by programmers for programmers, rather than by business for business.
For any mildly complex modern app, you need integration tests to show context, and behavioral/functional tests (Cucumber) to show the code actually does what it's supposed to do according to the customer.
> they don't show the code working in any sort of broader context
If you’re testing a REST interface, then how much additional context do you need? They’re supposed to be stateless, so they can be tested in isolation just fine, because that’s how they’re supposed to operate. Or does it stop being a unit test once your test includes DB interaction?
For your typical web app, front end testing is the primary place you need anything more complicated, and even then you could debate how much of that should be manual.
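To be clear about what I mean by testing in isolation, here's a toy sketch; the get_account handler is made up, and a real endpoint would sit behind a framework, but the shape is the same: request in, response out, no hidden state.

    import unittest

    def get_account(request: dict) -> dict:
        # Pretend REST handler: no session, no hidden state.
        if "account_id" not in request:
            return {"status": 400, "body": {"error": "account_id required"}}
        return {"status": 200, "body": {"account_id": request["account_id"]}}

    class GetAccountTest(unittest.TestCase):
        def test_missing_id_is_rejected(self):
            self.assertEqual(get_account({})["status"], 400)

        def test_known_id_is_returned(self):
            resp = get_account({"account_id": "42"})
            self.assertEqual(resp["body"]["account_id"], "42")

    if __name__ == "__main__":
        unittest.main()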
Do you test timeouts? Connection drops? REST itself may be stateless, but it exists in a complex and failure-prone environment with numerous moving parts. What's the routing like between interfaces? Are there proxies? Do you control their configuration? Are your certs properly managed?
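For what it's worth, even failure modes like that can be automated. Here's a rough stdlib-only sketch of a timeout test against a server that accepts and then goes silent (toy code, not a substitute for exercising your real HTTP client and its configuration):

    import socket
    import threading
    import time
    import unittest

    def silent_server(ports, ready):
        # Accepts one connection, reads the request, then says nothing.
        srv = socket.socket()
        srv.bind(("127.0.0.1", 0))
        srv.listen(1)
        ports.append(srv.getsockname()[1])
        ready.set()
        conn, _ = srv.accept()
        conn.recv(1024)
        time.sleep(5)  # longer than the client is willing to wait
        conn.close()
        srv.close()

    class TimeoutBehaviourTest(unittest.TestCase):
        def test_client_gives_up_instead_of_hanging(self):
            ports, ready = [], threading.Event()
            threading.Thread(target=silent_server, args=(ports, ready), daemon=True).start()
            ready.wait()
            client = socket.create_connection(("127.0.0.1", ports[0]), timeout=0.5)
            client.sendall(b"GET / HTTP/1.1\r\nHost: x\r\n\r\n")
            with self.assertRaises(socket.timeout):
                client.recv(1024)
            client.close()

    if __name__ == "__main__":
        unittest.main()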
"Unit tests should be sufficient" is a fancy way of saying "Throw it over the wall and pray it works".
Where’s this “throw it over the wall” stuff coming from? I write tests, if I push and my pipeline fails, I have to fix it. I’m rostered on call, if my app breaks, I have to get up in the middle of the night.
For a simple web app (which is probably what most engineers on HN work on), unit testing is most of the testing you probably need to do (unless you think that a unit test that involves a DB interaction becomes an integration test). Front end code needs a little bit more than that, but automating front end tests is not particularly reliable to begin with, so you could really debate how much of that should be done by hand anyway. Some people swear by BDD frameworks, but that’s how you end up with a 4 hour test pipeline, without getting all that much additional confidence in your work.
The original discussion is about working in regulated environments like health care and banking, not youtubeforcats.com. BDD frameworks and four hour test pipelines can start sounding pretty good after things reach a certain level of complexity.
I used to work on a federal banking system that moved a hundred billion dollars a DAY. Believe me, unit testing is not sufficient for that kind of world.
I disagree. My own company in the very early days had an immense boost of productivity without tests, just by building the damn thing and failing loudly. Once we got everything wired up and errant semi-colons were blocking merges, testing was a logical next-step.
Writing code without thinking about tests is a great way to make writing tests a HUUUUUUUUUGE pain in the ass. Thus ensuring they never get written, and everyone is happy :)
Yes, it was a pain for everyone when we had no tests in the early days (REALLY early). My main point to the OP is that you still get benefits with CI/CD for compiled projects if you only build during integration. We now have a very healthy test suite and workflow 2.5 years later.
It's ok to start small and not boil the ocean immediately when jumping into modern CI/CD workflows.
Even minimal CD is better than doing (or trying to do) the same thing 'by hand'. And adding automated tests is going to be easier with even the crudest CI/CD setup. Some organizations haven't even taken the first steps towards adopting good software development practices:
1. Version control
2. 'Ticket' system
3. Automated builds
4. Automated deployments
5. Automated tests
6. Code review
7. ...
We went from deploying every 6 months with fingers crossed, hunting for db changes that someone had forgotten about somewhere, to deploying every 2 weeks, just because of automated builds that said "this does not compile". There is an insane amount of value in CD to a test environment, merging changes multiple times a day, and then having that deploy promoted to acceptance/prod with one push of a button. Keep in mind we barely have code coverage above 5%, and we didn't have many tests to start with.
It starts to pay off as soon as you have more than one developer on the project. Even with one person building something, setting up an automated build and deploy has a lot of value when someone else has to make changes to the project. We have a bunch of small projects with Jenkins and TeamCity set up. I can jump into a project I've never touched before, make a fix, and not have to figure out how the hell I'm supposed to build it or make a production version out of it.
> In fact it is quite hard to imagine a Pipeline that doesn’t give you access to this information
I have seen multiple pipelines that don’t tell you who manually tested this change - or at least, don’t always tell you. The author here is assuming the existence of a lot of practices already, which makes his pointing to CD as magic look a lot weaker.
We have had a lot of success using InSpec, and for a while Chef Compliance, to validate that the infrastructure elements remain continually compliant.
For instance, you can check a folder remains encrypted or certain ports remain closed in dev and test before promoting your config management to production.
Quite involved to set up, but a big tick for auditors.
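InSpec controls are written in its Ruby DSL, so this isn't what our checks actually look like, but here's a rough Python stand-in for the "these ports must stay closed" kind of control (the port list is illustrative):

    # Rough stand-in for a compliance check: assert that ports which must
    # stay closed are in fact closed before config management is promoted.
    import socket

    PORTS_THAT_MUST_STAY_CLOSED = [23, 3389]  # illustrative list

    def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def check_host(host: str) -> None:
        offenders = [p for p in PORTS_THAT_MUST_STAY_CLOSED if port_is_open(host, p)]
        if offenders:
            raise RuntimeError(f"{host}: ports {offenders} should be closed but are open")

    check_host("127.0.0.1")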
Years back, I was brought in as a consultant for a large online provider of automated loan application systems to provide Chef automation for confirming that the CIS-CAT controls had been properly applied.
This was for a firm that handled over 80% of all online loan applications, from all the major banks, insurance companies, credit unions, etc.... From the moment you typed in the first character of your name to when you hit the final submit button, everything was handled on their site, and proxied through to the customer systems via an iframe or other technology.
I tried to convince them that they should have automated compliance testing and confirmation on a regular and frequent basis (like every thirty minutes), but they refused. They said it wasn't necessary according to their auditor, and they weren't going to open themselves up to additional risk by having the tool run too frequently.
They had quarterly audits, and therefore they wanted the tool to only be run quarterly, and to only be kicked off manually at that.
While we normal human beings might be convinced of the need and desire to always be protected, banks and financial institutions don't think like that. For them, they only ever do the absolute minimum required by law, because anything more just creates more risk.