Hacker News

In my experience, putting CD into place makes a significant improvement to the SDLC only when automated unit tests are a part of that life cycle. I have never seen the benefit of implementing a system like Jenkins if the only automated testing portion is 'does it compile without error? Yes? Then, good to go!'. Without the automated unit tests it just doesn't seem like a worthwhile endeavor to me.


Unit tests are hardly the only kind of testing needed. First, they're completely decontextualized (hopefully), so they don't show the code working in any sort of broader context. Second, they only show the code is consistent with itself, not consistent with requirements. More to the point, unit tests are tests by programmers for programmers, rather than by business for business.

For any mildly complex modern app, you need integration tests to show context, and behavioral/functional tests (Cucumber) to show the code actually does what it's supposed to do according to the customer.


> they don't show the code working in any sort of broader context

If you’re testing a REST interface, how much additional context do you need? They’re supposed to be stateless, so they can be tested in isolation just fine, because that’s how they’re supposed to operate. Or does it stop being a unit test once your test includes DB interaction?
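
A sketch of the point being made (all names here are illustrative, not from any real codebase): if the handler is genuinely stateless, it can be exercised as a plain function, with the DB swapped for an in-memory stand-in.

```python
# Hypothetical stateless REST handler: output depends only on its inputs,
# so it can be tested in isolation with no server and no shared state.

def get_user(user_id, db):
    """Return a response dict for GET /users/<user_id>."""
    user = db.get(user_id)
    if user is None:
        return {"status": 404, "body": {"error": "not found"}}
    return {"status": 200, "body": user}

def test_get_user_found():
    fake_db = {42: {"id": 42, "name": "Ada"}}  # dict standing in for the DB
    resp = get_user(42, fake_db)
    assert resp["status"] == 200
    assert resp["body"]["name"] == "Ada"

def test_get_user_missing():
    assert get_user(7, {})["status"] == 404

test_get_user_found()
test_get_user_missing()
```

Whether the same test still counts as a "unit" test once `db` is a real connection is exactly the definitional question above.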

For your typical web app, front end testing is the primary place you need anything more complicated, and even then you could debate how much of that should be manual.


Do you test timeouts? Connection drops? REST itself may be stateless, but it exists in a complex and failure-prone environment with numerous moving parts. What's the routing like between interfaces? Are there proxies? Do you control their configuration? Are your certs properly managed?
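
Those failure modes can be simulated in tests rather than discovered in production. A minimal sketch (function and URL names are made up for illustration), faking a flaky transport so the fallback path runs deterministically:

```python
# Simulate a timeout with a mock transport instead of waiting for one
# to happen in production. Names here are illustrative assumptions.
import socket
from unittest import mock

def fetch_with_fallback(url, transport, default=None):
    """Return the response body, or `default` if the call times out."""
    try:
        return transport(url, timeout=2.0)
    except socket.timeout:
        return default

def test_timeout_returns_fallback():
    # Mock raises socket.timeout on every call, as a dropped/slow peer would.
    flaky = mock.Mock(side_effect=socket.timeout("timed out"))
    assert fetch_with_fallback("https://api.example/users", flaky,
                               default="cached") == "cached"
    flaky.assert_called_once()

test_timeout_returns_fallback()
```

Proxy, routing, and cert problems need environment-level checks, but timeout and connection-drop handling at least is cheap to cover this way.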

"Unit tests should be sufficient" is a fancy way of saying "Throw it over the wall and pray it works".


Most of that sounds more like application performance monitoring than pipeline testing.


Most of that sounds like things that can be tested for, rather than just tossing code over the wall and waiting to see if it catches fire.


Where’s this “throw it over the wall” stuff coming from? I write tests, if I push and my pipeline fails, I have to fix it. I’m rostered on call, if my app breaks, I have to get up in the middle of the night.

For a simple web app (which is probably what most engineers on HN work on), unit testing is most of the testing you probably need to do (unless you think that a unit test that involves a DB interaction becomes an integration test). Front end code needs a little bit more than that, but automating front end tests is not particularly reliable to begin with, so you could really debate how much of that should be done by hand anyway. Some people swear by BDD frameworks, but that’s how you end up with a 4 hour test pipeline, without getting all that much additional confidence in your work.
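
For what it's worth, the DB-interaction case in parentheses can stay fast and hermetic either way. An illustrative sketch (schema and query are invented for the example), using an in-memory SQLite DB so each test gets a fresh database:

```python
# Whether this is a "unit" or "integration" test is exactly the debate above.
# An in-memory SQLite DB keeps it fast, hermetic, and free of shared state.
import sqlite3

def count_active_users(conn):
    return conn.execute(
        "SELECT COUNT(*) FROM users WHERE active = 1").fetchone()[0]

def test_count_active_users():
    conn = sqlite3.connect(":memory:")  # fresh DB per test
    conn.execute("CREATE TABLE users (name TEXT, active INTEGER)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [("ada", 1), ("bob", 0), ("eve", 1)])
    assert count_active_users(conn) == 2

test_count_active_users()
```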


The original discussion is about working in regulated environments like health care and banking, not youtubeforcats.com. BDD frameworks and four hour test pipelines can start sounding pretty good after things reach a certain level of complexity.

I used to work on a federal banking system that moved a hundred billion dollars a DAY. Believe me, unit testing is not sufficient for that kind of world.


Even without tests, you still get automatic traceability: which version was deployed when, and which source commit(s) it was built from.

I agree that much of the benefit doesn't apply, but even having the traceability makes some auditors happy.


I disagree. In the very early days, my own company got an immense productivity boost without tests, just by building the damn thing and failing loudly. Once we got everything wired up and errant semi-colons were blocking merges, testing was a logical next step.


Writing code without thinking about tests is a great way to make writing tests a HUUUUUUUUUGE pain in the ass. Thus ensuring they never get written, and everyone is happy :)


Yes, it was a pain for everyone when we had no tests in the early days (REALLY early). My main point to the OP is that you still get benefits with CI/CD for compiled projects if you only build during integration. We now have a very healthy test suite and workflow 2.5 years later.

It's ok to start small and not boil the ocean immediately when jumping into modern ci/cd workflows.


Even minimal CD is better than doing (or trying to do) the same thing 'by hand'. And adding automated tests is going to be easier with even the most rudimentary CI/CD setup. Some organizations haven't even taken the first steps towards adopting good software development practices:

1. Version control

2. 'Ticket' system

3. Automated builds

4. Automated deployments

5. Automated tests

6. Code review

7. ...


We went from deploying every 6 months with fingers crossed, hunting for DB changes that someone forgot he had somewhere, to deploying every 2 weeks, just because of automated builds that said "this does not compile". There is an insane amount of value in CD to a test environment, merging changes multiple times a day, and then having that deploy promoted to acceptance/prod with one push of a button. Keep in mind we barely have code coverage above 5%, and we did not have many tests starting out.

It starts to pay off as soon as you have more than 1 developer on the project. Even with one person building something, setting up automated build and deploy adds a lot of value when someone else has to make changes to the project. We have a bunch of small projects with Jenkins and TeamCity set up. I can jump into a project I've never touched before, make a fix, and I don't have to figure out how the hell I'm supposed to build it or produce a production version of it.



