This release contains a neat feature: you can now bind HAProxy to a specific FD opened by its parent process. This means that you can babysit your HAProxy processes underneath a parent process that opens ports and get hitless HAProxy restarts, which I've long desired.
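For anyone unfamiliar with the mechanism, here's a rough sketch (Python rather than HAProxy's actual C, and the names are mine) of what "bind to an FD opened by the parent" means: the parent owns the listening socket, and the child just adopts the FD number, so restarting the child never unbinds the port:

```python
import socket
import subprocess
import sys

# Parent opens and binds the listening socket before starting the child
# (a stand-in for HAProxy here); the child only adopts the FD.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("127.0.0.1", 0))  # port 0: let the kernel pick a free port
sock.listen(128)

# The child is told the FD number and wraps it in a socket object,
# then reports which port it finds itself listening on.
child_code = (
    "import socket, sys\n"
    "s = socket.socket(fileno=int(sys.argv[1]))\n"
    "print(s.getsockname()[1])\n"
)
result = subprocess.run(
    [sys.executable, "-c", child_code, str(sock.fileno())],
    pass_fds=(sock.fileno(),),  # keep the listening FD open across exec
    capture_output=True, text=True,
)
child_port = int(result.stdout)
# child_port equals the port the parent bound: same socket, same FD.
```

Because the parent keeps the socket open the whole time, the kernel keeps queueing incoming connections during a child restart instead of refusing them.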
Have you looked at einhorn? We've been running HAProxy under einhorn for a while now, using something like https://gist.github.com/ebroder/36b2f4f3aa210b9d9f3d to translate between HAProxy's and einhorn's signalling mechanisms.
Er, yes, hi Evan! Cooper here. I believe either you or Andy originally pointed this interesting HAProxy restart behavior out to me, in the context of explaining why you wrote Einhorn.
This will also allow HAProxy to be brought into the fold if you're using Circus (http://circus.readthedocs.org/en/0.11.1/) for process monitoring. Probably not a big win in reliability, but I bet there are plenty of other things you can do with that (like the parent mentioned).
Bud ( https://github.com/indutny/bud ) supports this kind of hot config reload and process restart. It simply starts new worker processes on SIGHUP and lets the old workers drain until all of their connections are closed.
This is basically done by moving the balancing between workers into the master, instead of calling `accept()` concurrently from the workers.
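A minimal sketch of that master-side balancing (in Python, not bud's actual implementation; the helper names are mine): the master is the only process that calls `accept()`, and it hands each connected FD to a worker over a Unix-domain socket using SCM_RIGHTS ancillary data:

```python
import array
import socket

def send_fd(chan: socket.socket, fd: int) -> None:
    # Ship one file descriptor to the peer as SCM_RIGHTS ancillary data.
    chan.sendmsg([b"x"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS,
                           array.array("i", [fd]))])

def recv_fd(chan: socket.socket) -> int:
    # Receive one file descriptor; the kernel installs it as a new FD
    # in the receiving process.
    _, ancdata, _, _ = chan.recvmsg(1, socket.CMSG_SPACE(array.array("i", [0]).itemsize))
    return array.array("i", ancdata[0][2])[0]

# Demo: "master" accepts a connection and passes it to a "worker" channel.
master_chan, worker_chan = socket.socketpair()
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)

client = socket.create_connection(listener.getsockname())
conn, _ = listener.accept()          # only the master accept()s
send_fd(master_chan, conn.fileno())  # ... then hands the FD to a worker

worker_conn = socket.socket(fileno=recv_fd(worker_chan))
client.sendall(b"ping")
data = worker_conn.recv(4)           # the worker serves the connection
```

Since workers never touch the listening socket, the master can start new workers and stop routing to old ones without the listener ever going away.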
No, not really: for a short period of time you're in a state where the previously configured instance has already stopped working and the future one isn't working yet.
This is generally correct. In particular, when the HAProxy process is stopped/restarted there is a brief window during which the port is bound by neither process. (If the new process can't get the socket when it boots, it will sleep ~XXms, then retry bind/listen in a loop until it either succeeds or hits a retry threshold.) During this window the kernel will reject incoming connections to the HAProxy port, so you are in danger of dropping incoming requests on the ground.
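That retry loop looks roughly like this (a Python sketch; the real HAProxy code is C, and the sleep interval and retry threshold here are illustrative placeholders, not HAProxy's actual values):

```python
import errno
import socket
import time

def bind_with_retry(addr, retries=10, delay_s=0.05):
    # retries and delay_s are made-up illustrative values; HAProxy's
    # actual interval and threshold differ.
    for attempt in range(retries):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind(addr)
            s.listen(128)
            return s  # got the port; start serving
        except OSError as e:
            s.close()
            if e.errno != errno.EADDRINUSE or attempt == retries - 1:
                raise  # give up: unexpected error or threshold hit
            # Old process may not have released the port yet; wait and retry.
            time.sleep(delay_s)
```

Every iteration spent in that sleep is time during which the kernel is refusing connections, which is exactly the window being described.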
I feel I must point out that this relies on your clients to retry packets that iptables drops, so at the least they'll have a slower experience than they otherwise would. I believe that this will also break requests that are in flight when the reload happens, if they have sent partial data.
Yeah, it feels like a dirty, dirty hack. The way I understood it, the in-flight requests would retry the same way new connections do once the first iptables rule is applied.
I think what will actually happen to requests in flight is:
- partial data received by old HAProxy is lost as old HAProxy exits
- new HAProxy comes online, binds to port, receives fd
- iptables rule removed. new HAProxy starts receiving new requests
- in-flight requests on the old HAProxy's connections are reset by the kernel (TCP RST), since nothing is left to read request data from the old fd or send response data.
So I think this is actually "worse" in some sense than the other retry behavior since it's not recovered inside the same TCP session but instead forces the client to open a new TCP session.