Does anyone know why nginx used separate processes for workers, instead of threads? This post makes it sound like threads are the way to go, but presumably nginx had a reason for using processes back in the day.
Share-nothing architectures were deemed more scalable because you don't need synchronization. The trade-off is that you can't share state across workers, like a connection pool. The architecture was also simpler this way. Nginx is also an application server, and this architecture made it "easy" to develop applications on top of it.
Nginx was written in C. Multithreaded code in a language with no safety rails is hard to get right, and so is async code. They probably figured the complexity of doing both async and multithreading outweighed benefits they predicted would be small. Rust's type system checks for and prohibits many kinds of mistakes that are possible in multithreaded and async code, so it's much easier to combine them safely.
Wikipedia says Nginx started in 2004. If you look at the state of threads and other "lots of sockets to deal with" approaches in that timeframe, multiprocess was probably the safer choice, especially if you intended to run well on a variety of Unix and Unix-like OSes. This page captures that state pretty well: http://www.kegel.com/c10k.html
It was easier to develop, easy to do zero-downtime graceful restarts and reloads, and the tooling available today wasn't available back then. Also, SMP in FreeBSD 5.x was, uhm, very bad...