Maybe bare-metal 4.4BSD? That's the codebase for the Unix personality layer of OS X (which is Mach as a microkernel with a Unix layer on top). Or FreeBSD, as Netflix uses for its first-party servers?
Speaking in general, one reason not to use macOS for servers is that the macOS kernel does not provide all of the necessary APIs for containerization.
A container is a combination of a restricted filesystem (e.g. chroot), separate namespaces (e.g. pids, network, ipc), and resource limits (e.g. cgroups for max RAM and CPU usage). It is a big undertaking to modify a kernel to provide these capabilities.
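Those three building blocks can be poked at directly from a Linux shell. A minimal sketch (the chroot and cgroup steps require root and are shown commented out; a path like /srv/rootfs is hypothetical):

```shell
# 1. Namespaces: every process's namespace memberships are visible
#    under /proc; a container gets fresh entries for most of these.
ls /proc/self/ns          # ipc, mnt, net, pid, user, uts, ...

# Unprivileged demo: a new user+pid namespace where the shell is PID 1.
# unshare --user --map-root-user --pid --fork --mount-proc sh -c 'echo $$'

# 2. Restricted filesystem: give the process its own root directory.
#    (/srv/rootfs would be an extracted root filesystem)
# chroot /srv/rootfs /bin/sh

# 3. Resource limits via cgroup v2: cap a group at 256 MiB of RAM.
# mkdir /sys/fs/cgroup/demo
# echo $((256 * 1024 * 1024)) > /sys/fs/cgroup/demo/memory.max
# echo $$ > /sys/fs/cgroup/demo/cgroup.procs
```

Container runtimes like Docker and systemd-nspawn are, at their core, orchestrating exactly these three kernel facilities.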
A few operating systems have these APIs (or something similar). The ones that I know about off the top of my head are: Solaris Zones, FreeBSD Jails, Linux containers, and Windows Server containers in Process Isolation mode.
The macOS kernel simply doesn't provide these APIs and I doubt that Apple is really interested in putting in the substantial effort to develop them.
They don't need Linux-style containers; they control the OS. The Linux container situation is just a hack around the general API instability of Linux userspace, a problem macOS doesn't have.
For inferencing workloads they also don't need to control max RAM or CPU usage as they can just dedicate the entire machine to handling requests.
And for sandboxing, Apple's sandboxing infrastructure is actually the best of any OS (but mostly private unfortunately).
If you don't mind my asking, do you know whether they intentionally removed support for containers? Their closest cousin, FreeBSD, seems a lot friendlier in terms of support.
Plus, a lot of devs use Macs. Is that not a large enough addressable market for Apple to care about servers?
I'm honestly not really sure how it went down. It's possible that XNU (the macOS kernel) was simply developed before FreeBSD Jails were developed, and they never put in the engineering effort to port over the feature. I don't think there's a fundamental reason why it wouldn't work.
For devs, I think that most devs are okay with the current practice of running a virtualized Linux (or other) guest via Docker and deploying to Linux (or other) servers. macOS does support virtualization. The difference between virtualization and containerization is that a virtualized guest uses its own kernel whereas a containerized guest shares the host kernel.
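That kernel-sharing distinction is easy to observe from the command line. A sketch, assuming Docker Desktop is installed on the Mac (the docker lines are illustrative):

```shell
# On the Mac host, the kernel is Darwin/XNU:
uname -s                            # prints "Darwin" on a Mac

# Inside a container on that Mac, the kernel reported is the hidden
# Linux VM's, not the host's:
# docker run --rm alpine uname -s   # prints "Linux"
# docker run --rm alpine uname -r   # the VM's kernel version

# On a native Linux host, by contrast, the container reports the SAME
# kernel version as the host, because containers share the host kernel:
# uname -r && docker run --rm alpine uname -r
```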
I'll also point you to two comments in this old HN thread which seem to have good information about Apple's use of server operating systems:
IIRC Apple has had a history of using NetBSD for server/infra systems, especially those demanding high network throughput and stability. Examples:
• their ObjC WebObjects-based web backends (such as the iTunes Music Store backend — which might still be ObjC-based to this day? Anyone know?) ran on NetBSD servers
• the firmware in Airport routers was NetBSD
I wouldn't be surprised if there’s an Apple Silicon build of NetBSD, created for internal use at Apple. (Though I also wouldn't be surprised if they've tried to "converge efforts" since then, and have somehow stuck a Darwin userland on top of a NetBSD kernel.)
I would be genuinely surprised, though, if the infra folk at Apple trust the XNU kernel's network stack enough at this point to want to use plain XNU/Darwin for their servers!
---
As an aside, I've always been fascinated by Apple's approach to backend/server technologies, compared to other bigcorps. Despite not being a "UNIX shop" per se, Apple seems to have acquired some engineers somewhere along the way who have a deep understanding of the "UNIX way" to build stuff, and who carry what might even be called "legacy ideas" about system architecture.
You can get a peek into this approach by prying into the insides of the old Server.app. Sort of like how Xcode.app has a BSD buildroot inside it, with the base OS shipping stub binaries that call into it when it's available; Server.app does the same, but with an extended BSD userland containing regular old BSD daemons.
Besides being a wrapper for this BSD-userland payload, Server.app itself was just a configuration wizard and state-converger for a set of plain-old config files, living in a virtual /etc dir (regenerated from canonical plist files), that enabled and drove these venerable daemons to do their thing. The "Websites" feature was just Apache, and the "Wiki", "Calendar/Contacts", etc. features were just symlink-managed PHP-FCGI plugins for that Apache instance. The DNS was just BIND. The VPN feature was mostly racoond[1]. It's exactly what you'd expect from e.g. cPanel on Linux, with no sense of anything "Apple-y" going on.
This UNIX-y approach also seemingly extends to Apple's use of networking protocols. Even while Apple was giving users a proprietary SMB-alike protocol (AFP) to use for local-network file sharing, their internal approach was different: macOS, to this day, ships with a pre-configured "auto-home" mobile user profile feature that expects to talk not to an AFP server, or even an SMB server, but rather to an NFS(!!) server. (You could even set this up for yourself, in the Server.app days, given sufficient understanding of Apple's OpenDirectory + how it integrates with BSD Kerberos.)
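You can still see the plumbing for this in the BSD automounter maps that ship on every Mac. A sketch of a static NFS auto-home entry (the server name and export path are hypothetical; in a real deployment OpenDirectory supplies these records rather than a flat file):

```
# /etc/auto_master — shipped by macOS; /home is handled by the
# auto_home map, populated from the directory service by default:
/home    auto_home    -nobrowse,hidefromfinder

# /etc/auto_home — a hand-written entry for one user, mounting their
# home over NFS (fileserver.example.com is a hypothetical server):
alice    -fstype=nfs,rw,resvport    fileserver.example.com:/exports/home/alice
```

With a record like this in place, /home/alice is mounted on demand the first time anything touches the path.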
I feel like with any tech, the folks who really understand how things work, plus the history, are the ones who can get the most out of it. Because they understand the history and the whys, they know where the bottlenecks are, where the efficiencies are, and so on.
Accordingly, they also know which parts of the "old stuff" contain really good ideas.