> It is a bit weird to see LLMs suddenly being presented as the reason to follow what are basically long standing best practices.
About 95% of the work needed to make LLMs happy is just general-purpose better engineering. Unit tests? Integration tests? CI? API documentation? Good examples? All great for humans too!
I consider this largely a good thing. It would be much worse if the changes needed for Happy LLMs were completely different than what you want for Happy Humans! Even worse would be if they were mutually exclusive.
Reminds me of the "semantic web". Making content machine-readable has positive side effects. (clear structure, reduced ambiguity, separation of data and presentation)
It's another example where the reason for better engineering is to make machines (search engines) happy.
I will admit to being pretty ignorant. But wasm seems like a pretty big swing and a miss. Far too many edge cases and gotchas. It definitely does not “just work”. :(
It's because every 'programmer category' (game devs, backend devs, traditional web devs, ...) wants something different from WASM, and the early hype also didn't help.
For me WASM is "just another ISA" like x86 or ARM, and the web is "just another platform" like Windows, macOS or Linux.
All my hobby C/C++ projects (and some Zig projects) also automatically run in the browser, and 'distributing' a URL is much less painful than distributing native binaries or asking users to build the code themselves.
I am so thoroughly convinced that Docker is a hacky-but-functional solution to an utterly failed userspace design.
Linux user space decided to try and share dependencies. Docker obliterates this design goal by shipping dependencies, but stuffing them into the filesystem as-if they were shared.
If you’re going to do this then a far far far simpler solution is to just link statically or ship dependencies adjacent to the binary (aka what Windows does). Replicating a faux “shared” filesystem is a gross hack.
This is a distinctly Linux problem. Windows software doesn’t typically have this issue. Because programs ship their dependencies and then work.
Docker is one way to ship dependencies. So it’s not the worst solution in the world. But I swear it’s a bad solution. My blood boils with righteous fury anytime anyone on my team mentions they have a 15 minute docker build step. And don’t you damn dare say the fix to Docker being slow is to add more layers of complexity with hierarchical Docker images ohmygodiswear. Running a computer program does not have to be hard I promise!!
Okay, so what's the best solution? What's even just a better solution than Docker? I mean really truly lay out all the details here or link to a blog post that describes in excruciating detail how they shipped a web application and maintained it for years and was less work than Docker containers. Just saying "a far far simpler solution is to just link statically or ship dependencies adjacent to the binary" is ignoring huge swaths of the SDLC. Anyone can cast stones, very few can actually implement a better solution. Bring the receipts.
The first half of my career was spent shipping video games. There is no such thing as shipping a game in Docker. Not even on Linux. You depend on minimum version of glibc and then ship your damn dependencies.
The more recent half of my career has been more focused on ML and now robotics. Python ML is an absolute clusterfuck. It is close to getting resolved with uv and Pixi. The trick there is to include your damn dependencies… via symlink to a shared cache.
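The mechanism itself is tiny. Here's a toy sketch of the symlink-to-shared-cache trick (hypothetical layout, nothing to do with uv's or Pixi's actual on-disk format): each unique artifact lands in a content-addressed cache exactly once, and every environment just symlinks into it.

```python
import hashlib
import os
import tempfile

def install(cache_dir: str, env_dir: str, name: str, content: bytes) -> str:
    """'Install' a package by storing it once in a shared, content-addressed
    cache and symlinking it into the environment."""
    digest = hashlib.sha256(content).hexdigest()[:16]
    cached = os.path.join(cache_dir, f"{name}-{digest}")
    if not os.path.exists(cached):          # each unique build is stored once
        with open(cached, "wb") as f:
            f.write(content)
    link = os.path.join(env_dir, name)
    os.symlink(cached, link)                # the env just points at the cache
    return link

# Two environments "install" the same package; the bytes exist once on disk.
cache = tempfile.mkdtemp()
env_a, env_b = tempfile.mkdtemp(), tempfile.mkdtemp()
a = install(cache, env_a, "numpy", b"fake wheel bytes")
b = install(cache, env_b, "numpy", b"fake wheel bytes")
```

You get the "include your damn dependencies" property per environment, while the disk only ever holds one copy per unique build.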
Any program or pipeline that relies on whatever arbitrary ass version of Python is installed on the system can die in a fire.
That’s mostly about deploying. We can also talk about build systems.
The one true build system path is a monorepo that contains your damn dependencies. Anything else is wrong and evil.
I’m also spicy and think that if your build system can’t cross-compile then it sucks. It’s trivial to cross-compile for Windows from Linux because Windows doesn’t suck (in this regard). It’s almost impossible to cross-compile to Linux from Windows because Linux userspace is a bad, broken, failed design. However Andrew Kelley is a patron saint and Zig makes it feasible.
Use a monorepo, pretend the system environment doesn’t exist, link statically/ship adjacent so/dll.
Docker clearly addresses a real problem (that Linux userspace has failed). But Docker is a bad hack. The concept of trying to share libraries at the system level has objectively failed. The correct thing to do is to not do that, and don’t fake a system to do it.
Windows may suck for a lot of reasons. But boy howdy is it a whole lot more reliable than Linux at running computer programs.
Given that distributions are the distributors of packages and not the upstream developers, I think static linking is fine, as is dep-shipping. The now-dead Clear Linux was great at handling package distribution.
Personally, I think docker is dumb, so is AppImage, so is FlatPak, so are VMs… honestly, it’s all dumb. We all like these various things because they solve problems, but they don’t actually solve anything. They work around issues instead. We end up with abstractions and orchestrations of docker, handling docker containers running inside of VMs, on top of hardware we cannot know, see, control, or inspect. The containers are now just a way to offer shared hosting at a price premium with complex and expensive software deployment methods. We are charged extortionate prices at every step, and we accept it because it’s convenient, because these methods make certain problems go away, and because if we want money, investors expect to see “industry standards.”
Yeah, but if the problem you are solving is rare for most practitioners, effectively theoretical until it actually happens, then people won't switch until they get bit by that particular problem.
But they’re roughly the same paradigm as docker, right? My understanding of the Nix approach is that it’s still reproducing most of a user land/filesystem in a captive/separate/sandbox environment. Like, docker is using namespaces for more stuff, Nix has a heavier emphasis on reproducibility/determinism, but … they’re both still throwing in the towel on deploying directly on the underlying OS’s userland (unless you go all the way to nixOS) and shipping what amounts to a filesystem in a box, no?
I daily drive NixOS. I don't have a global "userland". Packages are shipped from upstream and pull in the dependencies they need to function.
That means unlike Gentoo, I've never dealt with a "slot conflict" where two packages want conflicting dependencies. And unlike Ubuntu, I have new versions of everything.
Pick two: shared dependencies, the bleeding edge, or not wasting your time resolving conflicts.
Yeah, nix is great for this. Also I can update infrequently and still package anything I want bleeding-edge without any big issues other than maybe some building from source.
> But they’re roughly the same paradigm as docker, right?
Absolutely not. Nix and Guix are package managers that (very simplified) model the build process of software as pure functions mapping dependencies and source code as inputs to a resulting build as their output. Docker is something entirely different.
> they’re both still throwing in the towel on deploying directly on the underlying OS’s userland
The existence of an underlying OS userland _is_ the disaster. You can't build a robust package management system on a shaky foundation, if nix or guix were to use anything from the host OS their packaging model would fundamentally break.
> unless you go all the way to nixOS
NixOS does not have a "traditional/standard/global" OS userland on which anything could be deployed (excluding /bin/sh for simplicity). A package installed with nix on NixOS is identical to the same package being installed on a non-NixOS system (modulo system architecture).
> shipping what amounts to a filesystem in a box
No. Docker ships a "filesystem in a box", i.e. an opaque blob, an image. Nix and Guix ship the package definitions from which they derive what they need to have populated in their respective stores, and either build those required packages or download pre-built ones from somewhere else, depending on configuration and availability.
With docker two independent images share nothing, except maybe some base layer, if they happen to use the same one. With nix or Guix, packages automatically share their dependencies iff it is the same dependency. The thing is: if one package depends on lib foo compiled with -O2 and the other one depends on lib foo compiled with -O3, then those are two different dependencies. This nuance is something that only the nix model started to capture at all.
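That "same inputs, same dependency; different inputs, different dependency" rule is small enough to sketch. This is a toy model only (a hypothetical `store_path` function, not how Nix actually computes hashes): a package's identity is a hash of everything that went into building it.

```python
import hashlib

def store_path(name: str, inputs: dict) -> str:
    """Toy model of the Nix idea: a package's store path is derived from a
    hash of all of its build inputs, so 'same library, different compile
    flags' is simply a different dependency."""
    key = repr(sorted(inputs.items())).encode()
    digest = hashlib.sha256(key).hexdigest()[:12]
    return f"/store/{digest}-{name}"

foo_o2 = store_path("libfoo", {"src": "foo-1.0.tar.gz", "cflags": "-O2"})
foo_o3 = store_path("libfoo", {"src": "foo-1.0.tar.gz", "cflags": "-O3"})
foo_o2_again = store_path("libfoo", {"src": "foo-1.0.tar.gz", "cflags": "-O2"})
```

Identical inputs hash to the same path and are therefore shared automatically; change one flag and you get a distinct path, so nothing is ever shared by accident.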
> Docker ships a "filesystem in a box", i.e. an opaque blob, an image. Nix and Guix ship the package definitions from which they derive what they need to have populated in their respective stores, and either build those required packages or download pre-built ones from somewhere else, depending on configuration and availability.
The rest of your endorsement of NixOS is well taken, but this is a silly distinction to draw. Dockerfiles and nix package definitions are extremely similar. The fact that docker images are distributed with a heavier emphasis on opaque binary build step caching, and nix expressions have a heavier emphasis on code-level determinism/purity is accidental. The output of both is some form of a copy of a Linux user space “in a box” (via squashfs and namespaces for Docker, and via path hacks and symlinks for Nix). Zoom out even a little and they look extremely alike.
> This nuance is something that only the nix model started to capture at all.
Unpopular opinion, loosely held: the whole attempt to share any dependencies at all is the source of evil.
If you imagine the absolute worst case scenario that every program shipped all of its dependencies and nothing was shared then the end result would be… a few gigabytes of duplicated data? Which could plausibly be deduped at the filesystem level rather than the build or deployment layer?
Feels like a big waste of time. Maybe it mattered in the 70s. But that was a long, long time ago.
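For the record, that kind of post-hoc dedup is not exotic. A crude sketch (hardlinks standing in for what a content-aware filesystem or a tool like rdfind does; assumes everything lives on one filesystem):

```python
import hashlib
import os
import tempfile

def dedupe(root: str) -> int:
    """Hardlink files with identical content under root; returns the number
    of duplicate copies replaced."""
    seen = {}       # content hash -> first path seen with that content
    replaced = 0
    for dirpath, _, files in os.walk(root):
        for fname in files:
            path = os.path.join(dirpath, fname)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            if digest in seen:
                os.remove(path)
                os.link(seen[digest], path)  # content now stored once
                replaced += 1
            else:
                seen[digest] = path
    return replaced

# Two "programs" that each shipped their own copy of the same dependency.
root = tempfile.mkdtemp()
for app in ("app_a", "app_b"):
    os.makedirs(os.path.join(root, app))
    with open(os.path.join(root, app, "libfoo.so"), "wb") as f:
        f.write(b"identical library bytes")

n = dedupe(root)
```

Every program still "owns" its dependency at its own path; the storage cost of the duplication quietly disappears underneath.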
I think the storage optimization aspect is secondary; it is more about keeping control over your distribution. You need processes to replace all occurrences of xz with an uncompromised version when necessary. When all packages in the distribution link against one and the same copy, that's easy.
Nix and Guix sort of move this into the source layer. Within their respective distributions you would update the package definition of xz and all packages depending on it would be rebuilt to use the new version.
Using shared dependencies is a mostly irrelevant detail that falls out of this in the end. Nix can dedupe at the filesystem layer too, e.g. to reduce duplication between different versions of the same packages.
You can of course ship all dependencies for all packages separately, but you have to have a solution for security updates.
Node.js basically tried this — every package gets its own copy of every dependency in node_modules. Worked great until you had 400MB of duplicated lodash copies and the memes started.
pnpm fixed it exactly the way you describe though: content-addressable store with hardlinks. Every package version exists once on disk, projects just link to it.
So the "dedup at filesystem level" approach does work, it just took the ecosystem a decade of pain to get there.
> If you imagine the absolute worst case scenario that every program shipped all of its dependencies and nothing was shared then the end result would be… a few gigabytes of duplicated data?
Honestly, I've seen projects that do this. In fact, a lot of projects that do this, at the compilation level.
It feels like a lot of the projects that I would want to use from git pull in their own dependencies via submodules when I compile them, even when I already have the development libraries needed to compile it. It's honestly kind of frustrating.
I mean, I get it - it makes it easier to compile for people who don't actually do things like that regularly. And yeah, I can see why that's a good thing. But at the very least, please give me an option to opt out and to use my own installed libraries.
It used to be, but only in cases where your distro doesn't just package whatever software you require. Nowadays I prefer Flatpak or AppImage over crappy custom Windows installers for those cases. They allow for sandboxing and reliable updating/uninstallation.
These days, I equate anything that ships via docker/flatpak first as built by someone who only cares about their own computer, especially if the project is open source. As soon as a library or a tool updates, they usually rush to add a hard dependency on it for no reason other than to be on the "bleeding edge".
I'm with you on this, but I do want to point out that a big reason that people will update bundled libraries like that is because they don't want to put the effort in to see whether their bundled library versions actually have any critical vulnerabilities that affect the project. It's easier to update everything and be sure that there are no critical vulnerabilities.
In other words, the Microsoft Windows update process as applied to software development.
We've given up on native Windows containers in OCaml after trying to use them for our CI builds for many years. See https://www.tunbury.org/2026/02/19/obuilder-hcs/ for our recent switch to HCS instead. Compared to Linux containers, they're very much a second-class citizen in the Microsoft worldview of Docker.
This is because your team doesn’t know how to ship software without using containers.
If you have adopted a bad tool then people are likely to want the bad tool in more places. This is the opposite of a virtuous cycle and is a horrible form of tech debt.
I guess it’s because I do C++ and robotics. But npm is just not part of my world. The only time I come across it is when someone gets real lazy and doesn’t ship a proper single exe distributable. Claude Code and Codex CLIs were both naughty on initial release. But are now a single file distributable the way the lord intended.
MinGW/MSYS2 are flaming poop hurdles. That’s the bending over backwards to fake a hacky ass bad dev environment. Projects that only support MinGW on Windows are projecting “don’t take windows seriously”.
Supporting Windows without MinGW garbage is really really easy. Only supporting MinGW is saying “I don’t take this platform seriously so you should probably just ignore this project”.
> About 95% of the work needed to make LLMs happy is just general purpose better engineering. Units tests? Integration tests? CI? API documentation? Good example? All great for humans too!
> I consider this largely a good thing. It would be much worse if the changes needed for Happy LLMs were completely different than what you want for Happy Humans! Even worse would be if they were mutually exclusive.
It's a win. I'll take it.