"One package manger to rule them all"... pip/gem/npm/cargo/cabal&stack/whatnot all pose the same issue. From a distributors point of view, I get why you dont want to solve all these problems. From a user point of view, there is no good reason why I should learn more then one package manager.
It's a dilemma. A similar thing is happening with ~/.local/
When I started with Linux 25 years ago, it was totally normal to know and do the configure/make/make install dance. 10 years later, most of what I wanted to use was available as a package. And if it wasn't, I actually built one.
These days, I have a 3G ~/.local/ directory. It has become normal again to build stuff locally and just drop it into ~/.local/. And in fact, sometimes it is way easier to do so than to try and find a current version packaged by your distribution.
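For anyone who never learned the dance: the per-user install boils down to pointing the build's prefix at ~/.local/ and making sure your shell can find the result. A minimal sketch, assuming a typical autotools-style tarball unpacked in the current directory (the guard simply skips the build steps when no configure script is present):

```shell
#!/bin/sh
# Install into the per-user prefix instead of /usr/local.
prefix="$HOME/.local"

# The classic dance, redirected into ~/.local/ (skipped if there is
# no configure script in the current directory):
if [ -x ./configure ]; then
    ./configure --prefix="$prefix"
    make
    make install
fi

# Make installed binaries findable, without duplicating the entry:
case ":$PATH:" in
    *":$prefix/bin:"*) ;;              # already on PATH, do nothing
    *) PATH="$prefix/bin:$PATH" ;;     # prepend the user prefix
esac
export PATH
```

Libraries and man pages land under ~/.local/lib and ~/.local/share/man the same way; depending on the software you may also need LD_LIBRARY_PATH or MANPATH entries.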
I usually give up on the software if the INSTALL file only has references to Docker.
A few times I have tried really hard not to give up, but the reality is that Docker-only is highly correlated with non-working, so up to now I have eventually given up on every piece of Docker-only software I have met.
Same here. There are exceptions to everything, but to me it feels like if someone distributes their work via Docker they are actually not on top of the complexity they are trying to deal with. Pushing that complexity into something like a Docker image doesn't make it go away. At best, a big chunk of bloat is generated that at least trashes your disk cache. At worst, a big hunk of unmaintained software is waiting in a corner to be taken apart and hacked to pieces.
Docker images have an air of really big bit graves.
> These days, I have a 3G ~/.local/ directory. It has become normal again to build stuff locally and just drop it into ~/.local/. And in fact, sometimes it is way easier to do so than to try and find a current version packaged by your distribution.
That's great; now move it to your production webserver where the user the server runs as doesn't have a "home" directory, or you run into a dozen other reasonable security restrictions that break the tiny world that vendoring was never tested outside of.
Indeed the fact that, in many cases, one has to compile software themselves just to get a reasonably recent version of it is one of Linux's most colossal and inexcusable failures.
Fortunately people are starting to come around to things like AppImage, Flatpak, and (to a lesser extent) Docker in order to deal with it.