
> By trying to shoehorn node/go modules into Debian packages we are creating busy work with almost no value.

Another problem, which I've encountered at least with Python, is that the Debian packages sometimes seem to fight with what you downloaded via pip. They're not made to work together. I'm not a Python dev, so it was very confusing to figure out what was going on, and I wouldn't be surprised if it were similar when you mix npm and deb packages for JS libs. They don't know about each other and can't tell which libs the other already provides, the search paths are unknown to the user, etc. I think I went through similar pain when I had to get a Ruby project going.

My gut feeling is that it would be best if Debian supplied only the package of the software in question and let the "native" dependency management tool handle all the libs, but I guess that would give the Debian folks a feeling of loss of control, as it would indeed make it impossible to backport a fix for specific libs; rather, you'd have to fiddle with the dependency tree somehow.



> the debian packages sometimes seem to fight what you downloaded via pip

It's a bit annoying, but there are simple rules, and they apply the same to pip/gem/npm (not sure about Go): each runtime installation has one place for global modules. If you installed that runtime from a system package, you don't touch the global modules - they're managed by system packages.

If you install the language runtime on the side (via pyenv, asdf, or something else) or use a project-scoped environment (a Python venv, Bundler, or a local node_modules), you can install whatever modules you want for that runtime without conflicts.
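As a sketch, project-scoped setups look roughly like this for each of those runtimes (package names here are just illustrative):

```shell
# Project-scoped environments, one per runtime; modules land inside the
# project directory instead of the runtime's global location.
python3 -m venv .venv                          # Python: modules go under ./.venv
bundle config set --local path vendor/bundle   # Ruby: gems go under ./vendor/bundle
npm install left-pad                           # Node: modules go under ./node_modules
```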


Put more simply: never run `sudo pip install foo`. That's never expected to work, and it's a pity it doesn't just give a simple error "don't do that!" rather than sometimes partially working.

As you said, you should start a new environment instead and install whatever you like into that. For Python, that means using virtualenv or `python -m venv`. You can always use the --system-site-packages switch to get the best of both worlds: any `apt install python3-foo` packages show up in the virtual environment, but you can use pip to shadow them with newer versions if you wish.
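A sketch of that setup (the path and the package name `foo` are placeholders):

```shell
# Venv that can see apt-installed modules (e.g. python3-foo), while pip
# installs into the venv and shadows the system copies.
python3 -m venv --system-site-packages ~/.venvs/myproj
. ~/.venvs/myproj/bin/activate
pip install --upgrade foo   # newer version inside the venv takes precedence
```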


I basically don't let anything but the package manager touch /usr. I've seen too many mysterious issues in systems that had gotten screwed up. It's extremely rare that it's necessary for any project: if you need to build and install some other project, you can generally just install it into a directory dedicated to the single codebase you're working on (with appropriate PATH adjustments, which can be sourced from a shell script so they're isolated from the rest of the system). I really dislike tutorials and guides that encourage blindly installing stuff into system-managed areas, but it's rife.
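As a sketch (the paths are just an example), a per-project prefix plus a sourceable env script could look like:

```shell
# env.sh - source this before working on the project; nothing touches /usr.
export PREFIX="$HOME/dev/myproject/prefix"
export PATH="$PREFIX/bin:$PATH"
export LD_LIBRARY_PATH="$PREFIX/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
# A dependency built from source then installs into the same prefix:
#   ./configure --prefix="$PREFIX" && make && make install
```

Since the adjustments only live in the current shell, closing it (or just not sourcing env.sh) leaves the rest of the system untouched.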


> I basically don't let anything but the package manager touch /usr.

That's the standard approach. Custom system-wide packages (as opposed to packages that are only installed for one user) should go in /usr/local/ or in a package-specific directory under /opt/.


The comment you're replying to is a bit ambiguous. Did they mean don't put anything directly in /usr (i.e. except /usr/local)? Or did they mean don't put anything anywhere under /usr? Both are consistent with my comment.

Personally, I stopped using even /usr/local (or /opt) many years ago. If it's not managed by the operating system then it goes in my home directory (except a few things in /etc that have to go there).


Exactly. Stuff in /usr/local is very capable of messing with other parts of the system (plus it munges everything not package managed together, which is even worse).


Pip has that feature now. Put this in your ~/.pip/pip.conf:

  [global]
  require-virtualenv = true
and then you get errors like:

  $ pip install foo
  ERROR: Could not find an activated virtualenv (required).


That's not exactly the same. I might be fine with installing Python packages with pip into my home directory, just not /usr.


My workflow's been to temporarily disable that, do the stuff I need, then re-enable it. It's a bit clunky, but I don't install stuff outside a virtualenv frequently enough for it to be a major pain in the neck.
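Since pip also reads its options from environment variables, one way to do the one-off override without editing the config file (package name is a placeholder):

```shell
# Override require-virtualenv for a single command instead of toggling pip.conf
PIP_REQUIRE_VIRTUALENV=false pip install --user foo
```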


Yeah I use pyenv + virtualenvwrapper and there are a few packages I am fine with having in the top-level pyenv version, rather than in any particular virtual environment: black, requests, click, etc.


This is simple but completely counterintuitive. I've seen it go wrong hundreds of times, and it has been the subject of a bunch of different workarounds (e.g. pipx).

Debian should probably ship a separate global python environment for Debian packages that depend on python where it is managing the environment - one with a different name (e.g. debpy), a different folder and, preferably, without pip even being available so that it's unlikely people will accidentally mess with it.

This could also have isolated the Python 2 mess they had for years, decoupling the upgrade of the "python" package from the upgrade of all the various Debian things that depended on Python 2.

Really, it's easier to make `apt install python` the way to install Python "on the side".


> without pip even being available so that it's unlikely people will accidentally mess with it.

This has already happened. It only resulted in lots of "ubuntu broke pip" posts rather than an understanding of why that happened. (The fact that it's not entirely separate from venvs didn't help.) But considering that issue, imagine what would happen to people running `apt install python` and not being able to run `python` or `virtualenv`. Most setup guides just wouldn't apply to Debian/Ubuntu, and they can't afford that.


Yeah, of course it did! That's why my primary suggested fix wasn't just removing pip, but hiving the Debian-managed Python environment off somewhere different, calling it something else, and giving it different binary names (e.g. debsyspython) that Debian package authors could rely upon.

Then the default "python" and "pip" could be without debian dependencies and users could go wild doing whatever the hell they want without messing up anything else in the debian dependency tree (like they would with pyenv or conda).


I don’t have much to add, but Python maintainers have been suggesting solutions like this for years, and IIUC Red Hat distros use an approach similar to the one you described. Debian devs refuse to budge, as they always do on many topics, for better or worse. They are not going to do it, not because your approach is technically wrong, but because it does not fit their idea of system packaging.


This was probably considered and discarded because altering all references in all packages would be a ton of work, and bound to produce issues with every single merge.


If it's truly an unmanageable amount of work that's a sign that there are other bugs/problems lurking that need fixing.

If they did consider it and reject it I imagine it is more likely it was about avoiding backwards compatibility issues than the amount of work.

This would also signal that there are deeper bugs lurking that need fixing, however.


This is good advice for software lifecycle management in general:

https://wiki.debian.org/DontBreakDebian


Sure, they may have to fiddle with the dependency tree, but Node and Go both have well-defined dependency formats (package.json, go.mod). It should be relatively easy to record the go.mod/package.json when these applications are built, and issue mass dependency bumps and rebuilds if a security issue comes up.
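As a rough sketch of what the lookup could be (the directory and the package name are hypothetical), given recorded package.json files for each built application:

```shell
# Find recorded manifests that declare a dependency on a vulnerable package,
# i.e. the applications that would need a rebuild after a security fix.
grep -rl '"lodash"' /srv/recorded-manifests --include=package.json
```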

Really seems like the best of both worlds, and less work than trying to wrangle the entire set of node/go deps & a selection of versions into the Debian repos. I mean Debian apparently has ~160,000 packages, while npm alone has over 1,000,000!


> mass dependency bump

That’s not an option for Debian stable. They intentionally backport security and stability patches, and avoid other changes that might break prod without a really good reason.

https://www.debian.org/doc/manuals/debian-faq/choosing.en.ht...


The situation with backporting security fixes is still the same. Debian could backport the fix to any node/go lib the same way they backport security fixes to C libs.

The only difference is that a backported fix in a language that uses vendored dependencies rather than .so's needs to have all depending packages rebuilt.


Debian Developer here. Backporting fixes to tens of thousands of packages is already a huge amount of (thankless) work.

But it's still done - as long as there's usually one version of a given library in the whole archive.

Imagine doing that for e.g. 5 versions of a given library, embedded in the sources of 30 different packages.


I'm sorry to hear that it's thankless. Thank you for doing it. It is one of the pillars of my sanity, and I am not exaggerating.


Can’t you just use “update-alternatives” to set the versions you want?

https://wiki.debian.org/DebianAlternatives



