There are two pillars to managing RAM with virtual memory: the obvious one is writing one program's working set to disk, so that another program can use that memory. The other one - which isn't prevented by disabling swap - is flushing parts of a program which were loaded from disk, and reloading them from disk when next needed.
That second pillar is actually worse for interactivity than swapping the working set, which is why disabling swap entirely isn't considered optimal.
By far the best approach is just to have an absurd amount of RAM - which of course is a much less accessible option now than it was a year ago.
OOM killers serve a purpose but - for a desktop OS - they're missing the point.
In a sane world the user would decide which application to shut down, not the OS; the user would click the appropriate application's close gadget, and the user interface would remain responsive enough for that to happen in a matter of seconds rather than minutes.
I understand the many reasons why that's not possible, but it's a huge failing of Linux as a desktop OS, and OOM killers are picking around the edges of the problem, not addressing it head-on.
(Which isn't to say, of course, that OOM killers aren't the right approach in a server context.)
Yeah I think SSD / NVMe makes all the difference here - I certainly remember XP / Vista / Win 7 boxes that became unusable and more-or-less unrecoverable (just like Linux) once a swap storm started.
I still have a dusty old XP box here with PageMaker 7 on it.
As long as you don't need transparency effects it's still plenty capable.
I used to use it with an Agfa Accuset imagesetter - and in that role it was more capable than InDesign, since it exposed all the options in the PPD, whereas InDesign would expose only a subset.
To quote TFA: "...outputs strictly designed to farm green squares on github, grind out baseless bug bounties, artificially inflate sprint velocity, or maliciously comply with corporate KPI metrics".
If code becomes essentially free (ignoring for a moment the environmental cost or the long term cost of allowing code generation to be tollboothed by AI megacorps) the value of code must lie in its track record.
The 5-day-old code in chardet has little to no value. The battle-tested years-old code that was casually flushed away to make room for it had value.
I can't help feeling that the old XKCD cartoon [1] about life satisfaction being proportional to the time since last opening xorg.conf could equally apply to udev.
For instance, I tinker with FPGA boards, and one board in particular presents both a JTAG and serial port over USB. Nothing unusual there, except that while most such boards show up as /dev/ttyUSBn, this one shows up as /dev/ttyACM0. I eventually figured out how to make the JTAG part accessible to the tools I was using, without having to be root, via a udev rule. The serial side was defeating me though - it turned out some kind of modem manager service was messing with the port, and needed to be disabled. OK, job done?
Nope.
A few days ago I updated the tools, and now access as a regular user wasn't working any more! It turns out the new version of one particular tool uses libusb, while the old version used rawhid (that last detail is no doubt why I had such trouble getting it to work in the first place) - and as such they require different entries in the udev rule. I'm getting too old for those kinds of side quest, especially now a certain search engine is much less use in solving them.
(Not naming the tools because I'm not ranting against them - just venting about the frustration caused by the excessive and seemingly opaque complexity. Having got that off my chest, I'll go read the article, in the hope that the complexity becomes a little less opaque!)
> it turned out some kind of modem manager service was messing with the port, and needed to be disabled.
Curious. What service was that?
I have an on-board serial port that's only working in one direction, which is something I've never encountered before. I wonder if the service you're referring to could be causing my problem.
ModemManager. You need to set ENV{ID_MM_PORT_IGNORE}="1" in a udev rule.
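For anyone landing here from a search: a minimal rule file for that would look something like this (the filename and the vendor/product IDs are hypothetical - substitute your own device's, as reported by lsusb):

```
# /etc/udev/rules.d/99-ignore-my-board.rules
SUBSYSTEM=="tty", ATTRS{idVendor}=="1234", ATTRS{idProduct}=="abcd", ENV{ID_MM_PORT_IGNORE}="1"
```

Then run "sudo udevadm control --reload" and replug the device so the new property gets applied.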
Standard USB serial ports (CDC-ACM) show up as ttyACM#, whereas nonstandard ports that require a vendor driver like FTDI show up as ttyUSB#. Modems tend to be standard USB devices, so ModemManager by default scans all serial ports as if they were modems. This involves sending some AT commands to them to try to identify them.
Software implementations of serial devices tend to follow the standard, so they show up as ttyACM#.
Thanks for the tip. Unfortunately, it doesn't seem to be the cause of my one-way serial port issue. Adding the udev environment variable makes no difference, nor does stopping the ModemManager service.
ModemManager used to open() and probe every tty device attached to the system. I had an 8-channel relay card with an Arduino Nano wired up at my desk to control the lights and disco ball, interfaced via a custom ASCII-based serial protocol. Connecting it to an Ubuntu machine (where ModemManager was active in the default install) turned the 2nd or 3rd channel on.
This was generally infuriating; there are many Arduino forum posts about ModemManager messing up DIY setups.
Upstream fix was changing ModemManager to work on a whitelist / opt-in approach instead of blacklist / opt-out. My fix was to switch to Debian.
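For what it's worth, that opt-in behaviour is exposed as a daemon option - --filter-policy, if memory serves (check man ModemManager for the exact policy names on your version). Something like this systemd drop-in would enable it (binary path may differ per distro):

```
# /etc/systemd/system/ModemManager.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/sbin/ModemManager --filter-policy=strict
```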
These rules exist ostensibly for security, right? I wonder what the right model is here for interactive end-user operating systems. Just trust apps to behave and give them access to your devices? That's more-or-less what udev hacks end up amounting to in my experience... Maybe the API applications see should just ask the OS for a device that matches some description, and then the OS pops open a picker for the user, kinda like a file dialog? Selection of a device to pass to an application counts as granting permission to use it.
I don't have a problem with the concept of udev - I just find the details to be laden with papercuts. Needing different syntax depending on which subsystem is interacting with the device makes no sense from a user perspective.
To cover all bases my udev rule seems to need to contain both
I don't understand why the second rule isn't sufficient, and finding enough information online to even consider trying the first rule was extremely difficult.
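Based on the libusb vs. serial split mentioned above, I'd guess the pair of rules looks roughly like this (vendor/product IDs entirely hypothetical): one rule on the usb subsystem for the libusb-based tool, and one on the tty subsystem for the serial port:

```
# raw USB access for the libusb-based tool
SUBSYSTEM=="usb", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6010", GROUP="plugdev", MODE="0660"
# serial access, plus telling ModemManager to keep its hands off
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6010", GROUP="plugdev", MODE="0660", ENV{ID_MM_PORT_IGNORE}="1"
```

The non-obvious part is that libusb opens the raw /dev/bus/usb/... node, which the tty rule never touches - hence needing both.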
I think a device picker UI is the right surface for user consent, but the hard part is turning selection into an unforgeable capability rather than a path a rogue process can guess and abuse.
A practical pattern is to have a trusted agent open the device and pass a file descriptor to the sandboxed app over a Unix domain socket or D-Bus fds, and to persist grants by stable identifiers like ID_SERIAL or /dev/disk/by-id instead of ephemeral names such as /dev/sdX.
That model gives you revocation and auditability, and it handles multi-interface devices better, but you still need explicit policies for exclusive access devices and a clear UX for transient versus persistent grants.
From what I have seen the pragmatic path is to combine a portal implementation like xdg-desktop-portal for interactive apps with a documented policy file or daemon API for automation, accepting a little UX friction to get sane, revocable device capabilities.
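The fd-passing half of the pattern described above is easy to sketch in plain Python (Linux/Unix only; socket.send_fds and recv_fds need Python 3.9+). A temp file stands in for the device node, and the two ends of a socketpair stand in for the trusted agent and the sandboxed app:

```python
import os
import socket
import tempfile

# Trusted agent opens the "device" on the app's behalf and passes only the
# open file descriptor; the app never learns a path it could reopen later.
agent, app = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

with tempfile.NamedTemporaryFile() as dev:
    dev.write(b"hello from the device\n")
    dev.flush()

    # Agent side: open the device and send the fd with an ancillary message.
    fd = os.open(dev.name, os.O_RDONLY)
    socket.send_fds(agent, [b"grant"], [fd])
    os.close(fd)  # the app's copy of the fd stays valid

    # App side: receive the fd and read through it directly.
    msg, fds, flags, addr = socket.recv_fds(app, 1024, 1)
    data = os.read(fds[0], 1024)
    os.close(fds[0])

agent.close()
app.close()
```

Revocation then falls out naturally: the agent can refuse future grants, and kernel-level revoke of an already-passed fd is the one part that still needs platform support.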
Devices are typically controlled per-user. So if you want to increase isolation you need to run as a different UID. It's definitely a sub-par model IMO but it seems to work well enough.
> and then the OS pops open a picker for the user, kinda like a file dialog?
How would that work when I'm in a container or at a tty with nothing more than a shell?
> How would that work when I'm in a container or at a tty with nothing more than a shell?
I'm really only considering designing for graphical systems. If you're doing server work or devops, configuration living in a udev rule file feels more reasonable.
> Shouldn't you manage permissions from groups instead of hacking udev?
Well my user is a member of plugdev, but by default udev has no clue that it should allow plugdev members to access some obscure third-party FPGA board. Someone has to write a udev rule for it, and if they don't share it for others to use, the next person has to write it too. The next person happened to be me.
> And why do you have modemmanager if you don't have a modem?
And that is the right kind of question! I have absolutely no idea why my stock install of Linux Mint includes and activates ModemManager.
This is indeed the real issue (not the AI angle per se, but the wholesale replacement. The licensing issue is real, but less important IMO).
Half a million lines of code have been deleted and replaced over the course of four days, directly to the main branch with no opportunity for community review and testing. (I've no idea whether dependent projects use main or the stable branch, but stable is nearly 4 years old at this point, so while I hope it's the version dependent projects use, I wouldn't put money on it.)
The whole thing smells a lot like a supply chain attack - and even if it's in good faith, that's one hell of a lot of code to be reviewed in order to make sure.
The test coverage is going to be entirely different, unless of course they copied the tests, which would then preclude them from changing the license. They didn't even bother to make sure the CI passed on merging a major version release https://github.com/chardet/chardet/actions/runs/22563903687/...
Woah. As someone not in this particular community but dependent on these tools, this is exactly the terrifying underbelly we've all discussed with the trust architecture of tools like pip and npm. It's horrifying that a major component just got torn apart, rebuilt, and deployed to anyone who uses those python ecosystems (... many millions? ... billions of people?)
Perhaps - but an argument might still be made that the result is a derivative work of the original, given that it's produced by feeding the original work through automated tooling.
But either way, deleting the original version from the repo and replacing it with the new version - as opposed to, say, archiving the old version and starting a new repo with the new version - would still be a dick move.
As someone with only the vaguest ideas of how cryptography works under the hood (and none at all about how elliptic curves might be useful) this turned out to be the primer I didn't know I needed! I found it really accessible and well-presented.