
Years ago I'd have been reluctant to use Debian stable on my desktop because it mostly meant old packages. Now, with AppImages, Snaps and Flatpaks, I can finally have a rock-solid stable system combined with just-released software.
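As a sketch of what that looks like in practice (assuming Flathub as the remote; the app ID is just an example):

```shell
# One-time setup; assumes flatpak itself came from the stable repos,
# e.g. "sudo apt install flatpak".
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

# Install an up-to-date app without touching the base system.
flatpak install -y flathub org.mozilla.firefox

# Run it; the base OS packages stay at their stable versions.
flatpak run org.mozilla.firefox
```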

Debian releases are also very important events because they influence all of its descendants, like Ubuntu, Armbian and Raspbian.



I hate snap (no experience with Flatpak). I have set up my Ubuntu to not use snap, and have snapd removed. I install Chrome from Debian Buster because that way it is not snap-packaged.
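Roughly, the de-snapping looks like this (a sketch; the snap name below is just an example, and you'd remove each installed snap first):

```shell
snap list                  # see what's currently installed
sudo snap remove firefox   # remove each listed snap (example name)
sudo apt purge snapd       # then uninstall the snap daemon itself
sudo apt-mark hold snapd   # keep apt from pulling it back in as a dependency
```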

Since I use nearly exclusively open source software I tend to trust it. I see the extra security benefits of snap and want that for sure, but all the trouble I had using it was simply not worth it for me.

I'll come back in 10 years when snap and Wayland have actually matured :)


Wayland is great, imo, but I do not want any part of these alternative package formats. I'm much happier with the guix/nix approach.


isn't guix/nix fairly alternative?


Well, it's traditional in that it doesn't promote wanton dependency vendoring.


If you're going to so much effort, why run Ubuntu in the first place?


> why run Ubuntu in the first place?

Gotten used to it (Debian first). It has decent desktop and server game, kinda "just works", and if it doesn't, more people have hit the same problems and the solutions are published.


You might like Pop OS. It's downstream of Ubuntu but has some convenient stuff added and some annoying stuff removed (like snapd).


Yep, I just moved to Pop from macOS after many years, and while I would rephrase that as "just... kinda works" (but I would say that about macOS also), it is a pretty great distro from the perspective of the "I do not wish to spend more than 5 minutes tweaking my operating system" user.


Flatpaks are great; you should really try them.


Personally, I'd be fine with using Sid as a desktop as long as all data was backed up - but that should be a given for any OS.

I'm a bit of a hypocrite, though, as I use a Mac for daily use. I do run Debian Stable for my servers, though! With Bullseye nearing completion, it looks like it's about time to bake some new VMs.


There’s no real danger to your data with Sid, just your productivity when your OS breaks.


Never understood why people use Sid as a daily driver when there's Arch.


I tend to agree with this. The times I've gone back to Debian, and ran testing or unstable, I still found it to be too slow for me. There are certain things where I want to closely track the latest upstream.

I also really found myself missing the Arch wiki, and ended up back there anyway. And if I'm customizing Debian that much, I might as well have just run Arch.

So back to Arch I went.


Unless you're using btrfs.


Lol, I was going to make a btrfs quip and say I'd rather trust my data to Sid on XFS or JFS than to an alternative stable distro that uses btrfs cough openSUSE cough.


Is it that bad? I have been using Tumbleweed as my desktop OS for several months, but I don't really use the btrfs features at all.


Plenty of people, including me, have never lost a single bit to btrfs.

Now, is it super fast? (No.) Is it (or any other COW FS) the right choice for an SSD or database? (Probably not.) Is it the right choice for data that get read much more often than written and you'd like to be sure 10 years from now that you haven't lost any of it? (There's a pretty good argument to be made there, I think.)


I’ve lost* data with just RAID1 thanks to btrfs bugs post-1.0 that corrupted the entire metadata on both disks. There was no good place to get support nor any instructions on attempting reconstruction of corrupted metadata at the time and I haven’t bothered with it since. Apparently I wasn't the only one that suffered such a loss and as I recall, it was blamed on OS-integrated CoW under certain circumstances but it was quite shortly after adopting it and not a particularly weird configuration so I swore it off and have been happily btrfs-free since.

I should have known better, since during my initial evaluation in search of a better LVM for Linux, I set up a root non-raid btrfs volume comprised of multiple dissimilar disks and lost all the data after an unsafe shutdown (a kernel panic that may have been caused by btrfs in the first place), even though all the disks were still functioning fine. I was an early adopter of ZFS - first under OpenSolaris, then under OpenIndiana, then (and now) under FreeBSD - so I thought I understood what "initial stable release" meant, but it is clear that what ZFS devs consider stable and what btrfs devs consider stable are leagues apart.

* I was able to use forensic tools and low-level fs-agnostic recovery methodologies to get some of the important stuff back, but the btrfs volumes were completely lost.


Why would CoW+SSD be bad?


COW filesystems have more write amplification than non-COW filesystems, which will wear out the SSD more quickly.


They'll have some write amplification at the filesystem level but should cause less amplification at the drive level.

Either way very few workloads get anywhere near wearing out an SSD, and the upsides of CoW features are almost always much higher than the risk of wearing out a drive. I'd say they fit just fine on an SSD.


There are features of btrfs that are currently considered experimental/unsafe to use like their raid-5 implementation.

I personally use btrfs raid-1 setups and have survived actual device failures without data loss. However, I also perform regular backups so I'm not overly concerned about "eat my data" bugs in a filesystem either.
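A sketch of that kind of setup (device names and mount point are hypothetical; scrubbing is what catches silent corruption between backups):

```shell
# Create a two-disk btrfs RAID-1 for both metadata (-m) and data (-d).
mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/data

# Scrub periodically: reads everything, verifies checksums, and
# repairs from the good mirror copy where possible.
btrfs scrub start /mnt/data
btrfs scrub status /mnt/data

# RAID is not a backup -- take read-only snapshots for backup tooling too.
btrfs subvolume snapshot -r /mnt/data /mnt/data/.snap-$(date +%F)
```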


I was under the impression that the data-eating RAID5/6 issues were patched "long ago", except for the write hole issue, which isn't likely to get fixed anytime soon. That means your data is (probably) safe, apart from whatever was being written during the "write hole", though your array may crash and become read-only.

The kernel wiki says the following :

    RAID56

    Some fixes went to 4.12, namely scrub and auto-repair fixes. Feature marked as mostly OK for now.

    Further fixes to raid56 related code are applied each release. The write hole is the last missing part, preliminary patches have been posted but needed to be reworked. The parity not checksummed note has been removed.


Then that's certainly some progress!


Then sid isn't your problem...


Don't forget about the Guix/Nix package managers, available in the stable repositories.

https://packages.debian.org/bullseye/guix
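A sketch of using it that way (the package name is just an example; Guix-managed software lives in your per-user profile, independent of apt):

```shell
# Guix is packaged in Debian stable.
sudo apt install guix

# Install a newer package into your per-user profile (example name).
guix install hello

# Update Guix's package definitions and upgrade, independently of apt.
guix pull && guix upgrade
```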


Much better than dockerland, where nothing composes nicely!


That's similar to what I do, I just use the nix package manager for the stuff where I like to have newer versions. Best of both worlds...


If you're using GNOME, Sid is worth it. A lot of the hate directed at GNOME is because of old versions, or distribution-specific hacks. I use the latest beta of Firefox for the same reason (as a webdev, worth it).

Edit: the main risk with Sid is with proprietary software, such as Zoom. There's always a fix somewhere, but fiddling around can be distracting.


How well does system stuff work as Snaps/Flatpaks?

(I.e. things like LXC and multipass)


I was testing out MicroK8s on Ubuntu, as a Snap. Regarding stability, I can't say; I didn't use it long enough. What really annoyed me is that it's confusing as hell. Files are nowhere near where you'd expect them to be. When things break and you need to go look for configuration files, you'll find that they are hidden deep down in /var. Snaps add a level of complexity I'm not comfortable with, while I gain very little.

I am pleasantly surprised by how cleanly Snaps uninstall though.
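Concretely, snap confinement puts mutable state under /var/snap rather than the usual /etc locations - a sketch (paths follow the snap layout, worth verifying on your system):

```shell
sudo snap install microk8s --classic

# Snap-packaged software keeps mutable state under /var/snap/<name>;
# "current" is a symlink to the active revision.
ls /var/snap/microk8s/current/

# Removal is clean precisely because everything lives under the snap dirs.
sudo snap remove microk8s
```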


> Files are nowhere near where you'd expect them to be.

> I am pleasantly surprised by how cleanly Snaps uninstall though.

These two things are connected :)


Pretty bad, by design. It cannot really integrate with the OS; that's why OS packages exist.


Hmm... that's not good, I really prefer LXC to docker

(mainly because of better security, and because I use Ubuntu and Debian images instead of Alpine anyway)


You can instead use dedicated directories, virtualenvs, chroots, firejail sandboxing, or systemd-nspawn containers.

(In increasing order of effort)
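Sketches of the last two, in increasing order of isolation (application and directory names are examples):

```shell
# firejail: sandbox a single app with its bundled default profile...
firejail firefox

# ...or with a private, throwaway home directory.
firejail --private firefox

# systemd-nspawn: bootstrap a minimal Debian tree and boot it as a container.
sudo debootstrap stable /var/lib/machines/sandbox
sudo systemd-nspawn -D /var/lib/machines/sandbox --boot
```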


Only the last one gives me the same level of security. And that one is not very user-friendly.


Not at all. OS packages run scripts as root during installation, and this is why you should install software only from trusted Linux distros.

Sandboxing user applications is a completely orthogonal topic.

You can sandbox OS-installed applications or other applications. With the same tools.

Both options equally require custom configuration depending on what files and directories you want to allow or restrict access to (and syscalls and so on).

Fine-grained permissions are a personal choice, and both firejail and nspawn make them quite easy to configure.


I've been doing that since forever with Gentoo. Old kernel and desktop but recent applications. Or the other way around, really.



