
I've been curious about ZFS ever since it became stable in Linux, but haven't made the jump yet because a) I have an irrational fear of data loss, and ext4 has never failed me, and b) I have an aversion towards complexity, and ZFS is much more complex than ext4. Features like snapshots and checksumming can be accomplished with userspace tools, though in arguably less elegant ways, so I didn't have a strong need for it either.

That said, it's great reading its success stories and praises, and my next system will use ZFS. :)



> I have an irrational fear of data loss, and ext4 has never failed me

You're a lot more likely to lose or corrupt data with ext4 than with ZFS. Ext4 will happily corrupt data silently. The core premise of ZFS is that it doesn't trust the underlying hardware. ZFS even lets you store duplicate copies of data on a single disk (copies=2): you lose capacity but gain robustness.


Like I said, I've used ext* filesystems for decades now and have yet to experience corruption or data loss. I know ZFS would detect and correct this, or at least notify me, automatically, but with ext4 the same risk can be mitigated by doing frequent backups and using tools like SnapRAID or par2. Sure, this has to be done manually, but for my use cases it works rather well, and I enjoy the flexibility and direct control, rather than relying on a complex/magic filesystem that does it all transparently for me.

When ZFS support in Linux was still new, I wouldn't dare rely on it, and it's the same reason I avoid btrfs today, even though its features are appealing. But now that ZFS is quite stable, and after hearing its praises for years, I do want to give it a try. :)


> this can be avoided with ext4 by doing frequent backups

lol. Nothing is stopping you from doing manual backups using ZFS. One should never rely on just one backup anyways, if the data is critical. For me, snapshots are a great way to protect from "oh, I accidentally deleted this folder", which ext4 doesn't have. Yes, you can use replication to sync these snapshots somewhere else, but nothing is stopping you from continuing to do manual backups on a file level. It's just a file system, after all. So it doesn't really make sense as a justification as to why you are hesitant to use ZFS. In fact, it's one of the reasons I liked it so much: While you can do all these cool things with it, you don't have to. It doesn't pressure you to use these features. If you're ready, they're there, but until then, it's just a file system, and a very robust one at that.
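For anyone new to this, the "oh, I accidentally deleted this folder" workflow is just a couple of commands. A hedged sketch, assuming a hypothetical dataset named `tank/home` (the dataset and snapshot names are illustrative):

```shell
# Take a named snapshot (instant, copy-on-write; consumes no space up front):
zfs snapshot tank/home@before-cleanup

# Deleted a folder by accident? Copy it back out of the read-only
# snapshot tree exposed under the hidden .zfs directory:
cp -a /tank/home/.zfs/snapshot/before-cleanup/projects /tank/home/

# Or roll the whole dataset back, discarding everything newer than the snapshot:
zfs rollback tank/home@before-cleanup

# Snapshots can also be streamed to another machine as a backup:
zfs send tank/home@before-cleanup | ssh backup-host zfs recv backup/home
```

None of this stops you from also rsync'ing files to a second box the old-fashioned way.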


> I've used ext* filesystems for decades now, and have yet to experience corruption or data loss

You might not know. How are you validating the integrity of every file regularly?

In the 90s I lost some files to corruption which went unnoticed for years and propagated to every backup, so by the time I went looking for the file I had years worth of backups of those files, all corrupt. This is one of the reasons zfs is such a happy place.


I get that. But in practice, it hasn't been a problem. For files I consider critical and difficult to replace, I use SnapRAID or par2, and others are either used frequently and I would notice the corruption, or replacing them wouldn't be a problem.
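For context, the par2 workflow referred to above looks roughly like this with par2cmdline (file names and the 10% redundancy figure are just examples):

```shell
# Create parity/recovery blocks with ~10% redundancy for a file:
par2 create -r10 important.tar.par2 important.tar

# Later, verify the file against the recovery blocks:
par2 verify important.tar.par2

# If corruption is detected, repair from the recovery blocks:
par2 repair important.tar.par2
```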

Using a smart filesystem would be an improvement, but it comes at the expense of less flexibility and more complexity, and, until recently on Linux at least, relying on unstable software.


> relying on unstable software

Where the hell are you getting this from? What instability has existed in ZFS on Linux? Are you confusing it with BTRFS or something? I think your fears of ZFS are seriously misplaced.


Potential complexity is higher with ZFS, but in regards to RAID and the like, I honestly find the zpool tools to be much easier to work with than their mdadm equivalents.

I have a "ghetto NAS" of 24 drives plugged into my server via USB, and getting a raidz3 set up on there was one command:

`zpool create tank raidz3 drive1 drive2 drive3 ....`

Doing scrubs and replacing disks is pretty easy too, just the `scrub` and `replace` subcommands of zpool.
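Concretely, routine maintenance on that pool is a sketch like this (assuming the pool is named `tank`; the disk names are placeholders):

```shell
# Read all data in the pool and verify every block's checksum:
zpool scrub tank

# Check scrub progress and any read/write/checksum error counts:
zpool status tank

# Swap out a failed disk; ZFS resilvers onto the replacement:
zpool replace tank failed-disk new-disk
```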

I haven't really felt the need to leave ZFS, at least not in regards to RAIDs; on root I still use ext4, though I might change to btrfs on my next install.


The main limitation I've had with ZFS is that expanding or modifying a pool is damn near impossible: you pretty much have to find a second equal-or-larger storage pool, migrate, nuke, and rebuild. Even mdadm lets you do things like convert from raid5 to raid6 without nuking the array.

Storage Spaces with ReFS has had the most impressive feature set from this perspective. Add and remove an arbitrary number of drives of arbitrary sizes and it'll use the full capacity of the drives to whatever parity level you set it to. It has its own downsides of course, on top of being Windows only, but it's the only FS/pool combo that has really made me think "ZFS doesn't have quite everything perfect".


There are definitely annoyances, but there are workarounds too; you can add additional vdevs to expand the whole pool if you want, and data is striped across them; it even lets you mix and match different raids if you really want to for some reason.
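The "add another vdev" workaround is a one-liner. A sketch, again assuming a pool named `tank` and placeholder disk names:

```shell
# Append a second raidz2 vdev to the existing pool; ZFS stripes
# new writes across all vdevs, so total capacity grows immediately:
zpool add tank raidz2 disk7 disk8 disk9 disk10
```

The catch is that existing data isn't rebalanced onto the new vdev, and a vdev can't be removed from a raidz pool afterwards, so it's worth double-checking the command before running it.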

There's ongoing work to add a disk to an existing raidz vdev, and I think there's even a semi-working PR for it now: https://github.com/openzfs/zfs/pull/15022. Hopefully that will make ZFS a little less frustrating.


As a sibling comment mentioned, using ZFS for RAID/storage pooling is cumbersome. On my NAS I use plain old ext4 with SnapRAID+MergerFS, which gives me the RAID and checksumming features, while having the flexibility to expand the array using any combination of disks. This works rather well, and I have no need for immediate syncing/checking I would gain with ZFS.

That's my main issue with ZFS. It does too many things, and is too clever/magic for my taste. Many things can go wrong when relying on a monolithic system with that much complexity. I much prefer the Unix "do one thing well" approach, and mixing purpose-built tools to suit my needs, rather than using one tool for everything.


I'm just learning too, so one question: why wouldn't you just consolidate and use ZFS for root? Even with just 1 disk, I understand it's still beneficial for its corruption detection and other such features. Just trying to understand that about going the btrfs way.


There's no real reason not to, other than that btrfs is included in the kernel, and if you're not using RAID I don't think ZFS has a fundamental advantage either. ZFS on root usually requires more than zero extra setup work, unlike btrfs.


> I have an irrational fear of data loss

If you have an irrational fear of data loss (it's really quite rational), zfs is the only place you should be comfortable.

I've been on zfs since its earliest days (~2004-ish, inside Sun at the time) and never lost a single byte ever since. Computers and hard drives have died but zfs just rocks on.





