That + full-disk encryption is why I went with BTRFS inside LUKS for my NAS.
They recommend 1GB RAM per 1TB storage for ZFS. Maybe they mean redundant storage, so even 2x16TB should use 16GB RAM? But it's painful enough building a NAS server when HDD prices have gone up so much lately.
The total price tag already feels like you're about to build another gaming PC rather than just a place to back up your machines and serve some videos. -_-
That said, you sure need to be educated on BTRFS to use it in fail scenarios like degraded mode. If ZFS has a better UX around that, maybe it's a better choice for most people.
1GB RAM per 1TB storage is really only required if you enable deduplication, which rarely makes sense.
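To put a rough number on why dedup is the special case: the dedup table wants RAM proportional to the number of unique blocks. A quick back-of-the-envelope sketch, where the ~320 bytes per table entry and the record sizes are ballpark assumptions, not official figures:

```python
# Ballpark ZFS dedup-table RAM per TiB of deduped data.
# ~320 bytes per table entry and the record sizes are rough assumptions.
DDT_ENTRY_BYTES = 320

for record_kib in (16, 128, 1024):
    blocks_per_tib = 2**40 / (record_kib * 2**10)
    ram_gib = blocks_per_tib * DDT_ENTRY_BYTES / 2**30
    print(f"{record_kib:>5} KiB records -> ~{ram_gib:.1f} GiB RAM per TiB")
#    16 KiB records -> ~20.0 GiB RAM per TiB
#   128 KiB records -> ~2.5 GiB RAM per TiB
#  1024 KiB records -> ~0.3 GiB RAM per TiB
```

Without dedup that table simply doesn't exist; extra RAM just goes to cache.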
Otherwise, the only benefit more RAM gets you is better performance. But it's not like ZFS performs terribly with little RAM. It's just going to more closely reflect raw disk speed, similar to other filesystems that don't do much caching.
I've run ZFS on almost all my machines for years, some with only 512MiB of RAM. It's always been rock-solid. Is more RAM better? Sure. But it's absolutely not required. Don't choose a different file system just because you think it'll perform better with little RAM. It probably won't, except under very extreme circumstances.
RSS is dead because it’s backwards. It requires everyone you want to follow to implement it since that is the best we could do a decade ago.
We can do better than that: an LLM can ingest unstructured data and turn it into a feed. You shouldn’t need someone else to comply with a protocol just to ingest their data.
I don’t get why people keep fantasizing about a system that gave consumers no control. Scrape the website directly. You decide what’s in the feed, not them.
> an LLM can ingest unstructured data and turn it into a feed.
An LLM can try to do that, yes. But LLMs are lossy compression. RSS feeds are accurate, predictable, and follow a pre-defined structure. Using LLMs to ingest data that can easily be turned into a parseable data structure seems strange: use the LLM to do the "next part" of the formula (comprehension, decision making, etc.).
I mean that your RSS feed can basically be "Go to https://techcrunch.com/latest/ and use each non-video item as a feed item" or "Go to x.com/some_user and make each tweet a feed item", and the LLM can do a perfect extraction of links from html response blobs.
The only thing you have to do is ensure it can reliably get the response html. Maybe MCP browser + proxy or mirror to seem more human.
I built this for myself. The idea is that each feed is a url + title + a prompt to tell the LLM how to extract the links you want so that it generalizes over all websites.
And each feed item is a canonicalized url + title + a local copy of the content at that url which is an improvement over RSS since so many RSS feeds don't even contain the content.
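In code terms the model is tiny; roughly something like this (field names are illustrative, not from my actual implementation):

```python
# Rough sketch of the feed model described above; field names are illustrative.
from dataclasses import dataclass

@dataclass
class Feed:
    url: str      # page to fetch, e.g. "https://techcrunch.com/latest/"
    title: str
    prompt: str   # tells the LLM which links count as items, e.g. "each non-video item"

@dataclass
class FeedItem:
    url: str      # canonicalized link the LLM pulled out of the raw HTML
    title: str
    content: str  # local copy of the page at `url`, unlike many RSS feeds
```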
I imagine a reasonably intelligent coding agent would notice that an RSS feed already exists and use it. Possibly transformed if it's not quite the format you want?
If your problem is with your code appearing in training data, then you cannot release your code anywhere.
That link you provided only points out GitHub has integrated "create pull request with Copilot" that you can't opt out of. Since anyone can create a pull request with any agent, and probably is, that's a pretty dated complaint.
Frankly not very compelling reasons to ditch the most popular forge if you value other people using/contributing to your project at all.
I'd claim the opposite. Better models design better software, and they're quickly producing better software than what most software developers were writing.
Just yesterday I asked Opus 4.6 what I could do to make an old macOS AppKit project more testable, too lazy to even encumber the question with my own preferences like I usually do, and it pitched a refactor into Elm architecture. And then it did the refactor while I took a piss.
The idea that AI writes bad software or can't improve existing software in substantial ways is really outdated. Just consider how most human-written software is untested, despite everyone agreeing testing is a good idea, simply because test-friendly architecture takes a lot of thought and test maintenance slows you down. AI will do all of that; just mention something about 'testability' in AGENTS.md.
OK so this comes back to the question I started this subthread with: where is this better software? Why isn't someone selling it to me? I've been told for a year it's coming any day now (though invariably the next month I'm told last month's tools were in fact crap and useless compared to the new generation so I just have to wait for this round to kick in) and at some point I do have to actually see it if you expect me to believe it's real.
How would you know if all software written in the last six months shipped X% faster and was Y% better?
Why would you think you have your finger on the pulse of general software trends like that when you use the same, what, dozen apps every week?
Just looking at my own productivity, as mere sideprojects this month, I've shipped my own terminal app (replaced iTerm2), btrfs+luks NAS system manager, overhauled my macOS gamepad mapper for the app store, and more. All fully tested and really polished, yet I didn't write any code by hand. I would have done none of that this month without AI.
You'd need some real empirics to pick up productivity stories like mine across the software world, not vibes.
It's on the people pushing AI as the panacea that has changed things to show their workings, not on someone saying "I've not seen evidence of it". Otherwise it's "vibes", as you put it.
Right, I'm sympathetic to the idea that LLMs facilitate the creation of software that people previously weren't willing to pay for, but then kind of by definition that's not going to have a big topline economic impact.
Well, we don't know - that covers two scenarios: software whose impact is low, as reflected by the lack of investment, and legitimately useful improvements that just weren't valued (fixing slow code, reducing errors, increasing uptime, addressing security concerns) because the cost wasn't appreciated, was papered over by patches, or the company hasn't been bitten yet.
Why did you add that "weren't willing to pay for" condition?
Most of the software I replaced was software I was paying for (iStat Menus, Wispr Flow, Synology/Unraid). That I was paying for a project I could trivially take on with AI was one of the main incentives to do it.
I'm happy to sell it to you, though it is also free. I guided Claude to write this in three weeks, after never having written a line of JavaScript or set up a server before. I'm sure a better JavaScript programmer than I could do this in three weeks, but there's no way I could. I just had a cool idea for making advertising a force for good, and now I have a working version in beta.
I'd say it is better software, but better is doing a lot of heavy lifting there. Claude's execution is average and always will be, that's a function of being a prediction engine. But I genuinely think the idea is better than how advertising works today, and this product would not exist at all if I had to write it myself. And I'm someone who has written code before, enough that I was probably a somewhat early adopter to this whole thing. Multiply that by all the people whose ideas get to live now, and I'm sure some ideas will prove to be better even with average execution. Like an llm, that's a function of statistics.
I'm glad you made something with it you wanted to make, and as a fan of Aristotle I'm always happy to see the word eudaimonia out there. Best of luck. That said, I don't understand what this does or why I would want the tokens it mentions.
Yeah, I gotta make a video walkthrough. It's basically a goal tracker combined with an ad filter: write what you want out of life and block ads, and it replaces them with ads that actually align with your long-term goals instead of distracting from them. The tokens let you add ads to the network, though you also get some for using the goal tracker.
Though this does suggest one possible answer to me: the new software is largely web applications, and the web is just a space I don't spend much time in anymore, other than a few retro sites like this
Would the above explanation be better? The website is there because Stripe needs a landing page, and the text is there because I'm trying to communicate the aspiration; the instantiation I can always explain in detail if someone wants to hear how that would work.
No idea. I certainly didn't get it. Goal tracker is one thing, ad blocker is another thing. Why would I want to combine them? And why would I want to see any ads at all? Perhaps I'm just not the target audience...
Maybe not, but you might want to see ads because 1) they fund a huge part of the free internet, so you would at least want other people to see them, and 2) if they were targeted not at what you're most likely to buy today but at what would most help you achieve goals you're struggling with, they'd be a constant source of useful information and motivation as you go about your day. Aligning incentives between you and advertisers turns ads from friction to tailwind, and advertisers already want to align with what incentivises you if the alternative is having their ads blocked.
That second point is the part that seems obvious to me but I have a hard time communicating.
And now you have no idea how any of the code works
AI writes bad software by virtue of it being written by the AI, not you. No actual team member understands what's going on with the code. You can't interrogate the AI for its decision making. It doesn't understand the architecture it's built. There's nobody you can ask about why anything is built the way it is - it just exists.
It's interesting watching people forget that the #1 most important thing is developers who understand a codebase thoroughly. Institutional knowledge is absolutely key to maintaining a codebase and making good decisions in the long term.
It's always been possible to trade long-term productivity for short-term gains like this. But now you simply have no idea what's going on in your code, which is an absolute nightmare for long-term productivity.
You can read as much or as little of the code as you want.
The status quo was that I have no better understanding of code I haven't touched in a year, or code built by other people. Now I have the option to query the code with AI to bootstrap my understanding to exactly the level necessary.
But you're wrong on every claim about LLM capabilities. You can ask the AI exactly why it decided on a given design. You can ask it what the best options were and why it chose that option. You can ask it for the trade-offs.
In fact, this should be part of your Plan feedback loop before you move to Implementation.
You can ask the AI why, but its answer doesn't come from any kind of genuine reasoning. It doesn't know why it did anything, because it doesn't exist as a sentient being. It just makes something up that sounds good
If you choose to take AI reasoning at face value, you're choosing to accept pretty strong technical debt
That's just because everyone is misusing AI. If you ask AI to do a job and you have no idea what it did, you lost ownership, which means you're asking to be replaced. You need to own the task. If you fully delegate your task to anyone else or to AI, you no longer know what's going on.
AI does not necessarily produce more tech debt, but AI might do things you don't expect because it lacks context and specificity to perform accurately.
A local-only voice-to-text whisper.cpp transcriber I can use globally while holding ctrl-semicolon.
A menubar app that manages blocky and can easily turn it off or change DNS.
A tool like Hammerspoon, but I configure it via nix-darwin and it has no cruft.
All of these are apps that use 30MB memory and are better than the apps they replace, and I can make changes any time I want. That's far better than using someone else's software and giving it privileged access to my machine.
Also, perhaps the best point is that so much software is junk that is obsoleted by someone with better UX intuitions even if they are vibe-coding it. Being written by hand by an engineer means basically nothing when it comes to "is this a good app?" Which is why product-minded people are the biggest winners in the new AI era.
Huh? Their example could be just reading code in github or reading diffs. You shouldn’t need to pull code into a development environment just so you can GoToDefinition to understand what’s going on.
There’s all sorts of workflows where vim would mog the IDE workflow you’re really excited about, like pressing E in lazy git to make a quick tweak to a diff. Or ctrl-G in claude code.
I wouldn’t be so sure you’ve cracked the code on the best workflow that has no negative trade offs. Everyone thinks that about their workflow until they use it long enough to see where it snags.
You might anyway have to check them out so you can review them in depth. PR can give you an overview of what changed, but only very limited insight into the context of the change, and no oversight at all into whether something is missing.
... but you do more often than the quick & dirty approach really allows.
I was just watching the Veritasium episode on the XZ tools hack, which was in part caused by poor tooling.
The attacker purposefully obfuscated his change, making a bunch of "non-changes" such as rearranging whitespace and comments to hide the fact that he didn't actually change the C code to "fix" the bug in the binary blob that contained the malware payload.
You will miss things like this without the proper tooling.
I use IDEs in a large part because they have dramatically better diff tools than CLI tools or even GitHub.
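As a toy illustration of the kind of help I mean (not any IDE's actual feature), even something this small separates cosmetic churn from real changes:

```python
# Toy illustration, not any IDE's actual feature: report whether a change is
# whitespace-only, so reviewer attention goes to the hunks that actually matter.
import difflib
import re
import sys

def normalize(text: str) -> list[str]:
    # Collapse runs of whitespace so reshuffled spacing reads as "no change".
    return [re.sub(r"\s+", " ", line).strip() for line in text.splitlines()]

def substantive_hunks(old: str, new: str) -> list[str]:
    diff = difflib.unified_diff(normalize(old), normalize(new), lineterm="")
    return [l for l in diff
            if l.startswith(("+", "-")) and not l.startswith(("+++", "---"))]

if __name__ == "__main__":
    old_text, new_text = (open(p).read() for p in sys.argv[1:3])
    hunks = substantive_hunks(old_text, new_text)
    print("whitespace-only change" if not hunks else "\n".join(hunks))
```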
> you’ve cracked the code on the best workflow
I would argue that the ideal tooling doesn't even exist yet, which is why I don't believe that I've got the best possible setup nailed. Not yet.
My main argument is this:
Between each keypress in a "fancy text editor" of any flavour, an ordinary CPU could have processed something like 10 billion instructions. If you spend even a minute staring at the screen, you're "wasting" trillions of possible things the computer could be doing to help you.
Throw a GPU into the mix and the waste becomes absurd.
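To make the numbers concrete (clock speed, IPC, core count, and typing rate are all assumptions, just to show the orders of magnitude):

```python
# Back-of-the-envelope only; every constant here is an assumption.
cores, ghz, ipc = 8, 4.0, 4            # a fairly ordinary desktop CPU
instr_per_sec = cores * ghz * 1e9 * ipc

typing_gap_s = 0.2                     # ~5 keypresses per second
print(f"{instr_per_sec * typing_gap_s:.1e} instructions between keypresses")  # ~2.6e10
print(f"{instr_per_sec * 60:.1e} instructions per minute of staring")         # ~7.7e12
```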
There's an awful lot the computer could be doing to help developers avoid mistakes, make their code more secure, analyse the consequences of each tiny change, etc...
It's very hard to explain without writing something the length of War & Peace, so let me leave you with a real world example of what I mean from a related field:
There's two kinds of firewall GUIs.
One kind shows you the real-time "hit rate" of each rule, showing packets and bytes matched, or whatever.
The other kind doesn't.
One kind dramatically reduces "oops" errors.
The other kind doesn't. It's the most common type however, because it's much easier to develop as a product. It's the lazy thing. It's the product broken down into independent teams doing their own thing: the "config team" doing their thing and the "metrics" team doing theirs, no overlap. It's Conway's law.
IDEs shouldn't be fancy text editors. They should be constantly analysing the code to death, with AIs, proof assistants, virtual machines, instrumentation, whatever. Bits and pieces of this exist now, scattered, incomplete, and requiring manual setup.
One day we'll have these seamlessly integrated into a cohesive whole, and you'd be nuts to use anything else.
There are so many more iOS apps being published that it takes a week to get a dev account, review times are longer, and app volume is way up. It's not really a thing you're going to notice one way or the other if you're just going by vibes.
The US is generally happy to make ambulances wait in traffic with all other vehicles instead of giving them a dedicated lane that’s shared with buses and/or bikes.
I was using SyncThing, and it worked, but any time you have an Obsidian vault open on two devices, or shortly after one another, you're always wondering whether you're going to have to clean up a bunch of sync conflict files later. And that mental overhead is not worth saving $4/mo.
The conflicts are never hard: it's like a git merge conflict where you just take the latest of every conflict block.
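If even that bugs you, it's scriptable; here's a rough file-level sketch, assuming Syncthing's default `*.sync-conflict-*` naming and that "newest wins" is really the policy you want:

```python
# Rough file-level cleanup sketch: assumes Syncthing's default *.sync-conflict-*
# naming and that "newest modification wins" is the policy you actually want.
import re
from pathlib import Path

VAULT = Path.home() / "vault"   # hypothetical vault location
SUFFIX = re.compile(r"\.sync-conflict-\d{8}-\d{6}-\w+", re.IGNORECASE)

for conflict in VAULT.rglob("*.sync-conflict-*"):
    original = conflict.with_name(SUFFIX.sub("", conflict.name))
    if not original.exists():
        conflict.rename(original)                    # original gone: promote the copy
    elif conflict.stat().st_mtime > original.stat().st_mtime:
        conflict.replace(original)                   # copy is newer: take it
    else:
        conflict.unlink()                            # original is newer: drop the copy
```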
I used multiple sync "solutions" (terrible idea, in retrospect): Dropsync, Syncthing, Drivesync, in addition to paying for Obsidian Sync, because I was delusional about "backing up my data". Huge mistake on my part, I've spent many, many, many hours deduplicating worthless "backups". Agree with "just pay for Obsidian Sync".
On the other hand, a worse implementation in the stdlib can make it harder for the community to crystallize the best third-party option since the stdlib solution doesn't have to "compete in the arena".
Go has some of these.
Maybe a good middle-ground is something like Rust's regex crate where the best third-party solution gets blessed into a first-party package, but it is still versioned separately from the language.