Hacker News: noirscape's comments

There's a difference between software that's "done" (it never needs updates, ever) and software that's done (it only needs maintenance for security and platform churn).

The former is extremely rare; platform churn alone will usually demand updates, even if your code is otherwise airtight. Forces generally beyond your control will demand that your code conform to platform standards. How demanding that is varies a lot and depends more on the platform than on you. (Windows has low platform churn since it's possible to futz with compat features, Linux is extremely variable depending on your codebase, MacOS is fairly constant and from what I know about mobile phones, you're basically signing up for forever maintenance duty).

The latter is much more common; sure, sudo still gets updates, but most of those won't be new features. Specification-wise, sudo is "done". It does what it needs to, its interface is defined and there aren't going to be any strange surprises when I run sudo on any system made in the past 10 years or so.

The problem is that when you're selling software, demanding compensation for the former is a hard sell since it's things customers won't see or necessarily care about. Demanding compensation for the latter is much more obviously acceptable.


I’m not sure truly ‘done’ exists on systems that interact with other systems unless it’s an entirely closed loop.

I reckon closed-loop systems can be ‘done’ every bit as much as hardware systems can be if the design, debugging and implementation are disciplined enough.


> platform churn alone will usually demand updates, even if your code is otherwise airtight. Forces generally beyond your access will demand that your code is able to conform to platform standards.

Platform churn updates are a failure to limit scope and dependency. If you stick with stable standards like C99/POSIX/X11/SDL, test strictly and build liberally etc., then who cares what the Web/Qt/Metal people are doing?


> MacOS is fairly constant

Except when they killed all 32-bit games a few years ago with Catalina.


I think that GP meant that MacOS has a constant nonzero rate of platform churn. I might be wrong though!


Oops, yes, I meant a constant non-zero rate. It's slightly above mobile phones, where the developer is treated as the problem that needs to fix itself.

Stuff written for one version of MacOS will probably work for the next few versions, but there's just as likely a chance that Apple has decided you need to do a full-on update of all your older tools. Things like dropping Rosetta, dropping 32-bit support from the kernel and so on and so forth. There's not really any recourse, unlike Windows and Linux, where you can usually finagle a workable solution without having to update everything all the time (so platform churn exists, but a user can theoretically choose to avoid it).

This is unlike phones, where there are basically no real expectations for when you need to update stuff, so it becomes a case of "you need to test every version". The lack of respect for tool stability is just one more reason why the mobile ecosystem is the user-hostile hell it is; this platform churn is pretty much one of the two roots of why mobile apps are Like That. (The other being that running your own choice of tools is treated as a privilege, not a right.)


Still feels very relevant, since I don't think much has been systematically changed.

Structurally, SSL verification is in the same category as stuff like SELinux: most people that interact with it understand why it exists, why it's needed... but the actual process of using anything related to SSL is an exercise in frustration. So the default response is to degrade it or turn it off entirely because shit isn't working for them. (The second search suggestion for SELinux is to change it to permissive, which effectively turns it off without having your distro tools yell at you for disabling it.)

Even today, OpenSSL's interfaces are horrendously designed (if you've ever chosen to mess with a proper self-signed cert setup, with a custom CA and everything, you'll be aware of how bad their CLI tools are). I wouldn't be surprised if this is a case of it propagating upwards from them; OpenSSL's bad interfaces lead to bad cURL flags, which in turn lead to bad checks by more high-level implementations... which goes all the way up until you get the HTTP library that just does away with all the fanfare and has a function that accepts all you need for a URL and handles all those other implementations under the hood. (e.g. requests on top of urllib3.)
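For what it's worth, a minimal custom-CA setup looks roughly like this (file names and CNs here are made up for illustration); the sheer number of flags needed for even this much is a big part of the complaint:

```shell
# Create a CA key and self-signed root certificate.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ca.key -out ca.crt -days 365 \
  -subj "/CN=Example Internal CA"

# Create a server key and a certificate signing request.
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr \
  -subj "/CN=internal.example.test"

# Sign the CSR with the CA, adding the SAN modern clients require.
printf "subjectAltName=DNS:internal.example.test\n" > san.ext
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out server.crt -days 365 -extfile san.ext

# Check that the chain actually verifies.
openssl verify -CAfile ca.crt server.crt
```

And that's before distributing ca.crt to every trust store that needs it, each with its own tooling.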

It also really doesn't help that SSL errors tend to be... unhelpful at the best of times. You're usually fishing out random error codes that don't seem to have any clear relationship to what you're doing; it creates an aversion to engaging with the process at all.

And that's for the programming side of things; the dev UX may be bad, but on the user side it tends to be way worse. This isn't about browsers, but it makes no sense that a regular HTTP connection just works, but the moment an SSL certificate is expired for a single second, you have to click through big red scare screens. It'd make more sense if both the certificate and HTTP connection threw up scare screens, but they don't. Instead you just get the strike-through lock of disappointment in your address bar. Makes zero sense.


Ignoring the more stupid reasons people dislike systemd, there are really only three.

The first is just the simple fact that most people don't want to administer their distro as a hobby. Similarly, distro maintainers primarily care about shipping a complete package that they don't need to mess around with too much. Before systemd, every distro had its own bespoke choices in tools and utilities that were wired to work together. Systemd however effectively homogenized all those choices, since almost every major distro settled on systemd. As a result, the main difference between distros now is not necessarily the choices the maintainers made, but things like the package manager and the release schedule, so there's less of an incentive to use other distros. (This isn't some sort of conspiracy, which the dumber arguments against systemd tend to assume; it's just a case where systemd winds up as the easiest choice - systemd has Red Hat backing, it wires complicated things together in a way that works on most novel PC environments that would otherwise require config fiddling, and it's just one upstream maintainers have to submit bugs to rather than a ton of different ones. The reasons to pick systemd over "one million tools" mostly just come down to systemd being less of a headache for maintainers.)

The second is that systemd violates some assumptions about how Linux software is "traditionally" designed. systemd is a PID 1 process, meaning its job is to start every other process on the system. In traditional Linux software design, this would be the only thing systemd does. Systemd does this, but it also provides a massive suite of services and tools for things that, historically, have been relegated to separate tools. It's a big, bulky program that, while modular, essentially competes with a bunch of other Linux utilities in ways that aren't really standardized. This combines with point 1, where distro maintainers near-universally settled on systemd, and what happens is that a lot of non-systemd tools covering the same ground aren't really being used anymore, even though the systemd implementation isn't necessarily better.
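As a sketch of that scope (the service here is hypothetical): a single unit file now covers ground that used to be spread across init scripts, supervisors, user management and sandboxing wrappers:

```ini
[Unit]
Description=Example daemon (hypothetical)
After=network-online.target

[Service]
ExecStart=/usr/local/bin/exampled
# Supervision and restarts: once the job of a separate supervisor.
Restart=on-failure
# Ephemeral service user: no useradd in a package postinst script.
DynamicUser=yes
# Filesystem sandboxing: once done with chroot wrappers, if at all.
ProtectSystem=strict

[Install]
WantedBy=multi-user.target
```

Whether each of those belongs in PID 1's project is exactly what the argument is about.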

Finally there is a legitimate, albeit niche, case to avoid systemd. Because it's massive and distro maintainers tend to enable a lot of things in systemd, using it means you're getting a lot of random background processes that use up CPU/memory. If you're constrained by that sort of thing, systemd becomes a pretty inefficient hulk of a program you need to tear out.

I do think a lot of the headaches involving systemd would be simplified if the Linux space had any sort of standardization on how to wire its tooling together, but outside of the POSIX standard (which doesn't really cover this side of things; POSIX is mainly about userspace utilities and APIs, not "how should an OS's system services behave"), there isn't any. People have rose-tinted glasses about wiring together different tiny tools, when the reality is that it was usually a pain in the ass, reliant on config flags, outdated manpages and so on. Just look at the seemingly simple question of "how do I configure DNS on Linux" and the literally five different ways in which it can be set, since the "standard" proved inefficient the moment things got even a little more complex than a single network device handling a single connection. (Which sounds like the common case, until you remember wifi exists.) Systemd being a big program avoids a lot of these issues.
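To make the DNS example concrete, here are a few of the places an answer might live on a given box (each guarded, since they only exist when that particular stack is in use):

```shell
# The classic interface; on many modern distros it's now a symlink
# into systemd-resolved's runtime directory rather than a real file.
cat /etc/resolv.conf || true
ls -l /etc/resolv.conf || true

# Each of these exists only if that particular stack is in use.
command -v resolvectl >/dev/null && resolvectl status || true   # systemd-resolved
command -v nmcli >/dev/null && nmcli dev show || true           # NetworkManager
[ -f /etc/systemd/resolved.conf ] && cat /etc/systemd/resolved.conf || true
```

And that's without counting /etc/nsswitch.conf deciding whether resolv.conf is even consulted.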


The spam issue is probably one of the stronger arguments against email-centered design for bug trackers, code forges and the like. It's a bit crazy that in order to professionally participate in modern software development, you're inherently agreeing that every spammer with a bridge to sell is going to be able to send you unsolicited spam.

There's a reason most code forges offer you a fake email that will also be considered as "your identity" for the forge these days.


Firefox fell behind Chrome because of aggressive marketing from Google in a way that probably violates some antitrust laws if they were actually being enforced, combined with a couple of own-goals from Mozilla.

Basically Google exploits their market dominance in Search and Mail to get people to use Chrome (and probably their other services too). When you search in a non-Chrome browser, you'll be constantly informed by Google about how much better their search is with Chrome through pop-ups and in-page notifications (not browser notifs). If you click a link in the Gmail app on iOS, rather than opening the browser, you get a Chrome advertisement if they detect it isn't your default browser.

This goes hand-in-hand with Chrome being the default Android browser (don't underestimate the power of being the default) and Mozilla alienating their core audience of power users by forcibly inserting features that those power users despise.

Chrome never won on features, it won on marketing and abuse of a different monopoly.


I'd say it's probably worse in terms of scope. The audience for some AI-powered documentation platform will ultimately be fairly small (mostly corporations).

Anubis is promoting itself as a sort of Cloudflare-esque service to mitigate AI scraping. They also aren't just an open source project relying on gracious donations; there's a paid whitelabel version of the project.

If anything, Anubis probably should be held to a higher standard, given that many more vulnerable people (as in, vulnerable to an XSS on their site causing significant issues, from having to fish their site out of spam filters to bandwidth exhaustion hitting their wallet) rely on it compared to big corporations. It's the same reason a bug in some random GitHub project somewhere probably has an impact of near zero, but a critical security bug in nginx means the shit has hit the fan. When you write software that has a massive audience, you're going to be held to higher standards (if not legally, at least socially).

Not that Anubis' handling of this seems to be bad or anything; both XSS attacks were mitigated, but "won't somebody think of the poor FOSS project" isn't really the right answer here.


I don't think it's fair to hold them to the same, or a higher, standard at all; this is literally a project maintained by one individual. I'm sure that, given $5 million in seed money, they could provide 1000x the value for the industry writ large if they could hire a dedicated team for the product, like all those other companies with 100,000x the budget.


If Mozilla were to kill adblockers, there's basically no reason to not use Chromium. It's pretty much the only relevant difference between Chromium and Firefox these days.

It's truly impressive how they've managed to do every user-hostile trick Google Chrome also did over the years, for no real clear reason besides contempt for their users' autonomy, I suppose. Right now the sole hill Mozilla really has left is adblockers, and they've talked about wanting to sacrifice that?

It truly boggles the mind to even consider this. That's not 150 million, that's the sound of losing all your users.


Insane that they're dropping client certificates for authentication. Reading the linked post, it's because Google wants them to be separate PKIs and forced the change in their root program.

They aren't used much, but they are a neat solution. Google forcing this change just means there's even more overhead when updating certs in a larger project.


The certificates serve different purposes. It might feel like a symmetric arrangement, but it isn't. On the whole I think implementing this split is sensible.

I might add I've changed my mind a bit on this.


Google doesn't understand how the real world works. Big shock.


They do understand, this is moat digging.


It's a good change. I've seen at least one company that had misconfigured mTLS to accept any client certificate signed by a trusted CA, rather than just by the internal corporate CA.


I (partially) agree that it is a good change, but for a different reason. For security purposes, certificates should include only the permissions that are required. Maybe they ought to allow certificates that include both if you have a use for it (which, as I have mentioned, you usually should not need, because you will probably want to use different certificates instead), but unfortunately they do not allow that.
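A sketch of what that scoping looks like in practice (names are made up; assumes an OpenSSL recent enough for -addext): a cert minted with only serverAuth in its Extended Key Usage carries just the one permission it needs and can't double as a client certificate.

```shell
# Self-signed cert scoped to server authentication only.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout srv.key -out srv.crt -days 30 \
  -subj "/CN=example.test" \
  -addext "extendedKeyUsage=serverAuth"

# Inspect the EKU extension: "TLS Web Server Authentication" and
# nothing else, so a client-auth use of this cert should be rejected.
openssl x509 -in srv.crt -noout -text | grep -A1 "Extended Key Usage"
```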


Should we remove anything that was at some point misconfigured somewhere?


I won't mind?

But in this case, the upsides are definitely greater than in the usual case.


We can get rid of computers altogether then but I'm not sure that would improve anything.


Is that a temporary situation? Is it that big a deal to implement a separate set of roots for client certs? Or do you mean that the entire infrastructure is supposed to be duplicated?


I think client certificates are a good idea, although it is usually more useful to use different certificates than those for the domain names, I think. (I still think CA/Browser Forum is not very good, despite that; however, I still want to mention my point.)


It's technically possible to get any Android app to accept user CAs. Unfortunately it requires unpacking it with apktool, adding a network security config override to the XML resources and pointing AndroidManifest.xml at it. Then you restitch the APK with apktool, zipalign it and finally sign it with apksigner.

Doesn't need a custom ROM, but it's so goddamn annoying that you might as well not bother. I know how to do these things; most users won't and given the direction the big G is heading in with device freedom, it's not looking all that bright for this approach either.
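For reference, the override mentioned above is Android's network security config; a minimal sketch of the XML (the file name is conventional, the manifest attribute is real) that re-enables trust for user-installed CAs:

```xml
<!-- res/xml/network_security_config.xml -->
<network-security-config>
    <base-config>
        <trust-anchors>
            <!-- keep the system CAs and additionally trust user CAs -->
            <certificates src="system" />
            <certificates src="user" />
        </trust-anchors>
    </base-config>
</network-security-config>

<!-- AndroidManifest.xml then needs:
     <application android:networkSecurityConfig="@xml/network_security_config" ...> -->
```

Since Android 7, apps trust only system CAs by default unless they opt in like this, which is why the repack dance is needed at all.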


For a lot of developers, the current biggest failure of open source is the AWS/Azure/GCP problem. BigCloud has a tendency to just take well-liked open source products, provide a hosted version of them, and as a result absolutely annihilate the market share of the entity that originally made the product (which usually made money by offering supported and hosted versions of the software). Effectively, for networked software (which is the overwhelming majority of software products these days) you might as well use something like BSD/MIT rather than any of the GPLs[0], because they practically have the same guarantees; it's just that the BSD/MIT licenses don't contain language that makes you think they do things they actually don't. Non-networked software like kernels, drivers and most desktop software doesn't have this issue, so it doesn't apply.

Open source for that sort of product (which most of the big switches away from open source have been about) only further entrenches BigCloud's dominance over the ecosystem. It absolutely breaks the notion that you can run a profitable business on open source. BigCloud basically always wins that race, even if they aren't cheaper, because the company is using BigCloud already, so using their hosted version means cutting less red tape internally; the difficulty of getting people to agree on BigCloud is much lower than adding a new third party you have to work with.

The general response to this issue from the open source side tends to just be to accuse the original developers of being greedy/only wanting to use the ecosystem to springboard their own popularity.

---

I should also note that this generally doesn't apply to the fight between DHH and Mullenweg that's described in the OP. DHH just wants to kick a hornet's nest and get attention now that Omarchy isn't the topic du jour anymore - no BigCloud (or, in this case, more likely a shared hosting provider) is going to copy a random kanban tool written in Ruby on Rails. They're copying the actual high-profile stuff like Redis, Terraform and whatever other recent examples you can think of that got screwed by BigClouds offering their services that way (shared providers pretty much universally still use the classic AMP stack, which doesn't support a Ruby project, immunizing DHH's tool from that particular issue as well). Mullenweg by contrast does have to deal with Automattic not having a stranglehold on being a WordPress provider, since the terms of his license weren't his to make to begin with; b3/cafelog was also under the GPL and WordPress inherited that. He's been burned by FOSS, but it's also hard to say he was surprised by it, since WP is modified from another software product.

[0]: Including the AGPL, it doesn't actually do what you think it does.

