Hacker News | abjKT26nO8's comments

No, one cannot. Modern medicine is part of our environment. Failure to make use of it is a failure to adapt to the current environment, which lessens the disadvantage of individuals with a higher risk of cancer, who may have other advantages that let them thrive in this environment.


Jira. My only pet peeves are:

* it breaks native keyboard shortcuts. After disabling the shortcut overrides in Settings, "/" becomes a no-op (which is odd, since disabling the overrides worked in Confluence)

* the markup is non-standard (but I can live with that)

* sometimes it logs me out just as I'm about to post a comment, and everything I wrote in the comment box is lost


It only does exact whole-word searches. If a word appears in a slightly different form, or you type only part of it, it won't be found.


Thanks. That's very poor alright, I'll keep a note of it.


On the other hand, Swift changes so frequently that book authors teaching the language can't keep up with Apple to write up-to-date books. By the time a book comes out covering version N, we're a couple of months away from version N+1. In this situation I can't trust that I'll be able to buy a book, learn the language, and become comfortable with it before a new version with major changes arrives. It's very shaky ground.

As for performance, it would be cool if every single thing wasn't behind an atomic reference counter, making it slower than even garbage-collected Go: https://media.ccc.de/v/35c3-9670-safe_and_secure_drivers_in_... (the relevant part starts at 33:06).


Regarding language changes, I think this has improved quite a bit. I forget if stability was introduced in 4.x or 5.x, but code written today should be fine even when run through a new major version of the compiler. You can simply ignore new language features you aren't ready for until your codebase is due for a refactor.

Regarding ARC, you don't have to use reference types in Swift, and the community seems to agree that structs are more suitable than classes for most use cases (a SwiftUI app is made of structs that conform to certain protocols, and those structs can be initialized, copied, and "modified" with little overhead).

The cool thing about Swift is that structs can still have methods, computed properties, and custom initializers. So if you're coming from Python or Ruby or Java, or think OOP can help you organize your code, you don't have to throw away everything you've learned in order to be productive and write elegant code (and Swift brings a few new OOP tricks of its own).
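To make that concrete, here's a small sketch (the `Point` type and its members are my own illustration, not from any particular codebase) of a Swift struct with a computed property, a mutating method, and value-copy semantics:

```swift
struct Point {
    var x: Double
    var y: Double

    // Computed property: derived on each access, no stored state.
    var magnitude: Double { (x * x + y * y).squareRoot() }

    // Methods that change a struct's fields must be marked `mutating`.
    mutating func translate(dx: Double, dy: Double) {
        x += dx
        y += dy
    }
}

var a = Point(x: 3, y: 4)   // memberwise initializer is synthesized
let b = a                   // value semantics: `b` is an independent copy
a.translate(dx: 1, dy: 0)
print(a.x, b.x)             // mutating `a` leaves the copy `b` untouched
print(b.magnitude)
```

Because `b` is a copy, no reference counting is involved when it's passed around, which is the point about avoiding ARC overhead.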


> I forget if stability was introduced in 4.x or 5.x, but code written today should be fine...

They've always provided source compatibility modes in new releases of the compiler so you can delay making syntax changes to an existing codebase if you need to.

You're thinking of ABI stability, which is in place as of Swift 5 and doesn't have anything to do with what the parent is talking about.


For some time; at some point newer compilers will drop support for older versions.


> Swift changes so frequently that book authors teaching the language can't keep up

This hasn't been true for a couple years.


I have a terminal bound to "<super>+<enter>". From there I run "cat > title.txt" or "cat >> title.txt" and type whatever I want to save. It's about as frictionless as it gets.


> Yes, in theory all the people who refuse to reproduce will be gone and are replaced by the children of those who want to reproduce.

Those who do want to reproduce will also be replaced by the next generations. Natural selection doesn't select individuals, it selects genes. And your genes are well represented in the rest of the population.


Just because an average user isn't quite able to put a finger on their frustration and its source doesn't mean it isn't there. Studies show that even when users can't tell it's the program's performance that is infuriating them, they are still more tense using it and will prefer a faster alternative.

When phones with touch screens entered the market, we often put up with the latency of touch interaction, but it was irritating nevertheless. Then the early iPhones showed how low the latency could be and how much more pleasant that makes the experience. iPhones have degraded in this regard since then, and Android phones haven't caught up even with current iPhones. I'll never use an Android, and one of the main reasons is exactly this: latency.


> the Win10 calculator somehow needs a loading screen

This right here made my day. No further comment on the state of technology today needed.


The original Emacs was written for Multics at a time when people outside of Bell Labs were largely unaware of the existence of Unix[1]. It also doesn't follow the Unix philosophy of making small CLI utilities composed with pipes. As I understand it (though I haven't used it), Acme is extended with external programs which communicate through pipes just like the traditional Unix utilities.

[1]: https://www.jwz.org/doc/emacs-timeline.html


The first version of Emacs was not written in Lisp; instead it was written in TECO, a horrible editor/programming language.


I mostly agree with you and I'm not going to get even near Snaps and Flatpaks.

However, ignoring the aspect of software distribution, wouldn't you agree that the approach taken by the Linux desktop today is deficient security-wise? For example, I would like to be able to give mbsync (or Thunderbird or whatever) my IMAP password without giving it to any other program. So I don't want to store it in mbsync's config file in plain text. Neither will I use gnome-keyring (or any other keyring) because it doesn't have any kind of "program authorisation". Any program can just spawn a new "secret-tool" process and get my credentials from gnome-keyring.

I've been thinking for a while about implementing a keyring that runs as a daemon setuid to a dedicated user and checks which program is sending it requests, using /proc/pid/exe. But I'm not sure it's a trustworthy source of truth: how do namespaces, for example, affect what's visible in /proc/pid/exe? I know you've been developing himitsu[1]. Have you thought about this problem in that context?

[1]: https://git.sr.ht/~sircmpwn/himitsu
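For what it's worth, the check I have in mind looks roughly like this (a hedged sketch: the function names and the allow-list are hypothetical, the peer pid would come from SO_PEERCRED on the daemon's Unix socket, and the namespace question stays open — a client in a different mount namespace may resolve to a path that means nothing to the daemon):

```swift
import Foundation
#if canImport(Glibc)
import Glibc
#endif

// Resolve the executable path of a peer process from /proc/<pid>/exe.
// readlink(2) does not NUL-terminate, so the buffer is pre-zeroed.
func peerExecutable(pid: pid_t) -> String? {
    var buf = [CChar](repeating: 0, count: 4096)
    let n = readlink("/proc/\(pid)/exe", &buf, buf.count - 1)
    guard n > 0 else { return nil }
    return String(cString: buf)
}

// Hypothetical allow-list: which binaries may read which secrets.
let policy: [String: Set<String>] = [
    "/usr/bin/mbsync": ["imap-password"],
]

func mayRead(secret: String, pid: pid_t) -> Bool {
    guard let exe = peerExecutable(pid: pid) else { return false }
    return policy[exe]?.contains(secret) ?? false
}

// The daemon's own binary isn't in the policy, so this denies access.
print(mayRead(secret: "imap-password", pid: getpid()))
```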


I agree with you, but the solution would have been Plan 9 namespaces, not Linux containers. What we're working towards today is awful.

