Hacker News | zokier's comments

You jest, but I think there is a kernel of truth here. I do think people should be doing more (friendly) forks instead of funneling everything through upstream.

Browser version numbers are in the hundreds and it doesn't seem to be a problem.

Indeed! I think both 0-based versioning and this (maybe?) downside I bring up address the tension between wanting to limit the damage caused by breaking changes and retaining the ability to make them.

To me the actual book section of Oodi is not particularly interesting/inspiring/impressive. It's not bad, but it is pretty mundane and gets overshadowed by all the other stuff going on in the building.

Famously, the actual main library at Pasila has a much larger book collection. Oodi is more of a community space / showpiece.

Back when it opened, Oodi was noted in the media as being particularly unimpressive when it came to actually having books.

https://www.hs.fi/kulttuuri/art-2000005933560.html


Fair, but it is part of a pretty large library system, and you can order whatever you want for pickup at Oodi.

Well, the book collection of any particular library doesn't matter much nowadays, since you can order a book online and have it delivered to your closest library. So it's more like a public space.

Do you see "Offset Geometric Contact" paper fitting into this project somehow? https://graphics.cs.utah.edu/research/projects/ogc/

I actually have an implementation of that too, since I was fascinated by the twisting cloth example, but need to figure out how best to incorporate it, or if it’s better in a standalone experiment.

I don't think it's cool to hotlink every single raw image from NASA's servers here.

Generally yeah, but NASA seems to explicitly allow hot linking?

> NASA images may be used as graphic “hot links” to NASA websites, provided they are used within the guidelines above.

https://www.nasa.gov/nasa-brand-center/

As long as you don't "explicitly or implicitly convey NASA's endorsement of commercial goods or services" and "NASA [is] acknowledged as the source of the material", they seem fine with it.

Seems this project does miss that last part, though; I don't see any mention of where the images are from.


Here is a map: https://science.nasa.gov/mission/msl-curiosity/location-map/

It's nowhere near the peak and is very unlikely to reach it.


I’m aware of the map but I’m unclear on where the peak/ridge is.

The main peak is under the letter "C" in the "Gale Crater" text on that map.

Oh wow, no wonder I never found it; I was always zoomed way too far in.

On the other hand, basic surveying is centuries old. If you lose a square-km patch of land, it is not due to lack of technology, even in the previous century.

Even a sextant and decent watch should be able to get you to within a nautical mile or two.
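
A rough sanity check on that claim (my own back-of-the-envelope arithmetic, not from the thread): one arcminute of a great circle is almost exactly one nautical mile, so a sextant's angular error maps directly to position error.

```python
import math

# One arcminute of a great circle is almost exactly one nautical mile
# (1852 m), so a sextant's angular error converts directly into position
# error on the ground.
EARTH_RADIUS_M = 6_371_000  # mean Earth radius

m_per_arcminute = EARTH_RADIUS_M * math.radians(1 / 60)
print(round(m_per_arcminute))  # ~1853 m, i.e. ~1 NM per arcminute

# A careful sight good to 1-2 arcminutes therefore bounds the fix
# to roughly 1-2 nautical miles.
```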

While everyone is commenting about drm, there is another factor to consider: TLS. These old Kindles definitely do not support up-to-date TLS ciphersuites and understandably Amazon wants/needs to drop insecure ciphersuites from public endpoints at some point. I'm pretty sure that is also the reason why the Wikipedia integration for these old Kindles broke ages ago.
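
As a sketch of the mechanism (the exact version cut-off on Amazon's side is my assumption, not documented): once a server raises its minimum protocol version, a device whose client stack is frozen at an older version simply can't complete the handshake.

```python
import ssl

# Sketch: a server policy that drops legacy protocol versions, roughly
# what a public endpoint does when it retires old protocols/ciphersuites.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Hypothetical stand-in for a circa-2010 device whose client stack tops
# out at TLS 1.1 and never receives updates.
old_client_max = ssl.TLSVersion.TLSv1_1

# A handshake only succeeds if the version ranges overlap.
print(old_client_max >= server_ctx.minimum_version)  # False: locked out
```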

Software updates exist, I’m sure they could support it if they wanted to.

Ed25519 isn’t so new that the hardware wouldn’t be capable.


Switzerland also happens to have over 5x the population density of the USA, and an 80% higher median household income, based on a quick Google search.

While Switzerland has higher median HHI than the US as a whole, the Bay Area in California does have comparable median household income.

In the Bay Area, Sonic does offer 10Gbps fiber internet in some places on new buildouts.

I struggled to find a use case for it, except as a WAN between a homelab and a remote datacenter where I could do crazy things like run an NFS server over the internet or stream training data to a GPU, etc.


> crazy things like run an NFS server over the internet

Is that so crazy? If 10G was the default, you could just plop a cheap NAS at home and nobody would need to pay monthly subscriptions for cloud services.


How is 1G insufficient for running a NAS over the internet?

1Gbps is even slower than a SATA SSD that can saturate a 6Gbps SATA link.

10Gbps gets you speeds only a few times slower than a PCIe 3.0 NVMe SSD. Except you can run that over the Internet!
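
The arithmetic behind that comparison (the drive throughput figures are ballpark numbers, not measurements from the thread):

```python
# Link speeds are quoted in bits per second; drive throughput in bytes.
def gbps_to_mbytes_per_s(gbps: float) -> float:
    return gbps * 1000 / 8  # 1 Gbit/s = 125 MB/s, ignoring protocol overhead

wan_1g  = gbps_to_mbytes_per_s(1)    # 125 MB/s
wan_10g = gbps_to_mbytes_per_s(10)   # 1250 MB/s

sata_ssd_mbs   = 550    # ballpark sequential throughput, SATA III SSD
nvme_pcie3_mbs = 3500   # ballpark for a PCIe 3.0 x4 NVMe SSD

print(sata_ssd_mbs / wan_1g)     # 4.4: a lone SATA SSD outruns 1 Gbps
print(nvme_pcie3_mbs / wan_10g)  # 2.8: 10 Gbps is within ~3x of NVMe
```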


I'm aware of how fast an SSD is, but you're probably only accessing your home NAS from your phone or laptop, which most likely won't have more than 1Gbps, if that. If you're a power user, you probably already have high-speed networking inside your own home to max out a 10G link.

If I only have 1Gbps, I'm not going to be using a NAS for something like /home, or other things that need to be performant.

For cloud services, I only need to match the bandwidth my cellphone has for my home NAS, so 1Gbps would be fine.

10Gbps is like.... my /home on my desktop could be served via NFS from somewhere else and it would probably be barely noticeable. That's just another level of crazy.


If only we didn't need redundancy...

Of the publicly available sources, I think Cloudflare's Radar is one of the better ones. Silver linings of having such a wide dragnet on the internet. It puts Linux market share at 3-4%, with some regional variance:

https://radar.cloudflare.com/explorer?dataSet=http&groupBy=o...

Fun tidbits, Finland is at ~10% (!), and Germany at 6.3%.


This was probably a lot more true in the past but Linux users tend to be more privacy conscious and do things like spoof their user agent, so this is almost certainly an undercount. You basically used to have to do this to browse the web before Firefox became one of the dominant browsers.

I don't know anyone who goes through the trouble to spoof their user agent, and I know plenty of Linux users.

Unfortunately I have to use some government websites which refuse to work when my user agent contains "Linux x86_64". So I just always spoof it.
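
For what it's worth, in Firefox this can be done per profile via `user.js` (or the same preference in about:config). The override string below is just an example; keeping the Firefox version token consistent with your real build helps you stand out less:

```js
// user.js in the Firefox profile directory. Hypothetical override:
// report Windows instead of Linux, keep the rest of the UA realistic.
user_pref("general.useragent.override",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0");
```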

This is the reality - most people won't spoof until they figure out it's the way to make a specific site work; and then they'll likely spoof for everything.

I'd also like to add that we forget that we're doing it, or at least I do. Once you set something up like that, there's never any reason to get rid of it; nobody is positively discriminating towards Linux.

I love when a ruleset (firewall, for example) has a "comments" field because I inevitably forget why I added something and then Chesterton's fence means I leave it forever, lest I spend hours a year later wondering why something broke.

Every time I try to change my user agent with a FF extension I get hit with brutal cloudflare captcha loops. How are you changing your user agent in a way that this is not a problem?

The archwiki Firefox privacy guide comes to mind, which mentions UA spoofing:

https://wiki.archlinux.org/title/Firefox/Privacy


Actual reason: SBC retro handheld consoles now run Linux, and people are using them to play Steam indie games. The China holiday had some blowout pricing.

Non-primary devices are more likely to run Linux. Primary devices are still Windows.


I do, to access YouTube TV on my Ubuntu HTPC.

Tons of people did and do this to get higher resolution on a certain streaming site.

Privacy-minded Linux users probably also know that spoofing your user agent is likely to increase fingerprint entropy and actually decrease privacy. It may have been true in the past, but I don't think anyone even recommends it anymore.

There are still plenty of websites that check the OS, and if it's not macOS, Windows, or Android, it's no service for you. Faking your UA is not always about privacy; it's about defeating stupidity.

You should only do this on websites that actually require it otherwise you're almost certainly going to cause more problems than you'll solve.

Messing with the UA header is going to get you flagged by every bot detection tool because when you change your header from "Firefox on Linux" to "Chrome on Windows" your fingerprints don't add up anymore and you look exactly like a poorly written bot. You're likely going to see more captchas, you might get blocked or rate limited more often, and get placed under increased scrutiny, orders held for verification, silently filtered or shadow banned, etc.


Browser, yes, but OS? Rarely. I have issues with Firefox, but I've never had Chromium not working too.

In any case, it would be silly to assume services measuring OS popularity would put up such limitations. And more likely than not, people are changing their UA as a workaround on a case-by-case basis rather than making it a default, since that's going to cause trouble.

In the last decade, the only time I actually had to touch the UA is when breaking ToS with curl :D


The only websites that really do this anymore are ones that are delivering native code for those platforms, or ones requiring DRM that only works on those platforms.

Even when that is the case (which is a minority of the time), just because I'm using Linux doesn't mean that I don't want to download some Windows software.

But well, I haven't had to spoof my browser's UA for a few years. If some site refuses it, I'll just move on. (Including some that started doing it after I bought thousands of dollars' worth of stuff from them.)


I'm sure there are some, but having used Linux for 32 years, it's been at least 20 years since I needed to do that.

Actually that sounds like exactly the sort of nuanced reality that “privacy-conscious Linux users” aren’t that likely to know at all.

The EFF's "Panopticlick" paper was published in 2010 [1], and together with Firefox/Tor research, that knowledge became mainstream. That's why privacy guides don't recommend it. The Arch wiki linked above has this warning in bright red:

> "Changing the user agent without changing to a corresponding platform will make your browser nearly unique."

Sorry, I am not sure, if arguing about nuanced reality is the battleground, where I see you thriving.

[1] https://coveryourtracks.eff.org/ (browser test since 2014)


If you spoof user agent, you will get more captchas because it won't match their other fingerprinting.

You also get more captchas because you are on Linux; I see the Cloudflare one on my computer every time.

Used to be worse. Something happened in the last year, and I'm seeing way, way fewer random captchas for regular use from a residential IP. In '22-'24 it used to be extremely common; now it's an event when it happens. I also went from Mint to plain Ubuntu, so that might have something to do with it?

It's a good thing too, because when I see the Cloudflare captcha I try it once and if that doesn't work then I just close the tab and add it to the list of non-functioning websites.

Cloudflare captcha = infinite loop of captchas (if it doesn't work on the first try). You can give up the moment that happens, because you will never get to the website itself.

