Hacker News | super256's comments

I don't think sabotaging a company just because you don't want to work with a certain framework and deploy it on k8s is a good idea.

No, this is wrong.

WSL2 distributions share the same Linux kernel. They only get their own root filesystem with a Linux userland (/bin, /usr, /lib etc.) and some WSL config metadata. This is then stored as a virtual disk image (which is probably where your belief comes from). But the kernel runs in a single utility VM. The distros share that kernel instance and are separated via namespaces only.

This makes running multiple WSL2 distributions in parallel very performant btw, as there is no world switch.


I stand corrected. It makes sense that it is a chroot/rootfs rather than fully independent VMs.

re: side-by-side running, I always get socket and/or port problems when doing that. Without having looked into it at all, I figure it's NAT collisions.
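Without having dug in either, the symptom is easy to reproduce at the socket level with plain Python (nothing WSL-specific; the loopback address and OS-picked port are just for illustration):

```python
import socket

# First listener grabs an arbitrary free port on loopback.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))   # port 0 = let the OS pick one
port = a.getsockname()[1]
a.listen()

# A second bind to the same addr:port fails with EADDRINUSE --
# the same class of error you see when two distros (or the
# Windows host) end up claiming the same forwarded port.
b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    b.bind(("127.0.0.1", port))
except OSError as e:
    print("collision:", e)
finally:
    a.close()
    b.close()
```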


Every accepted PR for supporting insecure phones eventually becomes a maintenance burden, and potentially a security vulnerability. If they don't want to spend time on it, it's okay to decline such PRs.

1) this reads like it's posted by an LLM

2) why could they not just up the prices for new deployments, like they did with their dedicated servers? I think that would be fairer to existing customers

If you have a company, I can recommend Leaseweb for cheap hosting. I host my personal stuff like my email and my ente.io instance there. They are cheaper than Hetzner (even before the new price increase) if you don't need managed k8s.


You don't even need to do requests if you are the owner of the URL. Robots.txt changes are applied retroactively, which means you can disallow crawls to /abc, request a re-crawl, and all past snapshots which match the new rule will be removed.
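To see how a newly added rule matches paths that were archived long before it existed, here's a small sketch with Python's stdlib robots.txt parser (the /abc rule and the example URLs are made up):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Feed an in-memory robots.txt instead of fetching one over HTTP.
rp.parse([
    "User-agent: *",
    "Disallow: /abc",
])

# Snapshots captured before the rule was added still match it now:
print(rp.can_fetch("*", "https://example.com/abc"))       # disallowed
print(rp.can_fetch("*", "https://example.com/abc/page"))  # disallowed
print(rp.can_fetch("*", "https://example.com/other"))     # still allowed
```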


I prefer archive.today because the Internet Archive’s Wayback Machine allows retrospective removals of archived pages. If a URL has already been crawled and archived, the site owner can later add that URL to robots.txt and request a re-crawl. Once the crawler detects the updated robots.txt, previously stored snapshots of that page can become inaccessible, even if they were captured before the rule was added.

Unfortunately this happens more often than one would expect.

I found this out when I preserved my very first homepage I made as a child on a free hosting service. I archived it on archive.org, and thought it would stay there forever. Then, in 2017 the free host changed the robots.txt, closed all services, and my treasured memory was forever gone from the internet. ;(


This information is now many years out of date - they no longer have this policy.


Any idea when that changed? I've been unable to access historical sites in the past because someone parked the domain and had a very restrictive robots.txt on it.


Even so you can still just request your site to be removed: https://help.archive.org/help/how-do-i-request-to-remove-som...


OpenAI forces users to verify with their ID + face scan when using Codex 5.3 if any of their conversations was deemed high risk.

It seems like they currently have a lot of false positives: https://github.com/openai/codex/issues?q=High%20risk


They haven't asked me yet (my subscription is from work with a business/team plan). Probably my conversations are too boring.


Try something not boring and see what happens?


I found PipePipe to be more stable, break less often, and have more features.

https://pipepipe.dev/


How can it be more stable if it still uses NewpipeExtractor?


Apparently they forked the extractor some years ago and have been maintaining it independently, without merging anything from the original branch.


Positive publicity. Valve does many things that are received poorly (e.g. cancelling Counter-Strike fan projects, opaque lootbox gambling, etc.), but they do enough good things that those are quickly forgotten.


Also, modern native UIs have started looking like garbage on desktops/laptops, where you usually want high information density.

Just look at this TreeView in WinUI 2 (w/ Fluent Design) vs a TreeView in the good old Event Viewer. It just wastes SO MUCH space!

https://f003.backblazeb2.com/file/sharexxx/ShareX/2026/02/mm...

And imo it's just so much easier to write a webapp than to fiddle with WinUI. Of course you can still build on MFC or Win32, but meh.

