Hacker News | dietr1ch's comments

Google is just using its cozy position rather than pushing the frontier in most of their products

Are you talking about the US or Iran?

Unless I read the request incorrectly, Iran.

To me jj is an OK porcelain for git, but I find it worse than magit. Sure, it has some tricks up its sleeve for merging, but I just don't run into weird merges and have never needed more advanced commands like rerere.

What I would expect of the next VCS is to go beyond versioning the files and also version the environment, so "works on my machine"™ and configuring your git-hooks and CI become things of the past.

Do we need an LSP-like abstraction for environments and build systems instead of yet another definitive build system? IDK, my solution so far is sticking to nix, x86_64, and ignoring Windows and Mac, which is obviously not good enough for like 90%+ of devs.


> found Meta to have inadvertently stored certain passwords of social media users on its internal systems without encryption, and fined it €91m (£75m)

WTF? I thought that by 2010 people were already diligent enough to avoid even sending the password, and instead just hashed it locally before sending anything.


That is not standard even today. The main threat is in transit over the network, which https/TLS solves, but obviously this won’t stop error traces or logging on the server from including request bodies.

If you do hash locally (not sure I’ve seen any big players do this), you also need to be hashing server side (or else the hash is basically a plain text password in the database!)

That said, I'm not sure why companies don't adopt this double hashing approach. Complexity maybe? I know it could limit flexibility a little, as some services like to be able to automatically attempt capitalization variations (e.g. inverted caps lock) on the server. Anyways, in 2026 we should all be using passkeys (if they weren't so confusing to end-users, and so non-portable)


If you hash locally, isn't that effectively like using private/public keys but less secure? Might as well use the real thing then.

It's not effectively like using private/public keys at all.

The communication with the server must be secure; the extra hash only ensures that if the server's data gets leaked (e.g. through server logs), the password itself isn't compromised. The usual setup, I'd say, is to salt the password on the server and store (salt, hash(pw+salt)), but that still means the server handles the plain-text password, which might get logged by mistake.
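A minimal sketch of that server-side setup (hypothetical function names; a real deployment should use a slow KDF like bcrypt/scrypt/argon2 instead of plain SHA-256, which is shown here only to illustrate the structure):

```python
import hashlib
import hmac
import os

def store_password(pw: str) -> tuple[bytes, str]:
    # Generate a random per-user salt and store (salt, hash(salt + pw)).
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + pw.encode()).hexdigest()
    return salt, digest

def verify_password(pw: str, salt: bytes, digest: str) -> bool:
    # Recompute the salted hash and compare in constant time.
    candidate = hashlib.sha256(salt + pw.encode()).hexdigest()
    return hmac.compare_digest(candidate, digest)
```

Note that nothing here stops a logging middleware from capturing the plain-text `pw` before it reaches `store_password`, which is exactly the failure mode being discussed.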


I meant in terms of complexity: if I hash the key on the client but don't want the hash to become the password, then I have to do some dance with sharing a secret, or something that is as complex as, or more complex than, using private/public keys outright.

That's never been standard. Passwords in log files are a common issue; crazy that you can get fined 8 digits for it.


I tried Emacs, but realised I need NixOS to get the packages I depend on like git to download my config. I can't use stock emacs. There's a trick to get Emacs and termux to share packages, but not for nix-on-droid :/

You can do some signing hackery and allow Emacs to see executables from termux https://gsilvers.github.io/me/posts/20250921-emacs-on-androi...

For anyone who ends up here and is curious.


The only annoyance I've faced is waiting for nix to compile a local build. I'd have thought that larger distros would have no issues with it.

I wish they had a revenue goal after which they'd release openly; that way spending money on them would contribute to better open models in the long run.

This is how I think the public can fund and eventually get free stuff, just like properly organized private highways end up with the state/society owning a new highway after the private entity that built it got the profits it required to make the project possible.


There are a lot of options for doing things this way:

https://en.wikipedia.org/wiki/Threshold_pledge_system


No need to type `::1` anymore, you can instead just type `The new times take now beneath the new time while new times take the new year.`

OK. That's much easier. :D

> Have you ever run GNU Parallel on a powerful machine just to find one core pegged at 100% while the rest sit mostly idle?

Not exactly, but maybe I haven't used large enough NUMA machines to run tiny jobs?

I think usually parallel saturates my CPU and I'd guess most CPU schedulers are NUMA-aware at this point.

If you care about short tasks maybe parallel is the wrong tool, but if picking the task to run is the slow part AND you prefer throughput over latency maybe you need batching instead of a faster job scheduling tool.

I'm pretty sure parallel has some flags to allow batching up to K elements, so maybe your process can take several inputs at once. Alternatively, you can also bundle inputs as you generate them, but that might require a larger change to both the process that runs tasks and the one that generates the inputs for them.
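As a rough sketch of the "bundle inputs as you generate them" idea (not benchmarked): a plain `paste` can pack K lines into one before the job runner ever sees them, and GNU parallel's `-N` similarly passes up to K arguments per job.

```shell
# Bundle 4 inputs per output line; each '-' consumes one stdin line.
seq 1 8 | paste -d' ' - - - -
# With GNU parallel, -N4 would hand each job up to 4 arguments instead:
#   seq 1 8 | parallel -N4 echo    # sketch, assuming parallel is installed
```

Each worker invocation then amortizes the per-job dispatch overhead over K items instead of one.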


parallel works fine so long as the time per job is on the order of seconds or longer.

Let me give you an example of a "worst-case" scenario for parallel. Start by making a file on a tmpfs with 10 million newlines

    yes $'\n' | head -n 10000000 > /tmp/f1
So, now let's see how long it takes parallel to push all these lines through a no-op. This measures the pure overhead of distributing 10 million lines in batches. I'll set it to use all my CPU cores (`-j $(nproc)`) and to use multiple lines per batch (`-m`).

    time { parallel -j $(nproc) -m : </tmp/f1; }

    real    2m51.062s
    user    2m52.191s
    sys     0m6.800s
Average CPU utilization here (on my 14c/28t i9-7940x) is CPU time / real time:

    (172.191 + 6.8) / 171.062 = 1.0463516152 CPUs utilized
Note that there is 1 process pegged at 100% usage the entire time that isn't doing any "work" in terms of processing lines; it's just distributing lines to workers. If we assume that thread averaged about 0.98 cores utilized, it means that throughout the run parallel managed to keep only around 0.066 out of 28 CPUs saturated with actual work.

Now let's try with frun

    . ./frun.bash
    time { frun : </tmp/f1; }

    real    0m0.559s
    user    0m10.409s
    sys     0m0.201s
CPU utilization is

    ( 10.409 + .201 ) / .559 = 18.9803220036 CPUs utilized
Let's compare the wall-clock times:

    171.062 / 0.559 = 306x speedup
Interestingly, if we look at the ratio of CPU utilization (spent on real work):

    18.9803220036 / 0.066 = 287x more CPU usage doing actual work
which gives a pretty straightforward story: forkrun is ~300x faster here because it is utilizing ~300x more CPU for actually doing work.
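The arithmetic above, spelled out with the numbers copied from the `time` outputs (utilization = (user + sys) / real):

```python
# CPUs kept busy by each tool, from the `time` outputs above.
parallel_util = (172.191 + 6.800) / 171.062   # ~1.05 CPUs total
frun_util = (10.409 + 0.201) / 0.559          # ~18.98 CPUs total

wall_clock_speedup = 171.062 / 0.559          # ~306x

# parallel's distributor thread eats ~0.98 of its ~1.05 CPUs, leaving
# ~0.066 CPUs doing actual work; frun's advantage in useful work is then:
work_ratio = frun_util / 0.066                # ~287x, matching the text
```

Both ratios land near 300x, which is the "straightforward story" being made.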

This regime of "high frequency, low latency tasks" - millions or billions of tasks that take milliseconds or microseconds each - is the regime where forkrun excels and tools like parallel fall apart.

Side note: if I bump it to 100 million newlines:

    time { frun : </tmp/f1; }

    real    0m4.212s
    user    1m52.397s
    sys     0m1.019s
CPU utilization:

    ( 112.397 + 1.019 ) / 4.212 = 26.9268 CPUs utilized
which, on a 14c/28t CPU doing no-ops... isn't bad.
