jcgl's comments

Jujutsu too[0]:

> Jujutsu keeps track of conflicts as first-class objects in its model; they are first-class in the same way commits are, while alternatives like Git simply think of conflicts as textual diffs. While not as rigorous as systems like Darcs (which is based on a formalized theory of patches, as opposed to snapshots), the effect is that many forms of conflict resolution can be performed and propagated automatically.

[0] https://github.com/jj-vcs/jj


That sounds great. Was that proprietary tooling? I'd be interested in some such thing.


The tool (iron) isn't open source, but there are a bunch of public talks and blogs about how it works, many of which are linked from the github repo[1].

It used to be "open source" in that some of the code was available, but afaik it wasn't ever possible to actually run it externally because of how tightly it integrated with other internal systems.

[1] https://github.com/janestreet/iron


If I understood correctly, the same can be done in VS Code with the GitHub plugins (for GitHub PRs).

It's pretty straightforward: you check out a PR, move around, and either make some edits (which you can commit and push to the feature branch) or add comments.


Good to know about its existence. I think I'll have to do my own sleuthing though, since I'm a (neo)vim user who dislikes GitHub.


Yeah, it's called git: make your own branch from the PR branch, commit and push the nitpick change, tell the author, and they can cherry-pick it if they approve.

GitLab has this functionality right in the web UI. Reviewers can suggest changes, and if the PR author approves, a commit is created with the suggested change. One issue with this flow is that it doesn't run any tests on the change before it's actually in the PR branch, so... really best for typos and other tiny changes.

Alternatively you actually, you know, _collaborate_ with the PR author, work it out, run tests locally and/or on another pushed branch, and someone then pushes a change directly to the PR.

The complaints about nitpicks slowing things down too much or breaking things sound like solo-hero devs who assume their god-like PRs should be effectively auto-approved because how could their code even contain problems... No wonder they love working with "Dr Flattery the Always Wrong Bot".

*(Hilarious name borrowed from Angela Collier)


I think you misunderstood the tooling I was asking about. This is what was mentioned:

> at my last job code review was done directly in your editor (with tooling to show you diffs as well).

That's not covered by git itself. And it's not covered by Gitlab, GitHub, or any other web-based forge.

> Alternatively you actually, you know, _collaborate_ with the PR author, work it out, run tests locally and/or on another pushed branch, and someone then pushes a change directly to the PR.

Of course you should collaborate with the author. This tooling is a specific means to do that. You yourself are of course free to not like such tooling for whatever reason.

> The complaints about nitpicks slowing things down too much or breaking things sound like solo-hero devs who assume their god-like PRs should be effectively auto-approved because how could their code even contain problems... No wonder they love working with "Dr Flattery the Always Wrong Bot".

Did you maybe respond to the wrong person? I'm not sure how that relates to my comment at all.


> I see this over and over again, wish there was some way to bet on it.

One can play with bond markets and various ETFs or other derivatives, depending on what you envision. But even if your bet is qualitatively correct (that trust in the US ebbs for decades), it's hard to get the timing right to make an actual bet.


Fwiw (in case it hadn't occurred to you already), there's no technical requirement to run your NAT64 on your router/modem/CPE. You could run the NAT64 on a Raspberry Pi or some other little device for instance.


And that kind of NAT effectively doesn't exist in practice, so that's quite beside the point. Such a NAT doesn't scale to more than 24 devices behind it.


>> You can have a stateless NAT: device x.x.x.y will get outbound source ports rewritten to (original port) << 8 + y.

> And that kind of NAT effectively doesn't exist in practice […]

Anyone using IPv6 ULA and NPT would disagree.

* https://en.wikipedia.org/wiki/IPv6-to-IPv6_Network_Prefix_Tr...


See my reply to your sibling commenter. My comment was not about NAT in general, i.e. I was not denying the very real existence of stateless NAT. Rather, I was disputing the usefulness of the NAPT solution proposed above as a solution to public IPv4 address exhaustion.


> proposed above as a solution to public IPv4 address exhaustion.

It was not proposed as a solution (although it would work). I'm pointing out that in networking many names are conflated or used loosely against their specific definitions: NAT/Firewall; Router/Access Point/Gateway; etc.


No, it very much does. If you want to join two network segments such that on one side all devices are on 10.1.X.X and on the other all devices are on 10.2.X.X, you'd use a mapping between 10.1.a.b and 10.2.a.b.

See https://en.wikipedia.org/wiki/Network_address_translation#Me...
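If it helps, here's a minimal Python sketch of that kind of stateless one-to-one mapping (the prefixes are just the example ones from above; NPT proper does the same thing on IPv6 prefixes):

    import ipaddress

    def translate(addr: str, src_prefix: str, dst_prefix: str) -> str:
        # Statelessly map an address from one prefix into the other by
        # swapping the network bits and keeping the host bits (the a.b part).
        src = ipaddress.ip_network(src_prefix)
        dst = ipaddress.ip_network(dst_prefix)
        host_bits = int(ipaddress.ip_address(addr)) & int(src.hostmask)
        return str(ipaddress.ip_address(int(dst.network_address) | host_bits))

    # 10.1.a.b <-> 10.2.a.b, no per-connection state needed
    print(translate("10.1.42.7", "10.1.0.0/16", "10.2.0.0/16"))  # -> 10.2.42.7
    print(translate("10.2.42.7", "10.2.0.0/16", "10.1.0.0/16"))  # -> 10.1.42.7

Because the mapping is a pure function of the address, either side can translate independently and nothing has to track connections.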


The general context here is about NATting to the public internet at large, not between particular segments. And the parent of my comment was talking specifically about NAPT, which is different from the non-port-based NAT that you're talking about.


Yeah, as a solo dev quite new to frontend, that made me nope out of React almost immediately. Having to choose a bunch of critically important third-party dependencies right out of the gate? With how much of a mess frontend deps seem to be in general? No thanks.

I settled on Svelte with SvelteKit. Other than the stumbling block that was the Svelte 4 -> 5 transition, it's been smooth sailing. Like I said, I'm new here in the frontend world and don't have much to judge by. But it's been such a relief to have most things simply included out of the box.


I've been doing frontend since 2012 and I still don't understand why React became so popular.

No two React projects are the same. Like, even the router has at least three different mainstream options to choose from. It's exhausting.


Even when it's the same router package, these things break backward compatibility so often that different versions of the same package will behave differently.


That router thing seems crazy. I'm all for having options available. But not having, at a minimum, some blessed implementations for basic stuff like routers seems nuts. There is so much ecosystem power in having high-quality, blessed implementations of things. I'm coming from working primarily in Go, where you can use the stdlib for >80% of everything you do (ymmv), so I feel this difference very keenly.


> There is so much ecosystem power in having high-quality, blessed implementations of things.

Indeed. I work mainly in Angular because while it's widely regarded as terrible and slow to adapt, it's predictable in this regard.

Also now with typed forms, signals and standalone components it's not half bad. I prefer Svelte, but when I need Boring Technology™, I have Angular.

90%+ of all web apps are just lists of stuff with some search/filtering anyway, where you can look up the details of a list entry and of course CRUD it via a form. No reason to overthink it.


> widely regarded as terrible and slow to adapt

I know you are saying you work mainly in Angular, but for others reading this, I don't think this gives modern Angular the credit it deserves. Maybe that was the case in the late 20-teens, but the Angular team has been killing it lately, IMO. There is a negative perception due to the echo chamber that is social media, but meanwhile Angular "just works" for enterprises and scaling startups alike.

I think people who are burned out on decision fatigue with things like React should give Angular another try; they might be pleasantly surprised by how capable it is out of the box, and by how much less painful it is to press against the edges.


Strong disagree. Angular is cursed to the bone. It got a bit better recently, but it's still just making almost everything totally overcomplicated and bloated.


I'd say what you call bloated is in many cases basic functionality that I don't have to go looking for some third party package to fill. There is something to be said for having a straightforward and built-in way to do things, which leads to consistency between Angular projects and makes them easier to understand and onboard to.

IMO, it is only as complicated or simple as you want to make it these days, and claiming otherwise is likely due to focusing on legacy aspects rather than the current state of the framework.

FWIW, I'm not arguing that it's the "best" or that everyone should use it. Or that it doesn't still have flaws. Just that it is still firmly in the top set of 3-5 frameworks that are viable for making complex web apps and it shouldn't be dismissed out of hand.


Not only did it provide that background pressure, but desktop software is a complex domain. So it often pushes the bounds of Linux software overall. Systemd is the example I have in mind here, but I’m sure there are others too that I’m not thinking of.


IIRC, that limitation of Jool can be avoided by running instances in different network namespaces. Some examples here: https://nicmx.github.io/Jool/en/usr-flags-instance.html

Jool’s site also has really great articles and diagrams on different translation techniques. Highly educational. I know it’s really helped me.


The model only sees a stream of tokens, right? So how do you signal a change in authority (i.e. mark the transition between system and user prompt)? Because a stream of tokens inherently has no out-of-band signaling mechanism, you have to encode changes of authority in-band. And since the user can enter whatever they like in that band...

But maybe someone with a deeper understanding can describe how I'm wrong.


When LLMs process tokens, each token is first converted to an embedding vector. (This token to vectors mapping is learned during training.)

Since a token itself carries no information about whether it has "authority" or not, I'm proposing to inject this information in a reserved number in that embedding vector. This needs to be done both during post-training and inference. Think of it as adding color or flavor to a token, so that it is always very clear to the LLM what comes from the system prompt, what comes from the user, and what is random data.
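As a purely illustrative toy sketch (the sizes, the reserved index, and the authority values here are all made up), the plumbing would look something like overwriting one reserved dimension of each embedding before it enters the model:

    import torch

    VOCAB, DIM = 32000, 768   # hypothetical model sizes
    AUTHORITY_DIM = 0         # reserved slot in each embedding vector

    embedding = torch.nn.Embedding(VOCAB, DIM)

    def embed_with_authority(token_ids: torch.Tensor, authority: float) -> torch.Tensor:
        # Look up the learned token embeddings, then overwrite the reserved
        # dimension with an out-of-band authority level, e.g. 1.0 = system,
        # 0.5 = user, 0.0 = untrusted data. The model would have to be
        # post-trained with this channel present for it to mean anything.
        vecs = embedding(token_ids).clone()
        vecs[:, AUTHORITY_DIM] = authority
        return vecs

    system_vecs = embed_with_authority(torch.tensor([101, 2023, 2003]), 1.0)
    user_vecs = embed_with_authority(torch.tensor([2054, 2003, 2023]), 0.5)

The plumbing is the easy part; the hard part is the training that makes the model actually respect that channel.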


This is really insightful, thanks. I hadn't understood that there was room in the vector space that you could reserve for such purposes.

The response from tempaccsoz5 seems apt then, since this injection is performed/learned during post-training; in order to be watertight, it needs to overfit.


You'd need to run one model per authority ring with some kind of harness. That rapidly becomes incredibly expensive from a hardware standpoint (particularly since realistically these guys would make the harness itself an agent on a model).


I assume "harness" here just means the glue that feeds one model's output into that of another?

Definitely sounds expensive. Would it even be effective, though? The more-privileged rings have to guard against [output from unprivileged rings] rather than [input to unprivileged rings]. Since the former is a function of the latter (in deeply unpredictable ways), it's hard for me to see how this fundamentally plugs the hole.

I'm very open to correction though, because this is not my area.


My instinct was that you would have an outer non-agentic ring that would simply identify passages in the token stream that would initiate tool use, and pass that back to the harness logic and/or user. Basically a dry run. But you might have to run it an arbitrary number of times, since tools might be used to modify or append to the context.
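Something like this toy loop is what I have in mind (the tool-call marker and helper names are invented for the sketch, not any real API):

    import re

    # Invented marker format for the sketch; real systems use structured tool calls.
    TOOL_CALL = re.compile(r"<tool:(\w+)>(.*?)</tool>", re.DOTALL)

    def outer_ring_dry_run(inner_output: str) -> list[tuple[str, str]]:
        # Non-agentic pass: find anything that would initiate tool use,
        # but execute nothing.
        return [(m.group(1), m.group(2)) for m in TOOL_CALL.finditer(inner_output)]

    def harness(prompt: str, inner_model, run_tool, approve, max_rounds: int = 4) -> str:
        context = prompt
        out = ""
        for _ in range(max_rounds):      # tools may modify/append the context,
            out = inner_model(context)   # so this may take several passes
            pending = outer_ring_dry_run(out)
            if not pending:
                break
            for name, args in pending:
                if not approve(name, args):   # hand back to harness logic and/or user
                    return out                # rejected: stop before any side effects
                context += f"\n[{name} result]: {run_tool(name, args)}"
        return out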


You just add an authority vector to each token vector. You probably have to train the model some more so it understands the authority vector.


Do you have any experience self-hosting SourceHut? I’d really like to do so, but I get weak knees every time I look at the docs for it.


I don't, but I would be curious.

But I'm happy to contribute (money) to SourceHut; they're doing a good job.

