Does it fix the current UX issue with Squash & Merge?
Right now I manually do "stacked PRs" like this:
main <- PR A <- PR B (PR B's merge target branch is PR A) <- PR C, etc.
If PR B merges first, PR A can merge to main no problem. If PR A merges to main first, fixing PR B is a nightmare. The GitHub UI automatically changes the "target" branch of the PR to main, but conflicts instantly spawn from nowhere. Try to rebase it and you're going to be manually looking at every non-conflicting change that ever happened on that branch, for no apparent reason (yes, the reason is that squash-merging PR A created a new squash commit at the head of main, and git just can't handle that or whatever).
So I don't really need a new UI for this, I need the tool to Just Work in a way that makes sense to anyone who wasn't Linus in 1998 when the gospel of rebase was delivered from On High to us unwashed Gentry through his fingertips.
Yes, we handle this both in the CLI and server using git rebase --onto
git rebase --onto <new_commit_sha_generated_by_squash> <original_commit_sha_from_tip_of_merged_branch> <branch_name>
So for ex in this scenario:
PR1: main <- A, B (branch1)
PR2: main <- A, B, C, D (branch2)
PR3: main <- A, B, C, D, E, F (branch3)
When PR 1 and 2 are squash merged, main now looks like:
S1 (squash of A+B), S2 (squash of C+D)
Then we run the following:
git rebase --onto S2 D branch3
Which rewrites branch3 to:
S1, S2, E, F
This operation moves the unique commits from the unmerged branch and replays them on top of the newly squashed commits on the base branch, avoiding any merge conflicts.
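That sequence can be reproduced locally; here's a minimal sketch (repo layout, file names, and the demo identity are invented, and GitHub's squash merges are simulated with `git merge --squash`):

```shell
#!/bin/sh
# Sketch: build the three stacked branches, squash-merge the first two,
# then repair branch3 with `git rebase --onto`. All names are illustrative.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q repo && cd repo
git checkout -q -b main
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m root

git checkout -q -b branch1                 # PR1: A, B
echo a > a.txt && git add a.txt && git commit -q -m A
echo b > b.txt && git add b.txt && git commit -q -m B

git checkout -q -b branch2                 # PR2: + C, D
echo c > c.txt && git add c.txt && git commit -q -m C
echo d > d.txt && git add d.txt && git commit -q -m D
D=$(git rev-parse HEAD)                    # tip of the merged branch

git checkout -q -b branch3                 # PR3: + E, F
echo e > e.txt && git add e.txt && git commit -q -m E
echo f > f.txt && git add f.txt && git commit -q -m F

git checkout -q main                       # squash-merge PR1 and PR2
git merge -q --squash branch1 && git commit -q -m S1
git merge -q --squash branch2 && git commit -q -m S2
S2=$(git rev-parse main)

git rebase -q --onto "$S2" "$D" branch3    # replay only E and F onto S2
git log --format=%s branch3                # F E S2 S1 root
```

Only E and F are replayed, so none of the already-squashed changes come up for conflict resolution.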
Conflicts spawn most likely because PR A was squashed, and once you squash, Git doesn't know that PR B's ancestor commits are the same thing as the squashed commit on main.
No idea if this feature fixes this.
Edit: Hopefully `gh stack sync` does the rebasing correctly (rebase --onto with the PR A's last commit as base)
> Conflicts spawn most likely because PR A was squashed, and once you squash Git doesn't know that PR B's ancestors commits are the same thing as the squashed commit on main.
Yeah, and I kind of see how git gets confused, because the squashed commits essentially disappear. But I don't know why the rebase can't be smart enough to see that the file content of the eventual destination commit (the squash) matches the tip of the branch, instead of rebasing one commit at a time.
The tip of B is the list of changes of both A and B, while the tip of main is now the squashed version of the changes of A. Unless a branch marks where A ends inside PR B, it looks more like you want to apply A and B on top of A again.
A quick analogy to math
main is X
A is 3
B is 5
Before, you have X + 3 + 5, which was equivalent to X + 8. But when you squash A onto X, from `main`'s point of view it looks like (X + 3) + (3 + 5), while from B's it should be X + (3 + 5). So you need to rebase B to remove its 3, so that it becomes (X + 3) + 5.
Branches only store the commit at the tip. The rest is found through the parent metadata in each commit (a linked list). Squashing A does not remove its commits: it creates a new commit with the old tip of `main` as its parent and sets that new commit as the tip of `main`. But the commits in B still refer to the old tip of `main` as their ancestor and still include the old commits of A, which is why you can't merge the PR: it would apply the commits of A twice.
I agree that this is annoying and unintuitive. But I don't see the simplest solution posted here, so:
All you need to do is pull main, then do an interactive rebase with the next branch in your stack with ‘git rebase -i main’, then drop all the commits that are from the branch you just merged.
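That drop step can even be scripted; a minimal sketch (the sed-as-sequence-editor trick assumes GNU sed, and the repo layout and names are invented):

```shell
#!/bin/sh
# Sketch: after the bottom PR is squash-merged, run `git rebase -i main`
# on the next branch and drop the already-merged commit. The todo list
# is edited by sed via GIT_SEQUENCE_EDITOR instead of by hand
# (GNU sed assumed; all names are illustrative).
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q repo && cd repo
git checkout -q -b main
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m root

git checkout -q -b prA
echo a > a.txt && git add a.txt && git commit -q -m A
git checkout -q -b prB
echo b > b.txt && git add b.txt && git commit -q -m B

git checkout -q main                      # squash-merge prA, as GitHub would
git merge -q --squash prA && git commit -q -m S1

git checkout -q prB                       # "pulled main"; now rebase prB
# Delete the todo line that picks commit A (its subject is "A" here):
GIT_SEQUENCE_EDITOR='sed -i "/ A$/d"' git rebase -q -i main
git log --format=%s prB                   # B S1 root
```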
I'm not sure I follow your workflow exactly. If PR B is merged, then I'd expect PR A to already be merged (I'd normally branch off of A to make B.)
That said, after the squash merge of A and a `git fetch origin`, you want something like `git rebase --update-refs --onto origin/main A C` (or whatever the tip of the chain of branches is)
The --update-refs will make sure PR B is in the right spot. Of course, you need to (force) push the updated branches. AFAICT the gh command line tool makes this a bit smoother.
If I'm following correctly, the conflicts arise from other commits made to main already - you've implicitly caught branch A up to main, and now you need catch branch B up to main, for a clean merge.
I don't see how there is any other way to achieve this cleanly; it's not a git thing, it's a logic thing, right?
I've no issue with the logic of needing to update feature branches before merging, that's pretty bread and butter. The specific issue with this workflow is that the "update branch" button for PR B is grayed out because there are these hallucinated conflicts due to the new squash commit.
The update branch button works normally when I don't stack the PRs, so I don't know. It just feels like a half baked feature that GitHub automatically changes the PR target branch in this scenario but doesn't automatically do whatever it takes for a 'git merge origin/main' to work.
> the "update branch" button for PR B is grayed out because there are these hallucinated conflicts due to the new squash commit
Those are not hallucinated. PR B still contains all the old commits of A, which means merging would apply them twice. The changes in PR B are computed against the merge base of PR B and main, which is the parent of squashed A. That would essentially mean applying A twice, which is not good.
As for updating PR B: PR B doesn't know where the commits of PR A (which are also in PR B) end, because PR A is not in main. Squashed A is a new commit, and its diff corresponds to the diff of a range of commits in PR B (the old commits of PR A), not the whole of B. There's a lot of metadata you'd need to store to be able to update PR B.
I guess to me, I'm looking at it from the perspective of diffing the repo between the squashed commit on main and the tip of the incoming PR. If there are merge conflicts during the rebase in files that don't appear in that diff, I consider that a hallucination, because those changes must already be in the target branch, and no matter what happened to those files along the way, it will always be a waste of my time to see them during an interactive rebase.
I don't think we need to store any additional metadata to make the rebase slightly smarter and able to skip over the "obvious" commits in this way, but I'm also just a code monkey, so I'm sure there are Reasons.
No, it's a Git thing arising from squash commits. There are workflows to make it work (I've linked the cleanest one I know that works without force pushing), but ultimately they're basically all hacks. https://www.patrickstevens.co.uk/posts/2023-10-18-squash-sta...
Yep that's how I do it if I have to deal with stacked PRs. I also just never use rebase once anything has happened in a PR review that incurs historical state, like reviews or other people checking out the branch (that I know of, anyways). I'll rebase while it's local to keep my branch histories tidy, but I'll merge from upstream once shared things are happening. There are a bunch of tools out there for merging/rebasing entire branch stacks, I use https://github.com/dashed/git-chain.
You "just" need to know the original merge-base of PR B to fix this. github support is not really required for that. To me that's the least valuable part of support for stacked PRs since that is already doable yourself.
The github UI may change the target to main but your local working branch doesn't, and that's where you `rebase --onto` to fix it, before push to origin.
It's appropriate for github to automatically change the target branch, because you want the diff in the ui to be representative. IIRC gitlab does a much better job of this but this is already achievable.
What is actually useful with natively supported stacks is if you can land the entire stack together and only do 1 CI/actions run. I didn't read the announcement to see if it does that. You typically can't do that even if you merge PR B,C,D first because each merge would normally trigger CI.
EDIT: i see from another comment (apparently from a github person) that the feature does in fact let you land the entire stack and only needs 1 CI run. wunderbar!
Oh, that's annoying. Seems to me there wouldn't have been an issue if you'd just merged B into A after merging A into main, or the other way around, but that already works fine as you pointed out.
I mean, if you've got a feature set to merge into dev, and it suddenly merges into main after someone merged dev into main, then that's very annoying.
It seems to parse just fine? They create some unknown mixture of methanol/ethanol (who knows what the ratio is, who cares, like you said, depends what you're making it from) and then raise it past the boiling point of methanol, throwing away everything that comes over while still under the boiling point of ethanol. It sounds like basic distillation to me.
> This is actually a myth. I’ll have to see if I can find the papers I read but mass spectrometry has shown that methanol comes out throughout the entire process. The idea that things come out at their boiling temperature is a drastic oversimplification.
Please do find those papers! They may be describing a radical new chemistry that I'm not familiar with.
To be clear - methanol boils at 64C and ethanol boils at 78C. Are you suggesting that in standard distillation, there is still some non-trace methanol coming over at 78C? If I personally observed that in a laboratory setting, I'd quickly assume measurement error or external contamination.
I suspect that the vapor of the mash is always a mix of the components, and even above the boiling point of methanol, it still produces a mixed vapor. At room temperature, all of the components produce some vapor and will evaporate. This continues as the temperature rises.
It's not clear to me that simple distillation of a methanol/ethanol mixture can produce either pure ethanol or pure methanol at any point, just as it's impossible to distill ethanol and water to pure ethanol (absolute alcohol) if the water is above a small percentage of the mixture.
You can't distill out pure methanol, as at the boiling point of methanol ethanol also has some vapor pressure, so you distill a mix. However above that boiling point you distilled out all methanol (with a mix of ethanol), and the remaining ethanol should be free from methanol.
This also matches what happens when distilling ethanol from water. You can't distill pure ethanol, but you can distill ethanol-free water afterwards.
"This also matches what happens when distilling ethanol from water."
Right, normal commercial ethanol production is 95% EtOH, 5% H2O (the constant boiling mixture/azeotrope). That's good enough for most uses but not all. The only problem the average person would ever likely encounter from the residual H2O would be in the application of alcohol-based coatings such as shellac, where it can cause whitish discoloration. Painters will occasionally use 99% EtOH, which is substantially more expensive (removing that residual H2O requires an altogether different process).
Yup, distillation never produces a pure product. Cask-strength whiskeys contain quite a lot of water, even though nobody is stupid enough to distill at 100C. Even an industrial column still can't go over 96% ABV.
There is always some amount of vapor pressure, even below the boiling point of a substance. Otherwise, neither water nor alcohol would evaporate by themselves at room temperature! The temperature we call the "boiling point" is just the temperature at which the vapor pressure equals the ambient pressure.
>To be clear - methanol boils at 64C and ethanol boils at 78C. Are you suggesting that in standard distillation, there is still some non-trace methanol coming over at 78C?
From what I remember, the highest concentration of methanol is in the tails. That should tell you everything.
Yes. It doesn’t work the way you think. When you mix chemicals together and then boil them, the result isn’t that simple.
Think of it this way: ethanol boils at 78.5. Water at 100. But when I’m distilling, the first stuff out of the still is coming out at like 80/20 ethanol to water, long before I’m near 100C. The later stuff still has some ethanol in it, even as I near 100C. (You can easily measure while distilling.)
So why would it be surprising that methanol behaved that way as well?
Temperature is just an average; individual molecules can have higher or lower kinetic energy and can therefore evaporate even below the boiling point.
>They may be describing a radical new chemistry that I'm not familiar with.
It's probably pot still vs. reflux still. Chemists use fractionating columns to get better separation. Home distillers won't necessarily do so, so official advice has to assume they will not.
Yeah, column stills exist for home use but they’re not very popular. They’re big and expensive and strip flavor. It’s probably because home distilling, like home brewing, is largely focused on the craft side rather than trying to get drunk cheaply.
If you’re trying to get drunk cheaply, and without tasting liquor, you cannot beat the product and efficiency of a column still.
But I want my whiskey or apple brandy to have the characteristics of the mash I distill it from. A column still would reduce that.
I mean—depending how much methanol was in the mix to begin with…
It’s been a long time, but I thought there was a whole Raoult’s Law thing, about partial pressures in the vapor coming off the solution combining in proportion to each component’s molar fraction * its equilibrium vapor pressure (at that temperature, presumably). Or something.
Point being, if you’re starting with a bunch of volatiles in solution, there’d be quite a bit of smearing between fractions boiling off at any given temperature/pressure. And you’d be very unlikely to get clean fractions from a single distillation anywhere in that couple-dozen-degree range.
Probably mangled the description, but isn’t that why people do reflux columns?
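That's the right recollection. For an ideal mixture, Raoult's law says each component contributes a partial pressure proportional to its mole fraction, so the vapor is always a blend; a sketch of the standard relations (assuming ideal-solution behavior):

```latex
% Raoult's law (ideal solution): partial pressure of component i
% at temperature T, given its liquid mole fraction x_i and
% pure-component vapor pressure p_i^*(T):
p_i = x_i \, p_i^{*}(T)

% Total pressure, and the mole fraction of i in the vapor phase:
P = \sum_j x_j \, p_j^{*}(T), \qquad
y_i = \frac{x_i \, p_i^{*}(T)}{P}
```

Since methanol's pure vapor pressure exceeds ethanol's at any given temperature, $y_{\text{MeOH}} > x_{\text{MeOH}}$: each pass enriches the distillate in methanol but never isolates it. That's the smearing between fractions, and it's why reflux columns (which stack many equilibrium stages) separate better than a single pot distillation.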
I would assume it depends on what you are distilling.
If you are making brandy from clarified wine, it probably separates better than rotten grape mash.
It is still a continuum with some methanol molecules likely remaining even in the tails.
For all intents and purposes, the distiller's rule of thumb of throwing away the angels' share is still going to work, because low methanol concentrations are never an issue: the antidote for methanol is ethanol.
You throw away the foreshots because they also contain things like acetone that taste bad and may be harmful. They’re highly unpalatable so people can be relied on to do a sufficient job.
Also “Angel’s share” isn’t what you throw away, it’s what evaporates from the barrel when you age. What you throw away are the foreshots and parts of the heads and tails
From what I understood ethanol and methanol form an azeotrope and boil together at a mixed temperature. And the going blind stuff is just prohibition propaganda both to make home distilled alcohol seem dangerous and to scapegoat the fact that the government was actively poisoning "industrial" ethanol.
methanol and ethanol do not form an azeotrope with each other; they each form one with water. that's why separation of methanol and ethanol by holding key temperatures works at all.
furthermore, the azeotrope effect only becomes relevant at concentrations beyond 90% alcohol. so when you're producing pure methanol or ethanol, distillation won't cut it beyond 90+%, as water+(m)ethanol *at these high concentrations* boil and evaporate together. that's the grain of truth in your statement.
last but not least, going blind from methanol is _very_ real.
Methanol will certainly make you go blind if you consume it at too high a ratio, it just isn’t a risk when distilling because you can’t feasibly make that happen on accident and it would be hard to even do it on purpose. I think that’s what parent likely meant.
Look at it this way:
The boiling point of ammonia is -33 C.
Would you drink a jug of household cleaning ammonia just because it's been heated to +20C?
But anyway, I don't think there's hazardous levels left after normal distillation+cutting, the reason for not buying booze from some guy behind a barn usually has more to do with lead contamination risks.
Fundamentally it's a fuzzy signal and people shouldn't rely on it. The general public does not understand Boolean logic (oh, so the SynthID is not there, therefore this image is real). The sooner AI watermarking faces its deserved farcical demise the better.
Also something about how AI is not special and we haven't added or needed invisible watermarks for other ways media can be manipulated deceptively since time immemorial, but that's less of a practical argument and more of a philosophical one.
People think that just because they have a way to prove that an image is AI, their worries of misinformation are solved. Better to acknowledge that wherever you look people will be trying to deceive you even if their content won't have as obvious an indicator as SynthID.
Because it’s meaningless for what it’s being marketed for. It’s conceptually inverted. It’s a detector that will detect 100% of the stuff that doesn’t mind being detected, and only the dumbest fraction of stuff that doesn’t want to be detected.
No fault of the extremely smart and capable people who built it. It’s the underlying notion that an imperceptible watermark could survive contact with mass distribution… it gives the futile cat-and-mouse vibes of the DRM era.
Good guys register their guns or whatever, bad guys file off the serial numbers or make their own. Sometimes poorly, but still.
All of which would be fine as one imperfect layer of trust among many (good on Google for doing what they can today). The frustrating/dangerous part is that it seems to be holding itself out as reliable to laypeople (including regulators). Which is how we end up responding to real problems with stupid policy.
People really want to trust “detectors,” even when they know they’re flawed. Already credulous journalists report stuff like “according to LLMDetector.biz, 80% of the student essays were AI-generated.” Jerry Springer built an empire on lie detector tests. British defense contractor ATSC sold literal dowsing rods as “bomb detectors,” and got away with it for a while [2].
It’s backward to “assume it’s not AI-origin unless the detector detects a serial number, since we made the serial number hard to remove.” Instead, if we’re going to “detector” anything, normalize detecting provenance/attestation [e.g. 0]: “maybe it’s an original @alwa work, but she always signs her work, and I don’t see her signature on this one.”
Something without a provable source should be taken with a grain of salt. Make it easy for anyone to sign their work, and get audiences used to looking for that signature as their signal. Then they can decide how much they trust the author.
Do it through an open standards process that preserves room for anyone to play, and you don’t depend on Big Goog’s secret sauce as the arbiter of authenticity.
I hear that sort of thinking is pretty far along, with buy-in from pretty major names in media/photography/etc. The C2PA and CAI are places to look if you’re interested [1].
It would be a better analogy if tobacco companies sold ad space on their packs and chose not to do business with a private for-profit anti-smoking solicitation group.
No it would not. Meta is an advertising company that sells ad space. More specifically, Meta is the dominant firm in the social advertising market which is an oligopoly.
It is "the business", not an imagined side revenue stream.
Any good payload analysis been published yet? Really curious if this was just a one and done info stealer or if it potentially could have clawed its way deeper into affected systems.
This article[0] investigated the payload. It's a RAT, so it's capable of executing whatever shell commands it receives, instead of just stealing credentials.
Ironically Sony wanted those artists online for streaming, and in those days the only way labels had to transport the music to distribution services was sending the CDs. So the CDs landed on my desk because they'd been rejected by the data ingestion teams. I had some more[0] stern words with a very apologetic man from Sony that day.
[0] they were constantly sending CDs that were fucked-up in totally new ways every time
I still haven't bought a Sony labelled product since... though I may or may not have consumed Sony content. They've definitely lost more than they gained.
That's a pretty good sized ego you've got yourself there. The number of people in the general populace who cared about the rootkit was insignificant to Sony. Only tech nerds like us even knew about the rootkit or how insane it was to use. Unless you were a huge flagship purchaser of Sony's latest/greatest each year, they don't even notice you when you buy a TV or any other item.
People barely remember the studio getting hacked and releasing a film
They faced multiple lawsuits and had to do product recalls, so clearly they lost something. What exactly did they gain? IIRC you could avoid it by just turning off autoplay in Windows (which any sane person already did, or you could hold shift I think), and they were otherwise valid audio CDs (otherwise they wouldn't work in players), so it did exactly nothing to stop the CDs from being ripped and shared. And back then everyone knew about p2p so it really only took one person ripping it for it to spread. So even ignoring the lawsuits, even one person boycotting them probably makes it a net loss. Actually the development costs probably made it a loss.
Not sure how you interpreted what I said as anything other than the implied "you". However much money you do or no longer spend with Sony isn't anything they'd notice. The caveat being if you were a flagship purchaser from them, which I doubt was the case.
You assumed it was a point of ego, even said as much.
I don't have to buy shit from Sony if I don't want to, and you can't make me.
They definitely lost more on potential hardware sales the past few decades than I would have spent on content... even if it's not enough for them to notice.
And honestly this is more than they really should even have to do. I think it does go above their obligation. They're doing Ofcom a favor here; they don't even have to figure out how to block it themselves.
> there's a sense that blocking these imports is an affront to base philosophical freedom in a way that prohibiting physical imports isn't.
It would serve UK legislators well to explore that tingling sense some more before they consider any further efforts in this direction, but that's just my two pence.