The 92% stat looks really interesting! It’s rarely the spectacular crash that knocks a cluster over. Instead, the “harmless” retry leaks state until everything breaks at 2 a.m. on one fateful Friday. Evidently, we should budget more engineering hours for mediocre, silent failures than for outright disasters. That’s where the bodies are buried.
Or survivorship bias: the major issues that have been addressed don't cause problems precisely because they were addressed, while some of the minor issues that went unaddressed randomly do cause major issues.
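The "harmless retry" failure mode described above can be made concrete. A minimal, hypothetical sketch (the names `call_with_retry` and `fn` are invented for illustration): an unbounded `while True` retry silently multiplies load on a struggling dependency, whereas capping attempts and backing off with jitter keeps the failure loud and local.

```python
import random
import time

def call_with_retry(fn, max_attempts=5, base_delay=0.01):
    """Retry `fn` at most `max_attempts` times with exponential backoff.

    Illustrative sketch only: the point is that retries are bounded and
    eventually re-raise, instead of looping forever and hiding the failure.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up loudly instead of retrying forever
            # backoff with jitter so failing clients don't retry in lockstep
            time.sleep(base_delay * (2 ** attempt) * random.random())
```

The jitter matters as much as the cap: synchronized retries from many clients are exactly the kind of "harmless" behavior that quietly becomes a 2 a.m. outage.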
Every polished template looks the same, but each handrolled site is weird in its own way. I’ll happily take wabi-sabi HTML for personal projects over yet another Tailwind landing page!
It spikes HIF-1α → EPO for a day or two, but meta-analyses still don't show a real performance bump, let alone safety at 8 km. Feels less like innovation and more like mountaineering’s own carbon-plate shoes, except the failure mode here is cerebral edema, not a slow marathon time.
> but meta-analyses still don't show a real performance bump
I wish all of these news articles would discuss the actual studies instead of lazily parroting the claims of the one guy who is trying to sell expensive Xenon-assisted Everest hikes.
These articles are always PR pieces for Lukas Furtenbach’s expensive Everest tours. Every single time I see the words “Xenon” and “Everest” in a headline, his name is in the article as the source.
> I wish all of these news articles would discuss the actual studies
They do, if you read them.
> While some doctors have used the gas in the past to “precondition” patients to low oxygen levels — for example, before major heart surgery — the practice hasn’t really caught on because “it hasn’t been as protective as one would hope,” he said.
> Mike Shattock, a professor of cellular cardiology at King’s College London, said “xenon probably does very little and there is virtually no reputable scientific evidence that it makes any difference.”
> Some research has shown that xenon can quickly acclimatize people to high altitudes, even as some experts say the benefits, if any, are negligible and the side effects of its use remain unclear.
All the quotes you posted basically say the same thing: there is no evidence for the efficacy of xenon. That's scientist-speak for "xenon doesn't work".
Depends; it can also mean "this hasn't really been studied well" if there just hasn't been much research into it. I just pulled all the places where the article mentioned studies or quoted another expert on its effectiveness.
I did read the article, which is how I knew Lukas Furtenbach was involved. Please don’t accuse people of not reading the article when they’re specifically talking about content of the article.
Anyway, my point was that if these articles wanted to be serious about the science, they’d lead with the studies and science.
Instead, they tack on weasel words (literally “some experts say” and “some research”, as in your quotes) in an attempt to make it feel like both-sides-style journalism while leaving Furtenbach’s claims as the headline and the main story.
It's not a scientific publication though... it's the NYTimes. The main story is that the guys managed a summit in 3 days. For all the controversy around Xenon and its effects as a PED in sport, the combo of hypoxia tents and Xenon demonstrably worked at least this time to enable the rapid summit.
LLMs are amazing at writing code and terrible at owning it.
Every line you accept without understanding is borrowed comprehension, which you’ll repay during maintenance with high interest. It feels like free velocity, but it's probably more like tech debt at ~40% annual interest. As a tribe, we have to figure out how to use AI to automate typing and NOT thinking.
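Taking the metaphorical ~40% figure literally for a moment shows why it compounds so badly. A toy back-of-envelope model (the function name `maintenance_debt` is invented here):

```python
def maintenance_debt(hours_borrowed, years, annual_rate=0.40):
    """Toy model: un-understood code as principal that compounds
    yearly at the metaphorical ~40% rate from the comment above."""
    return hours_borrowed * (1 + annual_rate) ** years

# 10 "borrowed" hours of comprehension roughly doubles in two years:
# 10 * 1.4**2 = 19.6 hours of future maintenance effort.
```

The exact rate is obviously a metaphor; the point is that comprehension debt grows geometrically, not linearly, the longer no one on the team understands the code.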
Or would be, if the LLM actually understood what it was writing, using the same definition of understanding that applies to human engineers.
Which it doesn't, and by its very MO, cannot.
So, every line from an LLM that is accepted without understanding is really nonexistent comprehension. It's a line of code spat out by a stochastic model, and until some entity that can actually comprehend the codebase's context, systems, and design reviews it (and currently the only known entity that can do that is a human being), it remains un-comprehended.
This is a very good analogy. And this interest rate can probably be significantly reduced by applying TDD and reducing the size of isolated subsystems. That may start to look like microservices. I generally don’t like both for traditional development, but current LLMs both make them easier and more useful.
And the “rule of three” basically ceases to be applicable between components: either the code has localized impact, or it's part of a rock-solid foundational library. Intermediate cases just explode the refactoring complexity.
This sort of can’t last though right? The whole advantage in a hedge fund is being on the opposite side of a trend. If everyone is using AI, those returns must start to diminish quickly.
I think at the hedge funds, AI will play out similarly to how quant has in the past, i.e. as a technology where good usage can still unlock alpha.
High-performing funds may "beat the market" at models, data, fine-tuning, RAG or even figuring out which AI tool should be deployed for which tasks.
Interesting parallel, but as it is packaged today, "AI" looks much lighter on technical customization than quant.
What can you really meaningfully change on the large models today? A bit of RAG, some prompting, sure. But that sounds like a much lighter lift compared to the tasks done by large teams of top-tier CS grads at the older quant firms.
I suspect a lot of HN readers aren't familiar with cricket. But as a fellow cricket fan, I really appreciate the opportunity to nerd out about the gentleman's game here! :-)
Question: did you get a chance to check whether geography has any meaningful impact on how decisive the toss is? E.g. at some venues in the subcontinent where dew sets in during the evening, or the greentop venues in England where the first morning becomes almost unplayable against swinging pacers?
My first question on reading this was how it varied by format. I naively thought that the shorter the format, the more important the toss, i.e. T20 would show the biggest difference. But this data shows it is in ODIs where the toss has the greatest effect. I guess that makes sense, since the ODI, being 50 overs per innings, maximizes the chance of differing conditions over the course of the game. One team will bowl in the morning to early afternoon, the other in the afternoon to evening. Win the toss and pick right, and you have a distinct advantage.
This is very cool! Given how messy and busy many websites have become, we really need a robust markdown converter that lets readers focus on reading the content. Nice to see something stepping up where Readability left off.
Current AI coding tools don't seem to have any real impact on true development productivity/efficiency yet.
Apparently you cannot just one-shot everything to production. Who'd have thunk? :-)