Actually, that is not what is happening here. What is happening here is that the govt is saying "Okay, we will not buy your widgets. Also, anyone who _does_ buy your widgets, regardless of what they are doing with them, we the government will not do any business with them." Which is waayyyy beyond just not buying widgets. That is outright retaliation and using your power to attempt to destroy a company.
A 30-day waiting period on news articles _should_ meaningfully reduce misinformation. A lot of lives are ruined by misinformation/leaks in early news articles that are later disproven and those retractions are rarely covered as widely as the original false news.
Sell the risky stock that has inflated in value from hype cycle exuberance and re-invest proceeds into lower risk asset classes not driven by said exuberance. "Taking money off the table." An example would be taking ISO or RSU proceeds and reinvesting in VT (Vanguard Total World Stock Index Fund ETF) or other diversified index funds.
What tomuchtodo said. When I left Sun in 1995 I had 8,000 shares, which in 1998 would have paid off my house, and when I sold them when Oracle bought Sun after a reverse 3:1 split, the total would not even buy a new car. Can be a painful lesson, certainly it leaves an impression.
Eh, the top ten stocks in that fund are Nvidia, Apple, Microsoft, Amazon, Google, Broadcom, Google, Facebook, Tesla and TSMC. I propose looking for an ex-USA fund to put part of your investment into. Vanguard has a few, e.g. https://investor.vanguard.com/investment-products/etfs/profi... . You still get TSMC, Tencent, ASML, Samsung and Alibaba in the top 10, but the global stock markets seem less tech-frothy than the US.
I do something similar - I create a "2025December.md" file each month (with proper year/month obviously) and have a bullet list of everything I'm working on/trying to keep track of. I also use it as a scratchpad for whatever, and writing down notes for projects. Each day I insert a "#### 11 Dec 2025" heading at the bottom of the file, then just copy over everything relevant from the previous entry.
It's stored in my Dropbox so it is always backed up, though it is not VCS'd.
It's worked for me for years, far better than any app. Plus, I have full control over it, and years of data free for processing by any tools/LLMs I might want (I haven't wanted such a thing so far, but maybe I will).
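The daily-heading step described above is easy to automate. A minimal sketch, assuming the filename and heading formats from the comment ("2025December.md", "#### 11 Dec 2025") and taking the notes directory as a parameter:

```python
# Sketch of automating the daily-heading step described above.
# Filename and heading formats follow the comment; the notes directory
# is whatever you pass in (e.g. a Dropbox folder).
from datetime import date
from pathlib import Path

def daily_heading(notes_dir: Path, today: date) -> Path:
    """Append today's heading to this month's notes file and return its path."""
    path = notes_dir / (today.strftime("%Y%B") + ".md")  # e.g. 2025December.md
    heading = today.strftime("#### %d %b %Y")            # e.g. #### 11 Dec 2025
    with path.open("a", encoding="utf-8") as f:
        f.write("\n" + heading + "\n")
    return path
```

Copying over the previous day's relevant bullets is left to the human, which is arguably the point of the system.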
I like and read Ben's stuff regularly; he often frames "better" from the business side. He will use terms like "revealed preference" to claim users actually prefer bad product designs (e.g. most users use free ad-based platforms), but a lot of human behavior is impulsive, habitual, constrained, and irrational.
I agree that is what he is doing, but I can also justify adding fentanyl to every drug sold in the world as "making it better" from a business perspective, because it is addictive. Anyone who ignores the moral or ethical angle on decisions, I cannot take seriously. It's like saying that Maximizing Shareholder Value is always the right thing to do. No, it isn't. So don't say stupid shit like that, be a human being and use your brain and capacity to look at things and analyze "is this good for human society?".
> It's like saying that Maximizing Shareholder Value is always the right thing to do. No, it isn't.
it is, for the agents of the shareholders. As long as the actions of those agents are legal of course. That's why it's not legal to put fentanyl into every drug sold, because fentanyl is illegal.
But it is legal to put (more) sugar and/or salt into processed foods.
No, it’s not. The government, and laws by proxy, will never keep up with people’s willingness to “maximize shareholder value” and so you get harmful, future-illegal practices. Reagan was “maximizing shareholder value”, and now look where the US is.
you have to show this 'future-illegal' action is harmful first by demonstrating harm.
That's why I used the sugar example - it's starting to be demonstrably harmful in the large quantities being used.
I am against preventative "harmful" laws when harm hasn't been demonstrated, as they restrict freedom, add red tape to innovation, and stifle startups exploring the space of possibilities.
I can understand that stance. The trouble is, with more power and more technology, more harm can be done, much quicker. This will become a freedom vs. survival issue, and by definition, freedom is not going to survive that.
and if the actions are deemed immoral by society then a few years later you will see regulation, PR issues or legal action
See early-2000s Google as a model of a righteous company, versus the public perception of it as evil and the antitrust litigation today; or what happened to the companies involved in the opioid trade and the subsequent effect on shareholder value.
To an MBA type, addictive drugs are the best products. They reveal people's latent preferences for being desperately poor and dependent. They see a grandma pouring her life savings into a gambling app and think "How can I get in on this?"
I think it's more subtle: they fight for regulations they deem reasonable and against those they deem unreasonable. Anything that curtails growth of the business is unreasonable.
This thread is an exaggeration. Disney could have operated Mickey Mouse-themed casinos on its premises with probable success; it could also lobby to change the regulations associated with that.
However, companies have balancing factors other than maximizing short-term profits, such as their moral image.
Maybe these tech companies do not subscribe to your notion of a modern-day Gestapo (an organization that was involved in killing 10+ million people in horrible ways), or of a "genocide" that is minuscule in comparison to the American bombings of Japan, which were similarly in the context of war and actually targeted civilians.
Maybe your use of these hyperboles is just an artifact of the speech deficiencies of our social-media-engineered reality?
All life grows and consumes as much as it can. It's what makes it life. "Control" happens when there's more life contesting the same limited resources, and usually involves starvation, but if the situation persists on evolutionary timescales, then some life adapts to proactively limit growth. Then, if some of that adapted life unadapts itself, we call that "cancer", which I think is what you were going for.
To be fair, businesses should assume that customers actually "want" what they create demand for. In the case of misleading or dangerously addictive products, regulation should fall to government, because that's the only actor that can prevent a race to the bottom.
The folks who succeed most in business are the type who have an intuition for what's best. They're not some automaton reading too far into and amplifying the imperfect and shallow signals of "demand" in a marketplace.
Because all people everywhere are psychopaths who will stab you for $5 if they can get away with it? If you take that attitude, why even go to "work" or run a "business"? It'd be so much more efficient to just stab-stab-stab and take the money directly.
> It'd be so much more efficient to just stab-stab-stab and take the money directly.
which is exactly what the law of the jungle is. And guess who sits at the top within that regime?
Humans would devolve back into that, if not for the violence enforcement from the state. Therefore, it is the responsibility of the state to make sure regulations are sound to prevent the stab-stab-stab, not the responsibility of the individual to refrain from taking advantage of a situation that would be advantageous to take.
> I would not want to live in a society of these kinds of people.
of course not. Nobody does.
However, what happened to your civic responsibility to keep such a society functioning? Why is that never mentioned?
The fact is, gov't regulation does need to be comprehensive and thorough to ensure that individual incentives are properly aligned, so that the law of the jungle doesn't take hold. And it is up to individuals, who do not have power in a jungle, to collectively ensure that society doesn't devolve back into that, rather than to expect the powerful to be moral/ethical and to rely on their altruism.
I agree with the sentiment that we should not make a habit of resting on our rights, and that government has an important role to play. However, I do not think we (society) necessarily deserve our situation because others are maliciously complying with the letter of the law and we should simply have been smarter about making laws. At the end of the day we are people interacting with people, and even laws can be mere suggestions depending on who you are or who you ask. Consequently, if someone 'needs' the strictest laws in order to not be an ass, then I just do not want them in whatever society I have the capacity to be in; these are bad-faith actors.
What I'm trying to imply is that every single actor, as an individual, is a "bad-faith" actor. That's why it's only collectively that each bad-faith actor can be "defeated". But when society experiences an extended period of peace and prosperity, brought about by good collective action from prior generations, people stop thinking that such bad-faith actors exist and assume all actors are good faith.
> I just do not want them in whatever society I have the capacity to be in
and you dont really have the choice - every society you could choose to be in, with the exception of yourself being a dictator, will have such people.
> and you dont really have the choice - every society you could choose to be in, with the exception of yourself being a dictator, will have such people
in ancient times, you could banish people from the village
I'll indulge your straw man because it's actually pretty good at illustrating my point. 99.9% of people are not psychopaths. But you only need .1% of people to be psychopaths. In a world where you get $5 and no threat of prosecution for stabbing people, you can bet that there will be extremely efficient and effective stabbing companies run by those psychopaths. Even normal people who don't like stabbing others would see the psychopaths getting rich and think to themselves "well, everyone's getting stabbed anyway, I might as well make some money too". That's what a race to the bottom is.
In the behavioral sciences (of which economics should be a sub-field), this is called perverse incentives. A core feature of capitalism is that if you don't abandon your morals and maximize your profits at somebody else's expense, you will soon be out-competed by those who will.
In this quote I don't think he means it from the business side. He's claiming more data allows a better product:
> ... the answers are a statistical synthesis of all of the knowledge the model makers can get their hands on, and are completely unique to every individual; at the same time, every individual user’s usage should, at least in theory, make the model better over time.
> It follows, then, that ChatGPT should obviously have an advertising model. This isn’t just a function of needing to make money: advertising would make ChatGPT a better product. It would have more users using it more, providing more feedback; capturing purchase signals — not from affiliate links, but from personalized ads — would create a richer understanding of individual users, enabling better responses.
But there is a more trivial way that it could be "better" with ads: they could give free users more quota (and/or better models), since there's some income from them.
The idea of ChatGPT's own output being modified to sell products sounds awful to me, but placing ads alongside it that are not relevant to the current chat sounds like an OK compromise for free users. That's what Gmail does, and most people here on HN seem to use it.
yeah... and it's (partly) based on the claim that it has network effects like Facebook does? I don't see that at all: there's basically no social or cross-account functionality in any of them, and if anything LLMs are the lowest-lock-in systems we've ever had. None of them are totally stable or reliable, and they all work by simply telling them to do the thing you want. Your prompts today will need tweaking tomorrow regardless of whether it's ChatGPT or Gemini, especially for individuals using the websites (which also keep changing).
sure, there are APIs and that takes effort to switch... but many of them are nearly identical, and the ecosystem effect of ~all tools supporting multiple models seems far stronger than the network effect of your parents using ChatGPT specifically.
And he's right (as are the sources he points to) that some bubbles are good. They end up being a way to pull in a large amount of capital to build out something completely new, even while it's unclear where the future will lead.
A speculative example: AI ends up failing and crashing out, but not before we build out huge DCs and power generation that get used for the next valuable idea, one that wouldn't be possible w/o the DCs and power generation already existing.
In the event of a crash, the current generation of cards will still be just fine for a wide variety of AI/ML tasks. The main problem is that we'll have more than we know what to do with if someone has to sell off their million-card mega cluster...
The usual argument is the investment creates value beyond that captured by the investors so society is better off. Like investors spend $10 bn building the internet and only get $9 bn back but things like Wikipedia have a value to society >$1 bn.
I _kind of_ understand this one. You can think of a bubble as a market exploring a bunch of different possibilities, a lot of which may not work out. But the ones that do work out, they may go on to be foundational. Sort of like startups: you bet that most of them will fail, but that's okay, you're making bets!
The difference of course is that when a startup goes out of business, it's fine (from my perspective) because it was probably all VC money anyway and so it doesn't cause much damage, whereas the entire economy bubble popping causes a lot of damage.
I don't know that he's arguing that they are good, but rather that _some_ kinds of bubbles can have a lot of positive effects.
Maybe he's doing the same thing here, I don't know. I see the words "advertising would make X Product better" and I stop reading. Perhaps I am blindly following my own ideology here :shrug:.
I also see the argument as a macro one, not a micro one. Some bubbles in aggregate create breeding grounds for innovation (Hobart's point) and throw off externalities (like cheap freight rail in the US from the railroad bubble), à la Carlota Perez. That's not to say there isn't individual suffering when the bubble pops, but I read the argument as "it's not wholly defined by the individual suffering that happens."
Ben Thompson is a content creator. Even if Ben's content does not directly benefit from ads, it is the fact that other content creators' content has ads that makes Ben's content premium in comparison.
I would say that, on this topic (ads on internet content), Ben Thompson's perspective may not be as objective as it is on other topics.
People aren't collectively paying him between $3 million and $5 million a year (an estimated 40k+ subscribers paying a minimum of $120 a year) just because he doesn't have ads.
I see where you're coming from, but that only tells half of the story.
I've been sporting the same model of Ecco shoes since high school - 10+ pairs over the years. And every new pair is significantly worse than the previous one. The pair I have right now is most definitely the last one I'll buy.
If you put them right next to the ones I had in high school, you'd say they were cheap Temu knock-offs. And this applies to pretty much everything we touch right now, from home appliances to cars.
Some 15 years ago H&M was at the forefront of what's called "fast fashion". The idea was that you could buy new clothes for a fraction of the price at the cost of quality. Makes sense on paper - if you're a fashion junkie and want a new look every season, you don't care about quality.
The problem is I still have some of their clothes I bought 10 years ago and their quality trumps premium brands now.
People like to talk about the lightbulb conspiracy, but we fell victim to a VC-capital reality where short-term gains trump everything else.
> The problem is I still have some of their clothes I bought 10 years ago and their quality trumps premium brands now.
I'm skeptical of this claim. Maybe it's true for some particular brand but that's just an artifact of one particular "premium brand" essentially cashing in its brand equity by reducing quality while (temporarily) being able to command a premium price. But it is easier now than at any other time in my life to purchase high-quality clothing that is built to last for decades. You just have to pay for that quality, which is something a lot of people don't want to do.
The problem with ads in AI products is, can they be blocked effectively?
If there are ads in a sidebar, related or not to what the user is searching for, any ad blocker will be able to deal with them (uBlock is still the best, by far).
But if "ads" are woven into the responses in a manner that could be more or less subtle, sometimes not even quoting a brand directly, but just setting the context, etc., this could become very difficult.
I realized a while ago that ads within context were going to be an issue, so to combat this I started building my own solution, which spiraled into a local agentic system with a different, bigger goal than the simple original... Anyway, the issue you are describing is not that difficult to overcome. You simply set a local LLM layer in front of the cloud-based providers. Everything goes in and out through this "firewall". The local layer hears the human's request, sends it to the cloud-based API model, receives the ad-tainted reply, scrubs the ad content, and replies to the user with the clean information. I've tested exactly this interaction and it works just fine.

I think these types of systems will be the future of "ad block". As people start using agentic systems more and more in their daily lives, it will become crucial that they pipe all inputs and outputs through a local layer that has that human's best interests in mind. That's why my personal project expanded into a local agentic orchestrator layer instead of a simple "firewall". I think agentic systems using other agentic systems are the future.
> The local layer hears the human's request, sends it to the cloud-based API model, receives the ad-tainted reply, scrubs the ad content, and replies to the user with the clean information
This seems impossible to me.
Let's assume OpenAI's ads work by having a layer before output that reprocesses it. Say their ad layer re-processes your output with a prompt like:
"Nike has an advertising deal with us, so please ensure that their brand image is protected. Please rewrite this reply with that in mind"
If the user asks "Are Nikes or Pumas better, just one sentence", the reply might go from "Puma shoes are about the same as Nike's shoes, buy whichever you prefer" to "Nike shoes are well known as the best shoes out there, Pumas aren't bad, but Nike is the clear winner".
How can you possibly scrub the "ad content" in that case with your local layer to recover the original reply?
You are correct that you can't change the content if it's already biased. But you can catch it with your local LLM and have that local LLM take action from there. For one, you wouldn't instruct your local model to send product-comparison questions, or any bias-prone queries like politics, to other closed-source cloud-based models; such questions would be relegated to your local model to handle on its own. Other questions, though, can be outsourced to those models: complex reasoning, planning, coding, and related matters best done with smarter, larger models.

Your human-facing local agent does the routing automatically and scrubs any obvious ad-related stuff that doesn't pertain to the question at hand. Take a recipe for an apple pie: if the closed-source model says to use Publix-brand flour and clean up the mess afterwards with Kleenex, the local model would scrub that and just give the recipe.

No matter how you slice and dice it, IMO it's always best to have a human-facing agent as the sole source of input and output; the human should never talk directly to any closed-source models, as that inundates the human with too much spam. Mind you, this is future-proofing: currently we don't have much AI spam, but it's coming, and an AI ad block of sorts will be needed. That ad block is your local shield agent that has your best interests in mind. It will also keep you private by automatically redacting personal info when appropriate, etc. The sky is the limit, basically.
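For what it's worth, the route-then-scrub idea can be sketched in a few lines. Everything here is a stub of my own invention - the keyword heuristic, the fake model calls, and the marker-based scrub - so treat it as a shape, not an implementation:

```python
# Toy sketch of the "local firewall" routing described above.
# The model calls are stubs; a real system would plug in an on-device LLM
# and a cloud API client, and scrubbing would itself be an LLM pass.

SENSITIVE_HINTS = ("better", " vs ", "recommend", "which brand", "should i buy")

def is_sensitive(query: str) -> bool:
    """Crude heuristic: bias-prone queries (comparisons etc.) stay local."""
    q = query.lower()
    return any(hint in q for hint in SENSITIVE_HINTS)

def local_model(query: str) -> str:
    return "[local] " + query          # stand-in for an on-device LLM

def cloud_model(query: str) -> str:
    # Pretend the cloud reply comes back with an injected plug appended.
    return "[cloud] " + query + " (brought to you by MegaCorp)"

def scrub(reply: str) -> str:
    """Strip the obvious ad suffix; real scrubbing is an LLM pass, not a split."""
    return reply.split(" (brought to you by")[0]

def proxy(query: str) -> str:
    """Human-facing entry point: route, outsource, scrub, reply."""
    if is_sensitive(query):
        return local_model(query)
    return scrub(cloud_model(query))
```

As the thread above notes, this only removes bolted-on plugs; bias baked into the cloud reply itself can be detected and routed around, but not reversed.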
I still do not think what you're saying is possible. The router can't possibly know if a query will result in ads, can it?
Your examples of things that won't have ads, "complex reasoning, planning, coding", all sound perfectly possible to have ads in them.
For example, perhaps I ask the coding task of "Please implement a new function to securely hash passwords", how can my local model know whether the result using boringSSL is there because google paid them a little money, or because it's the best option? How do I know when I ask it to "Generate a new cloud function using cloudflare, AWS lambda, or GCP, whichever is best" that it picking Cloudflare Workers is based on training data, and not influenced by advertising spend by cloudflare?
I just can't figure out how to read what you're saying in any reasonable way, like the original comment in this thread is "what if the ads are incorporated subtly in the text response", and your responses so far seem so wildly off the mark of what I'm worried about that it seems we're not able to engage.
And also, your ending of "the sky's the limit" combined with your other responses makes it sound so much like you're building and trying to sell some snake-oil that it triggers a strong negative gut response.
But don't you need some kind of AI to filter out the replies? And if you do, isn't it simpler to just use a local model for everything, instead of having a local AI proxy?
The local LLM is the filter, so yes, you need one. And it's not simpler to have the local LLM do everything, because the local LLM has a lot of limitations: speed, intelligence, and other issues. The smart thing to do is delegate all of the personal stuff to the local model and have it delegate the rest to smarter and faster models, then simply parrot back to you what they found. This also has the benefit of saving on context, among many other advantages.
How much did it cost me? Well, I've been thinking about it for a long time now, probably 9 months. I bought Claude Code and started working on some prototypes and other projects - like building my own speech-to-text, and other goodies like automated benchmarking solutions - to familiarize myself with the fundamentals. But I finally started the building process about 2 months ago, and all it cost me was a boatload of time and about 50 bucks a month in agentic coding subscriptions. It hasn't been a simple filter for a long time now, though - now it's a very complex agentic harness with lots of advanced features that allow tool calling, agent-to-agent interaction, and many other goodies.
I frequently ask chatgpt about researching products or looking at reviews, etc and it is pretty obvious that I want to buy something, and the bridge right now from 'researching products' to 'buying stuff' is basically non-existent on ChatGPT. ChatGPT having some affiliate relationships with merchants might actually be quite useful for a lot of people and would probably generate a ton of revenue.
That assumes a certain kind of ad, though. Even a "punch the monkey"-style banner ad would be a start. I can't imagine they wouldn't be very careful not to give consumers the impression that their "thumb was on the scale" of what ads you see.
Sure, but affiliate != ads. Rather, both affiliate links and paid ad slots are by definition not neutral and thus bias your results, no matter what anyone claims.
Ben Thompson is a sharp guy who can't see the forest for the trees. Nor most of the trees. He can only see the three biggest trees that are fighting over the same bit of sunlight.
Indeed. Why do people follow these clowns? They seem to read high level takes and spew out their nonsense theories.
They fail to mention Google's edge: the inter-chip interconnect, and the allegedly one-third price. Then they talk about a software moat, and it sounds like they've never even compiled a hello world on either architecture. smh
And this comes out days after many in-depth posts like:
Why? It turns out that I try to read people who have a different perspective than I do. Why am I trying to read everything that just confirms my current biases?
(Unless those writings are looking to dehumanize or strip people of rights or inflame hate - I'm not talking about propaganda or hate speech here.)
Personally when I go to the grocery store I pick fruits and vegetables that are ripe or are soon to be ripe, and I stay away from meat that is close to expiration or has an off putting appearance or odour to it.
You realize this “dumb blogspot” is written by the most successful writer in the industry as far as revenue from a paid newsletter? He has had every major tech CEO on his podcast and he is credited for being the inspiration for Substack.
The Substack founders unofficially marketed it early on as “Stratechery for independent authors”.
Your analysis concerning the technology instead of focusing on the business is a bit like Rob Malda not understanding the success of the iPod: "No wireless. Less space than a Nomad. Lame."
Even if you just read this article, he never argued that Google didn’t have the best technology, he was saying just the opposite. Nvidia is in good shape precisely because everyone who is not Google is now going to have to spend more on Nvidia to keep up.
He has said that AI may turn out to be a "sustaining innovation" - a term coined by Clayton Christensen - and that the big winners may be Google, Meta, Microsoft and Amazon, because they can leverage their pre-existing businesses and infrastructure.
Even Apple might be better off since they are reportedly going to just throw a billion at Google for its model.
> You realize this “dumb blogspot” is written by the most successful writer in the industry as far as revenue from a paid newsletter?
The belief that adding ads makes things better would be an extremely convenient belief for a writer to have, and I can easily see how that could result in them getting more revenue than other writers. That doesn't make it any less dumb.
With at least $5 million in paid subscriptions annually, and living between Wisconsin and Taiwan, do you really think that as an independent writer he needs to juice his subscriptions by advocating that other people put ads on an LLM?
Any use of LLMs by other people reduces his value.
The thesis has no predictive power, no explanatory power. Merely descriptive.
That "change is inevitable and we all better adapt or die" is somewhere between axiomatic and cliché.
What is "innovation"? How do you define it? (Am honestly asking.) How do we get more of it? (I know this is an area of active research.)
I forced myself to reread and revisit Christensen a year or two back. I may not have looked hard enough, but I didn't find any evidence that he'd updated or expanded his thesis or corpus. IIRC, no mention of Everett Rogers's diffusion of innovations, no engagement with the thesis of Design Rules: The Power of Modularity (an adjacent topic), no engagement with ongoing innovation research.
FWIW, my still poorly formed hunch is that "innovation" is where policy meets the cost learning curve meets financial accounting. With maybe a dash of rentier capitalism.
But I'm noob. Not an academic, not an economist. Deep down on my to do list is to learn how DARPA (and others) places their bets, their emerging formalisms (like technology readiness levels), how emerging tech makes the jump from govt funded to private finance (VC).
Enough of my babble. In closing, I'd like to read some case studies for the two most "disruptive technologies" of our times: solar and batteries.
Advertisement is unquestionably a net positive for society and humanity. It's one of the few true positive sum business models where everyone is better off.
It's the exact opposite. The advertising-based model is why the poorest people in the poorest countries in the world have had access to the exact same Google search, YouTube and Facebook as the richest people in the US. Ad-supported business models are the great equalizers of wealth.
Considering how prominent gambling and gambling advertising is, aren't we creating more poverty and keeping people poor through ads? Advertising seems like it's a net drain on the poor through encouraging consumption people can't afford and pushing a variety of vices.
Edit: "Sorry your husband lost the money you were saving for a house on stake.com, but here's your free Google search."
The whole "attention economy" is a cancerous outgrowth of advertising. When the customer is paying with their time instead of money, wasting their time becomes your goal. The impact on society is hard to measure, but it's not nothing and I would argue a net-negative.
Advertising is a necessary thing and can be beneficial to everyone. I have something to sell, you want to buy that thing, and now you know I'm selling it. Win win.
On the other hand, the ad-targeting and associated privacy-brokerage industries are a very different story.
"unquestionably"? Given that vast majority of ads are for harmful self-destructive projects or misleading or lying or make place where they got spammed worse... Sometimes multiple at once.
Spam alone (also advertisement) is quite annoying and destructive.
But just to clarify, because I'm also having a hard time imagining this, an LTE antenna in a cell phone can beam data to a satellite and have it picked up? Even at whatever low kbps? That is insane to me!
With (I assume) cell phone use prevalent in every single classroom in the nation that hasn't banned them, and school shootings a minuscule probability, "much rarer" is doing a lot of work here hah.
I use this extension extensively. It's not auto-wrapping, but you can bind it to an easy shortcut and wrap when you need to. I find it almost indispensable. I wrote a vscode extension to do the same thing, then discovered this one which does it far better.
I will give that one a try; it might at least speed things up, but it still misses hyphenation and justification. Hitting enter when reaching the guide line takes the least effort, but this might be really handy when editing existing comments, especially with auto-wrap enabled.
I have what is probably a dumb question:
How can a Raptor turbopump need almost double the HP of an F-1's while putting out 1/3 or 1/4 the thrust? (Assuming Elon's 100k HP number was correct, and/or hasn't changed.) That just doesn't settle out for me. If it's got double the power, it should be moving double the fuel, and so produce double the thrust, no?
The formula for Isp - the important measure of a rocket engine's efficiency - says that the speed at which the engine throws out hot gases grows with the ratio between the pressure before the nozzle and the pressure at the nozzle exit.
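As a rough illustration of that relation, here is the ideal-nozzle exhaust-velocity formula with ballpark combustion-gas values I'm assuming (gamma, gas constant, temperature); these are not actual F-1 or Raptor specs:

```python
# Ideal exhaust velocity vs. the exit/chamber pressure ratio:
# higher chamber pressure -> smaller ratio -> faster exhaust -> better Isp.
from math import sqrt

def exhaust_velocity(p_ratio, gamma=1.2, R=520.0, T=3500.0):
    """Ideal v_e in m/s; gamma, R (J/kg/K), T (K) are rough assumed values."""
    term = 1 - p_ratio ** ((gamma - 1) / gamma)
    return sqrt(2 * gamma / (gamma - 1) * R * T * term)

# Exit at ~1 bar: a 70-bar chamber (F-1-ish) vs. a 300-bar chamber (Raptor-ish).
print(round(exhaust_velocity(1 / 70)), round(exhaust_velocity(1 / 300)))
```

With these assumed values, going from a 70-bar to a 300-bar chamber buys roughly a 10% gain in ideal exhaust velocity, which is a big deal for the rocket equation.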
The whole idea of full-flow combustion, by the way, is to extract more energy from the fuel before sending it to the chamber - at a temperature the turbopump's turbine can tolerate - so that energy can drive the pumps and create more pressure in the chamber. More energy than "more conservative" closed-cycle engines extract.
Pump power equals the volume flow (how many cubic meters, or, say, liters of propellant the pump moves per second) multiplied by the pressure rise (the pressure at the pump's exit). So it's not the flow; it's the pressure where Raptor has a big advantage over the F-1, and that pressure buys a better Isp.
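That relation directly answers the original question. A rough order-of-magnitude sketch: the mass flows, densities, and pump pressure rises below are ballpark assumptions (loosely based on the often-quoted ~7 MPa F-1 and ~30 MPa Raptor chamber pressures), not exact specs, and real turbopumps need considerably more power than this loss-free ideal:

```python
def pump_power_watts(mass_flow_kg_s, mean_density_kg_m3, pressure_rise_pa):
    """Ideal (loss-free) pump power: volumetric flow times pressure rise."""
    volume_flow = mass_flow_kg_s / mean_density_kg_m3  # m^3/s
    return volume_flow * pressure_rise_pa

HP = 745.7  # watts per horsepower

# F-1: huge flow, modest pressure (assumed ~10 MPa pump rise, dense RP-1/LOX).
f1 = pump_power_watts(mass_flow_kg_s=2600, mean_density_kg_m3=1000,
                      pressure_rise_pa=10e6)

# Raptor: roughly 1/4 the mass flow, but ~4x the pressure (assumed ~45 MPa rise),
# and lighter methalox, so more volume must be pumped per kilogram.
raptor = pump_power_watts(mass_flow_kg_s=650, mean_density_kg_m3=830,
                          pressure_rise_pa=45e6)

print(f"F-1: ~{f1 / HP:,.0f} hp ideal, Raptor: ~{raptor / HP:,.0f} hp ideal")
```

Even with a quarter of the mass flow, the ~4x pressure rise (plus the lower propellant density) makes Raptor's ideal pump power come out higher than the F-1's.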
And of course a better Isp lets you reach a bigger characteristic velocity (or simply a higher velocity in free space, where gravity and atmosphere don't get in the way) on the same amount of propellant.
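That last step is the Tsiolkovsky rocket equation. A small sketch with made-up Isp values and an assumed 90% propellant fraction:

```python
import math

def delta_v(isp_seconds, mass_initial, mass_final, g0=9.80665):
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf)."""
    return isp_seconds * g0 * math.log(mass_initial / mass_final)

# Same vehicle (90% propellant fraction), two hypothetical Isp values:
dv_low  = delta_v(isp_seconds=300, mass_initial=100.0, mass_final=10.0)
dv_high = delta_v(isp_seconds=350, mass_initial=100.0, mass_final=10.0)
print(f"~{dv_low:.0f} m/s vs ~{dv_high:.0f} m/s")
```

Delta-v scales linearly with Isp, so every bit of chamber pressure the pumps buy shows up directly in how far the same tank of propellant takes you.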
The logic goes roughly like this. Every rocket engine designer wants a higher Isp. For a given propellant, that means a higher chamber pressure, so we move from pressure-fed engines (like the first French orbital rocket, Diamant, which had a pressure-fed first stage) to pumps, because high-pressure tanks are too heavy. Pumps start out as open-cycle (gas-generator) designs, but those throw the turbine exhaust overboard, wasting it, so the next improvement is a closed cycle. With a closed cycle we can route all the fuel or all the oxidizer through the turbine, but once one component is fully used, we can't extract more energy for the pumps. Eventually we arrive at the more complex full-flow cycle, which uses both components and reaches the highest chamber pressure.
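One engine per cycle makes the progression concrete. The chamber pressures below are approximate publicly reported figures (they vary by engine version), used purely to illustrate the trend:

```python
# Approximate chamber pressures (bar) for one representative engine per cycle.
# Ballpark public figures, not exact specs; versions differ.
engines = [
    ("gas generator (open cycle)",   "Merlin 1D", 97),
    ("staged combustion (closed)",   "RD-180",    257),
    ("full-flow staged combustion",  "Raptor",    300),
]

for cycle, name, p_bar in engines:
    print(f"{cycle:30s} {name:10s} ~{p_bar} bar")
```

Each step up the cycle ladder recovers more of the energy spent driving the turbine, which is what lets the chamber pressure climb.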
The next step would probably be a detonation engine :) which uses a somewhat more efficient process to convert chemical energy into speed, but it isn't developed enough yet. We could also talk about more heat-resistant turbines, which would allow extracting more energy from the propellant and raising the pressure further... but a lot of R&D remains there too.
Maybe fuel might play a role. The Raptor burns methane, the F-1 refined petroleum. Another possible reason is that the designs may make different efficiency vs power trade-offs.
Maybe something to do with pressure: a higher chamber pressure, even with lower flow rates, could require more pump power.
I had a subscription to the WSJ for a while. I liked it because it is consistent. It definitely has a conservative slant, especially in the opinion section, but you can "lead the shot," so to speak: you can account for it. It is reliably and consistently conservative, and conservatism itself is relatively consistent.
The NYT or Atlantic or New Yorker, etc? God only knows what new thing has been declared off-limits/"problematic" in the online progressive world this week, and the tone of self-righteous goodness from the progressive media is insufferable.
I feel like Matt Stone from South Park: "I hate conservatives, but I really fucking hate liberals."
I beg to differ. The Globe is the worst of the lot, and I say this as a Boston resident. The NYT, for all its faults, is far superior. The Globe quite literally employed fresh-out-of-school undergraduates for COVID reporting (whose job, apparently, was to get scare quotes from fear-mongering local academics); outside of Tim Logan the newsroom is packed with NIMBYs; Renee Graham makes it clear she doesn't much care for white men, as does Shirley "pale and male" Leung; international coverage is sparse and superficial; and it's frequently uncritical of the Democratic political machine, something absolutely fatal in a one-party state like MA. Overall the Globe is an amalgamation of its midwit, upper-middle-class, blue, 495-beltway suburban newsroom staff and editorial board. For its $38/mo subscription price, I want something that will inform, not inculcate.
Edit: my info on the Globe is a couple years out of date now. Perhaps it has gotten better recently, maybe I should sign up for a trial and give it a second chance.
The WSJ opinion section has been a running gag for 50 years. It’s where they let the cranky old conservatives let off steam. But the newsroom is separate and more measured and middle of the road.