Each of these tools provides real value.
* Bundlers drastically improve runtime performance, but it's tricky to figure out what to bundle where and how.
* Linting tools and type-safety checkers detect bugs before they happen, but they can be arbitrarily complex, and benefit from type annotations. (TypeScript won the type-annotation war in the marketplace against other competing type annotations, including Meta's Flow and Google's Closure Compiler.)
* Code formatters automatically ensure consistent formatting.
* Package installers are really important and a hugely complex problem in a performance-sensitive and security-sensitive area. (Managing dependency conflicts/diamonds, caching, platform-specific builds…)
As long as developers benefit from using bundlers, linters, type checkers, code formatters, and package installers, and as long as it's possible to make these tools faster and/or better, someone's going to try.
And here you are, incredulous that anyone thinks this is OK…? Because we should just … not use these tools? Not make them faster? Not improve their DX? Standardize on one and then staunchly refuse to improve it…?
I'm being a little coy because I do have a very detailed proposal.
I want the JS toolchain to stay written in JS, but I want to unify the design and architecture of all the tools you mentioned so that they share a common syntax-tree format and can exchange data, e.g. between the linter and the formatter, or the bundler and the type checker.
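A hedged sketch of what that shared format could buy you (the node shape and helper names below are made up, loosely in the spirit of ESTree; nothing here is a real spec). The point is that a "linter" and a "formatter" can both walk the exact same tree:

```javascript
// Hypothetical shared-AST contract: every tool agrees on
// { type, start, end, children } so position info survives across tools.
function node(type, start, end, children = []) {
  return { type, start, end, children };
}

// Generic walker that every tool can reuse.
function walk(n, visit) {
  visit(n);
  for (const c of n.children) walk(c, visit);
}

// "Linter": counts identifier nodes in the shared tree.
function countIdentifiers(ast) {
  let count = 0;
  walk(ast, (n) => { if (n.type === "Identifier") count++; });
  return count;
}

// "Formatter": reports the source span each node covers, from the same tree.
function spans(ast) {
  const out = [];
  walk(ast, (n) => out.push([n.type, n.start, n.end]));
  return out;
}

const ast = node("Program", 0, 10, [
  node("Identifier", 0, 3),
  node("Identifier", 4, 7),
]);
```

Neither tool re-parses, and neither needs to know the other exists; they only agree on the tree.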
Hasn't that already been tried (10+ years ago) with projects like https://github.com/jquery/esprima ? Those have since seen their usage dramatically reduced for performance reasons.
Yeah, you are correct. But that means I have the benefit of ten years of development in the web platform, as well as hindsight on the earlier effort.
I would say the reason the perf costs felt bad there is that the abstraction was unsuccessful. Throughput isn't that big a deal for a parser if you only need to parse the parts of the code that have actually changed.
All I can say for sure is that the reason the old tools were slow was not that the JS runtime is impossible to build fast tools with.
And anyway, these new tools tend to have a "perf cliff" where you get all the speed of the new tool as long as you stay away from the JS integration API used to support the "long tail" of use cases. Once you fall off the cliff though, you're back to the old slow-JS cost regime...
> […] the reason the old tools were slow was not that the JS runtime is impossible to build fast tools with.
I don't have them at hand right now, but there are various detailed write-ups from the maintainers of Vite, oxc, and more that address this specific argument and point out that the JavaScript runtime was indeed a hard limitation on the throughput they could achieve, making Rust necessary to improve build speeds.
Why do you need high throughput though? Isn't that a metric of how fast a batch processing system is?
Why are we still treating batch processing as the controlling paradigm for tools that work on code? If we fully embraced incremental recomputation and shifted the focus to avoiding re-doing the same work over and over, batch-processing speed would become largely irrelevant as a metric.
> For security, other than what the MCP protocol itself provides, what should be defined?
The MCP protocol itself provides no security at all.
The MCP specification includes no specified method of authorization, and no specified security rules. It lists a handful of "principles," and then the specification simply gives up on discussing the problem further.
3.2 Implementation Guidelines
While MCP itself cannot enforce these security principles at the protocol
level, implementors **SHOULD**:
1. Build robust consent and authorization flows into their applications
2. Provide clear documentation of security implications
3. Implement appropriate access controls and data protections
4. Follow security best practices in their integrations
5. Consider privacy implications in their feature designs
It's just an HTTP or stdio server; would there be considerations beyond those of any other HTTP server or CLI app? Shouldn't the security depend on deployment details? You wouldn't require OAuth if it's deployed on localhost only, or if a reverse proxy is handling that bit.
There is a reason it cannot enforce those principles: an MCP server is a web service. It could use SQL as a backend for some reason, or serve static pages. It might be best to use mTLS, or it might make sense to leave it open to the public with no authentication or authorization whatsoever, where your only concern is availability (429 thresholds). The spec can't and shouldn't account for wildly varying implementation possibilities, right?
The difference is that MCP introduces a third party: the agent isn't the user and isn't the service, but it's acting on behalf of one to call the other. Standard HTTP auth assumes two parties. That's the gap the spec needs to address.
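One way to picture that gap (purely illustrative — the `act` claim name echoes OAuth token exchange, RFC 8693, and the `consented` flag is invented here; none of this is in the MCP spec):

```javascript
// Two-party HTTP auth asks: "is this token meant for this service?"
// A delegated, three-party flow also has to ask: "is the agent's
// acting-on-behalf-of relationship explicit and consented to?"
function authorize(token, serviceId) {
  // Classic two-party check: token audience matches the service.
  if (token.aud !== serviceId) return false;
  // Third-party check: if an agent is acting for the user, the
  // delegation must be explicit in the token, not assumed.
  if (token.act && !token.act.consented) return false;
  return true;
}

// User calling the service directly.
const direct = { sub: "user-1", aud: "files-api" };
// Agent acting for the user, with explicit delegation.
const delegated = {
  sub: "user-1",
  aud: "files-api",
  act: { sub: "agent-7", consented: true },
};
// Agent acting for the user without any recorded consent.
const rogue = {
  sub: "user-1",
  aud: "files-api",
  act: { sub: "agent-7", consented: false },
};
```

Standard HTTP auth stops after the first check; the second check is exactly the part the spec currently leaves to "implementors SHOULD".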
Are you saying legal US citizens are having a tough time in Minnesota with ICE? My cousins and their families aren't. They're too busy leading their own normal, daily lives.
Yes; my neighbors had trouble going to the grocery store. From appearances, you might think they're on vacation from Mexico. They have been here for generations, and one of their family is a high enough ranking member of the military that I won't say more to avoid the risk of doxxing them.
Have you considered they could maybe just stop interfering with federal law enforcement and let them do their jobs as they have been doing for decades under all sorts of administrations? You'll be hard pressed to find a tear shed for agitators protecting illegal immigrant criminals with deportation orders.
Neither you nor anyone else believes this is how immigration enforcement has been done "for decades under all sorts of administrations."
You can make it appear as if you have a better grasp on reality by just acknowledging that this is a much different enforcement mechanism than we've seen in the past, but you think that's okay.
Anyway there are now several known cases of people being detained or deported without deportation orders. This is another point that you could at least give the appearance of honesty and grasp on reality by acknowledging.
DHS's own data proves that current enforcement priorities have changed.
So what's more probable in your mind?
( Hypothesis A ) -- Mobs trying to interfere with law enforcement have caused DHS to focus on arresting and deporting immigrants without criminal backgrounds
( Hypothesis B ) -- DHS's focus on arresting and deporting immigrants without criminal background has required significant scale-up of personnel with minimal training (validated by DHS's own data) and required tactics that a large number of Americans believe to strike an unacceptable cost-benefit balance
( Hypothesis C ) -- The two facts (enforcement approach and public response) are not causally related to each other at all
One was returning from dropping off her 6 year old child at school.
The other was videotaping ICE activity with one hand while holding out the other hand to show he was no threat.
What is your point, exactly? Neither was doing anything illegal, neither was directly trying to interfere with ICE actions. (The first wasn't trying to interfere at all.)
Although normally I'd say wait for the full evidence to be revealed, in this case (1) there's already a wealth of evidence from bystanders, and (2) the investigations are actively being interfered with so official evidence is not forthcoming.
Those are the 2 citizens killed. CBP and ICE killed at least 25 other people in the field and at least 30 died in custody (one source cites 30-32, another 44).
Apparently, the violence is necessary to deport at (checks notes) a lower rate than Biden's. It might make sense if the current enforcement was aimed at serious criminals, but only the rhetoric is. The current enforcement is much less selective. More damage, less gain.
A corollary I don't see mentioned enough by the morons who believe there are roving hordes of violent illegal criminals:
Let's assume there was. Then what on earth is the administration doing tracking down and putting cuffs on so many people who do not fit in that category?
Every seat in a detention center, courtroom, or plane filled by a random guy stopped in the Home Depot parking lot is a seat taken away from one of these allegedly numerous violent rapists/murderers/whatever.
So even if you were stupid enough to believe all the transparent bullshit from this gang of liars, they'd still be fucking awful!
All this stuff does, in addition to squelching public appetite for immigration enforcement writ large, is keep the actual bad guys inside the country even longer!
It's not a lie to point out the truth. Words have meaning, and wantonly applying the scariest-sounding words you can find does not help your cause.
> As conduction and convection to the environment are not available in space, this means the data center will require radiators capable of radiatively dissipating gigawatts of thermal load. To achieve this, Starcloud is developing a lightweight deployable radiator design with a very large area - by far the largest radiators deployed in space - radiating primarily towards deep space...
They claim they can radiate "633.08 W / m^2". At that rate, dissipating a gigawatt of thermal load takes on the order of 1.6 square kilometers (roughly 160 hectares) of radiators.
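The arithmetic is quick to check at their claimed flux:

```javascript
// Back-of-the-envelope radiator area at the claimed 633.08 W/m^2.
const fluxWPerM2 = 633.08; // claimed radiated flux
const loadW = 1e9;         // 1 GW of thermal load

const areaM2 = loadW / fluxWPerM2; // ~1.58 million m^2 per gigawatt
const areaKm2 = areaM2 / 1e6;      // ~1.58 km^2
const areaHa = areaM2 / 1e4;       // ~158 hectares
```

So every gigawatt of compute needs about 1.6 km² of radiator area, before counting pumps, plumbing, and supporting structure.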
They also claim that they can "dramatically increase" heat dissipation with heat pumps.
So, there you have it: "all you have to do" is deploy a few hectares of radiators in space, combined with heat pumps that can dissipate gigawatts of thermal load with no maintenance at all over a lifetime of decades.
This seems like the sort of "not technically impossible" problem that can attract a large amount of VC funding, as VCs buy lottery tickets that the problem can be solved.
Yes, on the face of it, the plan is workable. Heat radiation scales linearly with area and with the fourth power of temperature (Stefan-Boltzmann).
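That fourth-power scaling (P = εσAT⁴) is also why running the radiators hotter helps so much. A quick sketch, assuming an ideal flat radiator facing deep space and ignoring absorbed sunlight, which is generous:

```javascript
// Stefan-Boltzmann: radiated flux = emissivity * sigma * T^4.
const SIGMA = 5.670374419e-8; // Stefan-Boltzmann constant, W/(m^2 K^4)

function radiatedWPerM2(tempK, emissivity = 0.9) {
  return emissivity * SIGMA * tempK ** 4;
}

// Doubling the temperature multiplies output by 2^4 = 16.
const at300K = radiatedWPerM2(300); // ~413 W/m^2
const at600K = radiatedWPerM2(600); // 16x as much
```

Which is why the heat pumps matter to their pitch: lifting the waste heat to a hotter radiator surface shrinks the required area dramatically, at the cost of extra power and moving parts that must survive decades without maintenance.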
It really is as simple as just adding kilometers of radiators. That is, if you ignore the incredible cost of transporting all that mass to orbit and of assembling it in space, because there is quite simply no way to fold up kilometer-scale thermal arrays and launch them in a single vehicle. There will be assembly required in space.
All in all, if you ignore all practical reality, yes, you can put a datacenter in space!
Once you engage a single brain cell, it becomes obvious that it is actually so impractical as to be literally impossible.
I kind of want to play it out though... if someone did do this (for whatever reasons), what would the real benefits even be? Something terrestrial operations wouldn't be able to catch up to in 5-10 years?
This article includes a graph with a negative slope, claiming that AI tools are useful for beginners, but less and less useful the more coding expertise you develop.
That doesn't match my experience. I think AI tools have their own skill curve, independent of the skill curve of "reading/writing good code." If you figure out how to use the AI tools well, you'll get even more value out of them with expertise.
Use AI to solve problems you know how to solve, not problems that are beyond your understanding. (In that case, use the AI to increase your understanding instead.)
Use the very newest/best LLM models. Make the AI use automated tests (preferring languages with strict type checks). Give it access to logs. Manage context tokens effectively (they all get dumber the more tokens in context). Write the right stuff and not the wrong stuff in AGENTS.md.
I'd rather spend my time thinking about the problem and solving it than thinking about how to get some software to stochastically select language that appears like it is thinking about the problem, to then implement a solution I'm going to have to check carefully.
Much of the LLM hype cycle breaks down into "anyone can create software now", which TFA makes a convincing argument for being a lie, and "experts are now going to be so much more productive", which TFA - and several studies posted here in recent months - show is not actually the case.
Your walk-through is the reason why. You've not got magic for free, you've got something kinda cool that needs operational management and constant verification.
I’ve seen otherwise intelligent and capable people get so addicted to the convenience and potential of LLMs that they start to lose their ability to slowly go through problems step by step. It’s sad.
Agreed. My work is mandating Claude Code usage this week for everyone. I spent all day today getting it to write tickets, code, and tests for something I knew how to do. I don’t understand the appeal. Telling the AI “commit those changes and then push,” then waiting for the result, takes way longer than gcmsg <commit msg> && gp.
If you're not developing an iOS/macOS app, you can skip Xcode completely and just use the `swift` CLI, which is perfectly cromulent. (It works great on Linux and Windows.)
There's a great indie app called Notepad.exe [1] for developing iOS and macOS apps on macOS. You can also write and test Swift apps for Linux easily [2]. It also supports Python and JavaScript.
If you hate Xcode, this is definitely worth a look.
So wait this thing is real? Calling it notepad.exe gave me the impression that it's just an elaborate joke about how you can code any program in Notepad...
Even if you're developing for macOS you can skip Xcode. I've had a great time developing a menu-bar app for macOS, and not once did I need to open Xcode.
I would avoid it for Linux and Windows. Even if they are "technically supported", Apple's focus is clearly macOS and iOS. Being a second- (or even third-) class citizen often introduces lots of issues in practice ("oh, nobody tested that functionality on Windows"...)
Also, a real nightmare for the municipal trade unions. (Do you know why every NYC subway train needs to have not one but two operators, even though it could run automatically just fine?)
Huh. I wonder if that makes any sense. It doesn't seem to make sense to keep employing people if you no longer need them. It sucks to be laid off, but that's just how it works.
It also shows a lack of imagination. If you have to provide a union with a job bank, why not re-deploy employees to other roles? With one person per train, re-deploy people to run more trains, thereby decreasing the interval between trains. Stations used to have medics, but this was cut. How about re-training people to be those medics? The subway could use a signaling upgrade and positive train control. Installing platform screen doors to greatly reduce the incidence of people falling onto the tracks is going to need a lot of labor.
Mass transit is a capacity multiplier. If 35 people are headed in the same direction compare that with the infrastructure needed to handle 35 cars. Road capacity, parking capacity, car dealerships, gas stations, repair shops, insurance, car loans.
First, these cities should be fixed by removing the traffic magnets. We're far past the point of the old, obsolete ideology of trying to supply as much traffic capacity as possible.
But anyway, your statement is actually not true anywhere in the US except NYC. Even in Chicago, removing ALL the local transit and switching to 6-seater minivans will eliminate all the traffic issues.
Car traffic magnets like highways inside urban cores? Or people traffic magnets like office buildings, colleges, sports stadiums, performing arts venues, shopping malls?
Large stadium arenas are a special case, but they don't create sustained traffic, and their usage periods typically do not overlap with the regular rush hour.
That's the testing matrix we have to do for iOS and Android apps today. The screen sizes don't go all the way up to ultrawide, but 13" iPad (portrait and landscape) down to 4" iPhone Mini, at every "Dynamic Type" display setting is required.
It's not that tough, but there can be tricky cases.
I think the industry settled on pretty good answers, using lots of XML-like syntax (HTML, JSX) but rarely using XML™.
1. Following Postel's law, don't reject "invalid" third-party input; instead, standardize how to interpret weird syntax. This is what we did with HTML.
2. Use declarative schema definitions sparingly, only for first-party testing and as reference documentation, never to automatically reject third-party input.
3. Use XML-like syntax (like JSX) in a Turing-complete language for defining nested UI components.
Think of UI components as if they're functions, accepting a number of named, optional arguments/parameters (attributes!) and an array of child components with their own nested children. (In many UI frameworks, components literally are functions with opaque return types, exactly like this.)
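As a sketch of that mental model (the `createElement` helper below is made up, though it's in the spirit of what JSX compiles to):

```javascript
// A "tag" is a factory: type + named attributes + nested children.
function createElement(type, props = {}, ...children) {
  return { type, props, children };
}

// A user-defined component is literally a function of its props
// and children, returning more tree.
function Card({ title }, ...children) {
  return createElement("article", { class: "card" },
    createElement("h2", {}, title),
    ...children);
}

// <Card title="Hi"><p>Body</p></Card> would compile to roughly:
const tree = Card({ title: "Hi" }, createElement("p", {}, "Body"));
```

The XML-like syntax is just a readable notation for these nested function calls, which is why it earns its keep once the nesting gets deep.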
Closing tags like `</article>` make sense when you're going to nest components 10+ layers deep, and when the closing tag will appear hundreds of lines of code later.
Most code shouldn't look like that, but UI code almost always does, which is why JSX is popular.