btown's comments

Are there any ways in Cloudflare to mitigate this? If every sports match basically means "our clients can't access our Cloudflare-backed app in Spain," then it's worse than fewer nines; it's a correlated event that could disrupt things like travel check-ins, etc. - and it's a hard pitch to say "Cloudflare costs us money and has no solution for its network putting our Spanish arrivals at risk."

Any information about what the $295 Studio license adds for Photos specifically? Having trouble finding a feature matrix.

Is scripting from an external system (like Claude Code) easier/only possible with the full Studio version?


IMO the thing that AI will change is the type of target. It's reasonable to assume that if you launch a website for a small business nowadays - sure, you'll get phishing attempts, port scans, attempts to submit SQL injections into your signup forms, etc.

But you won't get the equivalent of a sophisticated actor's spear-phishing efforts, highly customized supply-chain attacks informed by likely vendor data, or the individualized attention that goes beyond blindly propagating when a developer downloads a hacked NPM package or otherwise picks up a local virus: logging into the company's SaaS systems overnight, pivoting to senior colleagues, doing crazy things like updating PRs to simultaneously fix bugs while subtly adding injection surface area, logging into configuration systems whose changes aren't tracked in Git, identifying how one might sign up as a vendor and trigger automatic payments to themselves with a Slack DM as cross-channel confirmation, etc.

The only thing holding this back from hitting every company is risk vs. reward. And that threshold - the point where the likelihood of success, multiplied by the payout, exceeds the token cost - is rapidly approaching. It might not be crossed with Mythos itself, but it might be with open-source coding models distilled from it, running on crypto mining servers during times when mining is unprofitable, or wielded by state actors for whom mere chaos is the goal.
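The break-even condition above can be sketched as a toy expected-value calculation (all numbers here are hypothetical, purely for illustration):

```python
def attack_is_rational(p_success: float, payout_usd: float, token_cost_usd: float) -> bool:
    """Toy model: an automated attack 'pays' once expected value exceeds inference cost."""
    return p_success * payout_usd > token_cost_usd

# Hypothetical numbers: a 0.1% success rate on a $50,000 payout
# justifies up to $50 of tokens per attempt.
print(attack_is_rational(0.001, 50_000, 40))  # True: EV of $50 > $40 of tokens
print(attack_is_rational(0.001, 50_000, 60))  # False: EV of $50 < $60 of tokens
```

As token costs fall, the set of targets for which the left side exceeds the right only grows.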


They're gonna shut the internet down by country

Can't stop the signal, Mal.

> But when you run your code in production, the KISS mantra takes on a new dimension. It’s not just about code anymore; it’s about reducing the moving parts and understanding their failure modes.

This sentence, itself, takes on new meaning in the age of agentic coding. "I'm fine with treating this new feature as greenfield even if it reimplements existing code, because the LLM will handle ensuring the new code meets biz and user expectations" is fine in isolation... but it may mean that the code does not benefit from shared patterns for observability, traffic shaping, debugging, and more.

And if the agent inlines code that itself had a bug, that later proves to be a root cause, the amount of code that needs to be found and fixed in an outage situation is not only larger but more inscrutable.

Using the OOP's terminology, where biz > user > ops > dev is ideal, this is a dev > ops style failure that goes far beyond "runs on my machine" towards a notion of "is only maintainable in isolation."

Luckily, we have 1M context windows now! We can choose to say: "Meticulously explore the full codebase for ways we might be able to refactor this prototype to reuse existing functionality, patterns, and services, with an eye towards maintainability by other teams." But that requires discipline, foresight, and clock-time.


But you can do other things to mitigate this. For instance, give each app a set of rolling daily encryption keys, and encrypt new messages at rest. Remove the app, remove all keys. Nightly, remove the oldest key. Perhaps have the entire key database either stored in the Secure Enclave, or, if there isn't room, have the key database itself encrypted by a rotating single key in the Secure Enclave. Now there's nothing an attacker can do to recover messages whose keys are gone.
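A minimal sketch of that rolling-key scheme. The XOR "cipher" here is a toy stand-in for a real AEAD cipher, and the in-memory key store stands in for the Secure Enclave; the point is that dropping the oldest key makes its messages permanently unrecoverable:

```python
import hashlib
import hmac
import os
from collections import OrderedDict


def keystream(key: bytes, n: int) -> bytes:
    # Derive a deterministic pseudorandom stream from the key
    # (toy stand-in for a real cipher like AES-GCM).
    out, counter = b"", 0
    while len(out) < n:
        out += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:n]


def xor(data: bytes, stream: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, stream))


class RollingKeyStore:
    def __init__(self):
        self.keys = OrderedDict()  # day -> key, oldest first

    def key_for(self, day: str) -> bytes:
        if day not in self.keys:
            self.keys[day] = os.urandom(32)  # fresh random key per day
        return self.keys[day]

    def encrypt(self, day: str, plaintext: bytes) -> bytes:
        return xor(plaintext, keystream(self.key_for(day), len(plaintext)))

    def decrypt(self, day: str, ciphertext: bytes) -> bytes:
        if day not in self.keys:
            raise KeyError("key expired: message unrecoverable")
        return xor(ciphertext, keystream(self.keys[day], len(ciphertext)))

    def expire_oldest(self) -> None:
        # Nightly job: drop the oldest day's key, erasing access to its messages.
        self.keys.popitem(last=False)
```

"Remove the app" is then just clearing the whole `keys` dict; the ciphertext left on disk is noise without it.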

This would be true if the algorithm changes were limited to for-you feeds. But the larger problem is that the set of people willing to pay for X are boosted in replies. So if that set of people, which tends towards a certain political bias, is hostile towards a poster, that poster will be driven away from posting on X.

The net result is that X shows breaking news in the same way that the (infamous) meme of bullet holes marked on returning WWII planes only shows part of the story - the people who have departed the platform aren't posting, and thus X only breaks news from a subset of people.

This might be fine for certain types of topics. For understanding the zeitgeist on culture and politics, though, you can't filter your way towards hearing from voices that are no longer posting at all.


I don't care about culture and politics on X; in fact, it's something I actively block. By discussion I mean tech news and trends, i.e. how someone is using the latest AI model or what new project was created, that sort of stuff. The people I follow provide me that, not politics. If you're there for politics then I agree with your point - look elsewhere.

I’d argue that learning about the effectiveness of AI models and techniques benefits from voices with different contexts, as well as voices that may be differently aligned with the bull vs. bear cases of macro AI strategy.

And those voices may be unequally represented in X for similar reasons - perhaps (somewhat) uncorrelated from politics, but simply due to the UX consequences of prioritizing commenters willing to pay the platform.


To be sure, the problem isn't that the plugin injects behavior into the system prompt - that's every plugin and skill, ever.

But this is just such a breach of trust, especially the on-by-default telemetry that includes full bash commands. Per the OOP:

> That middle row. Every bash command - the full command string, not just the tool name - sent to telemetry.vercel.com. File paths, project names, env variable names, infrastructure details. Whatever’s in the command, they get it.

(Needless to say, this is a supply chain attack in every meaningful way, and should be treated as such by security teams.)

And the argument that there's no CLI space to allow for opt-in telemetry is absurd - their readme https://github.com/vercel/vercel-plugin?tab=readme-ov-file#i... literally has you install the Vercel plugin by calling `npx` https://www.npmjs.com/package/plugins which is written by a Vercel employee and could add this opt-in at any time.

IMO Vercel is not a good actor. One could make a good argument that they've embrace-extend-extinguished the entire future of React as an independent and self-contained foundational library, with the complexity of server-side rendering, the undocumented protocols that power it, and the resulting tight coupling to their server environments. Sadly, this behavior doesn't surprise me.

EDIT: That `npx plugins` code? It's not on Github, exists only on NPM, and as of v1.2.9 of that package, if you search https://www.npmjs.com/package/plugins?activeTab=code it literally sends telemetry to https://plugins-telemetry.labs.vercel.dev/t already, on an opt-out basis! I mean, you have to almost admire the confidence.


I’ll just say that as someone who was on the React team throughout these years, the drive to expand React to the server and the design iteration around it always came from within the team. Some folks went to Vercel to finish what they started with more solid backing than at Meta (Meta wasn’t investing heavily into JS on the server), but the “Vercel takeover” stories that you and others are telling are lies.

Gosh, Dan, in seeing your response here - I'm truly sorry I wrote this. While I still find opt-out telemetry distasteful and dangerous, I over-generalized to React in a hurtful way. You've been an incredible influence on me and I have the utmost respect for everything you've done. I've shown quite the opposite of respect in my writing, here.

For whatever it's worth on the RSC front: I, and many others accustomed to the principle that "if there's a wire protocol and it's meant to be open, the bytes that make up those messages should be documented," were presented with a system, at the release time of RSC, that was incredibly opaque from that perspective. There's still minimal documentation of each bundler's wire protocol. And we're all aware of companies that have used this kind of opacity as an intentional form of obfuscation since the dawn of networked computing - it's our open standards that have made the Internet as beautiful as it is.

But I was wrong to pin that on your team at Vercel, and I see that in the strength of your response. Intention is important, and you wanted to bring something brilliant to the world as rapidly as possible. And it is, truly, brilliant.

I should rethink how I approached all of this, and I hope that my harshness doesn't discourage you from continuing, through your writing, to be the beacon that you've been to me and countless others.


Hey, appreciate the reply! I’m sorry for lashing out as well.

Re: protocol, I see where you’re coming from although I can also see the team perspective and I kind of like it the way it is. The team’s perspective is that this isn’t a “protocol” in the sense that HTTP or such is. It isn’t designed to have many implementations emitting it. It is an implementation detail of React itself for which React provides both the “writer” and the “reader”. Those are completely open source — none of the RSC frameworks need to know the actual wire format. They just use the packages provided by React to read and write. Keeping the protocol an implementation detail of React (rather than an “open format”) lets React evolve it pretty substantially between versions with zero concerns about backwards compat. Which is still quite useful at this stage.

When you’re concerned about the wire format not being an open protocol, it’s because you’re imagining it would be useful for others to read and write. But that doesn’t really buy you anything. If you’re making an RSC framework, you should just use the React packages for reading and writing. And if you’re not, there’s no reason to use this format. E.g., if you make an RSC-like framework in another (non-JS) language, it’s better for you to use your own wire format that’s more targeted to your use case.
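The encapsulation argument can be sketched abstractly (in Python here, with a deliberately toy "wire format" that has nothing to do with React's actual bytes): the library ships both the writer and the reader, so callers never depend on the format, and the format can change freely between versions:

```python
import json

# Toy stand-in for a paired writer/reader: the wire format below is an
# implementation detail; write_payload/read_payload are the only public surface.
_WIRE_VERSION = 2  # bumping this can change the bytes with zero caller impact


def write_payload(tree: dict) -> bytes:
    # v2 happens to be length-prefixed JSON; v3 could be anything else entirely.
    body = json.dumps({"v": _WIRE_VERSION, "tree": tree}).encode()
    return len(body).to_bytes(4, "big") + body


def read_payload(data: bytes) -> dict:
    n = int.from_bytes(data[:4], "big")
    envelope = json.loads(data[4 : 4 + n])
    # Writer and reader always ship together, so versions always match.
    assert envelope["v"] == _WIRE_VERSION
    return envelope["tree"]


# A "framework" built on top stays agnostic of the bytes:
payload = write_payload({"type": "div", "children": ["hello"]})
assert read_payload(payload) == {"type": "div", "children": ["hello"]}
```

A framework that only calls the two functions keeps working across any rewrite of the bytes in between.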

Does this help clarify it?

(Note I do think it would be beneficial to document the current version for educational reasons, which is part of why I made https://rscexplorer.dev, but that’s separate from wanting it to be fixed in stone as protocols must be. I think the desire for it to be fixed is rooted in a misconception that frameworks like Next.js somehow “rely” on the protocol and thus have “secret knowledge”, but that’s false — they just use the React packages for it and stay agnostic of the actual protocol.)


Similarly, one of the great things about Python (less so JS, given the ecosystem's habit of shipping minified bundles) is that you can just edit source files in your site-packages once you know where they are. I've done things like add print statements around obscure Django errors as a poor imitation of instrumentation. Gets the job done!
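Finding the file to edit is a one-liner (shown here with the stdlib `json` module as a stand-in for an installed package like Django):

```python
import importlib.util

# Locate the source file of an installed module so you can open and edit it.
spec = importlib.util.find_spec("json")
print(spec.origin)  # e.g. /usr/lib/python3.12/json/__init__.py
```

For a third-party package, `spec.origin` points into your environment's site-packages directory.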

Benchmarks are meaningless until the pelican benchmark comes out: https://simonwillison.net/

> ended up enabling groq

For those reading fast: this isn't a reference to xAI's Grok; this is Groq.com - with its custom inference chip, and offerings like https://groq.com/blog/introducing-llama-3-groq-tool-use-mode... and https://console.groq.com/landing/llama-api


I really liked Groq for its speed, but it seems like it has been discontinued after Nvidia bought it...
