Is it though? Claude's reliability is now at an all-time low of 98.7%. It's not a stretch to think that large companies will have second thoughts about adopting Claude for their production environments.
> Apple could certainly make some changes to prevent this being an issue at all.
Why Apple still hasn't fixed this in 2026 baffles me. The fact that a company the size of Tailscale has to find workarounds for an Apple blunder like this speaks volumes about how terrible Apple's software management is.
I don't understand how this can be economically viable. If this takes off, it will allow businesses to use openclaw-like functionality at non-API prices (Pro, Max).
Do you know for sure that the Pro/Max plans are unprofitable at full usage? I did a brief back-of-the-envelope calculation for MiniMax M2.5, comparing its API pricing to my token usage on a fully used Max 20x Claude plan; it worked out to around $260, which, assuming some margin, would put Claude Max around breakeven.
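The back-of-the-envelope math here is just token volume times per-token price. A minimal sketch, where every number (token counts, per-million prices, plan price) is a hypothetical placeholder and not Anthropic's or MiniMax's actual figures:

```python
def api_cost_usd(input_tokens, output_tokens, in_price_per_m, out_price_per_m):
    """Cost of a month's usage at given per-million-token API prices."""
    return (input_tokens / 1e6) * in_price_per_m + (output_tokens / 1e6) * out_price_per_m

# Hypothetical monthly usage on a maxed-out plan (placeholder values).
monthly_input_tokens = 400e6
monthly_output_tokens = 20e6

cost = api_cost_usd(monthly_input_tokens, monthly_output_tokens,
                    in_price_per_m=0.50, out_price_per_m=2.00)
plan_price = 200  # placeholder sticker price for a 20x plan

print(f"API-equivalent cost: ${cost:.0f} vs plan price ${plan_price}")
```

If the API-equivalent cost comes out near the subscription price, the plan is roughly breakeven at full usage before margin.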
It doesn't matter if they are unprofitable at full usage, as long as there are enough users (like me!) who barely ever max out but still pay the $100/month. The people who love Claude Code enough to max out the 20x plan every day, that's probably the best influencer marketing campaign you could ever buy anyways.
I mean... it's not a microblogging service, or even Github. Think of the resources you are marshaling and tapping into when you request Opus inference.
Think of the statefulness of the systems necessary to manage prompt caching.
Not to mention all the enterprise SLAs they have to meet before they serve traffic to gen pop.
Additionally:
- I believe Anthropic is running a planned promotion where short-term limits are doubled? Could be off on the dates though.
- It's two weeks before EOQ, and everyone's no doubt modeling and planning EVERYTHING last minute, because with Claude they can
- It is valuable infrastructure and thus a potential target of attackers
Sure, but they get less sympathy when a lot of their high profile employees talk about using Claude to write 100% of their code and yet Claude Code has loads of issues and their services go down every 10 minutes.
The problem with this is that it all runs local on someone's computer, whereas with openclaw you can involve your teammates (e.g. on slack) which is much more powerful.
What % of sites have a content update volume that exceeds what you can get respecting crawl delay?
If your delay is 1s and you publish fewer than 60 updates a minute on average, I can still get 100%. Most crawls are not that latency sensitive, certainly not the AI ones.
HFT bots, now that is an entirely different ballgame.
> Most crawls are not that latency sensitive, certainly not the ai ones.
They certainly behave like they are. We constantly see crawlers trying to do cache busting on pages that haven't changed in days, if not weeks. It's hard to tell where the bots are coming from these days, as most have taken to just lying and saying that they are Chrome.
I'd agree that respecting robots.txt makes this a non-starter for the problematic scrapers. These are bots that will hammer a site into the ground; they don't respect robots.txt, especially if it tells them to go away.
All of this would be much less of a problem if the authors of the scrapers actually knew how to code, understood how the Internet works and had just the slightest bit of respect for others, but they don't so now all scrapers are labeled as hostile, meaning that only the very largest companies, like Google, get special access.
Not really, given that the work we do in that direction isn't exactly public. You can recreate the scenario though. Spin up a wiki of some sort, scrapers love wikis, ideally enable some form of caching, and just sit back and watch scrapers throw random shit in the URL parameters.
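One common defense against that kind of parameter-junk cache busting (my suggestion, not something described in the thread) is to canonicalize URLs before the cache lookup, so random junk parameters all collapse onto the same cache key. A sketch, where the allowlist and URL are hypothetical:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

ALLOWED_PARAMS = {"page", "lang"}  # hypothetical allowlist for this site

def cache_key(url):
    """Drop query parameters not on the allowlist and sort the rest,
    so cache-busting junk like ?cb=91fd3 maps to the canonical entry."""
    parts = urlsplit(url)
    kept = sorted((k, v) for k, v in parse_qsl(parts.query) if k in ALLOWED_PARAMS)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

print(cache_key("https://wiki.example/Page?page=2&cb=91fd3&utm_source=x"))
# → https://wiki.example/Page?page=2
```

This doesn't stop the requests, but it stops each junk variant from forcing a fresh render.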