ed_mercer's comments | Hacker News

> That's a ridiculous suggestion.

Is it though? Claude's reliability is now at an all-time low of 98.7%. It's not a stretch to think that large companies will have second thoughts about adopting Claude for their production environments.


> Apple would have to disable a GPU core on these chips to ensure that they have only a 5-core GPU, like all other MacBook Neo units sold to date.

This is wild. Why not just leave it a 6-core GPU? Would there really be bad press from having an extra GPU core?


Possibly lower battery life? Not sure how they fuse it off, but if you have 20% more GPU you could possibly feel it even at idle.

That said, I'd much rather have the extra core.


You'd have a class action lawsuit from 5-core owners saying that later adopters got a better performing unit for free.

Couldn't Apple just release a MacBook Neo 2 (or 1.1) or something to mitigate this?

> Apple could certainly make some changes to prevent this being an issue at all.

Why Apple still hasn't fixed this in 2026 baffles me. The fact that a company the size of Tailscale has to find workarounds for an Apple blunder like this speaks volumes about how terrible Apple's software management is.


It really is very simple. Because people keep purchasing their products.

Orbiter, the space simulation predecessor to KSP, has exactly such a mission, where you see the moon in front of you as you ascend into the sky.

> Thanks for stopping by!

Missed chance to use "slopping by"


Amazing, that's getting its own PR

I don't understand how this can be economically viable. If this takes off, it will allow businesses to use openclaw-like functionality at non-API prices (Pro, Max).


Do you know for sure that the Pro / Max plans are unprofitable at full usage? I did a brief back-of-the-envelope calculation for MiniMax M2.5, comparing its API pricing to my token usage on a fully used 20x Claude Max plan. It worked out to around 260-ish, which, assuming some margin, would put Claude Max around breakeven.
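For anyone who wants to redo that math, this is the shape of the calculation (every token count and per-token price below is a placeholder, not a published figure; substitute your own usage and the provider's current pricing):

    # Rough API-equivalent cost of a month of plan usage.
    # All numbers here are assumptions for illustration only.
    input_tokens  = 300_000_000     # assumed monthly input tokens
    output_tokens = 30_000_000      # assumed monthly output tokens
    price_in_per_mtok  = 0.30       # assumed $ per million input tokens
    price_out_per_mtok = 1.20       # assumed $ per million output tokens

    api_equivalent = (input_tokens  / 1_000_000 * price_in_per_mtok +
                      output_tokens / 1_000_000 * price_out_per_mtok)
    print(f"API-equivalent cost: ${api_equivalent:,.2f} / month")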


It doesn't matter if they are unprofitable at full usage, as long as there are enough users (like me!) who barely ever max out but still pay the $100/month. The people who love Claude Code enough to max out the 20x plan every day, that's probably the best influencer marketing campaign you could ever buy anyways.


Anthropic previously shared that they make ~60% margin on API access. So they're losing money on plan whales.
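If that margin figure is right, the break-even point is easy to sketch (a toy calculation; the plan price is assumed and the margin is just the claimed number):

    # A plan subscriber costs more to serve than they pay once their
    # API-equivalent usage, times the (1 - margin) serving cost, exceeds
    # the plan price. With a claimed ~60% API margin:
    margin = 0.60          # claimed gross margin on API access
    plan_price = 200.0     # assumed monthly plan price
    breakeven_usage = plan_price / (1 - margin)
    print(f"Unprofitable above ~${breakeven_usage:.0f}/month of API-equivalent usage")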


How is this still a problem? This has happened so many times now.


I mean... it's not a microblogging service, or even Github. Think of the resources you are marshaling and tapping into when you request Opus inference.

Think of the statefulness of the systems necessary to manage prompt caching.

Not to mention all the enterprise SLAs they have to meet before they serve traffic to gen pop.

Additionally:

- I believe Anthropic is running a planned promotion where short-term limits are doubled? Could be off on the dates though.

- It's two weeks before EOQ, and everyone's no doubt modeling and planning EVERYTHING last minute, because with Claude they can

- It is valuable infrastructure and thus a potential target of attackers


Sure, but they get less sympathy when a lot of their high profile employees talk about using Claude to write 100% of their code and yet Claude Code has loads of issues and their services go down every 10 minutes.


Not my experience so your exaggeration doesn't really help make your point.


The problem with this is that it all runs locally on someone's computer, whereas with openclaw you can involve your teammates (e.g. on Slack), which is much more powerful.


> Honors robots.txt directives, including crawl-delay

Sounds pretty useless for any serious AI company


What % of sites have a content-update volume that exceeds what you can fetch while respecting crawl-delay?

If your delay is 1s and you publish fewer than 60 updates a minute on average, I can still get 100%. Most crawls are not that latency sensitive, certainly not the AI ones.
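To put numbers on that, a toy sketch (the robots.txt URL and the update rate are made up; crawl_delay() is the stdlib robotparser helper):

    import urllib.robotparser

    # A crawl-delay of D seconds caps you at 86400 / D fetches per day, so you
    # only fall behind if the site changes more pages per day than that.
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")   # placeholder URL
    rp.read()

    delay = rp.crawl_delay("mybot") or 1.0          # fall back to 1 s if unspecified
    max_fetches_per_day = 86_400 / delay

    updates_per_day = 5_000                         # assumed publication rate
    coverage = min(1.0, max_fetches_per_day / updates_per_day)
    print(f"{max_fetches_per_day:.0f} fetches/day possible -> {coverage:.0%} coverage")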

HFT bots, now that is an entirely different ballgame.


> Most crawls are not that latency sensitive, certainly not the ai ones.

They certainly behave like they are. We constantly see crawlers trying to do cache busting for pages that haven't changed in days, if not weeks. It's hard to tell where the bots are coming from these days, as most have taken to just lying and saying that they are Chrome.

I'd agree that respecting robots.txt makes this a non-starter for the problematic scrapers. Those are the bots that will hammer a site into the ground; they don't respect robots.txt, especially if it tells them to go away.

All of this would be much less of a problem if the authors of the scrapers actually knew how to code, understood how the Internet works, and had just the slightest bit of respect for others. But they don't, so now all scrapers are labeled as hostile, meaning that only the very largest companies, like Google, get special access.


> We constantly see crawlers trying to do cache busting

Do you have a source for this? Not saying you're wrong, I'd just like to know more


Not really, given that the work we do in that direction isn't exactly public. You can recreate the scenario though. Spin up a wiki of some sort, scrapers love wikis, ideally enable some form of caching, and just sit back and watch scrapers throw random shit in the URL parameters.
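If you want to quantify it rather than eyeball it, something along these lines over the access log works (the log format, file name, and list of legitimate params are all assumptions about your setup):

    import re
    from collections import Counter
    from urllib.parse import urlsplit, parse_qsl

    # Count query-parameter names per path and flag ones the app never uses.
    KNOWN_PARAMS = {"action", "oldid", "search"}     # params the wiki actually uses (assumed)
    suspicious = Counter()

    with open("access.log") as f:                    # assumed log location
        for line in f:
            m = re.search(r'"GET (\S+) HTTP', line)
            if not m:
                continue
            url = urlsplit(m.group(1))
            for name, _ in parse_qsl(url.query):
                if name not in KNOWN_PARAMS:
                    suspicious[(url.path, name)] += 1

    for (path, name), hits in suspicious.most_common(10):
        print(f"{hits:6d}  {path}?{name}=...")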


"My apologies! I should not have picked that girl school as a target. Updated my NOTES.md"

