Should we really buy the "many months of switching difficulty" argument?
Surely the main API surface is an HTTP API like ChatCompletions? Even if it's the exact shape of Anthropic's API, the differences are surely minor. There are likely up to 2 API surfaces, that's it. If the OpenAI model APIs are more flexible (especially with the new 1M context of GPT-5.4), then they should have little difficulty adapting. Then there is LiteLLM and similar that make it even easier; half of their tooling should be using something that abstracts like that anyway. Yes, it needs evals and prompt engineering work to optimise it, but they should be used to that by now. Presumably they could even clean-room fine-tune an OpenAI model to match the same Claude shape with low loss. So I don't buy it.
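To illustrate the point about abstraction layers, here is a minimal sketch of how a LiteLLM-style setup makes the provider swap mostly a change of model string. The `build_request` helper and the model identifiers are my own illustrative choices, and the actual `litellm.completion` call is commented out so the snippet runs without API keys:

```python
# Sketch: with an abstraction layer like LiteLLM, swapping providers
# is (in the simple case) just changing a model identifier string.
# Helper name and model strings are illustrative, not from any comment above.

def build_request(model: str, prompt: str) -> dict:
    """Provider-agnostic request in the ChatCompletions shape."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

claude_req = build_request("anthropic/claude-sonnet-4", "Summarise this diff")
openai_req = build_request("openai/gpt-4o", "Summarise this diff")

# Only the model identifier differs; the rest of the surface is shared:
assert claude_req["messages"] == openai_req["messages"]
assert claude_req["model"] != openai_req["model"]

# With litellm installed and API keys configured, either request
# could then be dispatched through the same call:
# from litellm import completion
# response = completion(**openai_req)
```

The remaining work (evals, prompt tuning, tool-call quirks) is real, but it isn't an API-shape problem.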
It’s not the syntax of the API that’s the issue, it’s the behaviour and performance of the model. You can create code, images, and video with just about any model, but there are reasons people prefer Claude Code or Sora for particular tasks.
As is pointed out in my links, they are using Palantir's solution which Palantir has built around Claude AI (including custom agents/chatbots/etc.)
After Trump's tantrum with Anthropic, no doubt Palantir will be switching to OpenAI based models/agents/chatbots.
From the pov of data analysis and inference, they should be comparable, though Anthropic's AI predictions _might_ be better than OpenAI's (maybe the reason why Palantir chose them in the first place).
In the case of chardet, though, wouldn't it be more like you were the publisher of the Godfather novel, withdrawing it from print and releasing a novel with the same name, with much of the same plot and characters, but claiming the new version was an independent creation?
If the new maintainers used Claude as their “fancy code generator” (there’s a Claude.md file in the repository so it seems so) then it was almost certainly trained with the chardet source code.
> The articles usually start with a case description followed by “learning points” that include statistics, clinical observations and data from CPSP.
I can see the reason why fictional cases could be used here as a teaching aid - based on real cases/illnesses but simplified to make the learning points succinct - but surely if the cases are being cited elsewhere someone should have raised the issue earlier?
Since it was for teaching I expect the case studies were always showing typical features of real cases, so there's nothing in the case vignette itself to give it away unless the author picks a funny name or something like that.
Rather, it would be the entire form of these short highlight articles that would make you keep searching for a proper citation, unless you're lazy or pressed for time.
Wouldn't citing actual cases be a HIPAA violation? I can see why they would invent example cases, based on real ones, especially if they are fairly pedestrian cases.
I mean, except if your pedestrian example does not reflect reality, then that is bad.
It's a privacy violation to reveal information that identifies the patient. It is not a violation (and is extremely common) to recount details without noting names, places, or even dates. Unless you already have access to a database of records you won't be able to track it down.
It's even common during talks to display diagnostic images that have had any identifying marks redacted.
Let us clarify here as it is very different indeed.
The Jolla C2 Community Phone is done in collaboration with Reeder, who is the HW vendor. This means Reeder sources the components, plans the production and does the manufacturing in Turkey. Jolla provides the complete software stack (Sailfish OS), which is installed by Reeder during manufacturing.
In the new Jolla Phone everything is different. Jolla is the vendor, has designed the product itself, done the component sourcing and pays the component vendors directly. We control the pipeline. Further, we have secured our position for the initial memory batch with an advance purchase.
Also, to be clear: Reeder has no involvement in the new Jolla Phone.
Thank you for asking, very good points to clarify!
Manifold actually explicitly encourages insider trading, arguing that it leads to more accurate pricing. This was possibly defensible back when it was a cute funtime project run by a Bay Area polycule, but it’s probably going to get them in deep shit sooner or later, even though they don’t even use real-money betting.
The vast majority of insider trading schemes are not prosecuted; many leave no evidence trail at all short of going deep into black-op classified territory.
Thanks for making me aware of another federal agency :)
Seems to me prosecuting or regulating this sort of activity is futile, and pretty much serves only the interests of the mob. These markets make additional data open source, which otherwise might belong exclusively to the mob, so that's pretty cool. We democratized buying airstrikes.
You may know how bad things really are, but if you don't, the lawboys are pretty much just playing pretend at this point, and have been for a while.
Mob wants me to add: if you try to buy an airstrike with our very based and functional cryptocurrency systems, you will probably just find mob. We have mob priced in, anybody with a significant amount of cryptocurrency knows this too.
It's not as simple as "buy an airstrike" comrade (we are referencing the person writing this post)
If you have been in the industry for a few decades you will be able to think of several hundred "silver bullets" that made great promises - some even turned out to be great ideas, but none were the 10x revolution that they promised.
The article is a good summary of major movements through the decades without so much detail that the whole point is lost. I would have included a slightly different set of things if I were writing that article, but the point would still stand, and I would leave out many things that could be included but would be too much noise.
Maybe that's due to Amdahl's law applied to software. Everybody imagines that task X, which is improved by 10x, is 100% of the total work, so you will get a 10x overall benefit, when in fact it might be something like 20% of the work, so your overall benefit is only about 1.22x.
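The arithmetic above can be sketched as a one-liner (the function name is mine; the 20% share and 10x factor are the figures from the comment):

```python
# Quick check of the Amdahl's-law arithmetic: speeding up only a
# fraction of the total work bounds the overall benefit.

def overall_speedup(fraction_improved: float, factor: float) -> float:
    """Amdahl's law: speedup of the whole when only a fraction speeds up."""
    return 1.0 / ((1.0 - fraction_improved) + fraction_improved / factor)

# Task X is 20% of the work and gets 10x faster:
print(round(overall_speedup(0.20, 10.0), 2))  # prints 1.22, nowhere near 10x
```

Even an infinite speedup on that 20% would cap the overall benefit at 1/(1 - 0.2) = 1.25x.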
I'm not familiar with Software Reuse, but if it's about re-using software itself, one advantage of a live codebase is that it's understood in the head of a human being. That means when an issue is opened, a person remembers whether it's a new issue or not. It's not "just" semantic search, where that person only knows whether it's genuinely new (and thus can be closed), but also why it exists in the first place. Is it the result of the current architecture, a dependency choice, etc., or simply a "shallow" bug that can be resolved by fixing a single function?
They (along with Waymo) plan to launch services in London this year - it will be very interesting to see how they cope with the often complex non-grid roads, huge number of pedestrians, buses and cyclists, not to mention the militant black cab drivers.
I don't see self-driving cars ever working in the UK.
It's hard enough for a human driver to negotiate their way through, for example, York, never mind a computer that can only react painfully slowly to outside influences.
I fully expect to see a lot of written-off self-driving cars scattered along the A82 through Glencoe, Cluanie, and Inverinate, as they entirely fail to cope with deer, sheep, and feral goats.
So a Google AI Pro/Ultra account is intended to be used from their CLI or tools (like their Antigravity agent front end).
Their API usage isn't included in these plans, although under the hood Antigravity uses the API.
People have been using the API auth credential intended for Antigravity with OpenClaw, presumably generating a significant amount of usage, and have been caught.
The Google admin tools and process haven’t quite been able to cope with this situation, and people have been overly banned with poor information sent to the users.
I don’t think either OpenAI or Anthropic include any API use in their ‘pro’ plans either?
This reminds me of the customers of “unlimited broadband” of yesteryear getting throttled or banned for running Tor servers.
> The Google admin tools and process haven’t quite been able to cope with this situation and people have been overly banned with poor information sent to the users.
I can’t recall any success story of Google’s support team or process coping with a consumer’s situation; many such stories have been posted here. This isn’t a new outcome, just a new cause.
I do want to understand what’s happening with the $250/mo fees of users caught in this. Will they be automatically cancelled at some point?