It seems that way at first glance. But in reality, GPUs have seen extremely slow adoption in real-world operational meteorology. Because of the fundamental design and architecture of most NWP systems, it was very difficult to leverage GPUs as compute accelerators; most efforts barely eked out any performance gains once you account for host/device memory transfers. It wasn't until some groups started designing new weather modeling systems from the ground up that they could architect things in a way where GPUs made a significant difference.
Obviously AI / ML weather modeling is a different story.
As someone working in a field that has used NLP for quite some time - yeah, I generally agree that those investments are worth their weight in gold... which is unfortunate, because before ChatGPT came along they were viewed as niche, unprofitable money sinks. The astronomical investments we've seen lately have gone into general models, which can be leveraged to outperform some of our older models; but if the goal had purely been to improve those models, there were much more efficient ways to do so.
Hopefully we can retain a lot of this value when the bubble bursts, but I just haven't seen any really good success stories of converting these models into businesses. If you try to build as a middleman, leveraging a model to solve someone's problem, they can always go straight to the model runner and get the same results for cheaper - and the model runners seem (so far - this may change) unable to price model usage at a level that is actually sustainable.
Those older models running specialized tasks seem to be trucking along just fine for now - but I remain concerned that when the bubble bursts it's going to starve these necessary investments of capital.
I think it's pretty clear to all the big operators that they will need to go whole hog into ads and take some of the Google/Meta pie. It's just a matter of time.
You're missing the point. Those kinds of narrow AI applications are not the motivation for the trillions of dollars being poured into AI. Of course AI has a variety of applications across many disciplines, as it has for decades. The motivation behind the massive investment in AI is, as forgetfulness said, to reap the benefits of "revolutionizing the workplace".
Eh, those applications (incl. protein folding) existed for a decade-plus before LLMs came onto the scene, and there was absolutely nothing like the scale of capex that we're seeing right now. It's like literally 100-1000x larger than what GPU hosting providers were spending previously.
That’s copium, as the kids say nowadays. The massive planetary investment is 100% for AI chats. All those other things are taking the crumbs where they can.
The really weird thing is that Big Business actually is buying supercomputer clusters to do just that. I can't really speak to the government side, but a lot of businesses' early forays into AI were just slapping a chatbot on their product and hoping it would attract a lot more business. I also think you'd be surprised how integrated really dumb chatbots are into business communication these days.
I think most smart people are looking seriously at different models to try to improve the accuracy of any existing ML uses in their business, but the new features built post-ChatGPT often tend to be just fancied-up chat interfaces.
That's happening of course, but that's not really the whole picture. Any org that already invests in R&D is likely considering or already implementing modern AI tech into their existing infrastructure. A big oil or pharmaceutical or materials company likely doesn't care much about chat bots, or any customer-facing tech for that matter.
Actually, big orgs are doing exactly that: slapping a chatbot onto their support ticket backlog. Being really, actually “data driven” is hard, and it has to happen from the bottom up. So instead there are chatbots in the frontend and the support backend, while the backend doing the actual lifting probably hasn’t changed one bit.
This has changed a lot in the past decade -- any modern Fedora box has SELinux enabled by default now and so I would wager the majority of Fedora/CentOS/AlmaLinux/RHEL boxes have SELinux enabled and in enforcing mode. openSUSE/SLES is also switching to SELinux in 16.0.
Yeah, there are some botnets I've been seeing that are much more stealthy, using 900-3,000 IPs with rotating user agents to send enormous amounts of traffic.
I've resorted to blocking entire AS routes to prevent it (fortunately I'm mostly hosting US sites with US-only residential audiences). I'm not sure who's behind it, but one of the data centers showing up later in the traffic is Oxylabs, so they're probably involved somehow.
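For what it's worth, the matching side of an AS-wide block is simple once you have the announced prefixes. Here's a minimal sketch in Python, assuming you've already pulled the CIDR list for the ASN from a routing registry or BGP dump - the prefixes below are documentation ranges, purely illustrative:

```python
import ipaddress

# Illustrative prefixes standing in for an AS's announced routes.
# In practice you'd populate this from a routing registry or BGP dump.
BLOCKED_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/22"),
]

def is_blocked(client_ip: str) -> bool:
    """Return True if the client IP falls inside any blocked prefix."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BLOCKED_PREFIXES)
```

In production you'd push the same prefix list into an ipset or your CDN's firewall rules rather than checking per-request in the app, but the membership logic is the same.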
I mean, forcing them to spend engineering effort to make their bot stealthy (or to maintain tens of thousands of open ports) still drives up their costs, so I'd count it as a win. The OP doesn't say why the bot is hitting their endpoints, but I doubt the bot is a profit centre for the operator.
In this case I don't think they do - unless the legitimate users are also hitting your site at 700 RPS (in which case, the added load from the bot is going to be negligible)
Once the bot is stealthy (the scenario in this sub-thread, if I haven't misread), they absolutely do. A couple of examples where I've been flagged as a bot for normal traffic:
1. Discord's telemetry was broken in my browser, and on failure it immediately retried. It didn't take many actions queued up on the site before my browser was initiating over 100 RPS on their behalf.
2. Target and eBay still flag my sessions as bot traffic (presumably because they don't recognize the user agent or because I use Linux or something). Target allows browsing their site for a few items before heavily rate-limiting me for a day or so, and eBay just resets my password a day or two after I log in, every single bloody time.
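The Discord failure in item 1 is the textbook argument for retry backoff: on error, immediate retries from queued-up actions multiply into a request storm. A minimal sketch of exponential backoff with full jitter - function name and defaults are illustrative, not any real client's API:

```python
import random

def backoff_delays(base: float = 0.5, cap: float = 30.0, attempts: int = 5):
    """Yield a delay (in seconds) before each retry.

    Delays grow exponentially up to `cap`, with full jitter
    (uniform over [0, ceiling]) so that a batch of queued-up
    requests doesn't retry in lockstep and hammer the server.
    """
    for attempt in range(attempts):
        yield random.uniform(0.0, min(cap, base * 2 ** attempt))
```

With something like this in place, a broken endpoint produces a handful of spread-out retries per action instead of a tight 100 RPS loop.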
The problem is that normal users will occasionally generate large traffic volumes themselves, and if the bot owner spreads requests across many IPs, no single address comes anywhere near 700 RPS - so you're forced to fall back on less reliable signals for the ban hammer.
The massive planetary investment is not to make more AI chats that summarize text. That's just short sighted.