This argument makes no sense. Consumer GPU pricing (which I'm assuming is what you're referring to) has very little to do with the pro market (industry, research, etc.).
The researchers are using things like DGX systems or RTX A-series cards. These, while quite expensive, are not that unreasonably priced for what they are.
In 2018-2019, an individual could afford computing power for this kind of work (not exactly this research, but e.g. personal ML experiments) at a reasonable price. You could buy two new RTX 2080s for what a single used one costs today. If you want to tinker and need GPU power now, your best option is to rent special datacenter-approved(tm) GPUs at steep hourly rates. And you own nothing afterwards (unless you bought a GPU before the end of 2020). Does this make no sense? Is this how technological progress should work?
2080s? With only 8 GB of VRAM that isn't even ECC-backed?
Even for ML model training back then, 8 GB was on the small side (a lot of research repos even had special parameter sets to allow running on consumer-level VRAM GPUs). Also, for something like long-running bio simulations, you'd probably want to be sure your memory bits aren't being silently flipped -- the extra upfront cost is well worth it to avoid potentially wrong research results...
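To illustrate what I mean by those parameter sets: roughly this kind of thing (a made-up PyTorch sketch, not from any particular repo; the flag names and model are purely illustrative):

    # Illustrative only: the kinds of knobs repos exposed so training could
    # fit into ~8 GB of consumer VRAM. Model/dataset are stand-ins.
    import argparse
    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    parser = argparse.ArgumentParser()
    parser.add_argument("--batch-size", type=int, default=8)   # small micro-batches for 8 GB cards
    parser.add_argument("--accum-steps", type=int, default=4)  # gradient accumulation to emulate a bigger batch
    parser.add_argument("--fp16", action="store_true")         # mixed precision roughly halves activation memory
    args = parser.parse_args()

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 10)).to(device)
    data = TensorDataset(torch.randn(1024, 512), torch.randint(0, 10, (1024,)))
    loader = DataLoader(data, batch_size=args.batch_size, shuffle=True)

    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    scaler = torch.cuda.amp.GradScaler(enabled=args.fp16)
    loss_fn = nn.CrossEntropyLoss()

    opt.zero_grad()
    for step, (x, y) in enumerate(loader):
        x, y = x.to(device), y.to(device)
        with torch.cuda.amp.autocast(enabled=args.fp16):
            # divide by accum-steps so accumulated gradients average out
            loss = loss_fn(model(x), y) / args.accum_steps
        scaler.scale(loss).backward()
        if (step + 1) % args.accum_steps == 0:  # only update every N micro-batches
            scaler.step(opt)
            scaler.update()
            opt.zero_grad()

Smaller batches plus accumulation and fp16 were often enough to squeeze a model tuned for a 16 GB card onto an 8 GB one, at the cost of training speed.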
Nvidia consumer products were certainly a better value proposition in the past. But they've always done market segmentation. It's not merely a matter of "datacenter-approved(tm)" GPUs (though they do also segment via drivers).