torturing a model with human stupidity probably doesn't align with their position on model welfare; wondering if they tried bullying it into hacking its way out of the slop gulag
They weren't claiming it was dangerous because "AGI soon", that didn't come until later.
OpenAI were claiming GPT-2 was too dangerous because it could be used to flood the internet with fake content (mostly SEO spam).
And they were somewhat right. GPT-2 was very hard to prompt, but with a bit of effort it could spit out endless pages that were good enough to fool a search engine, and even a human at first glance (you were often several paragraphs in before you realised it was complete nonsense).
otherwise you end up with "get a $20 subscription for 1000% more value -- equivalent to $200 in API usage!!![1]; [1] -- compared to API pricing for american companies on the first weekend of the month between 18:00 and 22:00 UTC+8 during full moon"
from what they wrote, they're just changing how they measure the usage; might even be a good thing if you manage your context right:
> This format replaces average per-message estimates for your plan with a direct mapping between token usage and credits. It is most useful when you want a clearer view of how input, cached input, and output affect credit consumption.