4o was very sycophantic, so it was very willing to play along with and validate the user's roleplay. OpenAI even noticed enough to address it in a blog post: https://openai.com/index/sycophancy-in-gpt-4o/
I suspect that OpenAI knew that their product was addictive, potentially dialed up the addictiveness as a business strategy, and is playing dumb about the whole thing.
That's an actively harsh response, pushing these people away from the idea that GPT is in a relationship with them. So even if the initial tuning was meant to increase attachment and retention, their actions show they don't like how it turned out to influence people who were using it as a friend/lover bot.
Then why would they have toned it down in later releases? If they really wanted to make it addictive, they'd have turned it up, the way social media companies do with their algorithms.