Which is unfortunate, but at least the players who play this bot hopefully have a more enjoyable game than those who play a depth-limited Stockfish, for example.
Detecting deepfakes and generating them amount to adversarial training that will only make deepfakes better, and then our society won't trust any video or audio without a cryptographically signed watermark.
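A minimal sketch of what "cryptographically signed" could mean here, using Python's standard library. This uses an HMAC with a shared secret purely for illustration; a real watermarking scheme would use public-key signatures (e.g. Ed25519) so anyone can verify without holding the signing key. All names and the key below are assumptions, not any real standard.

```python
import hmac
import hashlib

# Illustrative only: a broadcaster signs media bytes with a secret key;
# a viewer verifies the tag before trusting the clip.
SECRET_KEY = b"broadcaster-secret"  # hypothetical key for this sketch

def sign_media(media: bytes) -> str:
    """Return a hex HMAC-SHA256 tag over the media bytes."""
    return hmac.new(SECRET_KEY, media, hashlib.sha256).hexdigest()

def verify_media(media: bytes, tag: str) -> bool:
    """Constant-time check that the tag matches the media."""
    return hmac.compare_digest(sign_media(media), tag)

clip = b"\x00\x01some-video-bytes"
tag = sign_media(clip)
print(verify_media(clip, tag))         # True: untampered clip verifies
print(verify_media(clip + b"x", tag))  # False: any edit breaks the tag
```

The point is that verification is binary: either the bytes are exactly what the signer vouched for, or they aren't, which is why a single flipped byte fails the check.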
It's too early to say that. The big problems are coming as deepfaking gets cheaper and easier. Scammers are already using deepfake photos to aid their scams, and it's going to get worse, especially anywhere photos are used as proof or evidence.
It's a little different with video and audio, though. People place much more trust in them, for now at least, and people still generally trust photos more than text, so there's a wider challenge as it becomes easier to suborn those higher levels of truthiness for propaganda and memes.