General AI tools also work the same for all users. It's not as if the average AI company is optimizing for celebrity deepfake nudes or spambots.
I mean, there are really two categories of software:
* Free and/or open source software. In this case, I think there is no good reason to make the developer liable, unless they're promoting illegal use. Nobody wants to be attacked for giving something away for free. That's why the LICENSE file disclaims all warranties.
* Commercial/paid software. In this case, it is reasonable to argue that companies should be liable if end users are harmed by the software. For paid software especially, disclaimers cannot be absolute.
But I do not think it is acceptable to hold developers liable for second-order effects - i.e., a user doing something illegal with the software and harming a third party - unless it was obvious to them that the user was going to do something illegal.
If they are knowingly including large numbers of celebrity photos in their training data, slurping it into their models, and doing nothing to block users from abusing what is a clearly foreseeable harm? That's on the companies making the product, not on the users.
If Honda put a big spike on the front of their vehicles because they thought it looked good and would sell more cars, but the spike was good at skewering pedestrians, they'd be at fault too. It wouldn't matter that their designers thought the spike was sexy and would sell more cars. You can't make something you know to be dangerous and expect to sell it to the public without being regulated.
Want to avoid regulation? Don't scrape a bunch of celebrity photos and then provide your users with a tool that creates celebrity porn deepfakes on demand.
This isn't controversial. Go to Microsoft's AI chatbot today and try to get it to create a naked image of Taylor Swift. Microsoft has spent non-trivial engineering resources making that fail. Not doing that work is irresponsible and likely to lead to lawsuits that may or may not be winnable, but that Microsoft and others clearly want to avoid.
Counterpoint: tons of tools are dangerous yet are still sold without much if any regulation. Knives are dangerous, but you don't need an ID to buy one from the store. We sell dangerous products all the time! We just put warnings and disclaimers on them (which AI models tend to come with).
That said, I dispute the idea that these models are "dangerous" in the first place. A box that generates texts and images is not even remotely as dangerous as a sharp spike strapped to a car. Such a comparison is hyperbolic.
People act like these models are going to be the end of us when they're literally just "instant Photoshop." A dangerous model would be one designed to run a military drone or automatic weapons, not a random text and image machine.
All that aside, the deepfake issue has nothing to do with the model datasets including celebrity photos (in fact, it would work fine without any of them). And no, downloading public photos is not stealing either.