> We can’t rely on the market to protect our privacy.
You don't get from your first point to here.
The cause of the market failure is that once you give your data to someone, you can't know what they do with it. The solution is for them to never have it in the first place.
This has technical solutions. Your data stays on your device, not their servers, or if it is on their servers then it's encrypted with keys only you hold. Don't build anything client-server that could instead be federated or P2P etc. Publish the source code.
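The "encrypted on their servers" part can be sketched in a few lines. This is a toy illustration only (XOR with a one-time pad, not a real cipher, and the `server_storage` dict standing in for a cloud service is hypothetical): the point is the data flow, where the key is generated and kept on the device, so the server only ever stores ciphertext it cannot read.

```python
import secrets

def device_generate_key(n: int) -> bytes:
    # The key is created on the user's device and never uploaded.
    return secrets.token_bytes(n)

def xor(data: bytes, key: bytes) -> bytes:
    # Toy "encryption": XOR with a same-length random key (a one-time pad).
    # Real systems would use an authenticated cipher; the data flow is the point.
    return bytes(d ^ k for d, k in zip(data, key))

# On the device:
plaintext = b"my private notes"
key = device_generate_key(len(plaintext))
ciphertext = xor(plaintext, key)

# "Upload": the server stores only ciphertext, never the key.
server_storage = {"some-user": ciphertext}

# The server cannot recover the plaintext without the key...
assert server_storage["some-user"] != plaintext
# ...but the device, which holds the key, can.
assert xor(server_storage["some-user"], key) == plaintext
```

Under this arrangement there is nothing for the provider to misuse: even a covert copy of the server's database is ciphertext.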
This needs a business model. But "you pay money to fund development and then get software including source code that you run on your device" is a business model. If people want this they can have it. Go stuff cash into some open source projects by subscribing to their Patreon or Substack or whatever people are using now, and then use them.
The alternative doesn't actually solve the problem. You give your data to Google, the government says Google can't do X with it, but you still have no way to verify that they're not doing X because once they have your data, X happens entirely at Google where you have no way of observing it.
It also fails to protect against covert defection by both parties, where the government gets all your data in exchange for looking the other way while the corporation does whatever it wants with the data too. You need to be able to prove that this isn't happening, or assume that it is.
Seems to me that depends on the kind of regulation. If it's just "trust the regulator to keep ahead of Google" then that's one thing. But we can add other constraints on top of that. E.g., we could require that Google's privacy-relevant code be open source, and that they must give you all data related to you, such that individuals could audit things and prove or disprove that Google's behavior matches their claims.
Especially if we add bounties for catching Google's transgressions, I expect we could do quite well with open-source, personalized regulation.
> E.g., we could require that Google's privacy-relevant code be open source, and that they must give you all data related to you, such that individuals could audit things and prove or disprove that Google's behavior matches their claims.
What happens if they lie? They have the data, they give you the code that does the user-facing thing with the data, then they copy the data to some other system where some unspecified foreign subsidiary uses it for arbitrary nefarious purposes without telling anybody.
And as much as it might help to have a law requiring cloud services to publish all their source code so people can verify that they're doing at least that part of what they say they're doing, do you really expect that to be enacted?
I think the right regulatory fix depends a lot on which particular service we're talking about and what the threats are. But the general goal of mandatory transparency reporting is to minimize the size of the possible lie. And I think that works even better when individuals and civil society groups have the opportunity to verify that. E.g., look at how many companies have been caught hoovering up data thanks to individual investigators looking at app behavior.
I don't think a law requiring all code to be published would get passed. But key code for, say, personalization algorithms? That seems doable. Places like health departments, ag inspectors, and workplace safety agencies get to inspect the physical machinery of production all the time. No reason we can't start extending that into the virtual realm. Companies won't be excited about it, but they might prefer it to some of the more heavy-handed proposals going around now (e.g., Section 230 reform, antitrust action).