Why should the attempt to remove some harmful content from their platform create a requirement to be absolutely perfect at doing so? Was there ever any doubt that Twitter had the technical capability to remove some tweets?
If you ever help a person, are you then required to help every other person in the same situation?
What's the outcome you want? Do you believe it's possible to moderate billions of tweets without a single mistake? If not, what's so great about social media that's overrun with swastikas and porn after the toxicity has scared away all the normal people? How does that help?
Do Twitter employees maybe have a free speech right not to be involved in distributing material they find horrible? What's worse: having your tweets deleted by Twitter, or being forced to make statements you know to be wrong and harmful (i.e. coerced speech)?
Do you realise the standard for newspapers in the US is "actual malice", meaning the plaintiff must show the publisher knew the statement was false, or published it with reckless disregard for whether it was true? Is that how you'd describe "lost liability protection"?