I know this is probably going to sound like magic to some, but one can control, if one so chooses, what one's browser (if one uses a FOSS browser) sends over the network by modifying the code and recompiling it oneself…
Who said anything about disabling… some actors spoof information all the time to appear as if they are somewhere/someone else, or even to appear as someone at all (bots) in the first place.
On the topic of disabling: if one disabled JavaScript by default, one would get the immediate effect of not seeing most of the advertising on the web (outside of submarine articles).
I bet the people telling others that they "need" this shit to use the web have significant overlap with those who say they "need" Facebook (or any other random app) or else there would be no other way to find events or things to do. Like without Facebook groups, one would die! lol
Indeed, obfuscation and sending noise is one obvious countermeasure, and that space needs more exploring. Google banned AdNauseam on Chrome just for tinkering with the idea.
While theoretically possible, there are enough people monitoring Google's JavaScript to make sure they will never use fingerprinting on you. Or do they?
I just tried, but sadly it did not finish without allowing JS on their site and for both of the third-party tests. It might have finished if I had not enabled this test, though.
However, even if it showed better results, my user agent and Accept header are apparently pretty unique anyhow. But why bother… my IP addresses do not change that often.
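To put rough numbers on that "pretty unique" feeling: an attribute shared by 1 in N browsers carries about log2(N) bits of identifying information, and the bits add up across independent attributes. A back-of-envelope sketch (the prevalence figures below are made up for illustration, not measured):

```python
import math

def bits(share):
    """Identifying bits carried by an attribute seen in `share` fraction of browsers."""
    return -math.log2(share)

# Hypothetical prevalence figures, purely illustrative:
ua_bits     = bits(1 / 5000)  # user-agent string shared by 1 in 5000 browsers
accept_bits = bits(1 / 50)    # Accept header shared by 1 in 50
total = ua_bits + accept_bits
print(f"~{total:.1f} identifying bits from just two headers")
```

For scale, about 33 bits (log2 of the world population) are enough to single out one person on Earth, so even two unusual headers get you a surprising way there before IP addresses enter the picture.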
Yes, Google has all this information. No, it hasn't been as careless with it as Facebook. If that counts for nothing in your threat model then your model is useless and you're just spreading FUD.
Google tracking users down to this level is beyond creepy.
Google being careful with user data may be true now. But what happens the day Google goes bankrupt and has to sell all the information about its users at fire-sale prices?
Yahoo went bankrupt and sold off its users' data.
Imagine the treasure trove of information that can be gleaned from every single American's online activities. Imagine all the "innocent" people who could be blackmailed and have their lives destroyed.
Imagine a company like no other in human history, amassing information on every single American, on every single person in the world, on a scale that has never been seen before. Imagine… Google.
Google needs to be regulated. The public needs to know what they are doing. And we need restrictions on what can be done with that data. If they go bankrupt one day, then they cannot sell it. It must be destroyed. It is too dangerous to be sold off to the highest bidder.
While your statement is true, it's best to talk probabilities.
If you calculate the likelihood that companyX is a) hoarding sensitive data about you AND b) isn't competent at securing said data, I'd rate Google as less of a threat because I estimate (b) is much less likely.
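That compound estimate is just the product of the two probabilities (assuming (a) and (b) are independent). A toy sketch with entirely made-up numbers, only to show why a high P(a) can still yield a low overall threat:

```python
# Made-up probabilities, purely illustrative; independence of (a) and (b) assumed.
p_hoards = {"Google": 0.95, "CompanyX": 0.60}  # P(a): hoards sensitive data about you
p_breach = {"Google": 0.05, "CompanyX": 0.40}  # P(b): isn't competent at securing it

for company in p_hoards:
    threat = p_hoards[company] * p_breach[company]  # P(a AND b)
    print(f"{company}: {threat:.3f}")
```

On these (invented) numbers Google hoards far more but still scores lower overall, because the securing-it term dominates the product.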
The best way to win is not to play. I block cookies, scripts and also don't use FB or much of Google except search & email.
Being “competent at securing your data” is useless when they are incentivized to utilize it.
Usually, when an entity motivated against our best interests is considered “competent”, we calculate that they are more of a threat. This makes fine sense to me. Why would one reason differently about Google? It looks illogical to me.
Google is a company. Google is optimized for profit, not data privacy. Quite the contrary, of course. And that isn't changing any time soon. Right now they have a lot of surplus, but that can change, which will affect the nature of their optimizations. For that reason, we should consider the incentives, since they are evidently the core motivation behind Google's behavior.
If I’m not mistaken, you are encouraging others to trust an unrelated entity known to be incentivized against their best interests. Sounds like a tough sell to put it lightly.
Whatever the parties buying data from Facebook do, Google itself can do the same: manipulate search and YouTube's suggested videos, use the data for its own benefit. Maybe they haven't done anything yet, but that doesn't mean they can't in the future.
Saying it's fine for Google to have so much power just because they don't operate like Facebook is stupid. History repeats, but we never learn our lesson.
There are so many things that can change your fingerprint that their methods are useless, unless you continue to use their products willingly and don't block their scripts. Without your willful participation, they are quite helpless.
Am I the only one who thinks that nothing should be done about the "fake news" issue?
This debate is based on all sorts of fallacies. First, that fake news elected someone. It did not. Second, that this is something new or inherent to the Internet. It is not. Fake rumours and gossip have existed ever since we have existed. People's judgement has been the sole thing holding the whole system together for hundreds of years. Bankers, for example, spread rumours that Nikola Tesla had sex with pigeons. It was fake but it spread like fire in NYC. Did we ban free speech because of such things? No.
From most of the discussion threads here, it seems that censorship is the only way forward, which scares me: people are falling for the trap and justifying censorship.
Who will decide what videos are "controversial"? If we agree that fake news should be regulated, what is the limit of state regulation? Why not then allow them to censor most other politically incorrect forms of speech?
It seems to me like this is headed down a very dangerous path, and it's scary how people fall for it. Fake news is absolutely irrelevant, and the way it's been blown up and exaggerated in order to justify censorship is a conspiracy in its own right. I'd be more concerned about this particular conspiracy: someone is trying to justify censorship because one particular candidate won an election.
Fake news is the media trying to cover up their embarrassment at not doing background checks before copy-pasting something and publishing it. Now someone has to take the blame.
I'm sure someone is very happily converting this into a chance for control over the censoring.
This bears no resemblance to reality... You can disagree with the “fake news” narrative, but you should have the decency to attack a good faith interpretation of it, not whatever straw man you’re alluding to.
Here’s the gist, just in case you are actually confused: there are some sites, both large and small, from shady operations in the Balkans to Infowars, that publish completely made-up stories. One example might be the child abuse enterprise run by Hillary Clinton from the basement of a pizzeria.
As we all know, that pizzeria didn’t even have a basement. So we can agree, hopefully, that this was an insane conspiracy theory.
Such stories were/are extensively shared on social media in the run up to the last election. And while they may not have been decisive, it’s hard to argue that they had no effect whatsoever. At the very least, they fed the cynicism and distrust already rotting in the core of society.
That’s the idea of “fake news”. You can dispute that it has any effect. You can argue that some good comes out of this free-for-all. But note that traditional media publishers simply play no role in this. To make this about some perceived failure of the New York Times takes quite a lot of logical gymnastics.
This is exactly what I mean. If it were on some obscure webpage, no one would know about it, but mainstream media copy-paste news like this just because they know people will click on it, without bothering to check whether it is true. For the news site that matters less, as long as they get clicks. Of course there are still responsible journalists who fact-check everything; they just get fewer with time.
The problem is not that there exist people who are creating fake content and hosting it on youtube. If that were all, it would be (mostly) fine.
The problem is that gamification and engagement mechanisms pull people deeper into that rabbit hole. Go see what the recommendation algorithm does with "fake news" style content sometime. You could hardly build a better system to "radicalize" people towards extremist ideas if you tried.
As one guy on twitter put it: "my dad and his iPad went from vaguely interested in the knights templar's operations in portugal to believing in ancient aliens seeding global cultures in the space of like three weeks"
I probably agree with you that there shouldn't be straight up censorship. But neither should youtube blindly and algorithmically identify that someone is susceptible to conspiracy-style content and then drive engagement by serving them a never-ending stream of the stuff.
You are not the only one. The thing seemingly appeared out of thin air sometime between the Brexit vote and Trump running for the White House. That news organisations are talking about it more and more is tragic. At least Trump twists the term to make it more relevant to them.
Fact: there was a small case of "troll farms" producing total clickbait content, but the effect was minimal. This is what was originally meant by "fake news".
Now "fake news" can mean literally anything, and the resulting moves to counteract it will be bad. What also concerns me is that most of the voices against whatever fake news is reside on the left-hand side of the political spectrum.
Okay, now let's assume that fake news is responsible for manipulating and influencing stupid people into voting wrongly. The way to counteract that is education and a free press, not counter-propaganda and censorship.
You’re stating “the effects were minimal” as a fact, but that’s impossible to prove. Some of these stories featured prominently on reddit, for example. And nobody knows what others’ Facebook feeds look like. It’s not inconceivable that some of these stories may have moved the needle in some regions or communities.
Just look at how much political capital Obama had to spend countering the “birther” nonsense. That was 8 years ago; it crowded out all sorts of actual policy discussions, arguably gave rise to the Tea Party and everything that followed from it, and probably had effects on every election from the 2010 midterms onward.
The fundamental problem is trusting pieces of information read on non-authoritative sources, like random social network posts being liked or retweeted. People grant such trust, which is bad but not catastrophic. More dangerously, however, authoritative sources such as reputable news venues are granting more and more trust, or at least credit, to such garbage sources.
I don't know whether the government should get involved, but if it does, it should only do an educative and preventive job: warning people that:
* Other than reputable news sources using their Twitter accounts to spread information, information from social networks should be assumed to be garbage
* When a reputable news source references information from Twitter or other social networks, it is doing a bad job
I’m not sure which “reputable media sources” are granting any trust to unsubstantiated stories they find on social media. Fox News does, and while they have a reputation, I doubt it’s the sort of reputation you mean.
As for Twitter: it’s just a tool, like email. If the president uses Twitter, and actual policy news break there, it’s perfectly fine to report on it.
Do you use search engines? All of them use their own sense of judgment to decide which content is best for your query. You rely on curators in most areas of your life, why would news be exempt?
No, you are not the only one. I cringe every time I read pieces about this. Calling out "fake news" is just the newest variant of placing blame on everyone/thing but ourselves.
They cross it with your location, your movements, your emails, expenses, everything.
There is no spookier company than Google.