> People don't mind it, because they're not criminals.
This should be called the "other people" fallacy, the idea that bad things only happen to someone else. Similar to the just world fallacy. How could we get people to understand these fallacies?
> probably aren't looking through all of their stuff
We understand that the risks are not just from humans creeping on us, but also from inscrutable secret algorithms monitoring and/or influencing people in ways we may not agree with. Also, potential abuse is just about as important as actual abuse, since we don't know who will have access to the data in the future.
> minor invasion of privacy with a huge law-and-order benefit.
Maybe this can be countered by demanding numbers, with documentation backing them up. How many crimes were solved, or terrorist plots foiled, by invading our privacy, that could not have been solved otherwise? And even if that number is significantly greater than zero, what is the societal cost, and do the numbers justify it?
Is your "other people" fallacy actually a fallacy?
I mean, if the authorities really are just looking for criminals, and you really are not a criminal, then it doesn't seem to me that this reasoning is fallacious at all.
Where it falls down is when the authorities overreach and go after non-criminals, or when "criminal" gets so broadly defined that everybody qualifies when the authorities want them to.
> Where it falls down is when the authorities overreach
Which, in practice, doesn't happen that often in the West (to people who live there; I'm not considering drone strikes etc. for now).
The concerns about what the NSA is doing are serious and real, and I fully share them. But I will happily admit that it is hard to convince someone who doesn't care at the moment, because:
a) Everything done with the data is secret, so giving people examples of abuses is hard.
b) The worst problems are mostly what could happen rather than what is happening.
Even Edward Snowden called it "turn-key totalitarianism": there's huge potential, but the US is not yet a totalitarian state.
Of course the problem is, if one day the problems caused by massive automated surveillance are so clear that everyone dislikes it, almost by definition it'll be too late.
I must confess that I don't find the (b) arguments very convincing. I generally oppose these mass surveillance programs, but on principle, and because they seem like a massive waste of resources.
The notion of "turn-key totalitarianism" just doesn't make much sense to me. Let's say the US elected Hitler II. (Hitler Jr.?) With the current situation, he could turn the NSA surveillance apparatus into a method of control. In a world where the government didn't have these mass surveillance capabilities, then he couldn't. Except I imagine the first thing he'd do would be to build them. So we'd get, what, a year or two under a totalitarian regime without mass surveillance, then we're back where we'd be anyway. Yay?
It feels a lot like the gun nuts' argument that the Second Amendment exists so we can overthrow the government if it turns tyrannical. It's planning for failure, and it won't work anyway. Much better to concentrate on getting good people into political office, and keeping the system itself in good shape so that bad people can't fuck it up too much.
"He'd just build them" is a huge stretch. Remember who we're dealing with here: guys like Trump are not technology experts and don't really have any conception of what's possible until the tech guys actually build it and say, hey, check this out, we can hack every phone in the USA and turn its microphone on remotely.
> or when "criminal" gets so broadly defined that everybody qualifies when the authorities want them to.
Right, this is the scenario I was trying to suggest. And as someone who has had some above-median, but still indirect, involvement with politics in the past, I have to worry that the probability of this goes up with proximity to people planning to run for office, journalists, etc.