> Fun game to play: Take statements from Comey et al. Replace "smartphones" with "brains"/"memories"/"thoughts". Technology will get us there!
> "Everybody is walking around with a Swiss bank account in his brain if government can't get in. You cannot take an absolutist view on this."
> "How do we solve or disrupt a terrorist plot if law enforcement can't access the memories and thoughts inside suspected terrorists' brains?"
Both funny and scary.
Time to rewatch the TV animation version of Ghost in the Shell [0]. Released in the early 2000s, it portrayed and predicted our world remarkably well, and it gives plenty of inspiration for what a future society might look like when everyone uses an electronic brain.
Well, the TV version I linked (Stand Alone Complex, which came later) and the original 1995 movie are two independent timelines. The movie is philosophical in comparison, while the TV version focuses on exploring the practical consequences of a brain-networked society, which is why I mentioned it. Definitely watch it if you have time and haven't seen it yet.
The hardware is the difficult part. Decoding is actually quite easy once you can record neural activity. Apply your favorite Kalman filter, dimensionality reduction algorithm, or increasingly some flavor of neural network.
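For the curious, here's a minimal sketch of the textbook linear-Gaussian (Kalman filter) decoder that a lot of BCI work uses. Everything below is an illustrative placeholder: in practice the matrices A, C, W, Q are fit per-session from supervised calibration data, and the toy data here is random.

```python
# Sketch of the classic Kalman-filter decoder: hidden state x_t (e.g.
# intended cursor velocity), observations z_t (binned firing rates).
# A, C, W, Q would be fit from training recordings; values here are toy.
import numpy as np

def kalman_decode(z, A, C, W, Q, x0, P0):
    """Decode neural observations z (T x n_neurons) into state estimates."""
    x, P = x0, P0
    estimates = []
    for zt in z:
        # Predict: propagate state and uncertainty through the dynamics.
        x = A @ x
        P = A @ P @ A.T + W
        # Update: correct the prediction with the neural observation.
        S = C @ P @ C.T + Q                  # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ (zt - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# Toy usage: 2-D velocity state, 50 recorded neurons, random placeholder data.
rng = np.random.default_rng(0)
A = 0.95 * np.eye(2)                 # state dynamics (smooth velocities)
C = rng.normal(size=(50, 2))         # neural tuning (fit via regression)
W, Q = 0.01 * np.eye(2), np.eye(50)  # process / observation noise
z = rng.normal(size=(200, 50))       # 200 time bins of firing rates
vel = kalman_decode(z, A, C, W, Q, x0=np.zeros(2), P0=np.eye(2))
```

That calibration step is part of the point: the math is straightforward; it's the hardware and recording stability that are hard.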
You should remember that the context of the jokes was former FBI director James Comey's statements on encryption backdoors in the middle of Crypto War 2, back in 2016.
> "(with encryption) Everybody is walking around with a Swiss bank account in his smartphone if government can't get in. You cannot take an absolutist view on this (, and we must add backdoors)."
Replace "smartphones" with "brains".
> "(with encryption) Everybody is walking around with a Swiss bank account in his brain if government can't get in. You cannot take an absolutist view on this (, and we must add backdoors)."
It'll likely remain only marginally better than polygraphs for decades or longer, but it seems probable that eventually it's going to become so effective that law enforcement organizations will feel like they need to use it to keep up with other law enforcement organizations. Once one or two big organizations see high success rates with it, many will feel like they're letting a lot of criminals get off scot free because they're behind on technology, and most departments/agencies will have very strong incentives to jump on board as they start to see more and more of their peers adopting it (just like with DNA collection and testing).
Even if consent is required, it'll just be like traffic stop no-win situations: if they ask you to consent to a search due to suspicion of drug possession and you say yes, they'll search, and if you say no, they'll wait until a drug dog comes, and then the dog will effectively conduct the search and/or will be used as a tool to permit a full search. Or if you refuse to take a breathalyzer test, they essentially presume guilt and treat you like you took the test and failed. Anyone who refuses to consent to the Truth-O-Matic test will probably suffer similar consequences. It's mostly an illusion of consent. I think this future is inevitable without some very serious legislation; perhaps a constitutional amendment.
A counter-point is that one could imagine a potential distant future exigent circumstance where some sophisticated criminal might have information that could truly prevent an imminent attack that'll very likely kill millions of people, or something, and which could be discovered through such a detector. In those fanciful scenarios I honestly do find myself wanting to agree with Comey: fuck privacy, siphon those fucking thoughts. It's certainly a much more ethical and effective alternative to torture, at least, which is how the US government would currently handle such a scenario (if they could cover it up).
Security and privacy/liberty are always a tradeoff, but even if you weigh privacy exponentially higher, there's always still some theoretical risk that would tip the scale back towards security, in my view. Nuclear armageddon, for example. The privacy hill to die on shouldn't be Mega-Corpse Mountain.
This scenario is absurdly far-fetched in 2019, of course, but in 2079? 2119? Who knows? If Aum Shinrikyo and ISIS can do what they did and plan what they planned in the 1990s and 2010s, what about new incarnations of zealous death cults in a world that may have extremely intelligent AIs and/or DIY-potential for almost anything?
Even a single instance of losing the final bastion of privacy is such an incredibly slippery slope that it's hard to imagine any safe way to allow its use in very extreme situations like these without setting up the conditions for horrific abuse and a Stasi/1984-esque kowtowing of all of society. But I think this aspect of the law enforcement perspective needs to be taken into consideration as well, even while simultaneously acknowledging that law enforcement will inevitably cry wolf and exaggerate the likelihood or impact of imminent threats. Of course this'll happen, but what happens if they do see an actual wolf one day?
Terrorist attacks seem scary. Many will gladly give away any freedoms to take the fear away.
A terrorist attack is a result of pressure that has been building up finally erupting. If you want to stop attacks new outlets need to be created to channel that energy elsewhere.
The country is a democracy, but when dealing with other nations things are more of a dictatorship. That power imbalance makes the other side powerless. Powerless people without hope do stupid things like blow themselves/friends/family up. Find ways to give them a voice.
The numbers don't add up. Being scared of terrorist attacks is like being afraid of winning the lottery. Your chances are lower than you think.
> "The number don't add up. Being scared of terrorist attacks is like being afraid of winning the lottery. Your chances are lower than you think."
Don't want to detract from the rest of your comment. But one of the fundamental "points" of terrorism is to make this the case, i.e. to instill fear (even if an attack is not remotely likely to happen). The fear is what they want, unless they're trying to wage a guerrilla war.
> The numbers don't add up. Being scared of terrorist attacks is like being afraid of winning the lottery. Your chances are lower than you think.
I’ve heard this said before, but I don’t know if I believe it. How do the chances change if, let’s say, you live in a major city that has been attacked before (perhaps more than once)?
Of course. I'm referring to a hypothetical distant future world where a random, moderately intelligent lone wolf, with any motive or no motive, could potentially inflict nightmarishly efficient and asymmetric damage given the resources, knowledge, and technology at their disposal. Let alone a group of such individuals.
Right now, terrorism is a very minimal risk for an average individual. Even if every single US terrorist attack caused as much death and destruction as occurred on 9/11, it still would be. But what if, due to future circumstances we can't currently predict, every attack had the potential to be many orders of magnitude worse than 9/11, even if there's only a single perpetrator, or just a few? Of course, we'd cross that bridge if and when we get to it, but, in general, the cost of terrorism/mass murder will continue to decrease, and its force-multiplying potential will continue to increase.
In this hypothetical case, it makes no difference if the terrorist is from a different country, or your own country, or motivated by ideology or just a Joker-esque mentality or because their girlfriend broke up with them a month ago. If it really could take just one nut to cause more suffering in a day than Hitler or Stalin caused in their entire lives, we'll need to rethink the role of government and the boundaries of privacy. Right now, your description is accurate, and it probably will be for the next 40+ years, but the distant future is tricky to predict. Even if the odds of an attack occurring remain roughly the same, the overall "expected value" of terrorism could still skyrocket.
I'm particularly worried about hypothetical new fringe, cult-ish elements like Aum Shinrikyo or even the Manson Family, rather than an organization akin to al-Qaeda or ISIS. I've long harbored a persistent fear that Aum Shinrikyo may be a prototype or inspiration for a future organization or movement which could thrust all of human civilization into darkness through relatively little effort, and before anyone fully realizes the risk (like the risk of planes as suicide bombs before 9/11). All of these copycat mass shootings - while not currently "scalable" (in the worst way possible) - don't inspire optimism on that front, either. Al-Qaeda has specific political goals and strategies, but what about people who sincerely just want to kill every or almost every person in existence? How do you deal with that? Those people scare me far more than any violent religious or political extremist, when considered in the long run.
No matter how much you try to remediate disenfranchisement and discontent and mental illness, some number of people like that with violent urges will always exist. And some people will also perpetrate mass murders without sharing any of those characteristics. Aum members didn't exactly seem to be the pressured underclass:
>Aum Shinrikyo recruited approximately 300 scientists with degrees in medicine, biochemistry, biology, and genetic engineering.
>Around 75 researchers were discovered who were working on radioactive materials and other nuclear related studies. [Materials] found in the raids indicated that Aum members measured radioactivity levels at a cult compound
They had esteemed professors and scientists manufacturing biological, chemical, and nuclear WMDs. These weren't powerless people without hope; these were powerful, well-off people with a fervent hope for a new world forged from the ashes of the one they attempted to annihilate.
They may have been very close to killing tens of thousands more people than they did, if not for a few mistakes and the happenstance of bystanders noticing their dispersion devices pre-activation on several occasions.
Thank you, this is a great paper and contains ideas that have been rattling around in my head since I was a child. The distant future of humanity may be a very dark and scary place. In such a world, an all-encompassing surveillance state may be an unfortunate necessity compared to the alternative. Without it, there might not be a world at all.
This is very difficult to think about as someone who's very pro-individual liberty and privacy, but our current way of thinking may become obsolete if it won't be that difficult for a single, ordinary civilian to kill millions if they become motivated to do so. "Give me liberty, or give me death" will make a lot less sense if liberty implies almost certain death. We should start thinking about how a surveillance state could be created and maintained in the most ethical way possible. For example, automated collection and detection without human analysts invading privacy until something is already flagged by the system (to prevent "LOVEINT"-type abuses).
More people need to be exposed to Bostrom's ideas. I hope he does more podcasts.
I thought so, too, but I've watched a lot of Live PD clips from the past few years where this seems to happen, or at least seems to be portrayed, exactly as I described. Maybe it has something to do with the "reasonable suspicion" standard mentioned, but I have no idea.
This is what is actually said below. In one study it rises to 100% when they already knew true or false for previous questions. But you don't have known baselines, because "what is your name" doesn't have the emotional energy of "did you kill him".
A countermeasure is to store false data as true in the brain.
"That statistic has risen, in one study, to 100% when predicting a lie in an individual when baseline lie/truth levels were closely studied with training from pattern recognition technology (machine learning)."
And people can train themselves to pass lie detectors? ML isn't a solution here unless the data on each individual is substantial. Collecting and acting on that amount of data seems unlikely.
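To make the baseline point concrete, here's a toy sketch (entirely synthetic data, made-up "physiological" features) comparing a classifier calibrated on an individual's own known true/false answers against one trained on pooled data from someone with a different baseline:

```python
# Toy illustration of why per-subject baselines matter for an ML
# "lie detector". All data is synthetic; nothing here reflects real studies.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def subject_responses(bias, n=100):
    """Synthetic features; each subject has their own offset (bias)."""
    labels = rng.integers(0, 2, n)                        # 0 = truth, 1 = lie
    features = rng.normal(size=(n, 4)) + bias + labels[:, None] * 0.8
    return features, labels

# Pooled model: trained on a different subject with a different baseline.
X_pool, y_pool = subject_responses(bias=rng.normal(size=4))
pooled = LogisticRegression().fit(X_pool, y_pool)

# Calibrated model: trained on the target subject's own known answers.
target_bias = rng.normal(size=4)
X_cal, y_cal = subject_responses(bias=target_bias)
calibrated = LogisticRegression().fit(X_cal, y_cal)

# Evaluate both on fresh responses from the target subject.
X_test, y_test = subject_responses(bias=target_bias)
print("pooled accuracy:    ", pooled.score(X_test, y_test))
print("calibrated accuracy:", calibrated.score(X_test, y_test))
```

On synthetic data like this, the calibrated model wins easily; the point above stands that collecting that per-subject calibration data at scale is the hard part.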
I would assume anyone caught looking up data on how to pass lie detectors will automatically be assumed to be a liar by the state in the first place. If your brain is the only bastion of privacy, your internet history is already a weapon against you.
Also read Alfred Bester's The Demolished Man, where the main character commissions a songwriter to write an earworm song so that it would jam the telepaths reading his thoughts.
To play devil's advocate, one important difference is that brains are not (yet) networked. We don't have telepathy, you can't communicate with other people using your brain alone, so you cannot use it to coordinate a terrorist attack either - whereas with encrypted messengers, you absolutely can.
... not that I'd expect this point to make any difference if the technology to inspect brains were on the horizon.
>you can't communicate with other people using your brain alone, so you cannot use it to coordinate a terrorist attack either
You can use it to plan and carry one out on your own, though. And of course you'll know about an attack if your team is organizing one. Shouldn't all people get brain-screened (by experienced TSA brain scanning operators) to ensure the government gets that information?
Sure, if it is simple enough or your plans are simple enough. Otherwise, most folks at least use paper because actual terrorist plans are complicated with lots of bits.
At least some number of people will have the crucial parts of the plan in their minds, though. The parts that could thwart the whole plot if those parts were to be discovered by law enforcement. Of course, law enforcement may not know who those people are or even that an attack is imminent without pre-emptively monitoring every single person's thoughts so it can be detected with sufficient time to stop it, like how cop cars and (I think) speed cameras pre-emptively and automatically scan every license plate they can detect.
Well, I was talking about literally "the brain". Even for language, you need a mouth attached to that.
My point is that having a brain is not enough to communicate - you also need some kind of transmission medium that connects you to other brains.
So even if you agreed with the opinion that any and all communication must be monitored (because terrorists), that would still not allow you to monitor the brain, because the brain itself is not what enables communication.
Monitoring a smartphone would be more justified because you legitimately need access to it to monitor an end-to-end-encrypted connection.
(My "devil's advocate" was to note that brains and smartphones are not equivalent, not that monitoring brains should be allowed. On the contrary, my point is that even if you were ok with monitoring smartphones, there are still arguments against monitoring brains.)
This is why, at the end of the day, come what may, the state must die. Technology won't stop advancing, and if it's possible, the state will push it with the exact arguments you just described.
It's simply too dangerous to allow that level of control to exist in the hands of a proven malevolent actor that will use it to ensure the extension and maintenance of its power. That power must be seen for the intolerable burden it's becoming before that point and destroyed, replaced with other structures which have no right to make any such unreasonable intrusions.
And what would make CEOs of private corporations any more likely to deal with this power responsibly?
At least government agencies are in theory bound to laws, moral obligations and a democratic process. Meanwhile, private entities don't even have to pretend to work in anyone else's interest than their own.
And "you don't have to buy it" is no excuse. If you seriosly want to abolish the state completely, then the same problems that are currently solved by public infrastructure will still exist. You'll be just as unable to opt-out of private roads are you are currently unable to opt-out of public roads.
Of course we're actually talking about chips in your brain, not roads. So good luck opting out of that.
> And what would make CEOs of private corporations any more likely to deal with this power responsibly?
The same reason none of the previous quotes apply to actors without political power. They do not have the ability to compel people to surrender to unlimited search on penalty of death. This is only reserved and pursued by holders of political authority.
Removing the duty to obey and the right to coerce is the only solution to this problem in the long term. Anything less and they will keep pushing for it loudly, and outright doing it, as long as they believe they can get away with it. Which, let's be honest, given the way anything requested in the name of national security gets rubber-stamped, is basically always, in practice.
> They do not have the ability to compel people to surrender to unlimited search on penalty of death. This is only reserved and pursued by holders of political authority.
If I run the company that controls peoples' goddamn brain implants, I can conduct unlimited searches or perform arbitrary death sentences at the click of a button.
Oh, there are a lot of options to make people install them. The easiest being, make it desirable to have one.
If smartphones are any indication, people would either straight-up not know about the issues with control/surveillance or have them as one factor of many to trade-off against.
And there are likely lots of benefits that brain implants could offer. If we go full sci-fi, some things they could enable:
- Perfect memory (thanks to digital storage, built-in or in the cloud).
- A perfect sense of location (thanks to built-in GPS and a direct uplink to a Maps-like service).
- Automation of tedious mental tasks (thanks to a built-in scripting engine).
- Perfect control over your emotions.
- The ability to record and replay any experience.
- Telepathic communication.
- Taking part in fully immersive virtual worlds.
- Seamless interaction with electronic devices, to a point where they behave and feel like a literal part of your body.
- etc etc.
So if you're a rational consumer, your choice is to have all of that and more - or to throw it all away, based on the seemingly remote possibility that the company could monitor or manipulate your thoughts or make you jump out of the nearest window. And the company is giving their strongest pinkie-promise that they would never do such things.
Even if you stay strong, if enough people take up the offer, network effects will kick in and make "not installing" an increasingly difficult choice:
- Employers will take the enhanced abilities for granted, so you'll have trouble getting a job.
- Operating devices could become difficult to impossible because manual controls will be seen as an unnecessary expense by manufacturers.
- You will be socially isolated, because taking part in telepathic communication will be difficult and you plainly can't visit virtual locations.
- etc etc.
If all that's too much of a hassle for you and you just want to force people to get the implants, you can just hire a "private security contractor" (or build your own army) and force people at gunpoint. If there is no state, who is gonna stop you?
The only reason anybody would install one is if it was desirable to have one, and it's definitely true you can't save people from themselves. With political authority as it stands, though, you jump right to the absolute worst case scenario at the end there: they're simply installed coercively by force and there's no possibility of an opt-out, and they will then be used to do whatever the holder of political authority wants them to do to you, because, just as you say, who is going to stop them?
Look at the world we actually live in, this is the way it works, the only thing which actually restrains wielders of political authority is the things they can't do. Not the things they're "restricted by law" from doing.
If they're still around when this is possible, the worst case outcome is inevitable. That doesn't mean things can't still get bad if they're not, by the way. Merely that the combination of the two things taken together is a surefire recipe for dystopia.
> and it's definitely true you can't save people from themselves.
That's too easy a dismissal when we - already today - have major industries that only exist because people act against their own interests - and we have armies of marketers and advertisers who work 40 hours a week to undermine people's capability for rational decision-making.
Even if they didn't, market failures are real, the "invisible hand" a lot less so. We know that if everyone acts only in their own interest, the end result can end up bad for everyone. (Or good only for a small elite)
The market needs guiding principles and I'd very much prefer those principles being enacted by a democratic, law-bound government than some oligarch who is responsible to no one but himself.
> Look at the world we actually live in, this is the way it works, the only thing which actually restrains wielders of political authority is the things they can't do. Not the things they're "restricted by law" from doing.
And yet if leaders of private companies had the same powers, they'd magically restrict themselves. Why?
By the way, that isn't even true. Take Trump: you have a political leader with openly authoritarian views in the most powerful position on earth, a party ready to follow wherever he wants to go, and democratic institutions weakened by regulatory capture - and he still has to frequently backpedal because people defend the laws and don't just bend to his will.
In a private company, where the only law is "the boss is right", who should do that?
>They do not have the ability to compel people to surrender to unlimited search
They don't? Says who? Who the heck do you think will have power to do whatever they want whenever they want?
You think the 1% will distribute wealth and power fairly and evenly?
Governments are in place to democratically do this. If you think the 1% wouldn't kill you in a heartbeat take a look at world history or some dictatorships today.
Your ignorance doesn't make me a troll, and your citations for "working very well" are laughable.
Check my comment history, I've been here almost four times longer than you, with over ten times more karma than you, and have had the same view of political authority for the entire time. Your unfamiliarity with the dangers of political authority and the idea held by many people for the past half century that it must be destroyed doesn't change any of that. In fact in light of it, I'm probably also significantly older and better read on the associated topics than you.
Mark my words; unless political authority is destroyed there is no future that isn't a dystopia once this technology is widely available and employed.
It's quite depressing to see this venue spiral into lacking the critical thinking necessary to see the blazingly obvious, irreconcilable and irreparable problem of allocating these powers to an agency that also has the unlimited, undischargeable right to coercion, but I guess that's just the way the world is failing these days.
> They do not have the ability to compel people to surrender to unlimited search on penalty of death. This is only reserved and pursued by holders of political authority.
You're missing the point that this power being "reserved" by holders of political authority is exactly what prevents corporations from exercising it.
It's also a theoretical/idealistic divide, assuming corporations always act within the rule of law. See e.g. the long history of Ford's various dealings with their own plotting "terrorists": union organisers.
> You're missing the point that this power being "reserved" by holders of political authority is exactly what prevents corporations from exercising it.
That which can be asserted without evidence can be discarded without evidence.
> That which can be asserted without evidence can be discarded without evidence.
The same can be said of either side of the discussion—you appear to have provided no evidence that corporations would refrain from abuse of such powers were they not so restrained. Whereas there is plenty of evidence that they do abuse such powers even in our current system where they are somewhat restrained from doing so.
looking at how many people voluntarily give up their privacy in exchange for convenience (facebook, alexa, etc.) it seems that most won't need to be compelled to do anything..
I would have said this will never be socially acceptable ... Until my parents-in-law installed multiple Alexa microphones in their house, and my family of origin all submitted DNA to ancestry.com to be indexed forever...
Generally speaking, the way to buy a soul is to offer convenience. It doesn't even have to be major.
For all those thinking there will be regulation or protections, where are those protections for consumers of the current privacy violating services and products?
You sign on the dotted line and are bound by the terms, there is no big other to protect you from you own choices.
You choose whether you sell yourself, but once you do, don't expect a bigger force to keep you protected...
You are the only person who is going to advocate for you and the people you care about.
> You choose whether you sell yourself, but once you do, don't expect a bigger force to keep you protected...
Even that isn't true. Plenty of information can be gathered on even the most privacy oriented person due to the actions of others: see Facebook's shadow profiles, or even the use of the DNA your relatives uploaded to learn about your medical history.
Yes, your friends and family can sell you as well... I have a huge shadow on Facebook because of my wife, as do my children who are not even at the age of consent.
If I ever commit a crime, or am framed for one, and my DNA is used against me, you better believe my family has sold me out by giving up our shared and essential life force signature.
My point is, there is no big other, no religion or government, that will keep you safe from yourself and your choices (or the choices of your family and friends). Any human-made laws are shown throughout history to be malleable, very flexible.
Those who bend the rules take power over those who don't, or don't understand.
A great example of this is tax law ... Huge power disparity between those wealthy enough to find ways to bend the law and those who cannot or do not...
> Until my parents-in-law installed multiple Alexa microphones in their house
How are the privacy implications of Alexa microphones any different from those of smartphones (which contain microphones and voice assistant software, just like the Echo does)?
The one saving grace with phones is if they were constantly streaming recordings 24/7, you would probably notice the battery & data consumption. Much less so with wall-powered devices connected to WiFi.
All you need to do to get around those things is just beam the recordings over when the phone is plugged in, say, when people are sleeping, and simply exempt the recorded data from the metrics.
It's easy to inspect yours - the one that didn't get the custom firmware loaded or the bugdoor activated. To be fair, the same applies to mobiles as well (and probably more so, since they are better targets).
OK, but this is moving the goalposts. If the CIA has put custom backdoor framework on your device then all bets are off, whether it's an Echo or a smartphone. This isn't really the situation that was being talked about on the thread.
Wow, thoughtcrime is ceasing to be fiction. Even in the original 1984 novel, they could only be detected when those thoughts manifested as actual words and action. This goes further.
I fear for how much restraint we will show when we become technologically able to detect thoughts that are deemed reprehensible in current society. Some ideas that were considered reprehensible in the past have become part of ordinary norms in the modern day; I still don't consider ourselves today to be infallible in always making those decisions correctly.
For now, even having ideas currently considered most heretical/reprehensible is still legal, as long as it never leaves your head - but I'm not quite confident it will stay that way in the face of mounting pressure.
First thought: I think that given the polarization in politics these days, everyone is likely harboring a thought crime according to someone.
But generally we are not an authoritarian society, thus we will likely not start this type of witch hunt. Other, more authoritarian countries may - those that already do a lot of arbitrary arrests and forced disappearances - and this will unfortunately make those state agencies more effective.
Second thought: The first people to get subjected to stuff like this is the same ones who are most likely right now to interrogated or have their social media inspected when crossing the border. This will just be a continuation of that existing trend. If you do not allow your neural state to be monitored and inspected in response to various stimuli you will be turned away at the border. Give it 5 years.
Third thought: There would be a lot of health and education opportunities. See brain patterns changing over time -- like a fitbit-like attention/focus score based on passive 24hr monitoring rather than active testing (e.g. brain training). One could use those metrics to optimize your life to maximize your brain efficiency. Second, long-term one can see how patterns change during development, and it may lead to better detection and understanding of mental illness, especially autism, bipolar, schizophrenia, depression and ADHD. One could likely see early warnings of these, do interventions, and judge their effectiveness better than just self-reports.
> But generally we are not an authoritarian society, thus we will likely not start this type of witch hunt.
...until you notice the UK police arrest about 8 people a day for tweets saying things like "It's ok to be white" or calling a transgender person a man [1]. The understanding of what is bad has gone from actual crimes to just saying a positive thing about a group that the majority thinks should never be talked about positively.
> If you do not allow your neural state to be monitored and inspected in response to various stimuli you will be turned away at the border. Give it 5 years.
5 years?!
Not even 10. We are very far from that, even excluding practicality.
It is possible now, just not in great detail. It can not tell exactly what you are thinking, but given responses to various stimuli it could infer where one's non-explicit loyalties lie. It would essentially be a super lie detector.
This makes about as much sense as being intolerant to intolerance.
You’re free to privately believe that privacy shouldn’t exist. That is materially different from trying to eradicate privacy to cement state control over a populace — in an age where the State has almost immeasurably great power to track, monitor, and police its citizens.
While I don't consider the paradox of tolerance to be invalid, what I see happening is people using Popper as an excuse to be intolerant towards people they dislike, declaring that their target party is "intolerant" so they're allowed to be - or even have to be - intolerant towards them. Hence, the total amount of intolerance in the world increases either way. It seems like quite a damned if you do, damned if you don't situation.
I guess it depends on what you mean by being intolerant. Preventing them from being violent or telling them that their views are wrong is one thing. Rejecting people for expressing their views is another. Both Neo-Nazis and ISIS recruit from those who feel as outsiders for whatever reason. Making them outcasts for having been recruited just serves to radicalize people further, locks them into their echo chambers and prevents them from rebounding into normal society.
I don't know. I can't even say if society stagnating is bad. I think society improved over the centuries and millennia, but of course I agree that the values I was raised into are the right ones. I don't have objective measures to verify this. I think I have good reasons and could argue them for hours, but in the end it's no different than arguing religion. The best chain of reasoning is worthless if the axioms are chosen arbitrarily.
We're currently moving towards prohibiting the communication of some thoughts (e.g. "hate speech"). That trend will be just as effective at preventing societal change/causing stagnation. :(
I'm far less worried about commercial companies doing this than I am worried about governments doing this.
Usually when it comes to privacy I can begrudgingly accept that governments will violate it to some degree. This is a field, however, where I think governments must be outright banned from. We must not allow a situation to appear where the TSA or the police will run a quick "brain check" on somebody. It must be avoided. This would be a one way ticket to disaster.
> I'm far less worried about commercial companies doing this than I am worried about governments doing this.
I don't understand this statement. I hope after Snowden we all know that data at a commercial company = data at government? What's the difference? I don't see any practical barriers to various government agencies having access to all data at all companies. With and without company knowledge, mostly secretly, with and without company and customer consent, legally and illegally. This goes for the US, all US allies and all US enemies. The intentions of commercial companies when it comes to data protection from government are pretty irrelevant.
If you (as a people) want to ban your government from access to anything specific, you must first have control over your government. Which is inherently impossible in a system that is anything other than a direct democracy. Just my very unpopular opinion. There is no nation in the world that comes close to a people having control over a government, except arguably Switzerland. But even there I have my doubts...
If the population is worried at all, they'll probably want to stop private companies using it anyway. Governments usually get exemptions from privacy laws, so in a way that's more worrying - government access is less likely to be stopped than private access. Also, people won't have the choice when they're compelled to be scanned on arrest vs logging in to some optional private business.
> Governments usually get exemptions from privacy laws, so in a way that's more worrying - government access is less likely to be stopped than private access.
Yes, because private companies are regulated. By the state.
Meanwhile, private companies have nevertheless managed to make surveillance and ridiculously detailed tracking into a whole industry.
> Also, people won't have the choice when they're compelled to be scanned on arrest vs logging in to some optional private business.
Except in reality, this stuff has long since stopped being optional. If you buy any reasonably modern car, you'll be tracked by private companies. If you have a bank account, you'll be tracked by private companies. If you write an email... you get the point. How exactly do you opt-out of that?
I choose to plead the 5th. The wording is "nor shall be compelled in any criminal case to be a witness against himself" which obviously covers brain scans.
Of course the police will say I have the right to remain silent and they have the right to lock me in a jail cell. And TSA will absolutely do brain scans. I guess the right to remain silent is successfully getting eroded.
I live in Switzerland. Direct democracy works perfectly well. I don't understand why it is not the default mode and why no other country uses it -- to me, it is the obvious way of running things.
I don't live in Switzerland and I completely agree and it blows my mind as well.
The standard, surprisingly popular, rhetoric I hear against direct democracy in other European countries is really very unconvincing to me: people are too stupid to vote, people can't know enough about specific subjects to vote well, people aren't interested in politics (duh, why would you be in a system where you can't influence politics), people can be influenced too easily through propaganda, and other nonsense arguments which would all apply to a representative democracy as well.
People don't like change I guess and they (want to) love their country the way it is. On top of that I get a feeling that people somehow look up to politicians subconsciously as somehow being above them or at least being above their neighbors. They seem to equate leadership and sales skills to being able to take hard decisions and being incorruptible / divine. It's like the "king effect" (just made this up): everybody hates the king's decisions, but at the same time the king is the hero and pride of the nation and who could possibly want any other king?
People only have so much attention to give to operating their democracy. Which is why in most instances, the more local you get, the less efficiently government operates. Participation in local elections tends to be a complete joke, and as such those local government bodies tend to only represent a very narrow set of interests.
In any case, this form of direct democracy wouldn't solve the problems the parent is talking about. To achieve that, you'd need to vote on every decision made by every public official ever.
This is how it works here. We have local, cantonal and federal laws; the laws to vote on are presented in packages once every two months (so every time, people vote on both local and federal laws). Our participation is not a joke.
Every Swiss citizen can propose a law to be voted on in one of the next referendums, provided that they collect the appropriate minimum number of signatures in its support.
Just to look at your claim for a moment. Voter turnout for referendums moves between 30 and 50 percent, and referendums appear to decide about 10 questions on average per year. Which is hardly the paragon of direct democracy.
However, you missed the part of my comment where I wasn’t talking about Switzerland at all, I was talking in generalities about the places where it was being discussed as an idea to implement.
Take LA for example. A lot of people who post here seem to have strong opinions on how that city is run, and I’d bet a lot of them even live there. If the LA city council elections generate turnout in the mid-teens, that is considered very high. Turnout below 10% is not strange.
Switzerland’s voter turnout is actually quite pedestrian. It’s higher than a lot of local elections in the US for sure, and perhaps even high enough to prevent the kind of special interest local government policies you see all over the US. But just because something seems to mostly work in one single, small, rich European country, doesn’t mean it will work as well anywhere else.
Low turnout happens exactly because there is no transparent, understandable and visible feedback between voters' decisions and government actions. Direct democracy provides this kind of feedback: all laws or measures voted in have to be implemented in specific timeframes and reported back to voters. It is OK to have 10% turnout for a measure that affects only 10% of the population (our laws are fine-grained); for more polarizing issues the turnout is usually much higher.
> Low turnout happens exactly because there is no transparent, understandable and visible feedback between voters' decisions and government actions
That’s an interesting thesis, but it’s not at all substantiated. Switzerland may have a better voter turnout than LA city council elections, but it has some of the worst turnout in the OECD[0]. If I was going to take a complete guess at the reason (as you did), I’d say it’s a cultural thing more than anything else (outside the influence of compulsory voting laws).
I live in Switzerland as well and wouldn't agree that direct democracy works much better than representative democracy. You can certainly argue that it does but it's far from obvious IMO. There's less focus on lobbying and more focus on spreading FUD about other parties' proposals, people have no time to investigate consequences of proposals so they rely on what they read in political ads, see on tv, etc. Men were voting that women shouldn't be allowed to vote as recently as 1990(!) in one part of Switzerland.
I can't say it's worse than other systems, but it's also far from obvious to me that it's better.
Well, the case you are talking about is a few tiny villages who decided that they could keep their "tradition" way into modernity, but even they yielded to the pressure of the rest of the Swiss society.
Switzerland is a relatively small country though. Here in the US, we have 50 states that are each very different from each other. Their economies rely on different industries, the natural hazards they face are very different and their cultures can be extremely different.
To have such a large society being run that way would have lots of unintended consequences. Imagine a country where voters are so burnt out on voting for every policy that the response rate becomes extremely low - and the only people who vote on any given policy are the slim minority with vested interests.
Yes it's small but it's made up of 26 cantons which are very different from one another in similar ways, including economically and culturally. Half of the country's GDP comes from 4 cantons.
They mitigate the "low response" phenomenon by encouraging political awareness from a young age and trust in public institutions.
It's imperfect and it may be a massive task to make such cultural changes in the US, but I don't think there's anything inherently stopping it from scaling.
Not American, but I'm wondering if it would be an insane idea to split America up into a whole bunch of countries joined by something like how it works in the EU. That way Texas and California aren't trying to battle for a one-size-fits-all government.
I would say that the divisions in America are more urban vs. rural in nature. The big cities in Texas tend to be very diverse and most of them are very liberal overall. However, the state is huuuuuge and there are a lots of people living in suburbs and small towns who are not on the same page.
Likewise California's rural areas can get very conservative. There just happen to be a lot more voters living in the cities.
Why would the turf battles end? Those that provide the most X will demand more in return. Those states that are given the most Y will fight tooth and nail to keep it. Someone will threaten to leave. Europe is quite the example.
I agree, but it's inevitable. The state has claimed all conceivable cut-points or transitions:
- Transportation. Any time you want to go somewhere by car or plane (and potentially bus/train[1]), the government has decided it has the right to search you.
- Interstate commerce. Most economic activity is taxed, and may be searched passively or actively to ensure that the state gets everything it wants.
- Communication. TV, radio, telephone, and now the internet are monitored and adjusted by the state.
Brain-to-network communication will eventually be commoditized, and the state will transitively invent some justification for monitoring it. Thus you may think thoughts which are shared with the network and for which you will be held liable.
The only difference is that today you type the thoughts on a keyboard.
I wrote an essay about this in 2011, for university.
Now, do you really think it would be something as straightforward and innocent-looking as that?
A much more likely scenario is that brainstreams would be stored in various tech giants' systems (and because analytics is important, ~nothing is discarded).
Instead of running a quick check, the police or official would pull your thought history. But no, not directly from the source. Of course not.
To maintain a veneer of legitimacy, the actual tech companies will not sell to governments or law enforcement. There will be a forest of brokers, who sell to anyone.
And if you thought advertising was invasive or creepy now, with this kind of technology we would be entering a world Philip K. Dick would have found too depressing to write about.
Current laws would make it even simpler. In the US at least, an agency could simply have subpoenas approved by a judge, and then require the disclosure of whatever brain info any number of companies have for a person. The companies could of course sue to not disclose, but there'd be enough firms that would comply as to make access trivial for whichever agency wanted the info.
Warrants and subpoenas are inconvenient. They are targeted and may take time to be issued.
And observing from the outside, the US as a nation is still years - likely decades - from being able to stomach general warrants. But you are already well on your way towards something like that: reverse warrants are now a thing. Law enforcement in the US has already issued a warrant for recorded information on all persons who were near an event, after the fact.
The logical conclusion with brain data archives will be to issue warrants for full mental activity history for X years on people who have had thoughts on a given unwelcome subject.
But even then, warrants are inconvenient. It'll be much simpler and easier to just buy access to the data in bulk, all the time. That avoids the awkward questions on storage, access and oversight.
My European take on this: governments are the only entities that can keep companies in check.
A completely free, unregulated market is a bit like anarchy: it might work well if all of the actors were mature and aware of their responsibilities. Unfortunately the ones who wish for that the most are usually the ones least suited for it. They support these ideas not because of a noble goal of perfect freedom but because they are annoyed that there is a more powerful entity that gets in the way of what they want to do with/to others.
>They support these ideas not because of a noble goal of perfect freedom but because they are annoyed that there is a more powerful entity that gets in the way of what they want to do with/to others.
The problem with statements like this is that they are presently unfalsifiable. Even though you've stated it as fact, it's clearly only an opinion that you've arrived at based on your own perspective and experience. However, you can't prove it and nobody else can disprove it.
The general practice of extrapolating worst-possible motivation from every action is, in my opinion, one of the reasons why it's so hard to communicate rationally these days. If you envision everyone 'on the other side' of an argument as demons, maybe sub-human, why bother negotiating? Fuck 'em, maybe a meteor will strike and wipe them out or they will kill themselves off through one of their many vices.
Ironically, one promise of the brain-reading capability being discussed is that it might lay this bare, expose some of the incorrect assumptions, and start to heal some of the relationships between cultures and people.
I would go a step further and say the only completely free and unregulated market is anarchy.
Take a regular state and strip away all those regulations about consumer protection, environmental protection, substance abuse, contract enforcement etc., and stop all this socialist stuff like public healthcare, public unemployment insurance, and public roads, and what you are left with is stateless anarchy. If you remove anything less, you can always claim there's regulation and no truly free markets.
I'd rather see the free market as a fair playing field for all players, with clear rules that apply equally to each without special treatment, whether written or not.
You could start with a minimum viable capitalistic state (enforcement of property rights and contracts through police and courts, some sort of government to update laws as necessary, taxes to pay for it all) and from there add basic infrastructure where competition is impractical, like roads, subsurface conduits for laying electricity and data lines, sewage systems, etc; investments that are beneficial to the overall economy but not rewarded by the market, like parks, public transportation, and dams; and things that ensure a good work force, like public education (to supply educated workers) and public healthcare (to keep workers healthy and productive). Optionally you can add laws to improve the free market: consumers can't reliably judge food safety or environmental impact, so make the worst things illegal and improve transparency on the rest (food labeling etc), or impose taxes to mirror true costs to society.
All of this is a free market in your definition, and is essentially how European states are run. But it's somehow not what "free market" advocates in the US want today.
Speaking as a European: European countries have a tendency to have not-so-fair rules. Opaque taxation with lots of loopholes if you're willing and/or have resources to go for them. Very different taxation for employees and contractors, with loose enforcement of who is a contractor and who is an employee. (Partially) government-owned companies participating in the market. Poorly run public purchases. Privatisation gone wrong in sectors where the free market just doesn't work (e.g. energy last-mile).
On top of that, the Euro side feels like it has much, much more regulation than the US. That makes it much easier for big companies, but hinders small businesses and upstarts. While technically it may count as a free market, it feels like gatekeeping.
Dump trash or chemicals wherever it's most convenient, reduce product quality because of margins, read emails of dumb users who reuse their email-password for your social network site (Oh, wait a minute), work conditions that border on indentured servitude...
There's plenty of room at the bottom (as Feynman said in a completely different context).
I haven't seen the government stopping any of those, at least not consequentially.
Let's take a look at this one:
> Dump trash or chemicals wherever it's most convenient
When it comes to making these stop, government hasn't shown itself very fit at reducing the problem. How can this be changed? That's what I would find interesting to hear.
Well to some degree you might be right, but don't forget the USG is in bed with the private sector and regularly sends out NSLs[0] to companies asking for back doors and full access to customer/user data
In theory, we should already disallow this. The foundational law of the United States prohibits "unreasonable" searches of "persons, houses, papers, and effects".
In practice, this has been eroded to the point of practical meaninglessness, such that the FBI director can issue a statement that clearly violates both its spirit and its letter, and few people bat an eye.
No ban will stop that. Whatever is possible will also be done, that has always been the case. As usual, there will be an arms race and possibly, for example, meditation teachers will open up completely new fields of activity.
If companies invent the tech, the governments will be able to use it no matter what. 90% of all the things China is using are based on Silicon Valley research. Plus, even if you don't let TSA or the police have the scanners, it would still be possible to subpoena companies for your latest thoughts as they happen.
/The only/ way to not allow this to be the absolute end of privacy is to have the technology be shaped in an incredibly ethical way that makes such privacy violations fundamentally impossible.
Store close to no data. Constantly encrypt whatever data you do store and keep it on the device itself. Constantly cycle keys. Make brain scanners absolutely worthless unless the user consciously authenticates themselves.
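As a rough illustration of that design, here is a hypothetical on-device store sketched in Python: nothing durable, keys cycled constantly, and decryption gated on conscious authentication. The class and method names are all made up for the sketch; Fernet comes from the `cryptography` package.

```python
# Minimal sketch of a "store nothing, cycle keys, require conscious auth"
# design. Illustrative only; all names are invented for this example.
import time
from cryptography.fernet import Fernet

class EphemeralBrainStore:
    """Keeps only a short rolling window of encrypted samples, on-device."""

    def __init__(self, key_lifetime_s: float = 60.0, max_samples: int = 256):
        self.key_lifetime_s = key_lifetime_s
        self.max_samples = max_samples
        self._rotate()

    def _rotate(self) -> None:
        # Cycling keys: once the old key is dropped, anything encrypted
        # under it is gone for good -- nothing durable left to subpoena.
        self._key = Fernet.generate_key()
        self._fernet = Fernet(self._key)
        self._key_born = time.monotonic()
        self._samples = []  # old ciphertext dies with the old key

    def record(self, raw_sample: bytes) -> None:
        if time.monotonic() - self._key_born > self.key_lifetime_s:
            self._rotate()
        self._samples.append(self._fernet.encrypt(raw_sample))
        # Hard cap: "store close to no data" at any given moment.
        self._samples = self._samples[-self.max_samples:]

    def read_back(self, user_authenticated: bool) -> list:
        # The scanner's output is worthless without conscious authentication.
        if not user_authenticated:
            raise PermissionError("conscious user authentication required")
        return [self._fernet.decrypt(s) for s in self._samples]
```

A design like this makes bulk, silent collection architecturally hard, though it obviously can't stop a user from being compelled to authenticate.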
Requiring all brain scanners to be 100% fully open (full on GPLv3 including the hardware plans) and physical networking kill switches would be a good start.
All of this is not even new advice - there are quite a few outspoken people out there against any sort of data collection. But whether you agree with them or not, you must agree that the line in the sand must be drawn somewhere - and it definitely should not allow collecting people's thoughts.
Trying to extend the Google and Facebook model of siphoning as much of your data as possible to also include thoughts should literally be a crime against humanity.
I will bet my left nut that if such a technology exists, the TSA will use it at their discretion on any person entering the US, including citizens; most citizens will be okay with it; and the Supreme Court will hem and haw about it for years before handing down some weak, half-assed decision against it that won't be enforced at all.
IAGO
Good my lord, pardon me:
Though I am bound to every act of duty,
I am not bound to that all slaves are free to.
Utter my thoughts? Why, say they are vile and false;
As where's that palace whereinto foul things
Sometimes intrude not? who has a breast so pure,
But some uncleanly apprehensions
Keep leets and law-days and in session sit
With meditations lawful?
> And next month Chilean lawmakers will propose an amendment to the country's constitution enshrining protections for neural data as a fundamental human right, according to Yuste, who is advising on the process.
In the US, I have a right against forced self-incrimination. The rights against indiscriminate search and seizure and protection of speech also seem to apply.
Being able to read my thoughts would seem to negate my protection against the disclosure of self-incriminating speech (presuming thoughts are considered speech).
This frontier is all the more concerning while our dominant model of commerce on the internet is the exchange of content for attention. The attention market already drives businesses to develop sophisticated, targeted models of users, so that the businesses can most efficiently encourage addiction to their services.
Being able to tune those mechanisms in real-time could be disastrous. It is encouraging to see at least one government making an effort to get ahead of this, and their work can be an example for the rest of the world on how to protect people from a looming dystopia.
Was thinking about why privacy is important while reading the Snowden post on HN this morning. And it stupidly occurred to me that privacy is only important insomuch as it is related to power.
I say stupidly, because it is such an obvious thing, but one that might be ignored or forgotten, without a sustained reminder. In the face of such an understanding, the distinction between government or commercial privacy is meaningless. If that lack of privacy can have significant power over your life, the source doesn't matter. The content doesn't matter.
The title reminded me of the sci-fi novel The Dark Forest, where potential alien invaders could read all human communication except the human mind. The UN selects four men to be "Wallfacers". Each of them is supposed to devise a defence strategy known only to himself, and they are granted access to UN resources to carry out the plans.
What's also worth noting about the Wallfacers is how for 3/4 of them, the aliens and their human supporters were still able to figure out their "inaccessible plans" simply by deducing from their actions.
In some ways it feels like a waste to try and directly read the mind when humans can be pretty well predicted by their outward actions alone.
Why worry? Facebook's Privacy Policy says they value our privacy. And Neuralink's home page doesn't even have a Privacy Policy. And Neuralink's paper [1] doesn't even mention the word privacy once.
My point is, we will never have privacy from tech because tech's most valuable field is ourselves. So we all respect other people's privacy, but we need to harvest every data point we can, because otherwise what will we do, kernels and drivers?
Our society still hasn't processed just how fundamentally different the psychological experience of today is from the rest of humanity's existence, even as recently as, say, the 1980s or 1970s. Prior to the advent of smartphones and ubiquitous computation and surveillance, social media and the narcissism economy, one could actually live one's own life from a first-person perspective without worrying about people looking over one's shoulder every single second. Now, today, we are literally carrying around an audience with us everywhere, whether that be the photos we may consider posting to Instagram, or the machine learning algorithms monitoring our location to figure out what advertising to direct at us, or government surveillance to determine if we are some kind of threat. Heck, people even install apps that monitor their sleeping patterns and upload that data to a third party. It's almost as if we have created either the all-seeing eye of Sauron, or a tiny portable film crew following everyone around all the time for our egos.
I, for one, do not plan to put anything in my brain I have not designed myself, and for sure nothing that auto-updates!
Hell, it took me hours to straighten out my update to iOS 13 because my backup had trouble restoring... not the first time that’s happened either! I can’t remember the last workday (I’m a programmer) where I got through the whole thing without encountering bugs in my tools, bugs in my bank’s app/website, bugs while trying to text a photo to a friend, and on and on and on.
If I want to hack my consciousness, I’ll go read a book. Naked Lunch was enough of a brain bug, thanks. Or how about Black Mirror?
And once we get past not knowing wtf we’re doing, take a look at how we’ve weaponized every major scientific advance and tell me this would be the exception. Rabble-rousing and voter manipulation on social media will look like arts and crafts hour.
I am not worried about this. More amused and excited about the future of cognitive exploration and expansion.
I believe in the way, and all that’s happening is natural. I don't think the collective organism of us would really create a hell for itself. Human civil courage and compassion are a golden guard against the truly reprehensible.
Let them try.
Also, +1 Switzerland. Hope to see that style of being on a global scale.
It's much easier to get into power if you are ruthless and without empathy. As a consequence, we are mostly ruled by psychopaths who have no problem creating hell for other people if it benefits them.
• Driving the news: Neuroethicists are sounding the alarm.
• Earlier this month the U.K.'s Royal Society published a landmark report on the promise and risk of neurotechnology, predicting a "neural revolution" in the coming decades.
• And next month Chilean lawmakers will propose an amendment to the country's constitution enshrining protections for neural data as a fundamental human right, according to Yuste, who is advising on the process.
• A major concern is that brain data could be commercialized, the way advertisers are already using less intimate information about people's preferences, habits and location. Adding neural data to the mix could supercharge the privacy threat.
• "Accessing data directly from the brain would be a paradigm shift because of the level of intimacy and sensitivity of the information," says Anastasia Greenberg, a neuroscientist with a law degree.
The intrusion into brains has already started. Schools in China are experimenting with brain-wave trackers to monitor whether students are paying attention in class, and the parents do not seem to care. The devices are likely to give false readings, so their actual accuracy is unclear.
The picture is scary. They do not seem to care about the privacy issue even though the benefits are unclear. They are willing to sell out for ... maybe nothing.
There are reasonable people today who buy "RFID wallets" and are not generally considered to be overly paranoid.
It seems at least possible that we will enter a future where similar people will buy the related product of "RFID hats" [as branded with whatever acronym this technology adopts, and perhaps as an "also suggested by Amazon" to boot].
I don't agree that it's the ‘last frontier.’ Let's say the government and corporations watch everything you do outside of your house, both physically and on the web. What's left for you to do in your brain has a term, ‘mental masturbation.’ It doesn't matter what you're imagining in your head if you can't act on those thoughts.
Just out of curiosity, does anyone have a comprehensive argument for why absolute privacy is essential? I understand the potential for abuse of the power granted to governments, but is there a solution for identifying malicious actors that doesn't involve compromising privacy?
The only interface that makes sense to me for this kind of application is something non-invasive that can be removed, like headphones. Governments could still forcibly apply it to you under certain circumstances, but you might be able to retain some semblance of privacy otherwise.
We're getting close to being able to tell whether someone is lying based on an EEG; it seems like it's just a matter of time until we're able to read other people's thoughts.
No one in this thread will see that happen in their lifetime.
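For what it's worth, the EEG results behind that first claim mostly reduce to ordinary supervised classification over hand-crafted features such as per-channel band power; there is no "thought reading" anywhere in the pipeline. A minimal sketch of that framing, assuming scikit-learn and with synthetic data standing in for a real recording:

```python
# Sketch of how EEG "lie detection" studies typically frame the problem:
# binary classification over per-channel band-power features.
# The data here is pure noise, standing in for a real labeled recording.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_trials, n_channels, n_bands = 200, 32, 5    # e.g. delta..gamma power per channel
X = rng.normal(size=(n_trials, n_channels * n_bands))  # stand-in band-power features
y = rng.integers(0, 2, size=n_trials)                  # 0 = truthful, 1 = deceptive

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")  # ~0.50 on noise, by construction
```

On noise this sits at chance by construction; the open question is how far above chance real recordings get outside controlled lab tasks.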
Well, actually... you'd have to incorporate the peripheral nervous system. People may start using it to think, in an attempt to escape brain surveillance. And also any extracorporeal BMI device they might use to augment their mind.
Some decent orthodontic braces and a couple of little screws in the skull and you’ll keep the fMRI at bay for a while yet.
Though it might be cheaper to just move a little and ruin the scan that way instead.
You can think "type these letters" and make your fingers type them, or you can route that same message via Bluetooth to the phone near you. This surgery is like going from dial-up internet speed to broadband: you can type much faster without fingers!
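A back-of-the-envelope way to see that bandwidth claim; the per-character latencies below are illustrative guesses, not measurements from any real implant:

```python
# Toy model of the dial-up-to-broadband point: the same decoded keystroke
# stream pays a full motor round-trip per character via the fingers, but
# only a radio packet via a paired device. Numbers are made up for illustration.
FINGER_MS_PER_CHAR = 200   # think -> motor cortex -> hand -> key press
RADIO_MS_PER_CHAR = 10     # think -> implant -> Bluetooth packet

def transmit_time_ms(message: str, via: str) -> int:
    per_char = FINGER_MS_PER_CHAR if via == "fingers" else RADIO_MS_PER_CHAR
    return len(message) * per_char

msg = "meet me at noon"
print(transmit_time_ms(msg, via="fingers"))    # 3000 ms
print(transmit_time_ms(msg, via="bluetooth"))  # 150 ms
```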
In principle this makes sense; where the line around “your thoughts” is drawn is much more difficult to define in practice.
What is a thought? Is it the words you hear inside your head as you think? The images? Do those even have a concrete biological representation? Are we just talking about patterns in neural signaling cascades? If so, I doubt that “thinking about a tree” looks the same from one person to the next. Which means that these patterns will have to be learned from training data (once the sensing technology exists).
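To make that last point concrete, here is a sketch of what "learning the patterns from training data" might look like, with synthetic data in which each subject gets their own random pattern for the same concept. A decoder trained on one person separates that person's trials cleanly and falls to roughly chance on the next person:

```python
# Per-subject decoding sketch: the same label must be re-learned for each
# person because the underlying neural pattern is assumed not to transfer.
# All data is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

def synthetic_subject(n_trials=400, n_features=500):
    # Each "subject" expresses the target concept as their own random pattern.
    pattern = rng.normal(size=n_features)
    y = rng.integers(0, 2, size=n_trials)             # 1 = "thinking about a tree"
    X = rng.normal(size=(n_trials, n_features)) + np.outer(y, pattern)
    return X, y

Xa, ya = synthetic_subject()   # subject A
Xb, yb = synthetic_subject()   # subject B: same concept, different pattern

clf = make_pipeline(PCA(n_components=20), LinearSVC()).fit(Xa[:300], ya[:300])
print(clf.score(Xa[300:], ya[300:]))   # near 1.0: subject A's pattern was learned
print(clf.score(Xb, yb))               # near 0.5: nothing transfers to subject B
```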
Brains exist to decide to take actions. In the above case, companies with an interface to your brain will only “understand” your thoughts insofar as they map to actions you can take through their services, or “API”, so to speak. In that sense, it feels like we’ve already crossed this line.
Let me be more concrete: unless there were, say, a bomb manufacturer with a brain-API that you “trained” and used regularly, nobody will be able to decode that you think about bombs in your spare time.
Unless, of course, a (hypothetical) brain computer interface just translates your thoughts into “stream of consciousness” strings of words, but if that were the case, how does that add value over a speech-to-text Google search?
TL;DR, if you’re worried about the thought police, look around you - we’re already there. Your “thoughts” are just the actions you decide to take using technology.
https://twitter.com/hashbreaker/status/709314886384427008
[0] https://en.wikipedia.org/wiki/Ghost_in_the_Shell:_Stand_Alon...