We've had decades of 'simple warning signs' or measures as complex as coastguards and yet people are still periodically lost in the wilderness, badly injured, or even killed. Education clearly isn't working here either — what restrictions should we impose on people's right to roam to solve this?
You clearly know the answer here, since you used the word “periodically”. There’s a massive difference between hundreds and millions. No one is stopping you from buying a non-Google phone, and no one is stopping you from running CalyxOS or GrapheneOS. Mitigating the things that affect the greatest number of people is how the world works.
> Mitigating the things that affect the greatest number of people is how the world works.
Millions of people hurt themselves — physically hurt themselves — every day, doing things that we could easily restrict. Yet we still allow them to buy knives, glassware that can break, hammers, power tools, non-automated vehicles of all kinds; the list goes on.
We also spend a lot of time educating them on the dangers, far more than is spent warning about online scams, and we do it at a far earlier age (age 0, for some of them).
Of course we still allow the sale of safe knives and plastic mugs, so people are free to choose; that point still stands. I'd argue that there is more competition in tableware, and less friction switching between products, than there is between mobile operating systems.
Physical goods are much easier to regulate and legislate for than digital ones. You choose a specific mode of transit when travelling, guided by your risk aversion and other things you know. But some things you don’t know, like what goes on behind the scenes to actually make those options safe, from road markings to the type of joints used in train tracks.
This is the exact same thing. We don’t spend time educating road users on how stripe width affects their safety, nor on how train tracks carry radioactive material through their communities every day. We let companies and governments work to make things safer for everyone, even if it comes at the expense of some.
I honestly can’t believe I’m having this argument. Making things safer for the world should be a goal we all strive for, even if a very very incredibly small minority lose a tiny tiny bit of what they want.
> No one is stopping you from buying a non Google phone, no one is stopping you from running calyx or graphene.
Google and phone manufacturers have been moving in that direction for years and have a long history of active hostility to those options. This is just another move on the same board to restrict these freedoms.
> There are, I think, two small cracks in that argument.
> The first is that a user has no right to run anyone else's code, if the code owner doesn't want to make it available to them. Consider a bank which has an app. When customers are scammed, the bank is often liable. The bank wants to reduce its liability so it says "you can't run our app on a rooted phone".
> Is that fair? Probably not. Rooting allows a user to fully control and customise their device. But rooting also allows malware to intercept communications, send commands, and perform unwanted actions. I think the bank has the right to say "your machine is too risky - we don't want our code to run on it."
> The same is true of video games with strong "anti-cheat" protection. It is disruptive to other players - and to the business model - if untrustworthy clients can disrupt the game. Again, it probably isn't fair to ban users who run on permissive software, but it is a rational choice by the manufacturer. And, yet again, I think software authors probably should be able to restrict things which cause them harm.
It's not clear to me whether in this fragment the author is stating both of the alleged cracks in the argument or only the first one — the second being Google's ostensible justification for the change. Either way, neither of these examples is a generalisable argument supporting the claim that 'a user has no right to run anyone else's code, if the code owner doesn't want to make it available to them'.
With regard to banking apps, the key point has been glossed over, which is that when customers are scammed the bank is 'often' liable. Are banks really liable for scams caused by customer negligence on their devices? If they're not, this 'crack' can be thrown out of the window; if they are, then it is not an argument for "you can't run our app on a rooted phone", but rather for "we are not liable for scams which are only possible on a rooted phone".
As for the second example, anti-cheat protection in gaming, the ultimate motivation of game companies is not to prevent 'untrustworthy clients' from 'running their code'. The ability of these clients to be 'disruptive to other players' is not ultimately contingent on their ability to run the code, but rather on their ability to connect to the multiplayer servers run by the gaming company or its partners. The game company's legitimate right 'to ban users who run on permissive software' is not a legitimate argument against users having full control over their own systems.
Thanks for the feedback. Those examples are meant to cover the first point.
The problem if you are a bank is that scammed people can be very persistent about trying to reclaim their money. There's a cost to the bank of dealing with a complaint, doing an investigation, replying to the regulator, fielding questions from an MP, having the story appear in the press about the heartless bank refusing to refund a little old lady.
It is entirely rational for them to decide not to bear that cost - even if they aren't liable.
> rather "we are not liable for scams which are only possible on a rooted phone".
Who is going to prove that, though? It’s much simpler, and less of a strain on our court systems, if a bank just says “we don’t allow running on rooted phones”; then, if a user takes them to court, the burden is proving whether the phone was rooted, rather than proving whether the exploit that affected them is only possible on a rooted phone.
> Are banks really liable for scams caused by customer negligence on their devices?
In the UK, not legally liable. However, culture is not 100% aligned with the law, and in practice banks that stick to the rules will be pilloried by the left-wing press and politicians, risk regulator harassment, etc., so they sometimes decide to socialise the losses anyway, even when the law doesn't force them to. The blog post cites an example of that.
To stop this you'd have to go further and pass a law that actively forbids banks from giving money to people who lost it to scammers through their own fault.
I have a question for you. For context, in case you haven't read it, we are discussing a scientific paper reporting a large population study (almost 150,000 newly diagnosed ADHD patients, aged 6–64) which compared the outcomes of those who were medicated (~57%) and those who were unmedicated (~43%). Around 88% of the medicated cohort were prescribed methylphenidate (e.g. Concerta or Ritalin).
The conclusions of the study, copy-pasted from the abstract, were:
> Drug treatment for ADHD was associated with beneficial effects in reducing the risks of suicidal behaviours, substance misuse, transport accidents, and criminality but not accidental injuries when considering first event rate. The risk reductions were more pronounced for recurrent events, with reduced rates for all five outcomes. This target trial emulation study using national register data provides evidence that is representative of patients in routine clinical settings.
My question is this: if we assume ADHD does not exist, what is going on here?
Specifically, how do you explain so-called "ADHD" patients who were medicated having a statistically significant lower risk of suicidal behaviours, substance misuse, transport accidents, criminality, and recurrent accidental injuries than those who were not medicated?
Do you think non-"ADHD" individuals (i.e. who don't fit the current diagnostic criteria for this assumed fictional disorder) would also display a reduced risk of suicidality, accidents, etc. if they were to take methylphenidate on a daily basis?
> I broke down every funeral I went to and would rather avoid that in the future.
I understand it can be emotionally challenging, but arguably that expression of grief is what provides meaning to attending a funeral. Furthermore, if you don't attend a close friend's funeral, don't expect him or her to attend yours.
In other words, VC-backed tech companies decided to weaken the definition of 'Torment Nexus' after they failed to create the Torment Nexus inspired by the classic sci-fi novel 'Don't Create the Torment Nexus'.
The firebombing of Tokyo[0] on 10 March 1945 is often considered even more destructive and lethal than either of the nuclear bombings of Hiroshima and Nagasaki, or at the very least in the same ballpark. However, popular discourse never treats it as a crime against humanity.
What is the qualitative difference between killing 100,000 Japanese civilians in one morning with an atomic bomb and killing 100,000 Japanese civilians in one night with explosive and incendiary devices?
The CEO of Braintrust, a company that offers AI interviewers, is quoted as saying, “The truth is, if you want a job, you’re gonna go through this thing.” Let's see how they react to the founding of 'Trainbust', a company offering AI interviewees that respond to AI interviewers. The truth is, if they want to use AI interviewers, they’re gonna have to go through this thing.
> It should be noted that not all AI interviewers are created equal—there’s a wide range of AI interviewers entering the market.
Maybe someone will make an AI to interview the AI interviewers and see which one is best? AIs interviewing human candidates are gonna have to go through this thing.
The real punchline is how companies hiring with AI are hiring for positions which require the worker to pretend to be some kind of bot (follow a script, repeat the same actions cyclically).