This is one of those cases where I probably should feel bad for a company that has been hacked so repeatedly it's ripe for being shut down, but I just can't muster the will right now.
If you are in the business of collecting data without users' explicit permission, and can't protect that data from being accessed or deleted, you shouldn't be in business.
I wish they had done one more thing: notified everyone on whose devices this software was installed. If someone put this on my phone without my knowledge, I would want to know. (I would also almost certainly sue.)
Yes. I want to see that happen too. But I want to separate the part of me that wants to see fireworks from the "right thing to do." I think we need to think through the technicalities of notifying the hacked people: would they even know what to do with your email? Would they consider it spam or phishing and just delete it? Assuming you're successful, what societal fallout could result? I don't know the answers to these questions. Ultimately it's similar to emailing the partners of Ashley Madison users and sending them the user profiles. What's the intended outcome?
Punitive action against the company that collected the data and lost control of it.
Punitive penalties are typically just a monetary fine above and beyond damages, but exposing to the public at large just how badly they screwed up should provide a good financial penalty in addition to doing a public good (a reminder of how much "private" data is not actually private).
The definition of "your phone" is what comes into play here. If the parent or abusive spouse pays for the phone, it's not "your phone", it's theirs, and they can put what they want on it.
They stored the master key to their entire data store in a publicly distributed app?
> ...we have been taking steps to enhance our data security measures. Sharing details of security measures could only serve to potentially compromise those efforts.
Maybe they used ROT13 on the API key twice this time!
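(For anyone who missed the joke: ROT13 is its own inverse, so applying it twice hands back the plaintext. In Python:)

```python
import codecs

secret = "super-secret-api-key"
# Two rounds of ROT13 are a no-op.
assert codecs.encode(codecs.encode(secret, "rot13"), "rot13") == secret
```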
It's all the more pathetic that they responded to the first hack by "obfuscating" the client-side secret, as if that could even theoretically stop any attacker with a budget of more than about an hour.
Future service designers: if your client is talking directly to AWS, then your attacker will, too. Take the week to write a CRUD frontend server that enforces the policy you want.
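A minimal sketch of that idea, assuming Flask and boto3; the bucket name, token lookup, and per-user-prefix policy below are made-up placeholders. The point is that the AWS credentials and the access policy live on the server, not in the client:

```python
# Sketch of a policy-enforcing frontend: the AWS credentials stay on this
# server, and the server decides which objects a caller may read.
import boto3
from flask import Flask, abort, request

app = Flask(__name__)
s3 = boto3.client("s3")      # credentials come from the server's environment
BUCKET = "example-uploads"   # hypothetical bucket name

def user_for_token(token):
    # Placeholder: in a real service, look the token up in your session store.
    return {"valid-token": "alice"}.get(token)

@app.route("/files/<path:key>")
def get_file(key):
    user = user_for_token(request.headers.get("Authorization", ""))
    if user is None:
        abort(401)
    if not key.startswith(user + "/"):  # policy: each user sees only their own prefix
        abort(403)
    obj = s3.get_object(Bucket=BUCKET, Key=key)
    return obj["Body"].read()
```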
Do you have a recommended security checklist for something like this? I remember seeing an old github repo with a bunch of good information but I cannot seem to find it and my search results are... unhelpful at best.
You won't like it, but... how would you feel if a chef asked you for a checklist for "good software development" or something similar? Hire a skilled and dedicated DevOps/SRE/SysAdmin/hype-of-the-week person. ;)
100% this. I’ve been transitioning to doing more devops/admin work for the past several months and it’s amazing the stuff people are willing to put into production. Like, no, maybe we shouldn’t be committing that secret to GitHub. Dedicated operations critters are essential for any kind of service.
A chef cooking something unfamiliar will look up the dish they are cooking. If your cookbook can't handle a checklist of common pitfalls and overlooked actions, it isn't much of a cookbook. (Granted, not putting your secrets in a public repo should be a given.)
This is a common issue, and many apps make this mistake.
Another common mistake is leaving /.git/ accessible on the domain itself, which is common with PHP sites or backend-less SPAs and gives full access to the source, including those API keys. Even major sites do this: The Hill until recently had their git repo, including API tokens and access keys for everything, publicly available.
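If you want to check a site for this, /.git/HEAD is a convenient probe because it's a tiny file with predictable contents. A sketch using the requests library (example.com is a placeholder):

```python
import requests

def git_exposed(base_url: str) -> bool:
    # A checked-out repo's .git/HEAD normally reads "ref: refs/heads/<branch>".
    r = requests.get(base_url.rstrip("/") + "/.git/HEAD", timeout=5)
    return r.status_code == 200 and r.text.strip().startswith("ref:")

print(git_exposed("https://example.com"))  # True means the repo is likely downloadable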
It should be mentioned that none of that should ever make its way into a Git repo in the first place. If a secret is committed to Git, it's compromised, period. Suck it up and generate a new secret.
Agreed ... use a pre-commit hook to scan your repository for high-entropy strings before they are forever enshrined in your history (https://github.com/dxa4481/truffleHog).
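For illustration, a stripped-down version of the idea (truffleHog itself is far more thorough): compute the Shannon entropy of each staged token and refuse the commit if anything looks like a random secret. The 4.5-bit threshold is roughly what truffleHog uses for base64-ish strings; tune to taste.

```python
import math
import subprocess

def shannon_entropy(s: str) -> float:
    # Bits per character: random base64 secrets score near 6, English prose near 4.
    freq = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in freq)

def scan_staged(threshold: float = 4.5, min_len: int = 20) -> list:
    # Look only at lines being added in the staged diff.
    diff = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout
    hits = []
    for line in diff.splitlines():
        if not line.startswith("+") or line.startswith("+++"):
            continue
        for token in line[1:].split():
            if len(token) >= min_len and shannon_entropy(token) > threshold:
                hits.append(token)
    return hits

if __name__ == "__main__":
    suspects = scan_staged()
    for s in suspects:
        print("possible secret staged for commit:", s)
    raise SystemExit(1 if suspects else 0)  # non-zero exit aborts the commit
```

Drop it into .git/hooks/pre-commit (or call it from there) and high-entropy strings never make it into history.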
No, you should never commit your secrets, not even to a private GitHub repo, and not even to a privately hosted git server. Doing so increases your attack surface, sometimes in surprising ways.
Now, if your secrets are encrypted before being committed (using something like Ansible Vault) and the encryption key is not stored in the repo, that may be OK. However, you still need to be aware that any time you rotate that key, you should also rotate every secret hidden behind that key.
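For a concrete picture of that pattern, here's a sketch using the Fernet recipe from Python's cryptography package (ansible-vault fills the same role with its own file format); the env var and file names are hypothetical:

```python
import os
from cryptography.fernet import Fernet

# The key lives in an env var (or a secrets manager), never in the repo.
f = Fernet(os.environ["SECRETS_KEY"])  # key generated once via Fernet.generate_key()

# Encrypt before committing: only secrets.yml.enc goes into git.
with open("secrets.yml", "rb") as fh:
    ciphertext = f.encrypt(fh.read())
with open("secrets.yml.enc", "wb") as fh:
    fh.write(ciphertext)

# Decrypt at deploy time.
with open("secrets.yml.enc", "rb") as fh:
    plaintext = f.decrypt(fh.read())
```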
So how do you version the configuration management for the cluster that runs everything? It needs to be versioned, and with just the data in it you need to be able to recreate your entire environment from scratch.
If you want your secrets under version control, then they should be encrypted.
There are lots of ways of doing this. Most CM tools have their own way of dealing with secrets, and there are CM-agnostic tools like git-crypt. I'm not personally familiar with anything besides ansible-vault, but this article seems to provide a pretty good summary of several options: https://www.threatstack.com/blog/cloud-security-best-practic...
I can publish my public key all day long; clients encrypt with it, and later only I can read the result with my private key. So they may not have used ROT13, but it certainly looks like they used something symmetric.
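To make the contrast concrete, here's a sketch of that asymmetric scheme using PyNaCl's SealedBox; the client embeds only the public key, so nothing recoverable from the app can decrypt the data:

```python
from nacl.public import PrivateKey, SealedBox

# Backend: generate once; the private key never leaves the server.
server_key = PrivateKey.generate()
public_key = server_key.public_key        # safe to embed in the client app

# Client: encrypts with the embedded public key...
ciphertext = SealedBox(public_key).encrypt(b"call log entry")

# ...but only the backend, holding the private key, can decrypt.
assert SealedBox(server_key).decrypt(ciphertext) == b"call log entry"
```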
How is it even possible that a 3rd-party application can intercept all text messages, call history, and photos and still get published to the Android Play Store? Isn't Google supposed to be reviewing the apps?
This is by design. For apps on the Google Play Store, Google wouldn't flag these capabilities because it has been possible to do all this and more using the APIs provided by the Android OS all along, even before the granular runtime permission model arrived with Android 6 (Marshmallow).
I always dread the thought of people not understanding these permissions and letting apps have all kinds of them — access to all text messages, privilege to send text messages, access to call history, and privilege to make calls. Many apps read text messages to process one-time passwords/codes sent as text messages, thus saving the user from entering them manually.
These privileges have never been available in iOS for third party apps, and I appreciate Apple deciding to err on this side of the privacy equation (though Apple could still do a lot more on app permissions). Taking the same example as above, iOS apps that need one time passwords/codes depend on the user to enter them manually.
So many apps ask for permissions that I think most users just hit allow to get past them. I think they absolutely don't understand them, and nothing at the moment you're running the app encourages you to learn.
As a regular Android user, I'll say that Android's permission model is simply awful.
There's no way (in stock Android) to return blank data, so apps will simply shut down or silently malfunction if you refuse permissions.
The grouping of permissions lumps "can portscan your network" and "run hidden in the background when your phone boots" under "Other", which you can't disable.
J2ME had a more refined security model back in 2006.
There are so many applications I'd love to install, but... why do they insist on asking for permissions that they simply don't need for their stated functionality? <sarcasm>Of course you want your SSH terminal to have ties to the social networks</sarcasm>.
Whenever I would publish my app (or an update) on the Android Play Store, it seemed like it would be available almost immediately, which hints at the absence of a review process.
Also, I've never had to have a discussion with any "reviewer" about my app on Android. For iOS, I've always had to do quite a few back and forth interactions with the "Resolution Center".
Android seems to be a pass-through, perhaps after some automated checks pass.
As mentioned by others, you don't necessarily have to publish to the Play Store: apps can be side-loaded on Android.
The real question is: who gets a say in what someone's phone is going to be doing: the programmer, the manufacturer, or the phone's owner? Most everyone would agree that the owner should have a word in it, and that the manufacturer should have no say at all.
Unfortunately, most phone owners are unaware of technical details, security, and privacy implications. I'd argue Apple has the right approach with their heavy-handed reviews and security model.
A better question is how can we give the owner a say without the typical owner getting pwned roughly 100% of the time? Which is what's happening on Android at the moment.
Someone invoking "personal responsibility" usually means that they try to drop their responsibilities on someone else, but for car maintenance it actually works. People know they should change tires when they are getting bald and leave brake maintenance to their mechanic unless they really know what they are doing. Anyone who gives a game access to their contacts really had it coming to them. (Disclaimer: don't own a smartphone)
Just waving the whole issue away by saying "make it the user's problem" isn't very helpful. It completely ignores the very real problem that most people don't know what is good for them and worse, even the people who do know what is best have a million better things to do with their time than personally scrutinize each and every app they install. Perhaps, the actual solution is to delegate that responsibility to somebody who does have the time, knowledge and incentive to make sure you are secure.
I own an iPhone instead of an Android precisely because I don't want to be burdened with the task of determining if what I'm installing is or isn't going to backdoor its way into my phone. I trust that Apple will in most cases do the right thing. Maybe they won't every time, but the risk of that is less than the cost of time & energy required to play "deep dive into every fucking app I install on my phone".
The US regulates "wear bars" on tires, which make it a lot more obvious when tires need to be replaced.
I'm not saying that government regulation is needed, I'm just pointing out that the purpose of this tire feature is to make it more likely that people notice tire wear, and don't die from accidents involving bald tires.
Ah thank you for the clarification. I was thinking along the lines of a right to repair where maintenance tasks such as replacing filters are convenient. I would not attribute that to regulation as much as practicality in even mechanics working on the cars.
>Most everyone would agree that the owner should have a word in it, and that the manufacturer should have no say at all.
Isn’t setting the default behaviour ‘having a say’, and who else but the manufacturer gets to do that? Sorry, but this suggestion doesn’t make it past even two seconds of considered thought.
Another few seconds - who gets to decide what privacy controls the phone even has, if not the manufacturer? Does this not count as having a say?
I can see why some manufacturers would agree with you though, many of them give the impression they don’t want to have any responsibility in this area whatsoever.
> " Friday morning, after the hacker told us he had deleted much of Retina-X’s data, the company again said it had not been hacked. "
I have to admit that, despite all the seriousness of the actual SPYware this obviously terrible company sells, that one sentence brightened up my normally depressing morning routine of reading the weekday news on the Internet. That's just funny.
I'm always amazed that people feel the need to utilize these products. People are aware that trust is the bedrock layer of relationships, right? The minute someone installs this product, their relationship with the person they are monitoring is already over; they just don't know it yet.
No, they are not aware that trust is the bedrock layer in a relationship, in the same way that people who verbally or physically abuse their spouses are unaware. They are filled with an all-consuming jealousy that makes any violation of privacy seem justifiable to them. These are not people who are thinking rationally or sensibly.
This is a good case of vigilante justice, but vigilantism is problematic in general. We should probably be formally outlawing the sort of practices these companies have and also putting in place far stronger real privacy measures for all data-collecting companies. Until (if ever) the law catches up, vigilantism will be better than nothing.
I don't know the exact legal details, but over the past several years "privacy policies" have evolved into some sort of "data use policies" (with no privacy to speak of), and the norm is for apps' terms to basically give companies access to the most invasive and abusive stuff. We don't see the legal system doing anything to stop this, as long as it isn't directly in medical or legal contexts…
Agreeing to surrender your privacy, even through a clickwrap agreement that is subject to change without notice, is very different from installing spyware on someone else’s phone.
This is exactly why I don't bother with forums controlled by the company I wish to raise a grievance about. Just go air your complaint on Reddit or something they don't control, it's not worth your time complaining to their forums.
Does anyone maintain a list of such software vendors? Not much we can do if they’re overseas. But I’m curious about exploring the limits of their liability if they’re based in the United States or Europe.
I'm torn by a lot of things here. The validity of the claims in the article, the correctness of the hacker to simply delete data, but also the "stalkerware" as described by the article. Surely there's got to be a better way of dealing with this atrocious software? How can it be legal in the first place?
The software may not be illegal, but particular uses of the software are. I think[0] I prefer it that way, lest we go down a slippery slope of deciding what software should be legal vs. illegal.
[0] And do please try and convince me if I should think differently about this.
> The software may not be illegal, but particular uses of the software are
If you profit from selling software that is predominantly used for illegal purposes, or in the course of illegal activities, and you know it, you should be liable. Not put in jail. Not shut down. Just commercially liable.
This is a conservative test (commercial sale, predominantly illegal use, and wilfulness) and a conservative solution. In the long run, however, it balances commercial incentives with broader social ones.
We put thieves into prison because if theft were common it would increase distrust; society as a whole has an interest in fighting theft, and we permit victims to sue for restitution. In the same way, society has a reason to put strict limits on surveillance.
There's the other problem: what damages can someone with a spy app on their phone ask for? There is no monetary value; they can at most ask for relief, and that's not much of an incentive to stop.
> We put thieves in prison...In the same way society has a reason to put strict limits on surveillance
On one hand, we have a stylised burglar. On the other, a stylised lockpicking tool maker. The former is illegal; the latter is more complicated.
I am conservative about expanding the scope of the law. You criminalise surveillance apps in one decade and in the next, a security researcher disclosing a bug gets bitten.
> what damages can someone with a spy app on their phone ask for?
If someone snooped on my phone without my permission, they would see a lot of confidential client information. They may also see my and my loved ones’ protected health information. Finally, they will have sought and procured illicit access to my device, which is itself illegal. Lots of potential monetary damage in there, if only in legal time to ensure everyone who needs to be notified gets notified.
We've decided that privacy and freedom from surveillance is highly valued and deserving protection.
The target audience of that company is the teenage children of helicopter parents. Whatever they have on their phones isn't privileged or valuable information, so the civil-damages approach doesn't work too well. Some may have (against all advice) nudes on them, but I'd rather not wait until those are available to the public, and even then only those whose nudes escaped can sue.
The law needs to project the notion that privacy is valued, because it is highly valued. The only idea that I can come up with is to restrict availability of spyware. Others may have better suggestions.
My suggestion is to require spyware like this to put up an obvious indication on the screen of the device that spyware is active on it.
That still allows parents and employers, the supposed target audience, to use the software for its alleged intended purpose, but renders it useless for the illegal use cases.
Same. Retina-X Studios sounds creepy, but so is vigilante justice. Hobbes' "war of all against all" is a pretty terrible way to run a society, and replacing physical war with information war doesn't make it any better.
I see no reason to praise the hacker. He destroyed a legitimate company's private data for no purpose other than his flawed moral reasoning. The company provides a way for parents to monitor their children and other legitimate business practices. Obviously, the software can be used for nefarious purposes but so can almost any other software. U.S. representatives and senators try to ban encryption using the same exact flawed reasoning.
I see a legitimate business opportunity: a phone-walking service. You collect the children's phones and take them to the mall, the library, or wherever teenagers go these days, and meanwhile the kids can enjoy life without parental surveillance.
You do wonder what the 24-hour panopticon does to adolescents' mental health and to the health of the parent-child relationship.
Of course the walkers would have to check the Facebook status every 5 minutes and post cupcake pictures to Instagram at least once to maintain a credible profile.
That's how my friends and I grew up. You left the house --- you were just GONE --- until you came back. Short of hiring a private investigator, your parents really had no precise idea of what you were up to.
Of course, they did tend to find out the important stuff anyway. The parent grapevine was definitely alive and well.
Well, I side with that hacker for this, taken from this article — "I don't want to live in a world where younger generations grow up without privacy."
While parents make a lot of decisions for children in their best interests, this certainly wasn't one of them. The fact that children might later suffer for no fault of theirs and live with something for life because of such a company makes me a lot more angry. It's becoming far too easy to push people into such a situation now.
That's very noble, but I don't want to live in a world where one moral vigilante hacker cowboy dictates what world we live in. It seems much more reasonable to come together as a people and vote on what we can and cannot do. After we vote, we can write down what the majority has decided and then demand individuals adhere to those policies. We could then call those policies "law". Seems much more reasonable to me.
Like handling nuclear material, if a company is going to collect intelligence on consumers, they have a social obligation to secure that intelligence. Encrypt it, tokenize/anonymize it, or delete it after use. Anything less and you're simply positioning yourself as a facilitator of doxxing or stalking when data leakage or exfiltration inevitably happens.
If it was this easy to break in and access the data, the hacker did consumers a kind mercy by deleting it before someone else got in and did something more nefarious.
??!? Their 'data' includes all the photos YOUR KIDS take on their phones. Do you not realize how much absolutely idiotic shit kids do with their smartphones these days? Which the company stores in an obviously unsafe way, as evidenced by the fact that they've been hacked via a super-super-super obvious software flaw... twice (that was widely reported on; most likely many more times). They deserve, and they should get, no sympathy. They sure as shit don't deserve to make any money off of it.
You know what other services have data that contains photos of "YOUR KIDS"? Google, Facebook, Amazon, Apple, Microsoft, Sony, Photobucket, and pretty much every other tech company with hosting services. If these companies get hacked, do they deserve to have their data deleted? The answer is no.
No, they would deserve a hefty, HEFTY fine and regulation barring them from providing services to anyone under 18 years of age, and then they deserve to delete all the relevant data themselves (with oversight). However Google, Facebook, Amazon, Apple, Microsoft, Sony, Photobucket are not in the business of selling shady ass spyware whose sole reason for existing is for stalkers, creeps, jealous ex-lovers and helicopter parents to invade people's privacy and personal security, so they have at least a little bit going for them over this sleazeball company. _Very_ little, but a little still.
As TFA describes it, the first deletion last year by the heroic hacker actually interrupted another hacker's access to the data. Why hadn't that other hacker deleted it? Maybe he enjoyed having access to private pictures and communications of children, teens, and adults... Does that sound like a good situation?
No, that doesn't sound like a good situation. You want to know some other big companies that have been hacked? - LinkedIn, MySpace, Adobe, Dropbox, DailyMotion, Sony, Kickstarter, Equifax... ad infinitum. Do these companies also deserve to have their data deleted?
None of those firms collected private pictures without consent. All of them actually made some attempt to fix their vulnerabilities when notified. Any one of them that allowed random strangers to delete their data "deserved" to have that done. No firm has a right to success in business.
Companies that practice bad security and potentially expose sensitive data to bad actors should be fined or shut down via the proper legal channels. In the more extreme cases, I think short jail terms should be considered for those responsible, depending on the extent of the damage caused.
Whenever I read this phrase, it always carries a sense of "legal, therefore ethical" or "legal, therefore OK", and... (this is not an easy sentence to finish) I wonder where people learn to think like that. OK, apart from the pressure of the entire commercial/corporate/advertising apparatus... Maybe it's surprising it isn't more common. I guess it's the norm in some circles. I'm naive, I guess, but I'd rather die than think like that. A friend of mine used the phrase once, and... that felt like the end of the friendship.
The reason to think and speak in such a manner (with regard to law) is because we live in a society where we have a social contract with all of the other people and businesses in it. It is frowned upon for individuals to go around and commit illegal acts that they deem ethical because of that social contract. It would be chaos if we did not adhere to the laws that society has agreed upon. This is why we come together and vote on what we deem to be unethical and make it illegal.