
Google's actions were legal. The problem is, we, as a country, have still not come to an agreement on what 'privacy' should entail in this brave new world. Given that Google was not breaking the law in reporting this person, I would argue it would have been immoral for them to have sat on this information.



I do have one question. What if it wasn't child porn? Would it be right for them to go looking through your email for evidence of tax fraud? How about smoking weed? Where do you draw the line? The question isn't "do the ends justify the means?"; the question should be "What if they're mistaken?"

EDIT: While I stand by my questions I acknowledge the use of PhotoDNA as opposed to simply rifling through someone's inbox.


Your question doesn't seem to be "What if they're mistaken?"; it seems you are more asking "will they stop at child porn?". Anyway, I'd agree with jfoutz in saying that society has dealt with these problems before, and has managed to draw a fairly good line as to where something must be reported and where a person's privacy becomes more important. That's not to say they will manage to draw that line again, but it's also not to say they won't. Regardless, I don't think it's an unavoidable slippery slope.

As for "What if they're mistaken?", well I highly doubt anyone will be convicted off the basis of an automated image recognition tool alone. If they are mistaken from time to time, then the person will probably have a warrant (ha) issued for their email, and any investigation would follow normally. If they're mistaken very often (i.e. the PhotoDNA implementation turns out to be shit), then the police (or whoever is receiving these reports) will probably stop caring, and nothing will have changed.


> As for "What if they're mistaken?", well I highly doubt anyone will be convicted off the basis of an automated image recognition tool alone.

"Convicted" isn't the fear, post-9/11. The fear is getting put on a secret government watch list and being harassed for the rest of your life with no chance to ever clear your name, or even to be told why you're being screwed.


That's a legitimate concern, but the root problem is the existence of secret government watch lists free of oversight, with no transparent process for getting off one after being inadvertently put on it.

If we grant that such a list is wise to have, then sure, let's focus as much effort as possible on making sure that we don't put innocent people on those lists. But I'd much rather focus my efforts on not having the lists in the first place.


Isn't that mainly terrorism, not paedophilia? Anyway, at least in this case, the tip was sent to the National Center for Missing and Exploited Children, which, being a private non profit, doesn't seem too closely associated with the NSA/DHS/whatever government organisation you believe maintains these lists.


"Isn't that mainly terrorism, not paedophilia?"

It's nothing at all. Plenty of innocent people are on those lists having done nothing.

That the pretext is terrorism is irrelevant.


The UK version - the Internet Watch Foundation - is also a charity but you'd be a foolish UK ISP to ignore what they say.

https://www.iwf.org.uk/


The UK equivalent would probably be the NSPCC. If Google had a tip about an individual distributing these images, they would not contact the IWF. That would be for a website that was distributing these images.


In the UK, the IWF provides information about what to block and takes reports about hosted images.

CEOP are the group that Google would report people to.

http://ceop.police.uk/


> Isn't that mainly terrorism, not paedophilia?

"Terrorism" was just how they learned they could get away with it. Having established that, it's now any damn thing they want.


I'm not claiming a slippery slope. They already are scanning all emails, we know this for certain.

And as for "What if they're mistaken?" There's a fair amount of fallout from simply being accused of a crime. Between arrest, jail-time waiting for arraignment, bail, legal fees, news coverage which will forever attach your name to whatever it was...etc. The question squarely is "What if they're wrong?"


You are claiming slippery slope. They've reported child porn cases, and you are worrying about the possibility they could begin reporting tax fraud and other more minor crimes.


You mean like the football coach in Minnesota who had some videos of his kids playing after a bath on his cellphone and got arrested for child porn?

http://news.yahoo.com/ex-minnesota-state-mankato-coach-retur...

I'm not sure if there is a ton of oversight for what Google does, but I agree with your point that there needs to be some kind of vetting to determine guilt beyond a reasonable doubt.

In the case here, the man had quite a history of previous behavior that would make it a pretty clear choice as to what Google needed to do. In the case of the football coach, who only had the innocent video of his kids playing, there were some obvious signs that were ignored in the process of steamrolling a man's career, his reputation, and his standing in the community where he's lived his whole life.


Isn't what's happening here quite different? They'd be matching against a known list of images.


Yes. Robust hashing wouldn't flag a parent's personal pictures of their young kids at bathtime. We aren't discussing nudity detection algorithms.


They can't use the same technique, or any technique with similar privacy tradeoffs, to look for tax fraud.

The scheme they use here only works with documents known to authorities, whose possession is criminalized. The technique (searches for collisions in a corpus of robust hashes) can't generate new information for authorities about documents they haven't seen. And there is no case in which a person could possess those documents where the government wouldn't have a reasonable interest in knowing that; in other words, there's no valid privacy interest intrinsic in possessing one of the specific documents they're looking for.
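
PhotoDNA itself is proprietary, so purely as an illustration of the shape of the technique, here is a minimal sketch in Python using a simple difference hash and assuming the Pillow imaging library (the function names and the distance threshold are hypothetical, not Google's actual system). The point is that the only output is a boolean collision against hashes of documents already known to authorities:

    from PIL import Image  # Pillow; assumed available

    def dhash(image_path, hash_size=8):
        # Difference hash: a toy stand-in for a robust/perceptual hash.
        # Unlike a cryptographic hash, it survives resizing and re-encoding.
        img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
        pixels = list(img.getdata())
        bits = 0
        for row in range(hash_size):
            for col in range(hash_size):
                left = pixels[row * (hash_size + 1) + col]
                right = pixels[row * (hash_size + 1) + col + 1]
                bits = (bits << 1) | (left > right)
        return bits

    def matches_known_corpus(image_path, known_hashes, max_distance=4):
        # Returns only match/no-match against the known corpus; it can say
        # nothing about images the authorities haven't already seen.
        h = dhash(image_path)
        return any(bin(h ^ known).count("1") <= max_distance
                   for known in known_hashes)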

None of those conditions exists for tax fraud, or for that matter terrorism.

The slippery slope you're invoking doesn't really exist.


Searching for collisions in a corpus of robust hashes seems to me like the post office drug-sniffing packages, and people seem to be OK with that. The same way the drug-sniffing dog won't give away anything other than drugs/no drugs (is that how they work? I thought so?), this scheme shouldn't give away anything more than CP/no CP.

At the same time, I think that to eliminate CP entirely you need to get rid of some of the freedoms we enjoy. I'm sure you can 100% get rid of CP if you track what everyone is looking at on their computers, but is that a tradeoff you want to make? Even if the filter really can only ever report looking at CP/not looking at CP, would you be comfortable with that running on everything you own?

I could be arguing to a nonsensical extreme, but the NSA tracking all data is following this to some perverted extreme - if we can track EVERYTHING that is going on, and eventually actually make actionable data out of it, we can catch all the criminals/stop crime. But I think we accept the possibility of a bit more crime in exchange for preserving some of our freedoms.


>They can't use the same technique

It's not the technique, it's the precedent. The technology is really not the point here.

>None of those conditions exists for tax fraud, or for that matter terrorism.

Actually they exist for both. Google is only one possible access point for monitoring.

The question is whether we want Internet services of all kinds to be part of a culture of automated state surveillance.

I'd suggest there are good reasons for answering that question with a firm 'No'.


I don't think it's about the technique. The "precedent" (set at least 3 years ago, when this was all announced publicly) involves the tradeoffs.

The technique comes into the picture because there is no technique for detecting tax fraud that makes the same tradeoffs.

I don't have any trouble believing simultaneously that we shouldn't have a "culture of automated state surveillance" and that it's OK to sweep image uploads for matches against known child pornography. Just like I had no problem with metal detectors, but do have a big problem with millimeter wave imaging.


Is Google legally liable if it's discovered that they're hosting emails with child porn attached? If so, then does that suggest that Google owns the email hosted on their servers, and has the right to examine them if they want?


I think Google's only liable if they knew about it and didn't report it.

According to subsection (f) of 18 U.S. Code § 2258A:

Protection of Privacy.— Nothing in this section shall be construed to require an electronic communication service provider or a remote computing service provider to—

(1) monitor any user, subscriber, or customer of that provider;

(2) monitor the content of any communication of any person described in paragraph (1); or

(3) affirmatively seek facts or circumstances described in sections (a) and (b).

http://www.law.cornell.edu/uscode/text/18/2258A?quicktabs_8=...


FWIW: Child abuse of most forms is subject to mandatory reporting.

It is highly likely this is not as voluntary as you think. While in the US they are not required to scan, this is not true in every country.


FWIW: Child abuse of most forms is subject to mandatory reporting.

It is highly likely this is not as voluntary as you think.

Google would likely have liability if they discovered this and didn't report.


"Google would likely have liability if they discovered this and didn't report."

I agree. However, they have to look for it to discover it. The provider is safe in ignorance by default. Google has chosen to pursue this activity like an investigative agency and is using a system that seeks out specific material in an automated fashion.

But I agree that now that the ignorance is gone, they must report it.

Further, I think the nature of the content is distracting from the conversation. The problem, IMO, is not that Google reported content. It's that Google is looking for it.


Just because something is legal doesn't mean it is right, or that someone should do it. Yes, if Google knew about it they should do something. But Google shouldn't have known.


Google won't be breaking the law either when they're asked to report some other stuff that might make a target out of you. I say it's moral for them to report on you when the time comes. You've got nothing to hide, amirite?


How can Google and Microsoft comply with these requests and still claim that the email data they hold in Ireland is exempt?

Moving on, what if blasphemy is illegal in another country and they (Google, Microsoft) spot someone who sent a private message to someone else making jokes about the prophet of Islam? What about countries like India, where you can't say anything bad about politicians? Will Google volunteer data about people who make bad jokes about politicians in private?

I know it sounds ridiculous but you're right. Where does the slippery slope end?

Problem is that on the other end... if they don't do this then they will have a PR nightmare. If people can spin their services as a safe haven for pedophiles, they will have a hard time.


Did not know about those laws in India. That is mind-blowing! Not good.

I personally agree with this line of logic. I'm working on a personal solution for moving my data off the cloud. But that isn't a substitute for having this discussion as a country. Until we draw lines, that slippery slope exists and these things are legal.

EDIT: Regarding the safe-haven issue, I think that is a major issue with services like SpiderOak. They play up the ability to hide data. I personally believe we should be aiming for a solution that is closer to the locks on the doors of our homes. I feel secure at night, but I also know the police can knock it down by force if given a warrant. SpiderOak and other such services try to be more of a Fort Knox than a dead bolt. This is the cloud after all. Not a thumb drive in my safe in my home office.


About the safe haven issue, I'd imagine that if there is probable cause, they can just serve me a warrant rather than serving the guard who is on duty at Fort Knox. From what I know, it is wrong to bypass me to get to my data. If my data happens to reside outside the country, well tough luck. Transfer the case over to the other country and stop being a bully.


Not sure why you are being downvoted.

Laws vary by country (even among countries that have been democratic for 50 years or more), and laws in some developing countries appear stupid to anyone in a developed country.

The US has freedom of speech. India has the exact opposite. You can get into trouble because no matter what you do, you will offend someone.


I don't think they're doing this to follow a particular law. They're doing it to follow their own internal ethics about child abuse.

If they start reporting sedition and blasphemy, it'd mean that Google's ethics as a group has fundamentally changed, and we'd see far greater effects than their occasionally reporting someone.


Doctors and lawyers are allowed and even obligated to break confidentiality in some special cases. We'll probably come up with a collection of special protections and special exceptions for remote storage of our data.

Although, it's pretty scary that some random person could send an email that would then send me to jail. I would hope the standard for possession is higher than "WTF is this? [delete]"


> I would hope the standard for possession is higher than "WTF is this? [delete]"

Well, according to the linked article, the guy was sending the picture, not receiving it.


I would still hope that "their email account sent an image" is only sufficient to start an investigation and not end it. Email accounts can be broken into, many people have access to the same computer, etc etc.



