Computer crime laws are already insanely disproportionate.
They were created during a moral panic, when only a few individuals, large multinationals, and large governments had computers, and the government was worried that "hackers" could break into the electricity grid or the communications system and shut it down.
So right now you could literally break into someone's home, knock them unconscious, and then steal their laptop, yet still get more jail time for "hacking" into their laptop than for any of those other crimes.
It only gets more insane when someone "hacks" across state lines. The federal laws are absolutely insane, and the only things more disproportionate are some of the drug laws (many of which were also created during times of moral panic).
Particularly comic are the cases where companies fail to secure things at all (e.g. leave data exposed to the public via hidden URLs) and someone then gets prosecuted because they "hacked" that company and "stole" that data.
> Particularly comic are the cases where companies fail to secure things at all (e.g. leave data exposed to the public via hidden URLs) and someone then gets prosecuted because they "hacked" that company and "stole" that data.
Which is my main concern with all the attention and push for change (or rather, more laws and less privacy) in relation to the Sony hack (among others). Would Congress and the media be calling for equivalent action if someone had physically broken into the Sony Films offices and stolen the data due to lax (physical) security policies? Doubtful; they'd probably just be told to buy better locks...
>Particularly comic are the cases where companies fail to secure things at all and someone then gets prosecuted because they "hacked" that company and "stole" that data.
<devil's advocate>If I leave my garage open, and somebody takes my golf clubs, is that not theft?
Yes, the potential punishments are disproportionately harsh.
Yes, the company is silly for leaving data exposed.
No, it's not ok to take data because it's unprotected.
Yes, stealing something is a crime whether the item is locked or not. However, cases like that of the Weev guy (http://en.wikipedia.org/wiki/Weev) are very different from someone coming into an unlocked garage and stealing your stuff.
His case is more like going into a store that is open and invites you in (this was a public website he went to). You are browsing around, looking at the stuff for sale, and then you see an unmarked door in the middle of the store. It isn't locked and doesn't say "Employees Only", so you walk in.
The store can't then turn around and have criminal charges brought against you just because you weren't supposed to go through that door. There were no locks or signs, and you were in a place you were supposed to be. Now, if there had been any sort of lock at all (even a crappy, broken one that was easy to bypass), you could argue that it was a crime.
If I send a standard request to a website, with no special forged auth or anything, and that website gives me back data, you can't blame the person who made the request. It is up to the website to tell me "no, you are not allowed to access that."
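For concreteness, here's a minimal sketch (Python, with a hypothetical URL) of the kind of request being described: an ordinary GET with no cookies or forged credentials, where the only "no" the client ever sees is the status code the server chooses to send back.

    import urllib.request
    import urllib.error

    # Hypothetical "unmarked door": reachable, but not linked from anywhere.
    url = "https://example.com/some/unlinked/path"

    try:
        # A plain GET: no auth headers, nothing forged.
        with urllib.request.urlopen(url) as resp:
            print(resp.status, "- the server answered and handed the data over")
            data = resp.read()
    except urllib.error.HTTPError as err:
        # This is the server saying "no, you are not allowed to access that."
        print(err.code, "- e.g. 403 Forbidden or 404 Not Found")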
If I accidentally leave my front door unlocked, it's still a crime for you to waltz into my house and peek around. I wouldn't describe what weev did as just walking through an unlocked door, either. It's more like he walked through an unmarked door, looked into a filing cabinet and saw some private business records, then thought "Cool. There's more in this cabinet. I'll just go ahead and make copies of them all for myself."
> If I send a standard request to a website, with no special forged auth or anything, and that website gives me back data, you can't blame the person who made the request. It is up to the website to tell me "no, you are not allowed to access that."
Many remote exploits are going to fit this description. Sometimes it's just magic parameters or misconfigured URL routes.
Your concept of computer crime requires a bit more depth than that. Intent matters, as does what you do with data on their end and on your end.
The room analogy may be reasonable, except that weev realized the room contained private company records. He proceeded to copy everything, then went home and published it in the newspaper. He wasn't really prosecuted for entering the room; it was that second part that did it.
If your garage says "Come on in" and, when the person takes your golf clubs, you say, "OK!" then yes, it's OK.
And that's exactly what publicly accessible URLs and a status code 200 are.
You had a chance to issue a 403 (i.e., "You can't take my clubs") but instead you said "OK."
I think it's egregious that anybody anywhere can be held criminally liable for accessing information made available as the result of a 200, with no prior contractual arrangements in place.
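To sketch the "you had a chance to issue a 403" point: here is a toy server (standard-library Python, hypothetical paths) that returns 200 for the pages it means to be public and 403 for everything else. On the argument above, that status code is the only refusal the protocol itself expresses.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    PUBLIC_PATHS = {"/", "/catalog"}  # hypothetical "come on in" pages

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path in PUBLIC_PATHS:
                status, body = 200, b"welcome, have a look around"  # "OK"
            else:
                status, body = 403, b"forbidden"  # "you can't take my clubs"
            self.send_response(status)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), Handler).serve_forever()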
If you're a robot, that's true, but you're probably not a robot.
The law leans on the word "reasonable" all the time, simply because there are plenty of situations where humans can make an obvious judgement call.
If a publicly accessible URL contains obviously privileged information, as an adult you know that it is a "door that has been left open accidentally". That doesn't mean you're free to walk in and take what you want. If you're neighbourly, you may want to let them know their door is open. People should feel safe from litigation if they do that, but I don't see why you should feel like you can do whatever you want just because there's a 200.
> The law leans on the word "reasonable" all the time
Far too often in my estimation, and in the wrong places.
> If you're neighbourly, you may want to let them know their door is open.
Agreed. But failing to be neighbourly is not a reasonable criminal offence.
> but I don't see why you should feel like you can do whatever you want just because there's a 200.
To turn things around: Why not hold AT&T liable? Why not issue a 403?
I agree that weev was "un-neighbourly," but at the end of the day, the protocol was the only contractual arrangement in place at that moment. It's just maddening to me to imagine that this can be regarded as criminal conduct.
If the Goatse Security guys had discovered the exploit, shrugged, and ignored it, then absolutely nothing would have happened.
They chose to write scripts, pull all the data, send it to Gawker, etc. There's no question that they knew what they were doing was wrong (the IRC logs show as much), so you don't even need a judge to decide what was reasonable.
What if you leave your door unlocked and someone walks into your bedroom and looks at the sex photos you and your wife took? Or rifles through your personal letters, bank statements, etc.? Is it still okay just because they only took a copy?
> So right now you could literally break into someone's home, knock them unconscious, and then steal their laptop, yet still get more jail time for "hacking" into their laptop than for any of those other crimes.
Can you provide some examples? And not sentencing guidelines or what the prosecutors asked for: actual sentences served by people convicted solely of computer crimes vs. time served for breaking and entering, assault and battery, and theft.