I think people tend to forget that the banks were bailed out because, if they weren't, the world's financial system would have shriveled up and died -- remember, the credit markets had actually ceased to function, and as a result no one could borrow money.
I haven't seen any credible arguments against bailing out the banks, which ensured that the financial system didn't completely collapse. Some argue that we should have let the crisis take its own course, let the banks fail, and dealt with the consequences. But this would have guaranteed a second Great Depression, and the vast majority of the world would have suffered badly for it.
Now, without Congress actually doing anything against the banks, there aren't too many options left, but hopefully the lawsuits against the banks can keep this issue in the spotlight long enough to sustain public outrage, so that Congress is actually forced to do something of significance.
>But this would have guaranteed a second Great Depression, and the vast majority of the world would have suffered badly for it.
But we are in a recession as deep as the Great Depression! And there are obvious undeserving winners (Wall Street) and losers (everyone who worked hard, paid their bills on time, invested conservatively, and saved money).
They have to check the leaks to make sure that they aren't putting anyone's life in danger (when they haven't done this in the past, their enemies have used it against them in the ongoing PR battle). That takes time.
It also means that they do cherry-pick, because they have to prioritize which leaks they will check and release first.
They've got 250,000 cables. A small team is going to take a long time to check 250,000 cables.
No doubt they do keyword searches of the cables to find the juicy ones and release those first, but what are you gonna do... release the boring, mundane stuff first? No one would pay attention to you.
I think the boring, mundane stuff could potentially be the most interesting stuff. Run statistical analysis for unusual words and phrases and find the secret diplomatic codes. :)
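Half-seriously, something like this would be a first pass at finding "unusual" words -- a toy Python sketch, where the scoring scheme and the inputs are entirely made up for illustration:

```python
# Toy "unusual word" scorer: rank words by how over-represented they
# are in the cables relative to some background corpus. Hypothetical
# inputs; a real analysis would want smoothing and phrase handling.
from collections import Counter

def unusual_words(cable_text: str, background_text: str, top: int = 20):
    cables = Counter(cable_text.lower().split())
    background = Counter(background_text.lower().split())
    n_cab = sum(cables.values()) or 1
    n_bg = sum(background.values()) or 1

    def score(word: str) -> float:
        # Add-one smoothing: words absent from the background rank highest.
        return (cables[word] / n_cab) / ((background[word] + 1) / n_bg)

    return sorted(cables, key=score, reverse=True)[:top]

print(unusual_words("NOFORN cable cable NOFORN XKEYWORD", "the cable of the day"))
```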
It's quite odd. Assange is certainly quite technically savvy -- he's a reformed (genuine) hacker, after all. So I just can't imagine him re-using passwords. Similarly, I would have thought that he'd be enough of a control freak to, you know, check this stuff out himself.
I agree that this is a weird thing. From my reading, it appears that the encrypted archive sent to the Guardian got out somehow, and that, combined with the password (recklessly) published in the book, it can be decrypted to reveal the full unredacted archive.
There are some interesting considerations here for distributing highly sensitive data to non-technical people. The Guardian's people apparently had no comprehension that a PGP-encrypted file is not like a web service where you can just go in and change the password in a jiffy -- as long as that file exists, the same password will work on it, forever. The rebuttal quoted indicates that WL said it was a "temporary" password, so it seems that, via a misinterpretation at the Guardian, its editors expected the password to stop working on that file within a matter of hours.
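To make the point concrete, here's a minimal Python sketch, using the `cryptography` package as a stand-in for PGP's symmetric mode: the key is a pure function of the passphrase, so the ciphertext honors that passphrase forever, and there is no server anywhere that could revoke it.

```python
# Minimal sketch of passphrase-based symmetric encryption (Python
# 'cryptography' package; PGP's symmetric mode works analogously).
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # The key is derived from passphrase + salt alone. Nothing here
    # expires; no third party participates in decryption.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

salt = os.urandom(16)
ciphertext = Fernet(derive_key(b"temporary-password", salt)).encrypt(b"cables...")

# Years later, anyone holding the file (ciphertext + salt) and the
# passphrase gets the plaintext back. "Changing the password" would
# mean re-encrypting and somehow recalling every existing copy.
plaintext = Fernet(derive_key(b"temporary-password", salt)).decrypt(ciphertext)
```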
It would be really interesting to see PGP files that were time-sensitive, and used passwords that only worked within X time. Does anyone know if something like that has been done?
What would have been a more secure way to distribute the archive? Bundle only 1000 cables at a time, each file with a unique password? Require journalists to view the files on premises at WL so that there was no loss of control over the data? Bundle everything up in a black-box .exe that self-destructed after X time (though, unless implemented carefully, this would still reveal private data once a competent person got hold of it)? Why weren't these files asymmetrically encrypted anyway? Surely it is not very likely that a user's private key would be published in a book, or that a user would upload his private key to BitTorrent. Lots of interesting possibilities here...
> It would be really interesting to see PGP files that were time-sensitive, and used passwords that only worked within X time. Does anyone know if something like that has been done?
I'm not a cryptographer, but it seems to me like something of this nature is impossible without maintaining control of the decryption process. You could add a timestamp to the file, but the workaround would be to change your computer's clock or rewrite the decryption software. You would have to include a cryptographically-signed timestamp from a trusted time server in the en/decryption process. Once that signed timestamp is obtained, though, it could be distributed along with the password and a modified application that uses the stored timestamp instead of a live one from the server.
My knowledge comes from reading about failed DRM schemes and the comments of tptacek and cperciva, so I can only point out things that wouldn't work, not what will.
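For illustration only, here's roughly what the signed-timestamp gate described above might look like, and exactly where it breaks. Every name here is made up, and the deadline is arbitrary:

```python
# Hypothetical sketch of a "signed timestamp from a trusted time
# server" gate -- and why it is replayable.
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for the trusted time server.
server_key = Ed25519PrivateKey.generate()
server_pub = server_key.public_key()

def signed_now():
    ts = str(int(time.time())).encode()
    return ts, server_key.sign(ts)

EXPIRY = int(time.time()) + 3600  # e.g. a password good for "a matter of hours"

def may_decrypt(ts: bytes, sig: bytes) -> bool:
    server_pub.verify(sig, ts)   # raises InvalidSignature if forged
    return int(ts) < EXPIRY      # refuse once the deadline has passed

ts, sig = signed_now()
print(may_decrypt(ts, sig))

# The flaw: (ts, sig) is just data. Anyone who saves a pre-deadline
# pair can feed it to the client (or a patched client) years later.
```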
I can't imagine how you could build a foolproof (or more importantly, state-sponsored-team-of-experts-proof) time-limited system. Assuming the file is digital, and can be accessed freely, you can make infinite bit-identical copies and fiddle your system clocks to make it work.
You'd need some sort of physical real-time clock combined with the memory storing the material, which wipes it after a given time. Maybe even a physical medium which degrades over time could work, but that could be foiled by controlling the environmental conditions (inert gas atmosphere to avoid oxidation, cold temps to slow electron migration, etc.).
My personal approach would be something like providing an incredibly locked-down laptop/netbook (https://grepular.com/Protecting_a_Laptop_from_Simple_and_Sop... would be a good start), but with additional physical security improvements: a battery/big caps wired directly to the HDD and RAM via a set of tamper switches[1], and all IO ports disabled in software and filled with epoxy or disconnected internally. You could then wire an RTC into the same system, as well as perhaps use a GPS receiver to verify the time (yes, you could jam/spoof GPS signals if you knew to expect them, but that still raises the bar).
One final approach would be to have some other trusted party/system which remains in your control, and have some challenge/response auth which you can disable/destroy after a fixed time.
To conclude, I can't see any way to build time-limited encryption without some external trusted authority or some trusted physical infrastructure.
[1] Not just physical switches, but as many things as you can come up with: Light sensors, pressure sensors (especially if you can gas-seal the enclosure and keep it at elevated/vacuum pressures), temperature to avoid cooling attacks, resistive/optic-fibre security meshes. Another amusing idea would be to use a GPS receiver to ensure that data can only be viewed from a given physical location[2].
[2] This gets used in _Distress_ by Greg Egan, although I'd thought about it myself long before reading the book.
The only way to time-limit data would be to find some kind of cryptographic function which can't be parallelized and requires a certain amount of work, and then make assumptions about how fast that work could be done given the resources available to an attacker. You could at least set a lower bound on the time given likely resources. I find it highly unlikely that even national technical means include general-purpose reconfigurable logic more than 50x faster than the open state of the art; if your problems keep changing, reconfigurable logic is going to be needed.
The key is to have lots of problems nested together, which must be solved in series.
Computers scale a lot better than people, so something which required a human to try to solve a puzzle to get a key, then use that key to decrypt the next puzzle, and so on, probably has better characteristics.
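For the curious: the classic construction along these lines is the Rivest-Shamir-Wagner "time-lock puzzle", which hides the key behind t sequential modular squarings -- work that, as far as anyone knows, can't be parallelized. A toy Python sketch with deliberately tiny, insecure parameters:

```python
# Toy Rivest-Shamir-Wagner time-lock puzzle: the key is masked by
# a^(2^t) mod n. Without the factorization of n, recovering the mask
# takes t squarings that must be done one after another.
# Toy parameters only; real use needs large random primes and huge t.
p, q = 1000003, 1000033          # secret primes (tiny, for illustration)
n = p * q
t = 1_000_000                    # number of sequential squarings
a = 2
secret_key = 123456789           # the value we want to time-lock

# Puzzle setter: knows phi(n), so can reduce the exponent and finish fast.
phi = (p - 1) * (q - 1)
mask = pow(a, pow(2, t, phi), n)
puzzle = (n, a, t, secret_key ^ mask)

# Solver: lacking p and q, must actually square t times in sequence.
n_, a_, t_, locked = puzzle
x = a_
for _ in range(t_):
    x = (x * x) % n_
print(locked ^ x == secret_key)  # True, after ~t sequential steps
```

The asymmetry is the point: the setter's trapdoor (knowing phi(n)) makes the puzzle cheap to create but expensive to open, and the opening cost is bounded by sequential speed, not by how many machines the attacker owns.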
A trusted third party or tamper-resistant hardware is far more practical.
Indeed, it seems something like a dongle that kept its own clock would be required to implement this in a way that couldn't be circumvented merely by setting your PC's clock back. The firmware could wipe as soon as the clock in the device hits time X; if you distribute these close enough to X, even an experienced hacker would be unable to get around the deletion without destroying the whole device.
Alternatively, this dongle could contain the necessary private key to decrypt the file instead of the data itself, or another component required to unlock the data a la RSA SecurID.
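A cheap software analogue of that "second component" idea is a two-share key split. A Python sketch, illustrative only, where neither share alone reveals anything about the key:

```python
# Sketch of a two-component key split (simple XOR secret sharing):
# each share alone is indistinguishable from random noise.
import os

key = os.urandom(32)                 # the real decryption key
share_dongle = os.urandom(32)        # lives only on the hardware token
share_file = bytes(a ^ b for a, b in zip(key, share_dongle))

# Recombine only at decryption time. Destroying the dongle "revokes"
# every copy of the file that carries share_file alone.
recovered = bytes(a ^ b for a, b in zip(share_file, share_dongle))
assert recovered == key
```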
I would be greatly interested to see relatively secure self-destructing USB sticks.
Obviously someone interested in copying the data at whatever cost will be able to do it, but that's not the use case pertinent to this story. This would not be designed to taunt your enemies, but rather ensure security of data in the hands of individuals who may not understand how to handle it properly.
The Guardian was operating under a grievous misunderstanding about the nature of the encrypted data, but from my vantage point I don't see that they acted out of intentional malice. If you are distributing data to compliant parties and just want to ensure a tidy cleanup to prevent mishandling or theft, something like this definitely could be useful.
Your only defense is compartmentalization. Segregate the data and encrypt each segment separately. Communicate the data and keys through separate channels to separate parties. Hope that, therefore, a compromise is limited to a single compartment.
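Roughly like this -- a Python sketch with made-up helper names and an arbitrary segment size, using the `cryptography` package for brevity:

```python
# Sketch of compartmentalization: one independent key per segment,
# so a leaked (file, key) pair exposes only that segment.
from cryptography.fernet import Fernet

def compartmentalize(cables: list, per_segment: int = 1000):
    segments = [cables[i:i + per_segment]
                for i in range(0, len(cables), per_segment)]
    out = []
    for seg in segments:
        key = Fernet.generate_key()          # unique key per segment
        token = Fernet(key).encrypt(b"\n".join(seg))
        out.append((token, key))             # ship these via SEPARATE channels
    return out

# A compromise of segment 3's key + ciphertext reveals nothing about
# segment 4; there is no master password to publish in a book.
```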
You could also make decryption dependent upon a network connection (e.g. Adobe DRM, et al.), but with "the opposition" potentially in control of the network and/or able to compromise your physical security, and with the decrypted results readily copyable (they always are, one way or another), this is probably more trouble than it's worth.
P.S. I didn't mean Adobe DRM specifically; I was just citing it as an example of such a thing (though, truth be told, I've never looked at how they do theirs in detail).
$500m would be more than sufficient to do all of those things you've outlined. If you can put up 25% (random percent) of the money, then you'd be able to get investors to put in the rest. Only a nut would invest the entire amount themselves.
But that aside, we're talking here about quality of life, which does not equal being able to pay for the craziest, most expensive ideas you can come up with.
>Only a nut would invest the entire amount themselves.
Not everything I'd want to do would have an immediate commercial payoff, which would rule out investors. That, in turn, would rule out anyone else doing it, which is why having a wealthy backer is necessary.
>But that aside, we're talking here about quality of life, which does not equal being able to pay for the craziest, most expensive ideas you can come up with.
If I'm retired, the ability to do those things would be a huge influence on quality of life for me.
> Not everything I'd want to do would have an immediate commercial payoff, which would rule out investors. That, in turn, would rule out anyone else doing it, which is why having a wealthy backer is necessary.
If something is a good idea, you can probably persuade some other people to contribute to it. If you can't persuade anyone, it's probably not a good idea. Even if $500M wouldn't get the job done, it'd be enough to fund a prototype or demonstrator that could attract more funding.
Yes, but you see, in order for him to actually get the gig as an Apple intern he'll have to sign his soul away and the consequences of leaking any information after he's signed those papers will be draconian.
For once I'd like to see a little more honesty... you know: "we did it for the money" rather than "we did it for the users". Liars.
--------------------
2. Why is CNET Download.com making this change?
The same reason you have your applications on Download.com – for the users. The CNET Download.com Installer ensures a safe and improved download experience by making it easier for Download.com users to complete downloads and launch the software’s installer.
Just because someone posts a benchmark that is obviously flawed doesn't mean that someone else is obligated to do their own benchmark as a counter.
These guys didn't ask for node.js to be used in any benchmark.
The onus is on the person who chooses to do a benchmark, and publish the results, to get it right. If they don't get it right, then they are in the wrong and deserve the criticism that they get as a result.
Because it's like complaining about the weather. It doesn't do anything to make things better. I think there's too much negativity on the Internet already.
There is too much negativity on the Internet, but there's also too much of just about everything else. The ease of publishing on the Internet means that people don't think too critically about their work prior to publishing it.
For example, if these guys were publishing this benchmark as part of a study or argument in a quarterly magazine -- and this was their only opportunity to publish it for the next year or so -- then they would have been far more critical of their own work prior to publishing it. Instead, they've quickly thrown something together and pushed it onto the Internet without much further thought.
To some extent this is like peer review. If I submit a paper to a journal, should my response to all criticism be, "Well, if you think I didn't do the experiment right, then you do it!"
The overall value of one person to a company is hard to measure. Yes, they might write great code, but if their abrasive, asshole personality brings down the productivity of the other five developers on the team, is it worth it?
Probably not in that scenario, but plenty of anti-social types manage not to bring down the productivity of everyone around them.
If someone's social problems are destroying their value to the company then by all means they should not be employed there. I simply strenuously object to the blanket statement that all anti-social people should be completely unemployable regardless of circumstance or value.
Credit markets and the shadow banking system http://en.wikipedia.org/wiki/Late-2000s_financial_crisis#Cre...