Completely agreed. I downloaded the article fully expecting an awesome guide to /Risk/ (the game) strategy. But, once I saw that it was a DEFCON talk, I knew better.
OP: might I recommend adding a DEFCON: tag to the beginning of the title?
Cool, glad to see a focus on risk. It's quite important, often overlooked.
Spot on that the security industry (and IMHO many internal security departments) often focuses on technical rather than business risk. Who needs risk management when you have fancy sound bites and scary-sounding technical jargon?
Pedantic nitpick: the death-statistics slide asks 'what could a billion $ do for these causes?' The better question is what Takata/Honda would have done with the time and resources the outcome cost them.
From the company's point of view (purely pathological), the 'cost' of the injuries would be damage to their brand, lawsuits, etc. That should be taken into account.
Ethically though I think the recall was the right thing to do. And presumably the sooner they do it the better for everyone?
Another interesting 'risk' to look at: given some evidence that your product may be defective, what is the probability that it actually is defective (upon further testing)? And whether to test 'under the radar' or be transparent about the problem. (Putting ethics to one side for a moment.)
I'm an Iranian hacker; I don't need to watch the news to know what is going on in my country. Aside from being a gross generalization, the statement on the slide cannot be true: because of the sanctions the US has imposed on Iran, no Iranian could be selling anything on eBay in the first place.
There's a huge problem with risk models: probabilities. What's the probability of data disclosure via FREAK, and how does that compare with the potential loss of data via RAID6 failure?
I.e. how much should I and my manager worry about a given vulnerability, and how do I balance addressing vulns with the core business need to serve customers?
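One way to make that comparison concrete is the textbook annualized-loss-expectancy calculation (ALE = annual rate of occurrence × single loss expectancy). A minimal sketch, with every probability and dollar figure invented purely for illustration:

```python
# Toy annualized-loss-expectancy (ALE) comparison.
# All rates and dollar figures below are made up for illustration only.

def ale(annual_rate_of_occurrence, single_loss_expectancy):
    """Expected yearly loss = how often it happens * what it costs each time."""
    return annual_rate_of_occurrence * single_loss_expectancy

# Hypothetical FREAK-style TLS downgrade: hard to pull off, moderate impact.
freak_loss = ale(annual_rate_of_occurrence=0.02, single_loss_expectancy=500_000)

# Hypothetical multi-disk RAID6 failure: rarer, but catastrophic data loss.
raid_loss = ale(annual_rate_of_occurrence=0.001, single_loss_expectancy=20_000_000)

print(f"FREAK exposure: ${freak_loss:,.0f}/yr")  # $10,000/yr
print(f"RAID6 exposure: ${raid_loss:,.0f}/yr")   # $20,000/yr
```

The hard part, of course, is exactly the commenter's point: the dollar impacts are estimable, but the occurrence rates are mostly guesswork.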
I agree. As per Taleb (Antifragile) if you adopt conventional perspective of "probability" you will be bitten by the occurrence of the apparently improbable. Downside (here called "impact") is much easier to assess/predict, and compare against "upside", which in the case of taking infosec risks could be e.g. saved implementation cost.
I haven't read Antifragile, and it's been a while since he made the book-interview rounds. If I recall his point correctly, it's that antifragile systems need to be able to fail and grow from those failures. If we design to minimize the impact of any such failure, the system as a whole is considered reliable.
The problem with applying this to IT security is that I see no way to lower the impact. With hard drives, we use RAID to build a more resilient system from unreliable components. It seems harder to do that with the concept of customer passwords, credit card numbers and SSNs and still have a system the accountants can use to reliably file earnings reports.
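The RAID half of that claim is easy to quantify. A minimal sketch of how redundancy turns unreliable parts into a reliable whole, assuming independent disk failures (real arrays aren't independent, this ignores rebuild windows, and the 3% per-disk failure rate is a made-up figure):

```python
# Probability an n-disk array survives, given it can tolerate up to
# `tolerated_failures` failed disks (RAID6 tolerates two).
# Assumes independent failures; p_disk_fail is an invented illustrative rate.
from math import comb

def p_array_survives(n_disks, p_disk_fail, tolerated_failures):
    """Binomial probability that at most `tolerated_failures` of n disks fail."""
    return sum(
        comb(n_disks, k) * p_disk_fail**k * (1 - p_disk_fail)**(n_disks - k)
        for k in range(tolerated_failures + 1)
    )

single_disk = 1 - 0.03                      # one disk alone: 97% survival
raid6 = p_array_survives(8, 0.03, 2)        # 8-disk RAID6, 2 failures tolerated
print(f"single disk: {single_disk:.2%}, RAID6 array: {raid6:.4%}")
```

There is no analogous construction for a leaked password or SSN: you can't XOR eight partial breaches back into confidentiality, which is the commenter's point.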
Video probably won't be up for a couple months. Defcon sells the video and people usually upload them after.
My guess is that he was saying the security community regularly freaks out about small things, like complex MitM attacks, which you'd realize are not that big a deal if you followed the risk and threat frameworks he laid out. At least not the big deal they were on Twitter, blogs, etc. He's basically saying: stop crying wolf; use the risk-management tools to figure out whether something is actually a big deal, and if it is, THEN freak out. If the community freaks out only at the right moments, it will be more respected for its freak-outs, and people will respond to the problems it raises.
Yeah, just looking at the slides is missing 75% of the context. Videos of the DEF CON presentations will be uploaded to media.defcon.org at some point.
From the viewpoint of a security practitioner and engineer I think what Bruce Potter says is really important here.
Oftentimes people get hung up on fixing or doing things that are important for security but not pressing, and because of this they may miss more important issues.
There are a lot of security folks who worry about things that make very little difference. It not only makes me think less of them, it also adds noise and reduces signal. Businesses have a harder time taking security people seriously when they constantly worry about the wrong things.
The most interesting security guy I know comes from an accounting/auditing background. The similarities are amazing, especially the way they deal with risk. He's one of the few security people I know who treats security as a non-absolute, with tradeoffs in other domains. He also tends to think about things at a higher level: not just what tools we should use, but what policies we should have, how we should encourage and audit compliance, how we can measure the real organizational benefit of policy changes, etc.
To simplify all that, take just the ideas of likelihood and impact. Now, draw a quadrant. Put likelihood on one axis, impact on the other - worry about the things in the upper right.
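The quadrant above can be sketched in a few lines. A toy classifier, where the 0-1 scores, the 0.5 cutoff, and the example risks are all invented for illustration:

```python
# Toy likelihood/impact quadrant. Scores, cutoff, and entries are made up.
def quadrant(likelihood, impact, cutoff=0.5):
    """Place a risk in one of four quadrants; 'high/high' is the upper right."""
    lo = "high" if likelihood >= cutoff else "low"
    im = "high" if impact >= cutoff else "low"
    return f"{lo} likelihood / {im} impact"

risks = {
    "phished employee credentials": (0.8, 0.9),  # upper right: worry about this
    "exotic crypto downgrade":      (0.1, 0.4),  # lower left: monitor, move on
}
for name, (likelihood, impact) in risks.items():
    print(f"{name}: {quadrant(likelihood, impact)}")
```

The value isn't the code, obviously; it's that forcing every worry onto those two axes makes the "upper right" triage rule mechanical.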
I have never before seen a discussion on risk management that didn't start by drawing out that quadrant.
Slides were pretty boring. People in business deal with a lot so they sort all input based on an assessment of how much they actually need to care. If you shape your communications with them in such a way that makes this process easier for them, they will respect you more. Film at 11.
I didn't quite grok the talk from just the slides. Is it a guide to assessing the seriousness of a risk? Is it showing that the security industry takes many risks it shouldn't and spends too much on less risky issues? Was that what the car recall was about?
His opinion was that the risk was negligible. You have to intercept the traffic and then run an offline attack to decrypt the data before you even know if the traffic is valuable. It is a highly inefficient and potentially costly exercise. I'm typing this while on a plane with a severe hangover... I hope this is coherent.