
I saw a talk about medical device security (or lack thereof) at the Eleventh Hope a few weekends ago. Very scary. They started off with a story about a hospital patient who became horribly addicted to morphine because he was able to hack his own pump using resources found online (http://www.massdevice.com/hospital-patient-hacks-his-own-mor...). Go on Shodan and search for medical devices and terminology (e.g. "radiology") and you'll see the state of things: sensitive machinery exposed on the open internet. A lot of medical devices have hardcoded passwords that are used for remote operations by technicians.

Open sourcing this code would do a lot to mitigate these issues.




The sad part is that the companies will use this security by obscurity argument against open sourcing.


Contrary to popular opinion...

Obscurity is good practice as one layer of a layered defence system.

See "Defence in Depth" https://en.wikipedia.org/wiki/Defense_in_depth_(computing)

"Defense in depth is originally a military strategy that seeks to delay rather than prevent the advance of an attacker by yielding space to buy time".

We have to acknowledge that no system is perfect; there will always be holes. A good approach is therefore to layer up the imperfect systems, which delays the attacker.

Obscurity is one of those layers: a system will always be more secure if an attacker has to find it first.


Obscurity is one possible layer, but it's not very good. Obscurity has a cost for anyone working with the system.

Obscurity doesn't scale. Things that are commonly used should not rely on obscurity.

Somebody who mass-produces computing equipment or software that many people use can't rely on obscurity, because it's economically efficient for attackers to look past it. It's also unproductive to advise others to use particular obscuring methods, because as soon as something becomes even slightly common it can be detected, and the security of the obscurity vanishes.

Obscurity must be obscure. Great minds think alike, and it's very easy to build obscurity that is similar to what everyone else thinks is a nice trick.

Genuine obscurity can provide an additional security layer (in a probabilistic, expected-value sense) against automated or routine attacks. If the obscurity takes even a small amount of time to figure out, the attacker is likely to move on to the next target. But it's hard to know how well the obscurity is working.


Passwords are obscurity, are they not? And in the end, so are 2048-bit RSA keys. It's just a prime-number needle in a haystack. Look in enough places/try enough passwords, and you will find it.


This is interesting and quite philosophical. Is there a difference between procedures and data? Isn't it all just transistors and capacitors anyway? And is anything really anything? Isn't it all just really quantum fields?

Practically speaking, obscurity is a "platform" that lets you bypass everything, whereas knowing a password is more limited since it grants access to a single user. Then again, obscurity and root passwords are similar in practice.

I wonder if there are formal definitions here that make the separation clear.


The way I see the distinction is that obscurity is about hiding the security mechanism, whereas a key is about hiding one part of the mechanism that can be mathematically analyzed to give an estimate of how long it will take to break it.

The major difference here is this: with security through obscurity, someone can reverse engineer one product and then they've broken all products. This is why someone upstream said "security through obscurity doesn't scale". Security through obscurity is often okay if you're protecting one thing, but if you're using it to protect a system (like a pacemaker) that is going to be used by a lot of people, the more people who use it, the more valuable a reverse engineering hack becomes. Security through obscurity can't be individualized to provide security to each individual--if one system is broken all systems are broken.

Compare this with key-based security--if each instance of the system has an individual, randomized key with a large enough keyspace, breaking a key will only get you into a single instance of the system. It scales because the reward for breaking the security doesn't grow as the number of system instances grows.
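
As a minimal sketch of that difference (hypothetical Python provisioning step; the names are made up for illustration):

    import secrets

    # Hypothetical provisioning step: each device instance gets its own
    # random 256-bit key, so recovering one key only breaks that device.
    def provision_device_key() -> bytes:
        return secrets.token_bytes(32)  # 32 bytes = 2^256 keyspace per device

    # Anti-pattern for contrast: one hardcoded secret shared by every unit,
    # so reverse engineering a single firmware image breaks the whole fleet.
    SHARED_HARDCODED_PASSWORD = "service123"

    device_keys = [provision_device_key() for _ in range(3)]  # all distinct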

Note that the problem with security through obscurity is basically the same problem with master keys, i.e. those used for backdoors or DRM. If someone can obtain the master key for the system, they can break all the instances of the system.


If anything, I would use a lock analogy.

Locks are rated by how many seconds/minutes they can withstand a dedicated attacker. Perhaps that would be a way to determine a similar safety rating for password/crypto based systems.

For passwords: how many passwords can you try per second before the server refuses? Then password space / attempts per second = total seconds for guaranteed entry.

Crypto: how many keys do you have to test before you find the correct one? Keyspace / (keys per second * number of machines) = total seconds.
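
A rough sketch of that arithmetic in Python (the rates below are illustrative assumptions, not measurements):

    # Time to exhaust a search space, mirroring the formulas above.
    def seconds_to_exhaust(keyspace, guesses_per_second, machines=1):
        return keyspace / (guesses_per_second * machines)

    # Online password guessing: 8 lowercase letters, server allows 10 tries/sec.
    print(seconds_to_exhaust(26**8, 10))          # ~2e10 seconds (centuries)

    # Offline key search: 128-bit key, a billion machines at a billion keys/sec.
    print(seconds_to_exhaust(2**128, 1e9, 1e9))   # ~3e20 seconds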

But in the end, I don't believe we really have a formalized difference between types of obscurity, aside from the "go to /root and get root" obvious badness. It would be a rather nice way to provide security in "seconds to millennia depending on techniques used".


We measure password and cryptographic key security based on their entropy (keyspace) and speed (key tests / second). Given current attacks (GNFS), a 2048-bit RSA key has ~112 bits of security [1] and would take ~20,000 years to brute force using every computer ever made [2]. Passwords and cryptographic keys are selected as the single point of obscurity in these systems so that many eyes may secure the other components. If the system is otherwise secure, then it is only as weak as the passwords/keys, which are (hopefully) picked to be very strong.
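
For what it's worth, this is how that kind of estimate is derived; the aggregate test rate below is an assumption I picked for illustration, and the answer scales directly with it:

    # Deriving a "years to brute force" figure from a security level.
    security_bits = 112              # NIST estimate for RSA-2048 against GNFS [1]
    assumed_rate = 1e20              # assumed total key tests per second, worldwide
    seconds = 2**security_bits / assumed_rate
    years = seconds / (3600 * 24 * 365)
    print(f"~{years:.1e} years")     # ~1.6e6 years with this assumed rate;
                                     # different rate assumptions give very
                                     # different headline numbers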

Most individuals defending algorithmic security through obscurity believe that hiding the algorithm improves security. That may be true in an extremely technical sense (the attacker must recover the algorithm first), but it is very misleading and unprofessional commentary. Algorithmic security through obscurity is at best measured in difficulty-to-reverse-engineer (or difficulty-to-steal), which neither provides per-use(r) specificity (a per-user password) nor scales in complexity (a 256-bit key is generally 2^128 times stronger than a 128-bit key, but doubling the algorithm length increases reversing time by slightly less than a factor of 2).

Algorithmic security through obscurity provides negligible security, but what's the harm? Why should we care? Attempting to hide the algorithm provides a false sense of security, limits review to "approved" parties, and induces legal/social efforts to "protect" the secret. The limited review is particularly noteworthy since it promotes bugs in both the algorithm and the implementation. The end result is a facade of security, some very unhappy whitehats, some very happy blackhats, and more users betrayed through poor security practices.

[1] http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57_p...

[2] http://tjscott.net/crypto/64bitcrack.htm#INTELG


You can change a password, and you can calculate how hard it is for an attacker to obtain a randomly generated password.

It is much harder to formalise how hard it is for an attacker to find out what algorithm you use, so it is risky to rely too much on him not being able to do so.


Theoretically yes, but the problem with obscurity is that it creates a moral hazard by lowering the visibility of all the other measures that are or are not taken to protect the system. It's not unreasonable to decide that such an extra layer of protection is not worth letting your decision makers cut corners without any feedback loop.


It's only a moral hazard if you don't trust the people who are certifying the system and who therefore aren't subject to the obscurity.

The trust question is the problem with obscurity. Do you trust the people making it obscure?

In this particular case, where safety-critical standards are relatively well known (within the industry) and not themselves obscured, they deserve to be trusted.


As long as "independent certification" companies are selected in a competitive market and paid by the system makers, they can't remove moral hazard - only shift it around.

After all, if you're a system maker, why would you hire hardasses who have rejected your products in the past? And if you're a certification house, why would you spend $$$ on many hours from experienced engineers when you could use fewer hours and junior employees, giving you happier customers and higher profit margins at the same time?

You can hire "independent" people to tell you what you want in a lot of industries. You want an "independent salary survey" to tell you that $50,000 is the market rate for an experienced programmer, but that your CEO needs a $5 million raise? Or an "independent credit rating agency" to tell you your subprime mortgage backed security is triple-A rated? The free market will happily provide such "independent" reports at the right price.


Very true... that is a real effect...

But governments also employ similar agencies to enforce the certifications.

It's not a perfect system, but it's a lot better than the developer-on-the-street realises.


Just like we trusted the people certifying VW's engine control software?


That's the point.

Do we trust them or not?

The problem with VW was not a 'bug'; it was a malicious design.


I think you're right that it can be used as a layer, but the reason we admonish against security-by-obscurity is that when you hide something, you often put less work into securing it properly.

It's like when you leave a key for someone under a door mat. You don't often consider that the door might be easily kicked in by an intruder.


"you often put less work into securing it properly."

That's the problem right there... not obscurity.


But if you do secure it properly, what value do you get from obscurity?

I think the big problem with obscurity is that its impact is asymmetric in the wrong direction: it inconveniences white hats a lot more than black hats.


Defence in depth acknowledges that there is no perfect security system.

Even if mathematically unbreakable, the implementation won't be.

This is the whole premise of defence-in-depth... delay rather than prevent.


Well sure, but there are still good and bad security systems. How does the cost/benefit of obscurity compare to alternatives?


That would be an interesting study... but one that is impractical, I think.


Well, until there's evidence of its effectiveness, I'm going to avoid using obscurity. I know how to achieve an acceptably low break-in rate using mathematically valid encryption etc. Defense in depth shouldn't be an excuse for using practices whose effectiveness you haven't evaluated at all.


You're missing the point.

An acceptably low break-in rate using mathematically valid encryption... yes, fine... given a perfect implementation.

You haven't got one of those.


No, you're missing the point. I'm talking about the real-world implementation that I have.

I don't think it's too much to ask before adopting a given security policy that it provide some evidence that it increases security. Or should I also be gathering a collection of rocks that keep hackers away?


Well, it would be if we were robots who could effectively separate the two problems. But we're not, so we need to avoid obscurity if we want security.


Design is a separate process and skill from implementation.

This applies to security and safety. There is no reason not to implement obscurity also.

One thing I can guarantee is that your implementation of any single security measure will not be unbreakable... and neither will mine.

Better to add obscurity than not.


Agreed. But one should design software without relying on the obscurity layer; then it's fine to explain how obscurity can be set up during deployment.


Absolutely... given everything else is equal, adding obscurity is a positive.


The flaw may be assuming everything else can be equal in the real world. Obscuring the algorithm has downstream consequences that may/will reduce overall security.

For example, hiding the algorithm from whitehats may prevent/discourage them from hunting/reporting bugs.


> For example, hiding the algorithm from whitehats may prevent/discourage them from hunting/reporting bugs.

Yeah, this is a serious concern; I guess it depends on the use case. "Security by obscurity considered harmful" can certainly be true; the problem is that people overgeneralize and fight over it when it should be weighed according to the circumstances.


True, but history has demonstrated countless times that closed source code doesn't provide nearly enough obscurity to deter hackers, and automated fuzzing tools make it even easier.


It also demonstrates that obscurity can significantly reduce the number of attack attempts that are made against you. See e.g. why people move SSH to non-standard ports - raising the entry bar for the attackers has some value.


Of course, obscurity on its own is not a defence.

It's one part of a whole system, and within that, if it does delay or deter (even slightly) then it has worked.


And as shown by OpenSSH, being open source does not help much in the security department either.


Security by obscurity is in practice almost always a bad idea.

1. Security by obscurity gives a false sense of security. Under no circumstances should obscurity be a deciding factor behind a management decision.

2. Security by obscurity costs money and time, and should only be used when all real security measures have been implemented. Even the military is currently not always implementing multi-token authentication, IPsec and SELinux. Instead of trusting that the medical device is safe behind two layers, a static password and a secret port, add a certificate and implement challenge and response (a minimal sketch follows this list).

3. The priority of implementing security by obscurity should be far lower than that of all the real security technologies. When reading security reports from pen testers, it's important to understand the difference between a verified code injection vulnerability and a system information disclosure. Fixing a remote code injection bug is much more important than hiding the fact that a system is running an up-to-date stable version of Debian, yet many security guides and reports from pen-testing tools rarely prioritize accordingly.

4. Security by obscurity often has real costs in support, brittleness of the system, and debugging. There are still, in 2016, firewalls that permanently block any IP address that has sent them an ICMP packet. The time employees spend unblocking customers who accidentally end up on the block list could be spent making the system that much more robust against the serious attackers who can afford to spend $50 to access a botnet for a few hours.
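
To make the challenge-and-response suggestion in point 2 concrete, here is a minimal HMAC-based sketch (Python, hypothetical names; a per-device secret stands in for the certificate mentioned above):

    import hashlib, hmac, secrets

    DEVICE_SECRET = secrets.token_bytes(32)   # provisioned per device, never hardcoded

    def make_challenge() -> bytes:
        return secrets.token_bytes(16)         # fresh random nonce per attempt

    def device_response(secret: bytes, challenge: bytes) -> bytes:
        # The device proves knowledge of the secret without ever transmitting it.
        return hmac.new(secret, challenge, hashlib.sha256).digest()

    def verify(secret: bytes, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(secret, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    challenge = make_challenge()
    response = device_response(DEVICE_SECRET, challenge)
    assert verify(DEVICE_SECRET, challenge, response)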


I agree with your numbered points, but not with the conclusion that it's always a bad idea.

It's common sense that I can't pick a lock if I can't find the lock.

This says nothing about the quality of the lock or what is behind the lock.


Almost never. I know where the bank is, I know where the door to the bank is, but that should not make it easy to break into the bank. However, a gold vault might want to keep its location hidden, since it should already have implemented all of a bank's security procedures plus extra.

Spending time on security by obscurity should be a job for the small minority of people who have already done everything else, and then only if there is a cost-benefit analysis showing the cost of the obscurity to be less than the calculated gains.


Order-of-implementation is entirely a project management issue. It does not affect the quality of the final design.

These types of products are entirely designed up front and analysed before any code is written, so the implementation order is irrelevant.


If I make a lock for myself, then it makes sense to keep it hidden. If I'm buying a lock from someone else, I'd like to know where it is, so it makes less sense to keep it hidden, at least from me and my agents.


Security by obscurity is not security; I think that's the main objection behind the phrase. Inherently, a system does not become more secure by postponing its subversion. If you have broken security, fix it. If you don't, obscurity is unnecessary.

It's qualitative versus quantitative. Security is a quality, which means it can become absolute: the complete absence of security holes. Obscurity, on the other hand, is quantitative, because you can always add more obscurity. There is no "absolute" obscurity.

"Buying time" doesn't make sense in this context. It's pacemaker software. Who's buying time by making proprietary pacemaker software and for what reason?

Buying time is only useful if you can see when an attacker begins trying to subvert your system. If he can sit at home working on it for a year, "buying time" to improve security makes no sense, as you can't spend the time improving security when you don't know you're about to be attacked.


The discussion on security comes about because the problem discussed is the messaging security (or lack thereof) of the pacemaker itself... not any bugs (or lack thereof) in the pacemaker software.

No layer of security is ever perfectly implemented, mathematically perfect though the algorithms may be... this is the key point that defence-in-depth acknowledges, and hence the key point that obscurity addresses.

Buying time certainly does gain you a lot in this context. If someone were attempting to bypass security to get into a pacemaker inside me... I would damn sure prefer the apparent "lock" to be hidden rather than in plain sight! (given everything else is equal).


> I would damn-sure prefer the apparent "lock" to be hidden rather than in plain sight

I would prefer the lock be visible to me over either situation. Otherwise, how would I know how easy it is to bypass?


Not just open sourcing, because if they just open source the code and ignore the vulnerabilities, that's more trouble. Devices should allow code to be uploaded by the user or the community, and the original developers should just write good code in the first place.



