I saw a talk about medical device security (or lack thereof) at the Eleventh Hope a few weekends ago. Very scary. They started off with a story about patients in a hospital who became horribly addicted to morphine because they were able to hack the machine from resources found online (http://www.massdevice.com/hospital-patient-hacks-his-own-mor...). Go on Shodan and search for medical devices and terminology (e.g. "radiology") and you'll see the state of things. Sensitive machinery exposed on the open internet. A lot of medical devices have hardcoded passwords that are used for remote operations by technicians.
Open sourcing this code would do a lot to mitigate these issues.
"Defense in depth is originally a military strategy that seeks to delay rather than prevent the advance of an attacker by yielding space to buy time".
We have to acknowledge that no system is perfect and there will always be holes, so a good approach is to layer up the imperfect systems, which delays the attacker.
Obscurity is one of those layers: a system will always be more secure if the attacker has to find it first.
Obscurity is one possible layer, but it's not very good. Obscurity has a cost for anyone working with the system.
Obscurity doesn't scale. Things that are commonly used should not rely on obscurity.
Somebody who mass-produces computing equipment or software that many people use can't rely on obscurity, because it's economically efficient for attackers to look past it. It's also unproductive to advise others to use some obscuring method, because as soon as something becomes even slightly common, it can be detected and the security of obscurity vanishes.
Obscurity must actually be obscure. Great minds think alike, and it's very easy to build obscurity that is similar to what everyone else thinks is a nice trick.
Genuine obscurity can provide an additional security layer (in a probabilistic, expected-value sense) against automated or routine attacks. If the obscurity takes even a small amount of time to figure out, the attacker is likely to move on to the next target. But it's hard to know how well the obscurity is working.
Passwords are obscurity, are they not? And in the end, so are 2048 bit RSA keys. It's just a prime-number needle in the haystack. Look enough places/try enough passwords, and you will find it.
This is interesting and quite philosophical. Is there a difference between procedures and data? Isn't it all just transistors and capacitors anyway? And is anything really anything? Isn't it all really just quantum fields?
Practically speaking, obscurity is a "platform" that lets you bypass everything, whereas knowing a password is more limited since it grants access to a single user. But in that sense, obscurity and root passwords are similar.
I wonder if there are formal definitions here that makes the separation clear.
The way I see the distinction is that obscurity is about hiding the security mechanism, whereas a key is about hiding one part of the mechanism that can be mathematically analyzed to give an estimate of how long it will take to break it.
The major difference here is this: with security through obscurity, someone can reverse engineer one product and then they've broken all products. This is why someone upstream said "security through obscurity doesn't scale". Security through obscurity is often okay if you're protecting one thing, but if you're using it to protect a system (like a pacemaker) that is going to be used by a lot of people, the more people who use it, the more valuable a reverse engineering hack becomes. Security through obscurity can't be individualized to provide security to each individual--if one system is broken all systems are broken.
Compare this with key-based security--if each instance of the system has an individual, randomized key with a large enough keyspace, breaking a key will only get you into a single instance of the system. It scales because the reward for breaking the security doesn't grow as the number of system instances grows.
Note that the problem with security through obscurity is basically the same problem with master keys, i.e. those used for backdoors or DRM. If someone can obtain the master key for the system, they can break all the instances of the system.
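A toy illustration of that scaling property, in Python (the provisioning code and names here are hypothetical, nothing vendor-specific):

    import secrets

    # Toy contrast: one shared "obscure" secret vs. per-device random keys.
    MASTER_SECRET = b"hardcoded-service-password"   # leaks once, every device falls

    def provision_device():
        # Each device gets its own 256-bit random key; learning one key
        # (or reverse engineering one unit) reveals nothing about the rest.
        return {"device_key": secrets.token_bytes(32)}

    fleet = [provision_device() for _ in range(1000)]
    assert len({d["device_key"] for d in fleet}) == len(fleet)  # all keys distinct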
Locks are rated in how many seconds/minutes they can withstand from a dedicated attacker. Perhaps that would be a way to determine a similar safety rating for passwords/crypto based systems.
In passwords: how many passwords can you try per second before the server refuses? Then password space / attempts per second = total seconds for guaranteed entry.
Crypto: how many keys do you have to try before you find the correct key? Keyspace / (keys per second × number of machines) = total seconds.
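A minimal sketch of that arithmetic in Python (the guess rates, password policy, and keyspace below are made-up assumptions, purely for illustration):

    # Illustrative numbers only; rates and sizes are assumptions.
    def worst_case_seconds(search_space, attempts_per_second, machines=1):
        """Time to try every candidate, per the formulas above."""
        return search_space / (attempts_per_second * machines)

    # Passwords: a server that tolerates ~10 guesses per second
    password_space = 62 ** 8   # 8 characters drawn from [A-Za-z0-9]
    print(worst_case_seconds(password_space, 10) / 86400, "days")

    # Keys: 1e9 keys/second on each of 1e6 machines against a 64-bit toy keyspace
    print(worst_case_seconds(2 ** 64, 1e9, 1e6) / 86400, "days")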
But in the end, I don't believe we really have a formalized difference between types of obscurity, aside from the "go to /root and get root" obvious badness. It would be a rather nice way to rate security, in "seconds to millennia depending on techniques used".
We measure password and cryptographic key security based on their entropy (keyspace) and speed (key tests / second). Given current attacks (GNFS), a 2048-bit RSA key has ~112 bits of security^1 and would take ~20,000 years to brute force using every computer ever made^2. Passwords and cryptographic keys are selected as the single point of obscurity in these systems so that many eyes may secure the other components. If the system is otherwise secure, then it is as weak as the passwords/keys which are (hopefully) picked to be very strong.
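A back-of-the-envelope version of those magnitudes (the aggregate attack rate below is an arbitrary assumption, not a claim about real hardware, so the result differs from the ~20,000-year figure above):

    # Arbitrary assumed attack rate, for scale only.
    SECONDS_PER_YEAR = 3600 * 24 * 365
    bits_of_security = 112        # roughly RSA-2048 against GNFS, per above
    keys_per_second = 10 ** 15    # hypothetical aggregate rate, all machines combined

    years = 2 ** bits_of_security / keys_per_second / SECONDS_PER_YEAR
    print(f"~{years:.1e} years to exhaust a {bits_of_security}-bit keyspace")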
Most individuals defending algorithmic security through obscurity believe that hiding the algorithm improves security. That may be true in an extremely technical sense (the attacker must recover the algorithm first), but it is very misleading and unprofessional commentary. Algorithmic security through obscurity is at best calculated in difficulty-to-reverse-engineer (or difficulty-to-steal), which doesn't provide per-use(r) specificity (per-user password) nor scale in complexity (a 256-bit key is generally 2^128 times stronger than a 128-bit key, but doubling the algorithm length increases reversing time by slightly less than a factor of 2).
Algorithmic security through obscurity provides negligible security, but what's the harm? Why should we care? Attempting to hide the algorithm provides a false sense of security, limits review to "approved" parties, and induces legal/social efforts to "protect" the secret. The limited review is particularly noteworthy since it promotes bugs in both the algorithm and the implementation. The end result is a facade of security, some very unhappy whitehats, some very happy blackhats, and more users betrayed through poor security practices.
You can change a password, and you can calculate how hard it is for an attacker to obtain a randomly generated password.
It is much harder to formalise how hard it is for an attacker to find out what algorithm you use, so it is risky to rely too heavily on them not being able to do so.
Theoretically yes, but the problem with obscurity is that it creates moral hazard by lowering the visibility of all the other measures that are or are not taken to protect the system. It's not unreasonable to decide that such an extra layer of protection is not worth letting your decision makers cut corners with no feedback loop to catch them.
It's only a moral hazard if you don't trust the people who are certifying the system and therefore aren't subject to the obscurity.
The trust question is the problem with obscurity. Do you trust the people making it obscure?
In this particular case, where safety-critical standards are relatively well known (within the industry) and not themselves obscured, they deserve to be trusted.
As long as "independent certification" companies are selected in a competitive market and paid by the system makers, they can't remove moral hazard - only shift it around.
After all, if you're a system maker, why would you hire hardasses who have rejected your products in the past? And if you're a certification house, why would you spend $$$ on many hours from experienced engineers when you could use fewer hours and junior employees, giving you happier customers and higher profit margins at the same time?
You can hire "independent" people to tell you what you want in a lot of industries. You want an "independent salary survey" to tell you that $50,000 is the market rate for an experienced programmer, but that your CEO needs a $5 million raise? Or an "independent credit rating agency" to tell you your subprime mortgage backed security is triple-A rated? The free market will happily provide such "independent" reports at the right price.
I think you're right that it can be used as a layer, but the reason we admonish against security-by-obscurity is that when you hide something, you often put less work into securing it properly.
It's like when you leave a key for someone under a door mat. You don't often consider that the door might be easily kicked in by an intruder.
But if you do secure it properly, what value do you get from obscurity?
I think the big problem with obscurity is that its impact is asymmetric in the wrong direction: it inconveniences white hats a lot more than black hats.
Well, until there's evidence of its effectiveness I'm going to avoid using obscurity. I know how to achieve an acceptably low break-in rate using mathematically valid encryption etc. Defense in depth shouldn't be an excuse for using practices whose effectiveness you haven't evaluated at all.
No, you're missing the point. I'm talking about the real-world implementation that I have.
I don't think it's too much to ask before adopting a given security policy that it provide some evidence that it increases security. Or should I also be gathering a collection of rocks that keep hackers away?
The flaw may be assuming everything else can be equal in the real world. Obscuring the algorithm has downstream consequences that may/will reduce overall security.
For example, hiding the algorithm from whitehats may prevent/discourage them from hunting/reporting bugs.
> For example, hiding the algorithm from whitehats may prevent/discourage them from hunting/reporting bugs.
Yeah, this is a serious concern; I guess it depends on the use case. Sure, "security by obscurity considered harmful" could be true; that's the thing people overgeneralize and fight over, when it should be weighed depending on the circumstances.
True, but history has demonstrated countless times that closed source code doesn't provide near enough obscurity to deter hackers, and automated fuzzing tools make it even easier.
It also demonstrates that obscurity can significantly reduce the number of attack attempts that are made against you. See e.g. why people move SSH to non-standard ports - raising the entry bar for the attackers has some value.
Security by obscurity is in practice almost always a bad idea.
1# security by obscurity gives a false sense of security. Under no circumstance should obscurity be used as a deciding factor behind a management decision.
2# security by obscurity costs money and time, and should only be used when all real forms of security measures have been implemented. Even the military is currently not always implementing multi-token authentication, IPsec and SELinux. Instead of trusting that the medical device is safe behind two layers, a static password and a secret port, add a certificate and implement challenge and response (a minimal sketch follows this list).
3# the priority of implementing security by obscurity should be far lower than that of all the real security technologies. When reading security reports by pen testers, it's important to understand the difference between a verified code injection vulnerability and a system information disclosure. Fixing a remote code injection bug is much more important than hiding the fact that a system is running an up-to-date stable version of Debian, yet many security guides and reports from pen-testing tools rarely prioritize accordingly.
4# security by obscurity often has real costs in support, brittleness of the system, and debugging. There are still, in 2016, firewalls that will permanently block any IP address that has sent them an ICMP packet. The amount of work employees spend unblocking customers who accidentally end up on the block list could be spent on making the system just that much more robust against the serious attackers who can afford to spend $50 on a botnet for a few hours.
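For the challenge-and-response idea in 2#, a minimal shared-key sketch in Python (a real device would more likely use per-device keys or certificates; the names here are illustrative):

    import hashlib, hmac, secrets

    DEVICE_KEY = secrets.token_bytes(32)    # provisioned per device, never transmitted

    def make_challenge():
        return secrets.token_bytes(16)      # fresh nonce for every session

    def respond(challenge, key):
        return hmac.new(key, challenge, hashlib.sha256).digest()

    def verify(challenge, response, key):
        expected = hmac.new(key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    challenge = make_challenge()
    assert verify(challenge, respond(challenge, DEVICE_KEY), DEVICE_KEY)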
Almost never. I know where the bank is, I know where the door to the bank is, but that should not make it easy to break into the bank. However, a gold storage facility might want to keep its location hidden, as it should have already implemented all the security procedures of a bank plus extra.
Spending time on security by obscurity should be a job for the small minority of people who have already done everything else, and then only if there is a cost-benefit analysis showing the cost of the obscurity to be less than the calculated gains.
If I make a lock for myself, then it makes sense to keep it hidden. If I'm buying a lock from someone else, I'd like to know where it is, so it makes less sense to keep it hidden, at least from me and my agents.
Security by obscurity is not security, I think that's the main objection behind the phrase. Inherently, a system does not become more secure by postponing its subversion. If you have broken security, fix it. If you don't, obscurity is unnecessary.
It's qualitative versus quantitative. Security is a quality, which means it can become absolute: the complete absence of security holes. Obscurity, on the other hand, is quantitative, because you can always add more obscurity. There is no "absolute" obscurity.
"Buying time" doesn't make sense in this context. It's pacemaker software. Who's buying time by making proprietary pacemaker software and for what reason?
Buying time is only useful if you can see when an attacker begins trying to subvert your system. If he can sit at home working on it for a year, "buying time" to improve security makes no sense, as you can't spend that time improving security when you don't know you're about to be attacked.
The discussion on security comes about because the problem discussed is about messaging security (or lack thereof) on the pacemaker itself... not about any bugs (or lack thereof) in the pacemaker software.
No layer of security is ever perfectly implemented, mathematically perfect though the algorithms may be... this is the key point that defence-in-depth acknowledges... and hence is the key point that obscurity addresses.
Buying time certainly does gain you a lot in this context. If someone was attempting to bypass security to get into a pacemaker inside me... I would damn sure prefer the apparent "lock" to be hidden rather than in plain sight! (given everything else is equal).
Not just open sourcing, because if they just open source it and ignore the vulnerabilities, that's more trouble. The device should allow code to be uploaded by the user or by the community, and the original developers should just write good code in the first place.
You most certainly don't want people to be able to modify safety critical code within a pacemaker.
What most developers don't realise is the level of engineering strictness that goes into anything safety-related. The rules and regulations related to anything that affects the human body are in a different league than what most developers are familiar with.
What is a problem here, is that the design (not the code) apparently did not take into account any messaging security, relying on obscurity as its only defence.
If the code were open-sourced, don't expect to find lots of buffer overflow attack vectors, or simple things like that. It's the design of the system as a whole that is at fault, and that is already open.
Medical devices such as these are not black boxes to the people that certify them, everything is open to them, source included.
Having worked in that sort of area, I trust the systems that are in place.
Industrial control systems are also safety critical systems that have to adhere to very similar regulations as medical systems. Yet, when you ask hackers in that field you'll quickly learn that they have terrible code and abysmal security. Rules and regulations do very little to improve code quality and security, imho. Most things in these regulations are either best practices that any software engineer does (write tests, for example, or do code reviews), or just checkboxes for the QA to check.
I'd argue that over time safety critical systems have worse code than normal software, because refactoring is almost impossibly expensive due to all the paperwork that each software change entails.
"industrial control" covers both certified and non-certified code, you will have to be more specific than that for the purposes of this conversation.
Quite often, cases such as the pacemaker are flaws in system design, not code. (e.g. no secure messaging, probably due to the lack of awareness when it was designed).
That is not to say that safety-critical code is perfect... just that it has a lot more rigour and inspection involved than run-of-the-mill website code.
True, the expense and overhead do indeed affect the level of change that is acceptable to a business, but given that change is allowed, the quality of that change is what we're discussing here, and all I'm saying is that there is a lot more rigour involved than most developers here are aware of.
> That is not to say that safety-critical code is perfect... just that it has a lot more rigour and inspection involved than run-of-the-mill website code.
I had assumed that as well until all of the horror stories around Toyota's firmware came to light.
Yes, the Toyota case is a well-publicised one.
Consider though, the number of safety critical systems that are out there performing perfectly everyday.
Of course, that is not proof of much, but the fact that you can name the Toyota case (and probably the Therac 25 case) means that the process generally works.
When the paperwork is describing the system design from high level down to low-level to sufficient detail that an external auditor can use it to examine the source then no, that is not true.
The security/safety processes (as applicable) are examined at every level from system-design all the way down to implementation.
It's not perfect, but it's the best practice we have for secure/safe product development.
The Software Freedom Law Center claims that the FDA requirements for medical devices put most of the responsibility for quality assurance on the manufacturer. It does not seem like the FDA provides much assurance of quality. Note: this is not my field of expertise and there is a good chance I am very wrong; I'm just following a short paper trail of laws, and there may be more laws (or practices) I am missing that would indicate different FDA regulatory controls for medical devices.
> rules and regulations [...] in a different league than what most developers are familiar with. [...] the design (not the code) apparently did not take into account any messaging security, relying on obscurity as its only defence.
Imagine you were building a suspension bridge to the highest safety standards, and you had people with microscopes manually inspect every grain of sand and cement that went into the foundations. But the suspension cables were made of old washing lines.
You would be achieving a very high and a very low standard of inspection at the same time.
Some people would say standards of inspection are only as strong as their weakest link; that the whole claim of such inspection is to prove the absence of such weak links; and that a standard that has approved such a bridge has, by so doing, shown the claimed proof of safety is a useless joke.
Other people would say the inspection is a checklist of common errors, rather than a proof of the safety of every aspect of the system; that proof of perfect safety was never its claim or goal; and that this merely shows there should be some extra boxes on the checklist, which was in any case a constantly evolving document.
Inspection in this case is a proof that the design you're certifying is fit-for-purpose.
So in your example, it would show that the loading and stresses on your washing lines were sufficiently low to meet the safety margin of the bridge.
Certification of safety-related and medical devices is not a check-box exercise, it looks at the dynamic and static behaviour of your system as a whole (not just the software).
I mean the washing lines weren't fit for purpose (just like the insecure comms weren't fit for purpose), but the inspection didn't identify them as unfit for purpose (just like the inspection didn't identify the insecure comms as unfit for purpose).
Given that something unfit for purpose was certified, certification doesn't prove something is fit for purpose.
Of course, certification may indicate something is probably more fit for purpose than something that failed the same certification or didn't attempt it.
Software certification is a joke, and mostly consists of checking off a few boxes on workflow and development methods.
Smart electricity meters are safety critical, and I know students working on them in their summer jobs. Airplanes are safety critical, and they accidentally bridged the on-plane wifi into the fly-by-wire system.
Unless this stuff is written in Coq and formally verified, I won't trust it one bit.
Not Wifi, but IIRC a guy called Chris Roberts was able to leverage a wireline connection to an underseat IFE controller into FADEC control on multiple aircraft. Last I heard of the case, he might be going to prison for it, so apparently at least some people find him credible.
Similarly, witness recent revelations about car hackability - I forget if it was Blackhat or DEFCON where a couple of guys demoed a fully remote attack through the entertainment system's cell network access that culminated in the ability to override the steering, throttle, and brake controls.
Maybe these are not what is meant by "safety critical" in your use of the phrase. If that's so, I think it would clarify matters greatly for you to define what you mean by it that's more specific than "can kill people if it goes wrong".
- Yes, the hack of the braking and steering system via the entertainment system is a breakdown of the certification system. That should have been found earlier. This is why critical systems are air-gapped. I personally have never worked on automotive systems so am not familiar with the regulations there, but I was/am extremely surprised this was possible.
SIL usually deals with complete systems and with the probability that the system will deviate from its designed behavior; whether such designed behavior is actually correct is a somewhat orthogonal problem that is often mostly ignored (the same thing applies to attempts to apply formal proofs to software). The Wikipedia article lists some of the problems with the SIL rating itself, and I've personally seen multiple instances of "running two redundant SIL N systems produces a SIL N+1 system", often with the original N derived by wishful thinking (this seems to be especially prevalent among engineers with a railway-signalling background, where I accept that it holds for systems based on relays with redundant coils, but it's complete BS for things with non-trivial software).
Various attacks on car ECUs cleanly show what the problem with the current relevant certification processes is: each component is certified separately and nobody cares about the complete resulting system (the head unit is not safety critical; the engine ECU is, but has no untrusted inputs; and there is a bunch of things in between that are classified as one of those two categories). What is ironic in the automotive case is that the whole reason there is an immense number of separate ECUs in a typical car[1] is safety (i.e. limiting the impact of one ECU completely failing) and safety certification.
[1] my car has a separate ECU for each door, even though the rear doors do not have power windows and the central locking uses dedicated wires. I assume that the only purpose of said ECU is so the diagnostics system can detect when a door is missing.
Interesting points about the automotive world.... and I absolutely agree with you about the certification problems with a mix of safe/non-safe components.
I've never heard the 2xN = 1x(N+1) argument, but will that improve for >2 ? e.g. triply-redundant systems?
I think the root problem there is the "wishful thinking" isn't it?
The 2xN = 1x(N+1) argument is usually presented as being somehow directly derived from IEC 61508, which is certainly nonsense. I've even seen reasoning along the lines of "it runs on Windows NT, thus it's inherently SIL2; there are two redundant IPCs running that code, so it's SIL3".
As for dual-redundant vs. triply-redundant, it highly depends on whether shutting the system down on failure of one redundant component is a desirable outcome. Both railway signalling and most industrial systems can be and are designed that way, but for systems where that is not possible (either because they need some non-trivial actions to get into a safe state, or because the whole SIL dance is about keeping the thing running at all costs for business/legal reasons), dual redundancy actually decreases reliability, because the additional components handling the redundancy have their own failure probability (and in many systems they fail more often than whatever they are supposed to protect from failure, especially in master-slave systems that attempt to detect which of the halves has failed and respond by failing over to the other one).
An interesting approach that is often used for road traffic signalling and general industrial control is to have a second control system that only checks that the outputs of the primary control system are consistent (for traffic signals, this is a trivial boolean function of the outputs, usually implemented in hardwired circuitry) and shuts the whole thing down when they are not.
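A toy software analogue of that checker, in Python (the signal names and conflict table are assumptions, not any real controller's configuration):

    # Toy output checker: a second, simpler system that only verifies that the
    # primary controller's outputs are mutually consistent and forces a safe
    # shutdown (e.g. all-red / flashing) when they are not.
    CONFLICTS = [("north_south_green", "east_west_green")]   # assumed junction layout

    def outputs_consistent(outputs):
        return not any(outputs.get(a) and outputs.get(b) for a, b in CONFLICTS)

    def supervise(outputs, shutdown):
        if not outputs_consistent(outputs):
            shutdown()   # drop to the hardwired safe state

    supervise({"north_south_green": True, "east_west_green": True},
              shutdown=lambda: print("fault: forcing all-red"))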
"The described technique cannot engage or control the aircraft's autopilot system using the FMS or prevent a pilot from overriding the autopilot," the FAA's statement explained. "Therefore, a hacker cannot obtain 'full control of an aircraft' as the technology consultant has claimed."
"The statement went on to explain that although Teso may have been able to exploit aviation software running on a simulator, as he described in his presentation, the same approach wouldn't work on software running on certified flight hardware."
Certifiers typically don't review source code. They are operating at least two levels of abstraction away: they review documents that describe procedures that are supposed to ensure quality.
> If the code was open-sourced, don't expect to find lots of buffer overflow attack vectors
That is a big fat citation needed. Researchers working without access to source code have demonstrated multiple vulnerabilities in safety-critical medical devices. We have no way of knowing how many more they would find if they also had access to the source.
I used to reverse-engineer medical equipment (MRIs, CTs, etc.). The lack of engineering strictness in these systems was amazing to me. They are horribly insecure, BY DESIGN.
My experience is that, no, this is not an acceptable solution. I have no reason to believe that implanted medical devices will be any better.
We hear a lot about how digital obsolescence is a growing problem, and almost all of it refers to not being able to access your old family photos and movies, or maybe old documents and spreadsheets. But what happens when your pacemaker is obsolete, the source code is long lost, and no-one knows how to update it?
Is this problem being addressed in any real way? 50 years in the future some of today's devices may still be operating in peoples' bodies, and it seems hard to believe that anyone would still have the knowledge and/or tools to upgrade them. And surely it's quite a big deal to open someone up to replace the hardware every 5 years?
The pacemaker or ICD generator is replaced when the battery is exhausted, typically 8–10 years. The procedure is not a big deal, it is commonly outpatient and done under local. Outside of some durable orthopedic implants, few implants will survive in the body for 50 years: it is a very hostile environment.
I've never thought of it this way, and you are right from both a technological and biological perspective.
Biologically we are wonderful containers of nutrients, but we have an army only the very sneaky or militant can overcome. Once that army stands down we are rapidly colonized - which is why we must be so careful with food/meat storage.
You just made me realize why bacteria and other things want to kill us so badly... we are precisely that, really extremely-high concentration stores of nutrients.
Useful to know. Of course, batteries get better and electronics gets lower powered - I can imagine in the near future implanted devices being effectively passive (such as RFID)... but your point about the body being a hostile environment is definitely something I hadn't considered before.
To expand on one aspect of hostile, some white blood cells greet foreign bodies with micromolar concentrations of a variety of oxidants and radicals: hypohalous acids (not just the familiar household bleach, hypochlorous acid, but also the nastier hypobromous acid), peroxides, superoxide, nitroxides, maybe even hydroxyl radical.
Realizing that, it's easy to see why autoimmune diseases are so devastating.
Of course, it's chemical warfare down there. Using strong acids and the like to destroy bacteria and virus structures is something I'd heard about, at the microscopic level, but I'd absolutely never thought about its implications on a macro scale before.
Presumably they sheath these devices in inert plastics or something in an attempt to counteract all this?
We do. Silicone, gold, porcelain, and titanium are actually pretty good for being bio-inert, as are many many other compounds. The metals obviously come with many issues too (heavy metal poisoning, weight, different elasticity, etc.) and the plastics are not as durable or tough as the metals. Also, these things tend not to be 'squishy', so impacts and wear and tear are tough on them.
Fun fact: Gold teeth caps are really good for you, though maybe not your love life. Gold has very very similar mechanical parameters to your dentin.
Overall, bioengineers are making great strides into these compounds. We are developing new heart stents that coil and expand in response to heat or can just be injected and will solidify and expand wherever they are needed. A cool development sector is in 'smart' hip replacements. They have nano-pores in the titanium to induce bone growth into the implant and create a better and more healthy bond.
Does anyone know if at least the FDA is allowed to review the source code for pacemakers? Or is it a complete blackbox? Personally I would be appalled if even the FDA is not allowed to.
That's a tough one to answer. The cynic in me says probably not. That they're so focused on pharmaceuticals and "analog" medical devices that they haven't developed those capabilities.
But I also know that the FDA is a massive organization, and there's no reason they couldn't hire for this specific purpose. But then the cynic says that government pay grades may not be up to snuff.
See the HCA rollout and subsequent rewrite.
Sorry, that was a long way of writing, "I don't know".
There have been medical devices external to the body for a long time. The Therac-25 (1982) predates the world wide web (1989). Airplanes were first flight-controlled by computer in 1958, and commercially in the Concorde in 1969.
Think about that for a minute. There has been an official government review process for the safety of computer-controlled airplanes since the 70s, and one for medical devices a good decade (likely more) before the WWW existed.
I think it's kind of funny that you'd think you were doing it better than production life-critical systems THAT CAN ACTUALLY KILL PEOPLE, which have been around at least ten if not 30 years longer than the tech you likely (sorry, assumption) base your career around.
Uh, did you really just cite Therac 25 in favor of safety review? You do know that was the one that had a bug which slipped past review and killed some people, right?
Your overall point is well taken, but maybe put a little more thought into the examples you pick to support it...
I cited it as a thing that killed people and caused an industry overhaul in how things were reviewed... Remember that Toyota was just forced to do something like this only a few years ago. I'd say the FDA has been cognizant of life-threatening code for a lot longer.
EDIT: Not sure what's up with the downvotes. There are well-established ways for regulatory agencies (whether FDA, FCC, etc.) to obtain firmware for devices -- and it almost always involves a warrant under exceptional circumstances, as opposed to proactively receiving proprietary code.
I believe the downvotes are because the FDA has to approve medical devices before they are marketed, and GP was referring to the possibility that the FDA would review the source code in connection with device approval. (It reviews most other aspects of the device's functioning, after all.) Definitely no warrant required in that context.
What if they had to subpoena a tomato grown in a field next to the toxic waste dump? What if they opted not to examine that tomato because they didn't have the resources to issue, process, and support, the lengthy bureaucratic process involved in such things?
But the government can and should have the right to inspect a tomato (not necessarily a specific tomato) if all the tomatoes in that field are slated to go direct to consumers, right?
Similarly, what about testing for drug quality? You could even extrapolate it out to the SEC's right to examine a private financial transaction in order to determine legality. Or the IRS's right to inspect one's taxes to determine compliance.
Point is (IMO) there is a need put on government by society to bypass some of our "Inalienable" rights. In most cases this societal decision is necessary and makes sense, and I'm arguing that code inspection of life-critical systems is a reasonable example of such a case.
To extend the metaphor, if the farm that grew the tomato is selling them to people to eat, I don't see that a subpoena is or should be required. But it breaks down anyway, because a tomato's source code is right there in it, with no opaque binary blobs to worry about trying to decompile.
In general, you do not submit source code for review - just all your procedures and results for testing.
In normal auditing, they will not inspect your source code - they may inspect everything around your source code (what your procedures for changes are, how you do your testing, etc etc).
However, I believe there's a general understanding that if you fuck up, your source code will be open to inspection - along with everything else. Either you'll voluntarily surrender it in hopes of getting on the FDA's good side, or they'll subpoena it because you killed someone.
> However, I believe there's a general understanding that if you fuck up, your source code will be open to inspection
Hold it right there, Karl Marx. We can't just be giving the proletariat access to the means of software production because of one little boo-boo. That could totally bankrupt a company and would be a theft of IP. Rest assured that the proper procedures will be followed and the flaws corrected, but under no circumstances can we take the lawful property of entrepreneurs and daring businessmen.
I used to work at a diagnostic medical equipment company. Rules were slightly different for us (diagnostic rather than therapeutic = lower bar) and we had FDA audits. They were basically making sure we had written down our processes to an adequate degree, and checking that we actually followed what we wrote down. They most definitely were not doing code audits (which isn't their job or area of expertise).
> and checking that we actually followed what we wrote down
We never got that. For the most part, checking that we followed what we wrote down involved reading more documents that said that we did what we wrote down. The auditors weren't even allowed to walk around freely in our office, nor even be left alone without supervision, and we were all carefully coached on how to answer interviews with non-compromising responses. "I do what the SOP says. I cannot quite recall at the moment, but if you let me refresh my memory by reading the SOP, I will gladly clear it up for you."
It's just smoke and mirrors to pretend that important testing took place. Probably also to establish whom to blame when things go bad. Some testing does frequently happen, in between all the document writing about the testing.
Our FDA inspector hung around the QA manager and staff, didn't go into R&D much at all, and spent some time on the factory floor checking processes.
I guess that they function as regular police do - the actual experience is less than ideal and leaves a lot to be desired, but without any at all, the world would be horrible.
The FDA probably doesn't even know what source code is. They have vague regulations on how medical devices should be tested, which by tradition has been interpreted in a particular way to mean certain kinds of documents have to be prepared. There are auditors that check that those documents are written. Nobody checks that what the documents say about the software is in fact true because neither those writing the regulations nor those auditing the documents really know anything about computers.
As someone who has been on the receiving end of several FDA audits, I would really like to know on what basis you're saying all of this, because everything in my 10 years of experience of doing this is contrary to what you've said.
Maybe we worked in different fields? I only did 3 years, only received one FDA audit, but many other audits from pharmaceutical companies. I was in the more diagnostic side, not therapeutic, although we did have some safety checks where we had to quickly raise an alarm if our analysis showed a potential medical emergency.
Did the FDA actually check your software or just your documentation? Did the auditors carefully grill you or just languidly tick off boxes on a form? Did you find the process of writing documentation instrumental in ensuring that your software was carefully tested? Did you ensure that your tests were reproducible and comprehensive?
None of these things were done very carefully in our case. My superiors were very insistent on the documentation and were quite proud of the quality of our software but they mostly never had any interaction with it and had no real idea of what we did for software validation. They were more concerned with making sure signatures and dates were correct and that our documentation didn't make us look bad.
I think the audits must look more impressive when you're in charge, but as the one actually writing the software I was thinking... that's it? The FDA doesn't really give a damn about what I really worked on, do they?
Class III devices (pacemakers, deep brain stimulators, insulin pumps)
> Did the FDA actually check your software or just your documentation?
In a submission, occasionally. In an audit, it depended on the nature of an audit. I've been part of one audit that was due to persistent SW failure and yes, they checked code.
> Did you find the process of writing documentation instrumental in ensuring that your software was carefully tested?
No, but that's not really the point of documentation.
> Did you ensure that your tests were reproducible and comprehensive?
Yes, we could kill people if we screwed something up.
> They were more concerned with making sure signatures and dates were correct and that our documentation didn't make us look bad.
This practice is common, I grant. Doesn't mean it's universal or right.
> The FDA doesn't really give a damn about what I really worked on, do they?
FDA heavily triages activity based on risk. If your product is life-sustaining they will definitely be all up in your business.
I can't recommend this talk enough. If you have the time, it is one of the best keynotes I've ever been privileged enough to watch.
Ever since then, I've been very interested in all the paperwork sent to my grandfather about his pacemaker + defib implant. Recently they mentioned to him that they will send out a remote monitoring device, and Karen's talk immediately jumped to mind.
Initially the documentation talked about how the remote monitoring device needed to be held close to the heart for a few seconds in order to download the relevant data. This gave me a modicum of hope that at the very least it was some sort of NFC type communication. This would at least make it harder to physically exploit.
However, they continued to say that if you want it to monitor you every night, just put it within three metres of your bed while you sleep. I had a search online and there does not seem to be a lot of people particularly interested in this area, although repos such as https://github.com/openaps/decocare give me hope in the ability of the community to reverse the relevant protocols and investigate these devices.
One could frame the entire computing industry as a distributed genetic algorithm, executed by the real computers in order to understand themselves and the environment around them. One could further posit that we don't really have a good handle on the right fitness function yet.
(I realize this sounds like a low-effort joke, but think about it for a second.)
Well, computing today seems like primordial soup flowing through the pipes set up by Moloch[0] - we keep doing random shit, somewhat directed by economic incentives.
Not similar at all actually. You're talking about software written over a couple of years (by humans) that is supporting our current biology versus biological processes that have "optimized" us over tens of thousands of years.
Do you want to control the device, or do you want someone else to control your devices?
If you're OK with someone else (who probably doesn't have your well-being anywhere in their list of priorities) controlling all of your devices, then the answer is no, you are not required to have access to the source code and output data.
EDIT: I didn't realize this was such a controversial statement. I stand by it, though; even as I knowingly own many such devices myself. I can certainly see, however, how I may want more control over a pacemaker than, say, my phone.
If someone else is controlling the device, then they de facto own it and should be liable for any damage it causes. The user cannot both be liable for something and be unable to manage/fix it.
In the case of a pacemaker or other medical devices, this liability could even include manslaughter.
There's another whole level to this question of control, which is do you want someone else's algorithm, written in the past, to control your device, or do you want them to have realtime access to change your device's behaviour whenever they want? The latter is becoming more and more common and, I think, is far more disturbing than the former.
(As an aside, the suggestion that a software developer writing firmware for pacemakers "probably doesn't have your well-being anywhere in their list of priorities" seems unfair. You wouldn't believe the amount of effort that goes into ensuring that critical systems behave properly.)
> You wouldn't believe the amount of effort that goes into ensuring that critical systems behave properly.
I wouldn't either. I have seen some of it, and not that much effort goes into it. A lot of paperwork goes into saying that the effort was done, but the actual effort carried out pales in comparison.
It's more about finding someone else to blame than to make sure nothing bad happens. Organisations like the frickin' FDA who have no idea how software is developed are in charge of worldwide standards of medical software bureaucracy.
In theory, open sourcing pacemaker software makes sense, but in practice, the pool of people qualified to review and edit that software may not be very large. Most people would still be depending on someone else to control their devices in any case.
There are probably very few parts of a pacemaker's code that require knowledge about medical devices or pacemakers specifically - probably the drivers that connect to the sensors and the algorithms that determine what the pacemaker should do based on the sensor readings.
The majority of the code, especially if as said in the article it has wifi and listens to external requests, is code that requires only expertise in more general software, of which there are many many people who are qualified to review such software (or write it).
If I had a pacemaker I would most definitely review the source code for it if I could get access to it. Even if it meant learning golang or ARM assembly or whatever the kids use to build pacemakers these days.
I'm sure I'd lack a lot of the heart mechanics side of the equation, but I would be very highly incentivised to hunt out programming bugs.
Y'know, back in the early 2000's and the days of Slashdot, it was quite common to find people who advocated for free software everywhere.
Now we find people who, like yourself, have to specify that the radical position that all software should be free is something worthy of serious consideration; that they're not joking or trying to be deliberately provocative.
What happened to us? Why did we go from boasting about installing Linux on a dead badger and talking about how all software will some day be free to being afraid to seriously consider the proposition?
Because Linux was supposed to become a great thing, but instead it remained a paradise for geeks to do what they think is best. Software built to make money, on the other hand, was built to improve things like ease of use, aesthetics, and buyer's happiness, because that's what buyers were looking for. The open-source people never really cared about the dumb people and lay folks, the ignoramuses that didn't care to learn how to compose commands and figure out regular expressions. And that Linux was supposed to be the shining example of open-source software, one that so many people installed and tried, only to find out it blows and is effectively unusable for their needs.
I was under the impression that Mac OS, iOS, Android and recently Windows all run "Linux" under the hood nowadays. Not to mention the presence of Linux in the cloud, which is essential for the functioning of most websites, apps and mobile devices. Am I wrong?
Basically, it's easier to tell where there is no Linux than the opposite. That surely doesn't sound like such a failure. It's as if all the other OSes and systems are front-ends for Linux subsystems.
Only Android actually uses Linux. macOS/iOS are proprietary Apple things with bits of mach and FreeBSD. The Windows thing you are probably thinking of is that Microsoft reimplemented various Linux APIs, so software built for Linux can run under Windows without Linux being involved.
My theory is this is HN-specific - and what you saw on slashdot was slashdot-specific.
Because HN started as part of YC its culture really likes VC-backed startups. And we think VCs want the kind of huge returns that are seen more often by closed source companies - they want to back the next Microsoft or Apple or Google or Facebook or Paypal or Amazon, not the next Red Hat or Canonical or MySQL.
It wouldn't make much sense for people to believe free software is a moral imperative, while aspiring to launch huge closed-source companies.
I see a similar lack of enthusiasm for free software everywhere, not just in HN (maybe confirmation bias), in Lobsters, Reddit, IRC. I really think that Unix in Apple and Android has given us what Steinbeck calls "a bored and slothful cynicism, in which rebellion against the world as it is, and myself as I am, are submerged in listless self-satisfaction."
It's good enough for most people, we have some Unix under the hood if we go looking for it, free software has mostly won. Or something. I really don't know. I miss the rebels of yesteryear.
The impression I get is that it's mostly the opposite: people have mostly given up on the notion of having all (or majority of) software being Free, and see that more and more as a pipe dream. Hence they don't want to "waste" their enthusiasm on something that feels like a (literal) utopia now.
There's a lot of reasons combined to get people there:
* seeing corporate behemoths everywhere that want to keep critical software proprietary
* many more people wanting to monetise their own software, and seeing that GPL-ing software offers only a few (and not-universally-applicable) monetisation strategies
* seeing that the "all bugs are shallow" and "everybody pitching in will make Free software the best there is" turn out not really true in practice
* seeing that in practical terms, having access to read and modify source code means nothing in 99% of cases because of software overload and others' code being so hard to get into that it's often practically impenetrable.
Not to say that I agree with all or any of these completely, but it's easy to see such a multi-pronged attack pushing back hard against the FSF's message. Further reducing FSF's ability to spread the message is the widespread character assassination of RMS as an "insane, overly idealistic, and rude slob" that has been going on for at least a decade.
As someone that read the Halloween documents non-stop overnight, it makes me a bit sad too.
Maybe it's just that the rebels these days are quieter and more establishmentarian, because they have /become/ the establishment. TBF, I quite like this state of affairs and don't miss the rebellion-for-the-sake-of-rebellion attitudes of yesteryear.
By what extension? There's a very real criterion here: it literally controls the rhythm of a person's heartbeat. This isn't a trivial device that runs on a wall outlet. "By extension" is doing a significant amount of work in this comment.
How about when I'm flying an airplane; I'm also putting my life in the hands of people that wrote the code that controls it and I have to trust that the plane won't shut itself down mid-flight because of faulty code. Should a similar argument be made here?
Perhaps you are being sarcastic, but I shall attempt to answer the question earnestly anyway.
I've been reading up on aviation regs due to a recent interest in getting a pilot's license. By my understanding, airplanes certified by the FAA as airworthy undergo some fairly heavy testing to exactly determine and prove what their capabilities and limits are. Given that getting your prototype wrong can cause the plane to crash and kill your test pilot, there's incentive to get this stuff right. Furthermore, once a plane is type certified, it can't be modified from that configuration without further testing to prove the modified configuration. Thus you'd better be damned certain that your engine control computers are correct, lest the FAA revoke the plane's airworthiness certificate, making that model unsellable, nevermind the lawsuits of the survivors of deceased passengers or insurance companies recouping their losses.
Additionally, from what I've seen, most general aviation airplanes are stuck in the 60s as far as engine tech goes, in part because of the strict FAA regs. We're talking air-cooled engines with carburettors here, no engine computer to speak of, or if you're running a fancy modern engine, mechanical fuel injection. Unless you're flying a brand new Cirrus SR22 or Diamond DA20, there's no code to inspect. Even if there was, it'd be in the avionics. You don't need a nav beacon, radio, or transponder to land an airplane safely -- though they can make it much easier, and the FAA is going to want to know what happened once you're back on the ground.
As far as large passenger jets go? I'd say as a passenger, no, you don't have access to the code. It's not your airplane; it belongs to the carrier.
Personally, if I were to own a plane and fly it, I very much would want access to any and all code that makes the plane go. Though if I'm going to be hacking on said code, I suppose that's what the FAA's Experimental category is for.
> Furthermore, once a plane is type certified, it can't be modified from that configuration without further testing to prove the modified configuration.
Would this work similarly with autonomous cars? For example, what if google wants to change one line of code in their car?
I think the difference between these two is twofold - the first is ownership and the second is that it's personal.
A pacemaker is something you bought and own; when flying on a plane you are buying a service. This is similar to earlier discussions about being able to change your car's software under the DMCA, etc...
A pacemaker is also personal, in that it's something that only you have and for your specific pacemaker the only affected party is you.
A specific flight has hundreds of affected people which makes the burden of responsibility (for lack of a better word) shared between many people.
Sure about that? How much did you pay for, and how much did your insurance cover? What's the proportion look like?
Before you downvote, note: I don't like this argument. I find it downright horrifying. But I can't imagine no one will ever make it in a serious way, so it bears considering how to respond.
I don't like this argument but for another reason - it implies a horrible direction things will most definitely go. The pacemaker will become a service. Just like your car, your house and your washing machine. Oh sorry, not your anymore - soon we'll all be renting them, because there's every business incentive for that to happen, and close to zero incentives that would stop it.
I guess bought is subjective here, but ownership is not.
And you can still own something if you got it as a gift or through other means.
If no one can take it from you (or you need to return it) without your consent then you probably own it.
It might be a little vague legally, but we aren't talking about legal issues here (since it's perfectly legal to not have access to the sourcecode of something you own), but rather a perception issue.
> when I'm flying an airplane; I'm also putting my life in the hands of people that wrote the code
Yes, but you don't have to. You're not gonna die if you don't put your life in their hands. But with pacemakers and such, you have to get one or you die. Then, you depend on the manufacturer.
Sounds like they could do with a law similar to the freedom of information ones for software of this type. Without that device manufacturers are not going to want to publish as competitors could copy it and some people may sue over perceived errors.
I am imagining that young folks have Library Anxiety, and old folks have "Google Anxiety".
Ask any question and you will find an answer. Any question you have, no matter how banal or left-field. What is the weather? Is my grandson a lesbian? How do I eat pizza in Italy?
Where is the biography section? How do I understand the Dewey Decimal System? What is in the Special Collections, and what are the hours -- and do I need an appointment? The computers are down... is there a way I can search for books offline without randomly roaming the stacks?
I want to inspect the blueprints of every building I walk into and know the sourcing and composition of all the structural components as well. For my life depends on these things to be true and properly constructed.
Earthquake certifications are just like other certifications, in that most of the process is not actually reviewed by any government agency. They check blueprints and sign off on the process, but they don't actually make sure that the I-beams are connected correctly and that the angles are just right. Done.
Those are two different things. Private industry's safety assurance will steadily improve as consumer demand for it goes up. Currently we are just thankful to be alive.
For example, if someone invents an artificial device that helps me get rid of diabetes, I would be super happy. It is only a generation later that we would demand such a device meet a certain quality floor. That is the case with all innovations.
Government however can put a very hard nail into the head of an innovation.
> “You’re pulling data from my cardiac device that I paid for, implanted inside my body, the most intimate piece of technology anyone can have, and yet I’m devoid of access to the device? That moved me to my core,” he says. “That’s just not right.”
I'm sure she must have signed a user license agreement of some kind upon buying the device, so she has no grounds to complain.
Reasons to NOT open up the code:
1> Loss of competitive advantage
2> Open source is not necessarily any safer (heartbleed bug ... )
3> If the pacemaker's software can be updated the way software on a computer can, someone will update it with buggy code that causes adverse side effects. Who owns the liability in that case?
None of these are particularly compelling reasons IMO.
> 1> Loss of competitive advantage
Patient safety trumps business considerations. Skipping clinical trials would be a major competitive advantage (lower costs, quicker to market, etc.), but we don't allow that for the same reason.
> 2> Open source is not necessarily any safer (heartbleed bug ... )
I'll agree insofar as making it open source doesn't necessarily make it safer. However, it should, at worst, be no less safe - the requirements on the manufacturer (testing, etc.) should be the same regardless.
> 3> If software for the pacemaker is allowed to be updated like that on a computer, someone will update it with buggy software that can cause adverse side effects.
You can open the code but, for example, require a signed binary before a device can be updated. The openness of the code and the openness of the device itself are linked but separate issues.
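To make the signed-update idea concrete, here is a minimal sketch, assuming an Ed25519 keypair and the third-party Python `cryptography` package; the key handling and firmware bytes are purely illustrative, not any vendor's actual mechanism.

```python
# Minimal sketch: the device ships with only the vendor's public key and
# refuses firmware images whose signature does not verify.
# Assumes the third-party `cryptography` package is installed.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Vendor side: sign the firmware image (in practice the private key would
# live in an HSM, never in source code).
vendor_key = Ed25519PrivateKey.generate()
firmware = b"hypothetical firmware image bytes"
signature = vendor_key.sign(firmware)

# Device side: only the public key is baked into the bootloader.
trusted_key = vendor_key.public_key()

def accept_update(image: bytes, sig: bytes) -> bool:
    """Accept an update only if the image verifies against the trusted key."""
    try:
        trusted_key.verify(sig, image)
        return True
    except InvalidSignature:
        return False

assert accept_update(firmware, signature)             # genuine update accepted
assert not accept_update(firmware + b"x", signature)  # tampered image rejected
```

Publishing the verification code changes nothing here: without the private signing key, reading it doesn't let anyone push a malicious image.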
> There's a competitive advantage in keeping pacemakers' source code proprietary?
There definitely is. If your company takes ~2 years to develop a pacemaker's software, it's not to your advantage to let your competitors catch up. I'm not saying it's a good thing that it's closed source, but there is definitely incentive to keep your research to yourself.
> If your company takes ~2 years to develop a pacemaker's software, it's not to your advantage to let your competitors catch up.
Why should the patient who has the pacemaker implanted care? This seems like a clear situation in which the patient's interests trump everybody else's. Pacemaker manufacturers should be competing in how well their devices meet patient needs. Closed source doesn't meet a key patient need.
As heartless as this sounds: the patient isn't the only person in the equation. There are investors that fund the medical research, a company has employees to pay, the hospital wants the best pacemaker available, etc.
And in the same vein, why not ask investors if they'll fund a pacemaker company in a purely philanthropic fashion? Isn't it better to get pacemakers in a limited fashion than not at all?
Sure. Soon we'll have cloud-connected subscription-based freemium pacemakers that will shut down and kill everyone when the company making them gets acquihired.
I mean, I get the business incentives involved, but I also think it's high time to realize they're getting insane and should be altered.
> what patient need do you believe open source would meet that is not being met by the current closed-source development process?
The need to know, and be able to control if necessary, what a device that affects your life and health is doing.
I'll give an example from my own experience. I have sleep apnea and have to use a CPAP machine. There is a nice little SD card in the machine that records detailed data from every use--which means it records how long I sleep, how soundly I sleep, how many (if any) apnea episodes I have, etc. This would be very useful information to me, but I have no way of getting it, because the software and data inside the device is proprietary. The only way for me to have anyone look at this data is to pull the SD card out and take it to a sleep therapy doctor, and even then I won't see the actual raw data; the best I'll get is some proprietary analysis report designed by the device manufacturer.
Furthermore, the ostensible use for the data is to be able to adjust the pressure the CPAP machine is set at in order to improve the therapy. If I had access to the data and the settings inside the machine, I could easily do this myself. Instead, if I want any adjustment made, again, I have to go see a sleep therapy doctor--which means I need to make an appointment, which usually means waiting weeks or months, and I need to take off from work to go to the appointment, etc., etc.
It should be obvious how open source would greatly improve this situation.
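To make that concrete, here is what patient-side access could look like if the session log were readable at all; the CSV layout, column names, and file path below are invented purely for illustration, since the real, proprietary format is exactly what's being complained about.

```python
# Minimal sketch: summarise nightly apnea events from a hypothetical CSV
# export on the machine's SD card. Columns and path are invented.
import csv
from collections import defaultdict

def nightly_apnea_counts(path: str) -> dict:
    """Return {date: total apnea events} from rows with (hypothetical)
    columns: date, hours_slept, apnea_events, pressure_cmH2O."""
    counts = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["date"]] += int(row["apnea_events"])
    return dict(counts)

# Hypothetical usage:
# print(nightly_apnea_counts("/media/sdcard/sessions.csv"))
```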
The need to avoid Therac-25 [1] type bugs in the proprietary code that are fatal to the patient. Open source in conjunction with a bug bounty would make me far more confident than simply trusting that some large corporation has got their shit wired tight, when history throws up so many counter-examples.
Spent 15+ years developing code for medical devices. I doubt very much that opening the source would have much impact on quality.
For starters, without access to the hardware and understanding what it's supposed to be doing, how do you know if the code is right or not?
Let me clarify that I'm not opposed to open sourcing the code, once issues around trade secrets are handled. But I don't think it would have even remotely the impact this group believes. The vast majority of device problems I've seen were system issues, not just coding errors.
I don't need to know much about what it's supposed to be doing to critique a data channel that transmits plaintext over a radio, or accepts commands without cryptographically authenticating the sender.
I also don't need to know much about the intended function of the device to find coding styles that cause problems in an embedded environment.
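To make the point about unauthenticated commands concrete, here is a stdlib-only sketch of the kind of command authentication whose absence is obvious from the code alone; the key, command string, and frame layout are hypothetical.

```python
# Minimal sketch: reject radio commands that do not carry a valid MAC.
# Key provisioning and command format are hypothetical, for illustration only.
import hashlib
import hmac

DEVICE_KEY = b"per-device secret provisioned at manufacture"  # hypothetical

def tag(command: bytes) -> bytes:
    """HMAC-SHA256 tag over the command payload."""
    return hmac.new(DEVICE_KEY, command, hashlib.sha256).digest()

def handle_radio_frame(command: bytes, received_tag: bytes) -> bool:
    """Accept the command only if the tag matches (constant-time comparison)."""
    return hmac.compare_digest(tag(command), received_tag)

cmd = b"SET_PACING_RATE 70"  # hypothetical command
assert handle_radio_frame(cmd, tag(cmd))                         # authentic sender
assert not handle_radio_frame(b"SET_PACING_RATE 200", tag(cmd))  # forged command
```

A real design would also need replay protection (a counter or nonce), but even this much is visible, or visibly missing, to a reviewer who knows nothing about cardiology.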
The manufacturer must support the product for 10 years after the last sale. I don't have experience with the case where they go belly up, but it's likely that another company would step in to fill the role. Support can be quite lucrative.
Well now, this isn't quite the same thing as your normal IoT device. You would need to be Really Damn Sure any updates are bug-free before you could push them, which would take actual financial resources. Open source would help here, but not quite as much as it normally would.
If every pacemaker manufacturer is required to publish source, couldn't you just release it under a license that doesn't allow your competitors to use your code? And wouldn't it be readily apparent if they did?
Even if we assume that these arguments are sound (which they aren't), they do not appear to provide a proper justification for letting a human being die.
Put another way: If you had to explain Marie Moe's death to one of her relatives, which of your three points would make the relative understand your position?
These are sound arguments. Medical devices have to be certified in all applicable jurisdictions, and companies want to protect their competitive advantage. This is no different from, say, new drugs, which are similarly proprietary and whose makers retain exclusivity for several years. Further, medications too are risky black boxes to end users.
They are not sound in the context of human welfare.
We do not allow food companies to hide the ingredients of their products, because we know that companies (in the interest of profit) will fail to inform consumers about the dangers of eating unhealthily. Similarly, we do not allow meat producers to leave their meat ungraded because it endangers their "competitive advantage" - we understand (empirically) that doing so leads to food contamination and otherwise preventable disease.
Your choice of proprietary medicine as a counterexample is an interesting one. The pharmaceutical industry in the United States has a long history of "protecting its competitive advantage" at the cost of individual welfare - consider how frequently "reformulations" of the same base chemical are patented to continue milking a lucrative product that could improve the lives of thousands if genericized.
Perhaps even more pointedly, consider the fact that we deem it acceptable (and necessary) that the FDA step in and regulate the release of new drugs. Is there a valuable distinction to be drawn between the sort of regulation and evaluation that the FDA does and the sort that would be possible if programmers could openly evaluate medical devices?
You're attempting to justify the release of a (potentially fatally) flawed product on the grounds of lost "competitive advantage".
If you lost a family member to a faulty (yet certified!) medical device, would you feel that their death is justified by a marginal improvement to some company's bottom line? What if it wasn't a medical device with closed code, but a miracle pill whose manufacturer refused to release the formula to the government for analysis?
The argument I made in my original comment does not rely on the (un)soundness of your original claims - it is a moral argument about the value we place on human life. I'd be willing to bet that, even if you think that closed source is superior in every fashion, you would never be satisfied by your own justifications in the case of a personal tragedy. This contradiction suggests that, at the very least, we must draw a distinction between justifiably and unjustifiably closed source code.
1) The software in a pacemaker is not a competitive advantage. The hardware and rights to distribute to hospitals are.
2) Open and closed are equally vulnerable. Closed may be even more vulnerable, since security researchers have to attack a black box and can't run ordinary code linting tools.
3) Liability after update is a good question, but one which can be (and is) solved with traditional contract agreements. See CPAP machines.
The code to the Jeep Grand Cherokee is closed source, yet security researchers (and now thieves) have been able to take complete control over the vehicle via simple protocol sniffing and reverse engineering. Closed source is no safer than open source.
Number 2 isn't a reason NOT to open up the code. You can't argue that because open source doesn't fix something, that's a reason to keep it closed. Open source software isn't necessarily faster or more efficient either. So?
Maybe in a complete vacuum, but in reality, having access to source certainly makes it easier to look for vulnerabilities, and if the same software is in many devices, the cost of finding vulnerabilities is amortized.
Security by obscurity obviously doesn't stop a determined attacker, but it does raise the barrier to entry for script kiddies.
Anyone you're likely to classify as a "script kiddy" is not going to be able to read the kind of code that goes into embedded devices like a pacemaker to a deep enough level to find any problems. And if they can, the software is most likely really problematic.
Security by obscurity is never a good idea, but especially not when it might prevent a white hat from finding a bug that would allow a malicious actor to remotely STOP MY HEART.
I agree with you in general, but since we're talking about embedded devices that can't be updated, here's a concrete scenario:
1) White hat finds a vulnerability in the source code which applies to a large number of devices.
2) Source is patched, but vulnerable devices exist in the wild
Now all an attacker needs to do is find a vulnerable device; because the source code is public like OP suggests, figuring out which devices are vulnerable is trivial.
Unless I'm missing something drastic, this is actually a problem in the embedded space where obscurity seems to help.
Tell me, how many vulnerabilities are running wild on Linux, the software that powers... well, pretty much everything (including the servers through which you read this content)? Even if you find a vulnerability, it gets patched within hours, and it may take a day or two for the fix to be distributed to everyone.
> Security by obscurity [...] does raise the barrier to entry for script kiddies.
Which script can help you find a vulnerability inside of the source code? You'll need a script that can understand the code it's looking at. I'm not aware of any such script.
In the interest of honesty - first of all, "it gets patched within hours and it may take a day or two for it to be distributed to everyone" is not true. No matter how fast a vulnerability is patched, the distribution process usually takes days to weeks (cf. the Heartbleed bug: Canonical was apparently the first to find and fix it, and yet I waited weeks to get the fix on my Ubuntu machine) for those who care and monitor these issues constantly, and months to years for everyone else.
Now there is an argument that there is a trivial way to find vulnerabilities in open source code - just diff the commits to look for fixed bugs, and attack those who haven't managed to update their software yet. That's part of the reason why e.g. Wordpress blogs and PHPBB forums get spammed so heavily.
Whether or not the benefits of open source outweigh those problems is another topic, but let's not pretend open-sourcing doesn't lower the entry bar for attackers.
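To make the "diff the commits" point above concrete, here is a crude sketch of how little effort it takes to shortlist quietly fixed bugs; the repository path and keyword list are hypothetical.

```python
# Minimal sketch: scan a project's recent git history for commits whose
# messages hint at security fixes. Repo path and keywords are hypothetical.
import subprocess

SECURITY_HINTS = ("cve", "overflow", "out-of-bounds", "sanitize", "auth bypass")

def suspicious_commits(repo_path: str, since: str = "90 days ago") -> list:
    """One-line summaries of recent commits mentioning security-flavoured terms."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}", "--oneline"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in log.splitlines()
            if any(hint in line.lower() for hint in SECURITY_HINTS)]

# Hypothetical usage:
# for line in suspicious_commits("/tmp/some-device-firmware"):
#     print(line)
```

The same scan is, of course, available to defenders deciding which deployed devices need urgent patching, which is where the rest of this thread picks up.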
If the same vulnerability is present across a large range of devices, a public exploit has a much larger impact.
I'm not advocating for security by obscurity in the slightest because on balance I think it's bad, but we should acknowledge that publishing your source does change the potential cost of mounting an attack in various scenarios, and some of them might actually favor obscurity.
Nobody except the most determined attacker will attack a device with some custom, unpublished code. On the other hand, popular software has a variety of exploits in the wild because its popularity makes it a more attractive target, one consequence of which is enabling script kiddies (since the hard work can be outsourced).
> it gets patched within hours and it may take a day or two for it to be distributed to everyone
This doesn't work for embedded devices, which is why it might not be a great idea to publish their vulnerabilities, or to tell the whole world that you're running old, vulnerable code.
This is patently incorrect. Even the underpowered Z80-clone micros with 18kB RAM I was writing firmware for 15 years ago had trivially updated firmware; modern devices are even easier.
The article even mentions that wireless firmware updates are a feature:
> Then she bought a pacemaker programmer online, and she and other hackers figured out that it could be used to update the code on her implant.
> Nobody except the most determined attacker will attack
Relying on the laziness and ignorance of the attacker is a terrible idea.
> Hence why it might not be a great idea to publish their vulnerabilities
This is why it's important to practice responsible disclosure. The manufacturer should have a reasonable time to develop a patch before telling the internet.
Even if embedded devices are updatable in principle, in practice how often do they receive security patches? Pointing to a feature list isn't a realistic evaluation of what actually happens.
We live in a world where even phones don't get patched as frequently as they should; you expect end users to patch their pacemakers?
Putting them online and allowing auto-patching would probably be worse since it also increases their attack surface drastically.
> This is why it's important to practice responsible disclosure.
Right, and the fact is that 'responsible disclosure' ends up looking a lot like obscurity in a world where you can't guarantee that devices will be patched before an attacker becomes interested in exploiting them.
> in practice how often do they receive security patches?
I have no idea what the current patch rate is for pacemakers, but they do happen. The use of radio was a feature specifically to allow updating and management of the pacemakers while avoiding the serious risks of surgery. The pacemakers would be patched when the patient shows up for their next checkup appointment, which is probably every 1-2 months. They already connect to the devices for regular diagnostic purposes at those times.
If the problem was severe enough, calls would be made to the patients to come in right away. Medical services already handle problems on a priority basis ("triage"). This already happens for other types of problems.
> 'responsible disclosure' ends up looking a lot like obscurity
That's a circular argument. You're implying that the manufacturer wouldn't want to fix their product, which is highly unlikely. The only reason they are resistant to the idea at present is because the source is closed. The entire point is that by opening up the source the community can work with the manufacturers to fix these problems.
You're arguing that because the current system doesn't patch bugs very often, we shouldn't allow more debugging. Pretending either that bugs don't exist or that malicious actors won't find them without the source code is dangerous. "Pride goes before the fall"; do you really want to bet - potentially with your life - that all malicious actors are too stupid to find security problems? Hint: many medical devices have already been hacked (without the source). Or do you want to let the community at least attempt to find the bugs first?
> The use of radio was a feature specifically to allow updating and management of the pacemakers while avoiding the serious risks of surgery.
In other words, it's a ready-made vector for potential attacks, as long as an attacker can get close to a pacemaker with a radio. You could probably even put a powerful device near a hospital and pick up random people coming and going.
> Or do you want to let the community at least _attempt_ to find the bugs first?
The promise of open-source security has always been that you let many eyes look for problems, then you patch ahead of potential attackers.
Logistically, that's extremely problematic for embedded devices. Every time a vulnerability is found, you ask the patient to go back to the doctor to get it updated? That doesn't scale at all.
The idea that device manufacturers will change their entire engineering and security philosophy is somewhere between idealistic and naive. I sincerely hope it's the former and not the latter.