Not Wifi, but IIRC a guy called Chris Roberts was able to leverage a wireline connection to an underseat IFE controller into FADEC control on multiple aircraft. Last I heard of the case, he might be going to prison for it, so apparently at least some people find him credible.
Similarly, witness recent revelations about car hackability - I forget if it was Black Hat or DEF CON where a couple of guys demoed a fully remote attack through the entertainment system's cell network access that culminated in the ability to override the steering, throttle, and brake controls.
Maybe these are not what is meant by "safety critical" in your use of the phrase. If that's so, I think it would clarify matters greatly for you to define what you mean by it in terms more specific than "can kill people if it goes wrong".
- Yes, the hack of the braking and steering system via the entertainment system is a breakdown of the certification system. That should have been found earlier. This is why critical systems are air-gapped. I personally have never worked on automotive systems, so I'm not familiar with the regulations there, but I was/am extremely surprised this was possible.
SIL usually deals with complete systems and with the probability that the system will deviate from its designed behavior; whether that designed behavior is actually correct is a somewhat orthogonal problem that is often mostly ignored (the same thing applies to attempts to apply formal proofs to software). The Wikipedia article lists some of the problems with the SIL rating itself, and I've personally seen multiple instances of "running two redundant SIL N systems produces a SIL N+1 system", often with the original N derived by wishful thinking. This seems to be especially prevalent with engineers from a railway signalling background; I accept that the claim holds for systems based on relays with redundant coils, but it's complete BS for things with non-trivial software.
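For what it's worth, here is a minimal sketch of why that "two redundant SIL N channels = SIL N+1" claim hinges on an independence assumption that identical software can't satisfy. This is my own illustration using a simple beta-factor common-cause model; the figures are made up for illustration, not taken from IEC 61508:

    # Toy beta-factor model: beta is the fraction of channel failures that are
    # common-cause, i.e. a systematic fault that takes out both channels at once.
    def pfd_1oo2(pfd_channel, beta):
        independent = ((1 - beta) * pfd_channel) ** 2
        common_cause = beta * pfd_channel
        return independent + common_cause

    pfd = 1e-3  # one channel somewhere in SIL 2 territory (low-demand PFDavg)
    for beta in (0.0, 0.1, 1.0):
        print("beta=%.1f -> system PFD ~ %.1e" % (beta, pfd_1oo2(pfd, beta)))

    # beta=0.0 -> ~1e-6: the wishful "two levels better" case
    # beta=0.1 -> ~1e-4: roughly one level better
    # beta=1.0 -> ~1e-3: the same software bug hits both channels, no gain at all

Relay coils can plausibly have a small beta; two copies of the same non-trivial software cannot.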
Various attacks on car ECUs cleanly show what the problem with the current certification processes is: each component is certified separately and nobody cares about the complete resulting system (the head unit is not safety critical, the engine ECU is but has no untrusted inputs, and there is a bunch of things in between classified as one or the other). What is ironic in the automotive case is that the whole reason there is an immense number of separate ECUs in a typical car[1] is safety (i.e. limiting the impact of one ECU completely failing) and safety certification.
[1] my car has a separate ECU for each door, even though the rear doors don't have power windows and the central locking uses dedicated wires. I assume the only purpose of that ECU is so the diagnostics system can detect when a door is missing.
Interesting points about the automotive world.... and I absolutely agree with you about the certification problems with a mix of safe/non-safe components.
I've never heard the 2xN = 1x(N+1) argument before, but does that improve for >2, e.g. triply-redundant systems?
I think the root problem there is the "wishful thinking", isn't it?
The 2xN = 1x(N+1) argument is usually presented as being somehow directly derived from IEC 61508, which is certainly nonsense. I've even seen reasoning along the lines of "it runs on Windows NT, thus it's inherently SIL2; there are two redundant IPCs running that code, so it's SIL3".
As for dual-redundant vs. triply-redundant, it depends heavily on whether shutting the system down on the failure of one redundant component is a desirable outcome. Both railway signaling and most industrial systems can be, and are, designed that way. But for systems where that is not possible (either because they need some non-trivial sequence of actions to reach a safe state, or because the whole SIL dance is about keeping the thing running at all costs for business/legal reasons), dual redundancy actually decreases reliability, because the additional components handling the redundancy have their own failure probability (and in many systems they fail more often than whatever they are supposed to protect against, especially in master-slave setups that try to detect which half has failed and respond by failing over to the other one).
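To put rough numbers on that last point (the figures are purely illustrative and my own, ignoring repair times and partial failures): once the failover/arbitration machinery misbehaves more often than the thing it is supposed to protect, the "redundant" system is the less reliable one.

    p_unit = 0.01      # chance a single controller fails in a given year
    p_failover = 0.02  # chance the master/slave arbitration itself misbehaves
                       # (kills the wrong half, split brain, fails to switch over)

    p_single_outage = p_unit                # plain single controller
    p_pair_outage = p_unit**2 + p_failover  # redundant pair, counting the failover logic

    print("single controller outage: %.4f" % p_single_outage)  # 0.0100
    print("redundant pair outage:    %.4f" % p_pair_outage)    # 0.0201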
An interesting approach often used for road traffic signalling and general industrial control is to have a second control system that only checks that the outputs of the primary control system are consistent (for traffic signals this is a trivial boolean function of the outputs, usually implemented in hardwired circuitry) and shuts the whole thing down when they are not.
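A toy version of that checker, for a hypothetical two-phase intersection (my own example; real installations do this in a hardwired conflict monitor rather than software, but the boolean function has the same shape):

    # The checker knows nothing about the controller's logic; it only verifies
    # that the outputs are mutually consistent and forces a safe state if not.
    CONFLICTS = [
        ("ns_green", "ew_green"),
        ("ns_green", "ew_yellow"),
        ("ns_yellow", "ew_green"),
    ]

    def outputs_consistent(outputs):
        return not any(outputs.get(a) and outputs.get(b) for a, b in CONFLICTS)

    def supervise(outputs):
        if not outputs_consistent(outputs):
            # in the real thing: cut power to the controller, drop to flashing red
            print("conflict detected, shutting the whole thing down")

    # primary controller bug: both directions green at once
    supervise({"ns_green": True, "ew_green": True})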
"The described technique cannot engage or control the aircraft's autopilot system using the FMS or prevent a pilot from overriding the autopilot," the FAA's statement explained. "Therefore, a hacker cannot obtain 'full control of an aircraft' as the technology consultant has claimed."
"The statement went on to explain that although Teso may have been able to exploit aviation software running on a simulator, as he described in his presentation, the same approach wouldn't work on software running on certified flight hardware."
It involves a hell of a lot of work, enough to double or triple the software portion of a project.
Whatever certification you have been involved with may have been a joke.
Safety-critical certification most certainly is not.
BTW: I'm a s/w architect on a set of smart meters. And they most certainly are not safety critical. (IMO they should be, but that's a separate issue).
...and no Wifi was "accidentally" bridged. Evidence?