> Before the very first shuttle flight, NASA estimated that the chance of death was between 1 in 500 and 1 in 5,000. Later, after the agency had compiled data from shuttle flights, it went back and came up with a very different number. The chance of death was actually 1 in 12.
The fact is, these numbers are made up because the systems are too complicated to calculate based on individual components. Until they have something like 50+ launches (after the kinks are worked out), we won't really know what the actual chances of blowing up a rocket are. Further, how well do the escape capsule(s) work? If they work the majority of the time (saving astronauts from the explosion), that'd also dramatically drive down the risk.
Say there was a 1 in 30 chance of a rocket exploding using the method described, but the escape capsule failed to save the astronauts 1 out of 10 times. Then the real risk of killing the astronauts is 1 in 300. Unfortunately, we simply don't have either of those numbers, so the actual "risk" level is essentially made up.
I used to do reliability analyses for a large aerospace contractor, so I may have some insights. The stuff I worked on was not HSF (Human Space Flight), just automated sats and the like. Each and every single part is spoken for and tallied. Each resistor on a PCB, each explosive bolt, each screw and panel. All of it has a lifetime curve, typically a Bathtub plot, to an extreme degree. Resistors were tested in vast batches, over vast I-V ranges, in hard radiation and vacuum, in vibe, for long periods. You calculate it all out. Something approaching the far upper reaches of the expected 'black swans' is tested and the part is proofed against it. Our stuff was just comms and the like, granted, but even then, the expected lifetime of each satellite was fairly well known out to about 20 years in orbit. A typical figure for a 15 year mission was a 75% chance of it making it the whole way, including launch. It's checked to an extreme degree. I've had people come back and correct an un-dotted 'i' symbol in a handwritten notebook. Really.
HSF easily means 100x tighter tolerances.
Some stuff you have to live with, and some stuff (like the panels and the o-rings) is too far-fetched for even NASA to think up beforehand. And if you've ever been to The Cape, you know they will never forgive themselves. They try their very best to think it all up. They really really do. NASA may be full of brains, but damned if they aren't all real heroes there on the ground. Those folk have heart more than anything else.
> A typical figure for a 15 year mission was a 75% chance of it making it the whole way, including launch. It's checked to an extreme degree. I've had people come back and correct an un-dotted 'i' symbol in a handwritten notebook. Really.
Can you break down this calculation? Where did 75% come from? I think your parent poster's point is this: no one really knows where these numbers are coming from.
>...no one really knows where these numbers are coming from.
That was actually explained in the (excellent) comment. Snippet below, but please re-read the comment for the example of how resistors are actually tested in batches to give real failure rates.
>Each resistor on a PCB, each explosive bolt, each screw and panel. All of it has a lifetime curve, typically a Bathtub plot, to an extreme degree
I'll admit, a lot of the time you just can't get a good enough vacuum on Earth. The best we could do was the nano-torr range, but space is much better. This does affect testing somewhat. Also, a lot of the time, it's the 'black swans' that really matter. Fun fact: the Hubble flies butt first, as they experimentally discovered that meteorites will affect the telescope less than flying it lens first. I can't find the pic right now, but there is a softball sized hole in the computer (butt) section.
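As a rough illustration of how that kind of per-component tally rolls up into a mission-level number like the 75% figure, here is a minimal sketch, assuming independent components and constant failure rates (the flat bottom of the bathtub curve). The FIT values, part counts, and launch reliability below are invented for illustration, not real qualification data.

```python
import math

# A minimal sketch of a bottom-up reliability roll-up.
# All FIT values and part counts are invented for illustration.
MISSION_HOURS = 15 * 365.25 * 24          # 15-year mission

components = {                             # name: (part count, failures per 1e9 hours)
    "resistor":        (4000, 0.1),
    "explosive_bolt":  (12,   50.0),
    "power_converter": (8,    100.0),
    "transponder":     (2,    200.0),
}

P_LAUNCH_SUCCESS = 0.95                    # assumed launch reliability

r_system = P_LAUNCH_SUCCESS
for name, (count, fit) in components.items():
    lam = fit * 1e-9                       # failures per hour, per part
    r_system *= math.exp(-lam * MISSION_HOURS) ** count   # survival of 'count' identical parts

print(f"Estimated P(15-year mission success) = {r_system:.0%}")
```

The point is just that a figure like 75% is not pulled out of the air: it falls out of measured part-level failure rates, with the independence assumption being the weak link.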
> Say there was a 1 in 30 chance of a rocket exploding using the method described, but the escape capsule failed to save the astronauts 1 out of 10 times. Then the real risk of killing the astronauts is 1 in 300.
That's the same kind of mistake NASA made, just multiplying stuff like it was a high school textbook question.
What if the very things that make a rocket explode also make the escape capsule fail?
Escape capsules are not used when the rocket is OK. So, a 1 in 10 chance of failure would likely include damage from an exploding rocket.
NASA's problem was that their two failures did not result from any one component of the original design failing. One component was used outside its spec, and the foam had been changed from the initial design.
>Escape capsules are not used when the rocket is OK. So, a 1 in 10 chance of failure would likely include damage from an exploding rocket.
I agree that a very effective escape system, as the F9 appears to have, makes the rocket far safer. However, understanding failure probabilities is much more complex than this thread is making it sound. For instance, you could have an escape system failure that triggers the exploding rocket, or a software failure that causes both the rocket to explode and the escape system to fail. Take a look at 'Engineering a Safer World' for an idea of how NASA and other parties actually calculate these probabilities: https://mitpress.mit.edu/books/engineering-safer-world
If you have the actual probabilities, which include cascade failures, and one of the systems is a safety backup system not in normal use, then multiplication is reasonably (though not completely) accurate. You do still need to consider failure of the safety system while the normal system is operating correctly, but that's also much easier to avoid and so likely very, very low.
Cascade failures are a far more complex situation and thus harder to predict. So again, it depends on what the failure rate is describing, not just what the number is.
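To make the common-cause point concrete, here is a toy Monte Carlo (all numbers invented) comparing the naive multiplication with a scenario where half of the rocket failures also take out the escape system:

```python
import random

# Toy comparison of "just multiply" vs. a common-cause failure scenario.
# All probabilities are made up for illustration.
random.seed(0)
N = 1_000_000
P_ROCKET_FAIL = 1 / 30
P_ESCAPE_FAIL = 1 / 10         # escape failure rate, given it is needed
P_COMMON_CAUSE = 0.5           # fraction of rocket failures that also doom the escape system

naive = P_ROCKET_FAIL * P_ESCAPE_FAIL
print(f"naive independent estimate: 1 in {1 / naive:.0f}")

deaths = 0
for _ in range(N):
    if random.random() < P_ROCKET_FAIL:           # rocket fails
        if random.random() < P_COMMON_CAUSE:      # failure also disables the escape system
            deaths += 1
        elif random.random() < P_ESCAPE_FAIL:     # otherwise, escape has its own failure rate
            deaths += 1
print(f"with common-cause coupling:  1 in {N / deaths:.0f}")
```

The coupled case comes out several times worse than the 1-in-300 estimate, which is the whole objection to naive multiplication.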
> The fact is, these numbers are made up because the systems are too complicated to calculate based on individual components
No, you can perfectly well calculate component-by-component and come up with the 1 in 500 number. That's what NASA did (at great expense). What they failed to realize was this was only considering known-unknowns, and actual experience using the stack is required to learn about the unknown-unknowns which dominated the analysis.
This response deserves to be listed higher. I participated in risk assessment analyses for nuclear power plants, specifically for extremely rare wind events. The nuclear industry is like the space one and has had to learn about unknown unknowns the hard way.
Yup. It comes up in other contexts too. In project risk assessment (“will we deliver on time?”) it is known as the inside view vs the outside view. The inside view is: here are all the parts we need to make, and how long each creation and integration step by itself should take, with known risks factored in. The outside view is: ok but the last 3 times we did something on this scale it took 2+ years when we thought it would take just 6 months.
You can use an inside-view analysis to get a minimum risk assessment. Something would have been very wrong in the analysis if we didn’t lose 1 in 500, since that’s the risk we expected from known factors.
And the outside view can give you a reasonable handle on maximums: objectively the risk of shuttle loss was about 1:75 or so, because that is what actually happened. If we were to have continued the shuttle program instead of cannibalizing it for the now defunct Constellation, we could have confidence that failures going forward would be at least 1:75 or better as we fix things, assuming regressions are rare.
The failure by NASA was in treating the inside view known-unknowns risk assessment as a maximum (“risk of loss is NO MORE THAN...”) when it was really a minimum (“risk of loss is AT LEAST...”).
Since the actual disaster record for the shuttle was 2/135, they must have done some other adjustments to come up with 1/12. Still, at this point it seems very likely that SpaceX will be safer than the shuttle.
The first flights were more risky than later ones. They made improvements. But there were many close calls (including main engine failure).
But SpaceX necessarily WILL be safer because they're using a proven vehicle and a more robust capsule. Plus a launch abort system. Shuttle flew crewed on the very first launch.
There's also the concept of the "Drift into Failure", which I believe was coined by Sidney Dekker (https://www.amazon.com/Drift-into-Failure-Components-Underst...), where small compromises in operations eventually cause very important security and reliability protections to be eroded away.
I tend to think this is a strength of the Russian R7/Soyuz: the vast majority of the R7 launch family launches are uncrewed (and with a lower level of "mission assurance," i.e. less double-checking), therefore any systemic problems (i.e. due to "drift into failure," etc) are almost certainly going to be caught in an uncrewed launch first. Likewise, uncrewed Progress cargo spacecraft missions use a very similar platform to the Soyuz spacecraft, I think on the same line, so problems with the spacecraft can also be found on uncrewed missions.
Falcon 9 is planned to launch maybe twice a year crewed, compared to 20 or 30 uncrewed launches (including like 2 or 3 uncrewed Dragon launches), meaning any new systemic problems with the launch vehicle will almost certainly pop up in an uncrewed launch before they result in loss of crew. Shuttle ONLY launched uncrewed, therefore any systemic problem which resulted in a loss of vehicle necessarily resulted in loss-of-crew.
Yes, so in the F9 case, that only works well if NASA doesn't insist that the one crew launch per year use a super-unique rocket. Which appears to be something that the safety committee appreciates, but maybe not far enough.
> Shuttle ONLY launched uncrewed, therefore any systemic problem which resulted in a loss of vehicle necessarily resulted in loss-of-crew.
Don't you mean that shuttle only launched crewed? Either that or I think I do not have the same definition as you do for "uncrewed" -- I take it to mean that there is no crew on board?
One of the three SSMEs failed on STS-51F https://en.wikipedia.org/wiki/STS-51-F due to sensor malfunction. Another engine nearly failed for the same reason, but quick action from the crew prevented shutdown. The Shuttle performed an Abort-to-orbit, meaning it continued to a lower contingency orbit.
And STS-93 had a pretty bad close call, too, with some of the engine coolant lines rupturing in-flight (leading to early shutdown) due to an oxidizer post plug coming undone, as well as an electrical short taking down some of the engine controllers. That was pretty early in flight, so it would've been incredibly risky in the case of an abort. https://en.wikipedia.org/wiki/STS-93
Falcon 9 has higher levels of first stage redundancy than Shuttle (can lose a first stage engine immediately at lift-off and reach orbit just fine, and lose multiple engines later in flight), and uses just a single upper stage engine, reducing probability of failure.
Take an example of why this number is probably not as meaningful as you might think. We have only one astronaut. And this astronaut flies on a ship 1000 times. And then on the 1000th time it blows up. Well we had 1 astronaut that flew and he died. So it would be fair to say that 100% of astronauts that flew on our imaginary ship died on it, but it would lack much of any meaning.
Counting repeat flights, 833 crew members flew on the shuttle and 14 died. The ratio there is about 60:1, which is predictably comparable to the ratio of successful to failed flights.
Oh my lord you guys that is not how probabilities work. You can’t look at what actually happened and then say “Well that was the chance.” If I flip a coin 20 times and happen to end up with a 15:5 split that doesn’t mean the chance of heads is 3:1.
Also the context of that 1 in 12 in the quote is important. They're saying that was the chance on the first flight, in retrospect. The shuttle and its supporting facilities and processes didn't stay static; they improved over time.
>Oh my lord you guys that is not how probabilities work. You can’t look at what actually happened and then say “Well that was the chance.” If I flip a coin 20 times and happen to end up with a 15:5 split that doesn’t mean the chance of heads is 3:1.
Erm, over a large sample size, yes, this is exactly how things work. Sure, 15/5 may be too small, but 150/50 isn't. That's well after the point where it's reasonable to believe that your coin is rigged.
As the other user mentioned, if you don't have the prior that the coin is fair, a 15/5 outcome may (and does) indeed imply that heads are more likely than tails. In fact, 15/5 would imply a less than 1% chance of a fair coin, given uniform priors. (Beta[15, 5] -> 1st percentile is 50.175)
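For anyone who wants to check that kind of claim, here is a quick sketch assuming scipy is available. Note that with an explicit uniform Beta(1,1) prior, the posterior after 15 heads and 5 tails is Beta(16, 6), but the conclusion (well under a 1% chance the coin is fair or tails-biased) is the same either way.

```python
from scipy.stats import beta

heads, tails = 15, 5

# Posterior over P(heads) with a uniform Beta(1, 1) prior: Beta(heads+1, tails+1).
posterior = beta(heads + 1, tails + 1)

print("P(heads <= 0.5 given 15H/5T):", posterior.cdf(0.5))   # well under 1%
print("1st percentile of P(heads):", posterior.ppf(0.01))    # roughly 0.5
```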
That's not quite the same question as what I'm answering. I'm saying "given you flip a coin once and it comes up heads, what is the probability that it's a fair coin" (or previously, given you flipped a coin 20 times and it came up 15/5, what is the likelihood it is a fair coin). And with a single flip, you can't actually answer that question (BetaDist[1, 0] is undefined).
Now, once you've answered that question, you can integrate to get a probability that the next coin is heads, but with a single flip you can't even answer the first question. You need to have gotten heads and tails each at least once.
No, the beta distribution BetaDist[a, b] is only defined for a, b strictly > 0.
When a or b is very large and the other is zero, you can treat it as 1 and get a decent approximation, but it'll be an estimate that biases towards the center.
I should also mention that if you're willing to make assumptions like a uniform prior, this totally works, but it can lead to some surprising results for small a, b. Notably a uniform prior is Beta[1, 1]. Beta[2,1] implies something like a 75% chance of the true mean being above .5, which seems a tad overzealous.
Exactly, that's not the question you're answering, but it is the question that is being asked in this context: what is the probability of rapid unplanned disassembly of SpaceX rockets.
If your rocket has a 50% chance of being explosion proof and a 50% chance of blowing up half the time (you could imagine this as any number of situations, like two otherwise identical rockets which you use), any given launch has a 25% chance of rapid unplanned disassembly.
You can do the same thing over a continuous random variable.
The beta distribution[1] is a cool statistical distribution defined by BetaDist{a,b} (or alpha, beta, but that's too much work), where a is the number of successes and b is the number of failures you've sampled.
It has a number of cool properties, chief among them that given X = BetaDist{a,b}, then cdf(X, x) = the probability that the mean of the distribution you are approximating is less than x. It has a bunch of other nice properties too (like E[X] = a / (a + b), which should be obvious), but those aren't as relevant here.
So let's say that you assume a uniform prior. This is defined as BetaDist{1,1} [2]. This is probably the wrong prior, so you might have a better idea. If, for example, you believe there is a 10% chance of your rocket exploding based on some calculations you've done, you might use a differently tuned beta distribution, like BetaDist{9,1}, or BetaDist{4.5,.5} if you were feeling uncertain (but in general it would likely be better to use {8,2} in that situation, iirc). But let's assume {1,1} for now.
So you launch your rocket. Everything goes great. You update your distribution. It's a success. So you get BetaDist{2,1} [3]. So what is the chance your rocket explodes? Well, the cdf of your beta distribution is the probability that the mean is less than x, and the pdf of the beta distribution is the probability that the mean is exactly x. So then
The integral from 0 -> 1 of `(1 - x) * pdf(X, x) dx` is the estimated probability that your rocket explodes on its next launch, since that's "for every value x, the likelihood of the distribution being that one multiplied by the chance your rocket explodes given that distribution". For the one rocket case, this happens to be equal to 1 - E[X] = b / (a + b), so it's 1/3.
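A quick numerical check of that integral, assuming scipy is available:

```python
from scipy.stats import beta
from scipy.integrate import quad

a, b = 2, 1                      # one success, no failures, on top of a uniform Beta(1,1) prior
posterior = beta(a, b)

# P(next launch explodes) = integral of (1 - x) * pdf(x) dx = 1 - E[X] = b / (a + b)
numeric, _ = quad(lambda x: (1 - x) * posterior.pdf(x), 0, 1)
print(numeric, b / (a + b))      # both come out to ~0.333
```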
For the two rocket case, you apply reinforcement learning/k-armed bandit strategies like UCB1[4] or Thompson Sampling[5]. These are algorithms that will, provably, result in you picking the best rocket with as few unnecessary explosions as possible.
You can see some related discussion I've had on HN about these algorithms [6].
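For a flavor of what Thompson sampling looks like in this setting, here is a toy two-rocket sketch. The rockets and their true success rates are invented; the point is only the mechanics of the sampling loop.

```python
import random

# Toy Thompson sampling over two hypothetical rockets with unknown success rates.
random.seed(1)
TRUE_SUCCESS = {"rocket_A": 0.97, "rocket_B": 0.90}
counts = {name: {"s": 1, "f": 1} for name in TRUE_SUCCESS}   # Beta(1,1) priors

for _ in range(200):
    # Sample a plausible success rate for each rocket from its current posterior...
    draws = {name: random.betavariate(c["s"], c["f"]) for name, c in counts.items()}
    # ...and launch whichever looks best under that sample.
    choice = max(draws, key=draws.get)
    success = random.random() < TRUE_SUCCESS[choice]
    counts[choice]["s" if success else "f"] += 1

print(counts)   # most of the 200 launches end up on the better rocket
```

Over time, almost all launches go to whichever rocket the accumulating evidence favors, while the other one still gets probed occasionally.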
Yes, if you actually run an experiment with real sample sizes you can begin to form hypotheses around an unknown probability.
But my point was you cannot look at a historical event and say "Well this happened, so that was the chance." That's the basis for the silly internet joke "The chance is 50/50, either it happens or it doesn't."
>But my point was you cannot look at a historical event and say "Well this happened, so that was the chance."
If the event happens repeatedly, you absolutely can! If someone attempts something 20 times, and it works 10 of them, you can conclude that there's an approximately 50% chance of success. You have a sample size! The exact same thing is true for a statement like "the chance of an astronaut dying is approximately 1 in 25". We have the sample size to show that.
>That's the basis for the silly internet joke "The chance is 50/50, either it happens or it doesn't."
No, that's totally different. That's a misunderstanding of priors. What you're doing is more like forgetting that the law of large numbers is a thing.
I’m not forgetting anything. The actual shuttle missions were not a statistical experiment or a consistent action where you can say “well this is what happened so that was the chance of death on an individual shuttle flight.”
It only means that’s your chance if you have the technology from quantum leap (the tv show) and you randomly land in the body of one of the participants.
As I said already, the shuttle and everything around it evolved. The crews changed. Trying to say "the actual chance of death was this based on how many people died" is silly.
>It only means that’s your chance if you have the technology from quantum leap (the tv show) and you randomly land in the body of one of the participants
No, then the event already happened, so you know the outcome with certainty.
> The actual shuttle missions were not a statistical experiment or a consistent action where you can say “well this is what happened so that was the chance of death on an individual shuttle flight.”
Indeed there are confounding factors that make the error nontrivial. This doesn't invalidate the entire experiment.
>Trying to say “the actual chance of death was this based on how many people died.” is silly.
It's exactly as silly as saying "the actual chance of getting heads is based on how many heads you get". The problem with your argument is that that isn't silly. To calculate how likely getting heads is, you flip the coin a bunch of times, see how many heads you get, and then you have your answer (and a confidence level). It's the opposite of silly.
In other words, there's no big difference between "We'll flip a bunch of coins to see how likely we are to pull heads" and "we'll launch a bunch of people into space to see how likely they are to end up dead". The second isn't as rigorously controlled as the first, but that's fine as long as you account for it.
I think the difference is the scope of the problem. A coin flip is a deterministic, known event with two possible answers in most situations.
A rocket or space shuttle launch is like a thousand coin flips, where any one result, sequence of results, or other combination of events results in death. As an added bonus, any number of unknown external events, from ambient temperature, to bird strike, to sabotage, can render the model useless and kill you in some unforeseen way.
The scope is different, but the principle is the same.
Say you launch your rocket 50 times, out of which 3 times it explodes - first time because of a bird, second time because of the legendary ULA Sniper, third time because of internal problems. That 3/50 is still closer to the truth than just assuming "I really don't know" (1/2) or refusing to answer the question - it has huge error bars, but implicitly captures some of the phenomena that make launches go wrong.
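A sketch of how wide those error bars actually are, assuming a uniform prior:

```python
from scipy.stats import beta

failures, launches = 3, 50

# Posterior over the per-launch failure rate with a uniform Beta(1, 1) prior.
posterior = beta(failures + 1, launches - failures + 1)

print("point estimate:", failures / launches)               # 0.06, i.e. about 1 in 17
print("95% credible interval:", posterior.interval(0.95))   # roughly 0.02 to 0.16
```

So "about 1 in 17, give or take a factor of a few" is the honest statement, which is still far more informative than refusing to answer.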
My perspective was probably a little impacted from spending the first truly beautiful day of the summer dealing with a failed "high availability" system. :)
If statistics didn't converge to probabilities in the high-n limit then science would be impossible. In the 15:5 coin example, the conclusion is skewed by a massive prior that coins are fair. When we're talking about things like space shuttle launches, it's not reasonable to expect our understanding of the system to be so good that we can boldly forge through with estimates that disagree with results.
> If I flip a coin 20 times and happen to end up with a 15:5 split that doesn’t mean the chance of heads is 3:1.
Isn't it the best you can say, though? 20 is a small number but when it's all you have, there's nothing you can do about it. Of course we know the "real" probability is a 1/2 because we know how coin tosses work, but if we didn't know it and coin tosses were black boxes, we'd have to go with this 3:1 chance.
The chance of heads being 3:1 is the maximum likelihood estimate, which indeed is the best we can do if we don't know anything else. But usually we do have an idea of reasonable values.
I didn't say it was a probability. Quite literally, if you look at how many people flew and how many of those died, 1 in 25 died. There was a 100% 'chance' of 1 in 25 dying on the shuttle because that's what actually happened. There's no probability there.
I believe he is right though. You are considering the forward-looking probability, and you are right about that, but he is considering the past average. In this case his factual outcome trumps your potential forecast.
That's one very twisted way of computing the chance of dying on the shuttle. If you launched the same crew over and over again they would all have died. If you used 'fresh' people for every launch then the chance according to your computation would have been lower.
And yet, through all of that the underlying machinery would be identical and that is the thing we're trying to evaluate here.
So even though you can compute that particular number it is not a very useful number to calculate.
A better way would be to look at the 2/135 number to determine vehicle safety and to use the number of astronauts on board to get to a generic figure and for each individual astronaut to use the 2/135 multiplied by the number of flights they took to figure out their chances.
Of course these are still very low numbers and the error bars will therefore be large.
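A sketch of that per-astronaut calculation (the flight counts are invented examples; for small per-flight risk the simple multiplication is close to exact compounding):

```python
# Observed shuttle loss rate per flight.
P_LOSS_PER_FLIGHT = 2 / 135

for flights in (1, 3, 7):                            # hypothetical astronaut careers
    exact = 1 - (1 - P_LOSS_PER_FLIGHT) ** flights   # exact compounding
    approx = flights * P_LOSS_PER_FLIGHT             # the "multiply by flights" shortcut
    print(f"{flights} flights: ~1 in {1 / exact:.0f} (shortcut: 1 in {1 / approx:.0f})")
```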
It reminds me a bit of Feynman investigating the Challenger disaster:
>Engineers at marshal estimate it as 1/300, while NASA management, to whom these engineers report, claims it is 1/100,000. An independent engineer consulting for NASA thought 1 or 2 per 100 a reasonable estimate
>Feynman was so critical of flaws in NASA's "safety culture" that he threatened to remove his name from the report unless it included his personal observations on the reliability of the shuttle, which appeared as Appendix F.
Waiting for an accident to occur before determining that the modifications that allowed it to happen were unsafe -- because the estimates are "essentially made up" -- is just naive.
In practice you will want some independent protection systems, and fix problems as they appear in each of those. In the real world, what is naive is expecting you did in fact know all the failure modes beforehand.
The entire problem with rockets is that you can't fit many such systems into them. So the accidents happen.
I believe in such super complicated systems there will be a long list of tail risks, probably too hard to compute.
Also, at this stage of space exploration we should be focusing on rapid development of technology while keeping humans safe, without putting the kind of emphasis on safety that we might want to see in our cars.
Agree with point on rapid development. Engineering is an iterative process, trying to come up with a perfect system is an anti-pattern. Make something that works and then never stop improving it. Russian Soyuz, SpaceX Merlin, iPhone, ICE, and microprocessors are all examples of where you can get with iterative approach.
On a related note, I don't see how BFS atop BFR would have that escape capability. OTOH they seem to be aiming for reliability rather than figuring out how to fail safely.
There are two ways to do it. You can load the fuel first, then have the astronauts board a rocket full of stuff ready to go boom. Or you can load the astronauts first, and load the fuel once they’re strapped in and ready to go.
In the first scenario, there’s a large window where an exploding rocket results in dead crew. The launch escape system doesn’t do them any good if they’re still on the walkway or the elevator. In the second scenario, there is no such window. At all times, either the rocket is unloaded or the launch escape system can save the crew.
If I were flying on Dragon, I’d really want to be in the scenario where I can always be saved.
There isn't anything wrong with your reasoning, it just isn't the reasoning that NASA came up with.
NASA has the entire history of US Spaceflight in its collective memory. It has many memories of accidents occurring during fueling that resulted in the loss of the spacecraft. It has no memories of a crew being saved by the crew abort systems because it has only happened once[1] and that was with a Russian rocket.
Thus, going by what they "know", NASA knows that rockets can blow up when they are being fueled and they know that SpaceX blew up one of their rockets while fueling it, but they do not know if a launch escape system would save astronauts in that situation. Therefore SpaceX's plan is "risky" because they are exposing the crew to a "known" danger and relying on an "unknown" system to protect them from that danger. It is this kind of thinking that keeps NASA moving slowly and carefully.
SpaceX has essentially said, "Trust us, if anything went wrong the crew escape system would save them." Unfortunately they don't have a launch pad they can blow up on purpose to demonstrate that for NASA (it would help if they could). The last time they blew up a rocket while fueling it, it took months to recondition the pad for launches again.
[1] In 1983, an escape rocket pulled the three cosmonauts of Soyuz T-10-1 to safety when its booster exploded on the launch pad. As of 2015, this is the only case where an abort system saved a crew from a launchpad accident. -- https://www.space.com/29260-how-spacecraft-launch-aborts-wor...
> they do not know if a launch escape system would save astronauts in that situation.
1. In the video of the Amos 6 explosion[2], you see the payload fairing falling to the ground with the satellite inside. It happened slowly; the payload did not explode, it burned after falling. The abort system would have saved the crew in that situation. We were also told[3] the Dragon 1 would have been just fine if it had been programmed to open its chutes after the second stage disintegrated under it during ascent on CRS-7. NASA participated in analyzing both failures; they know this.
2. SpaceX have done a pad abort test in 2015[1] that demonstrated the capsule reacting to "something bad happened to the rocket" on the launch pad and saving a test dummy's life.
“It has many memories of accidents occurring during fueling that resulted in the loss of the spacecraft.”
Does it? I did a brief search and only found SpaceX’s explosion and an old Soviet ICBM accident where the engines ignited during fueling. I certainly may have missed something.
It's not really NASA, but Atlas D and similar early vehicles did have fueling failures. But they also failed sitting empty on the pad and a hundred other ways.
Propellant loading is a state change, and so it's theoretically more dangerous. But examples of real problems are rare.
I hit up Wikipedia to see if they discussed any of those. The Atlas had so many problems that they don’t dedicate much space to discussing individual ones, but I did come across this amazing quote:
“Convair engineers noted with some pride that there had never been a repeat of the same failure more than three times....”
A functional launchpad has to survive a launch. It needs tons of concrete, a proper heavy retractable tower, durable cables, water systems, and will be built with a lot of other support structures around it.
Instead of a hundred million dollars making a really nice launch pad, you can make a $1M gantry on a small foundation and some barebones tanks.
That is a good question, perhaps it is the destructiveness of the test. I have seen documentaries where they have purposely crashed a jetliner and they always seem to learn a lot from that, and yet it isn't like the NHTSB crashing cars left and right. It is expensive and takes a lot of real estate to crash a plane.
"it isn't like the NHTSB crashing cars left and right"
Well, not the NTSB, but the IIHS is crashing cars all the time - I've worked on their crash facility. Check out the dates on the videos here: https://www.youtube.com/user/iihs/videos.
But IIHS's rate is dwarfed by individual car manufacturers. One car company I've worked with was smashing four to five cars a day, every day of the week, at their crash center.
I don't think it's necessary. They know roughly how quickly and how violently the rocket might explode, so testing the escape system by itself tells you enough about whether it would survive such an event.
Building a launch pad that is reflective of reality takes millions of dollars and several months of expensive, concerted effort. And it's one-time use.
I’d rather take a higher chance of survivable failure than a lower chance of certain death. Fault tolerance is better than being brittle and trying to never fail.
But I’m just a software guy so this is just uninformed opinion!
I think it really depends on the actual numbers for both "chance" and "survivable." You want to minimize something like the product of "chance of occurrence" (frequency) times "chance of death, if event occurs" (severity). The numbers really matter.
Right, but those numbers aren’t really known. You can estimate, but that’s always error-prone. (NASA famously estimated the Shuttle’s loss of crew risk at 1 in 100,000, when the actual rate was about 1 in 70, for example.)
Still, I’d be really interested in seeing the estimated numbers here.
There was poor math and methodology behind that 1 in 100,000 estimate, though. I don't think that level of error should be held against an estimate made with rigorous and scientifically valid methodology.
We agree! :) I was just pointing out that your first comment expressing a preference for one style over another presupposes the relative magnitude of those numbers.
That is my thinking too. The launch abort system can get the capsule away faster than the rocket blows up from a leak catching on fire. Several people have overlaid the launch abort test video with the one that caught fire on the launch pad to roughly confirm this.
This is more of the current culture of being so risk-averse that you end up with riskier behavior from the 'mitigation' than from the original risk.
That is, if we assume 0% failure rate of the abort system then your scenario works.
If we assume (for argument's sake) a 50% failure rate in the abort system, and a 10% chance during fueling, then there's a 5% failure leading to human deaths.
Without those numbers, it's hard to draw a conclusion.
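A tiny sensitivity sketch of that point, using the same made-up fueling-accident rate and varying only the assumed abort reliability:

```python
P_FUELING_ACCIDENT = 0.10     # assumed, for argument's sake, as in the comment above

for p_abort_fails in (0.5, 0.1, 0.01):
    p_crew_loss = P_FUELING_ACCIDENT * p_abort_fails
    print(f"abort fails {p_abort_fails:.0%} of the time -> crew-loss risk {p_crew_loss:.1%} per fueling")
```

The conclusion flips entirely depending on which abort-reliability number you plug in, which is exactly why it's hard to draw one without data.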
That’s a good point. I suspect the abort system is much less likely to kill you, since it’s fairly simple, but it would be good to actually quantify that.
A bit of a nitpick: the observed failure rate is quite a bit lower. Each launch involves at least two fuelings, since they do a static fire before every launch (which is when the accident occurred). Every booster also gets tested at their facility in Texas, and there have been a lot of scrubs after fueling, so the average number of fuelings per launch is probably 3 or so. That's around 150 total fueling events for the Falcon 9, or around 100 if you only count the ones with densified propellants, making the failure rate so far around 0.6-1%.
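A rough check on that estimate, including how uncertain a rate based on a single observed failure is (the fueling counts are the estimates from the comment above):

```python
from scipy.stats import beta

FAILURES = 1                                   # the one pad explosion

for fuelings in (100, 150):                    # estimated fueling events, per the comment above
    point = FAILURES / fuelings
    upper95 = beta(FAILURES + 1, fuelings - FAILURES + 1).ppf(0.95)   # uniform-prior posterior
    print(f"{fuelings} fuelings: point estimate {point:.1%}, 95th percentile ~{upper95:.1%}")
```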
Not only that, but there's also a bunch of ground crew that are at risk if there's a rocket explosion while crew are being loaded. So two or three times as many people can die, none of which will have access to a LAS for a while.
We don't know how effective the launch escape system is in a pad explosion. It might only save the crew 50% of the time, or 10% of the time.
If we assume the launch escape system is unreliable, then the numbers strongly point towards loading the crew after the rocket has been fuelled. The crew avoid the riskiest part of the process, and they are on the rocket for a shorter amount of time.
One of SpaceX’s stated reasons for using the same thrusters for maneuvering as for launch escape is to give the system many more flight hours. Every unmanned Dragon flight is also a test of the launch abort system.
Can the launch escape system work during fueling? Would the gantry/service tower be in the way? I suspect if the LES were to fire off during fueling it'd be ... pretty dramatic.
I see this as SLS (Senate Launch System) backers feeling threatened by their own bad economics and lack of ability to deliver. So it's time to deploy the PR submarine fleet...
You sound like the kind of person that unironically writes Micro$oft, mistakenly believing it supports their view point. At best, it makes you look juvenile.
The SLS is exactly the rare case where he's right and you're wrong. It is a project whose goal is blatantly to spend money, not to fly any payload on any mission.
After spending tens of billions of dollars on development, you get a rocket that will cost a billion to fly and will never achieve the flight numbers needed to honestly rely on it.
It will likely fly very few times before it is cancelled.
It will be uneconomical to use for any mission it flies, and any mission that uses it will only do that if they are forced to (the mission requirements changed such that only SLS allows the mission to be done, or the project is cancelled) or if the SLS budget pays for the rocket rather than the mission's budget.
>but name-calling makes you look bad regardless of the merits of your argument
You're right. While it can be funny and it can make one feel good doing so, it almost never helps your cause. The people who laugh along are those who already agree with the point. The rest will be more likely to discount anything of merit that is said because of the apparent bias.
I strongly disagree. In this case, I think understanding of the SLS is not something that would be expected from the audience reading his comment, and summing it up as the Senate Launch System, of course lacks nuance, but simultaneously does a great job of explaining what it is, why it's so critically flawed, and finally why individuals of certain affiliations would have surreptitious motivations in something that seems like it ought be as distanced from politics as possible.
Yup. By the time SLS gets its Block 0 flying, around 2020, SpaceX will have had around 100 launches with two vehicle types (F9 and F9H), and maybe be certified for humans in Dragon.
> For a short period, the Soviets achieved even higher densities by super-chilling the kerosene in a rocket’s fuel tanks, ... The latest version of Falcon 9, Falcon 9 Full Thrust, also has the capability of sub-cooling the RP-1 fuel to −7 °C, giving a 2.5–4% density increase.
Man I really wish we would get nuclear propulsion going again one of these days. Fine, fine, not as a launch system, just orbit to orbit... but we could move much larger, more dependable payloads so much further and not have to fiddle with little tweaks like these to surf the tiny margins of chemical propulsion.
It could have been a very mature technology by now. I feel like we created all the nightmares and quit the technology before we got to the actually good uses.
Nuclear propulsion is more efficient but doesn't produce enough thrust for surface launch.
We are stuck with chemical rockets for that. Nuclear could help us get to Mars faster or launch less mass from the surface. It may even be relevant for the upper stage of a launcher. But not for getting off the ground.
For some historical perspective: on the Ferdinand Magellan expedition about 230 men out of 270, including Magellan himself, died of violence and/or disease (1 in 1.2 chance of death).
As Musk has said in the past, a safety first philosophy and space travel don't mix. Safety third might work.
Then it sounds like Musk is disagreeing with SpaceX. From the article:
> SpaceX says it is serious about flying people safely and is going to great lengths to study every aspect of the vehicle, down to individual valves, so that it will meet and surpass the 1-in-270 chance-of-death metric, said Benji Reed, the director of SpaceX's commercial crew program.
> When Reed was down at Cape Canaveral, Florida, on a recent trip, he came across a room on a special tour where the astronauts' families from the shuttle program used to wait ahead of the rocket launch.
> They were stunned to see that a whiteboard with drawings made by the children of the crew lost in the 2003 Columbia disaster was still there, preserved.
> "That really drives it home," Reed said. "This isn't just the people that we're flying — these are all of their families. So we take this extremely seriously, and we understand that our job is to fly people safely and bring them back safely. To do that you have to humanize it. You have to see them as your friends and as your colleagues."
> SpaceX says it is serious about flying people safely
> So we take this extremely seriously
Musk didn't say they don't take it seriously, he said it's not the highest priority.
But also there's a difference between what SpaceX says and what it does. I don't think we'll fully know the extent of SpaceX's safety considerations until some astronauts die during a launch.
NASA are cautious because they've had people die, and that lives long in the memory. SpaceX engineers haven't had to go through the process of finding out which one of them missed the bug that killed people.
Not to mention, SpaceX wants to do routine suborbital passenger flights with the same rocket they use for flights to Mars. Safety has to be an extremely high priority for that to happen.
I sure hope the 16th century is not the standard of safety for the 21st. Who cares what risks Magellan took? Should the standards of medical practice and sanitation from the 16th century also be applied now?
There's a deep moral responsibility to respect life, and a calculation about benefit from the risk. Getting other people killed for your benefit is wrong. As a society, we need to do everything possible to protect them. The benefits are exciting, but not worth saving money on cheaper fuel.
SpaceX's goal is to colonize Mars. And the first expedition there will undoubtedly be extremely dangerous, even just ignoring the dangers of getting there. When you're there you're literally 140 million miles away from the nearest aid, and you're doing something that literally no human has ever done before in an environment where if you breathe the atmosphere, you die. If you run out of electricity, you die. If you're exposed to the pressure of the atmosphere, you die. If your food resources go bad, you die. If your air filtering system breaks, you die. And so on.
And I would wager that they will have extremely qualified people lining up out the door to take on that risk, and even pay for it if necessary. The ability to be part of something incredible and the immense challenge and possibility there are going to be an incredible driver for the right sort of people.
For reference look at our developments leading up to the Apollo mission. The danger there cannot be overstated, and guys like Buzz Aldrin aren't just adrenalin junky pilots. He received his doctorate from MIT. His thesis (Line-of-Sight Guidance Techniques for Manned Orbital Rendezvous) having the dedication, "In the hopes that this work may in some way contribute to their exploration of space, this is dedicated to the crew members of this country’s present and future manned space programs. If only I could join them in their exciting endeavors!".
I have trouble seeing this as anything more than irrational territorialism and/or a disingenuous defense of the pork barrel status quo.
If we are unwilling to try something new because we decided fifty fucking years ago that it was a bad idea, we're done. Time for NASA to hang up its space helmets and watch China (or possibly SpaceX, BO, etc. using private funds) take the next steps.
I wonder how you get a number like "1 in 270" for the acceptable chance of death. It's not a number you'd choose as a starting point in a top-down calculation. It must be a bottom-up calculation from a sum of existing risks. I'd be curious if anyone knows more about it.
"The thing that drives the 1 in 270 is really micrometeorites and orbital debris … whatever things that are in space that you can collide with. So that’s what drops that number down, because you’ve got to look at the 210 days, the fact that your heat shield or something might be exposed to whatever that debris is for that period of time. NASA looks at Loss of Vehicle the same as Loss of Crew. If the vehicle is damaged and it may not be detected prior to de-orbit, then you have loss of crew.
Like any number of things, they probably want it to work at least as well as they did before.
I see this in product development all the time. A customer has performance requirements. You sell them a product that you've characterized and documented its performance. Then the next time the customer is doing RFQs, you get your product specification back as a requirements specification. Regressions are not allowed even if you previously exceeded their needs. In this case, I suspect the 270 is something similar based on historic failure rates.
Possibly Fault Tree Analysis. It's commonly used in the aerospace industry, and SpaceX explicitly mentioned it while searching for the Amos-6 problem. Probably only one of several methods, though.
No, I think it's the other way around: you set a target safety number as the customer, and the engineers use a fault tree to try to reach it. So this number can't come from a fault tree (unless it's a weird comparison like "we had 1/90 on our fault tree in our internal projects last time, we want private companies to reach 3 times better").
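For readers unfamiliar with the term, a fault tree is just a roll-up of basic-event probabilities through AND/OR gates toward a top event like loss of crew. A minimal sketch with an invented gate structure and invented numbers:

```python
# Minimal fault-tree style roll-up. Gate structure and probabilities are
# invented for illustration only.
def or_gate(*probs):     # top event occurs if any input occurs (independent inputs)
    p_none = 1.0
    for p in probs:
        p_none *= (1 - p)
    return 1 - p_none

def and_gate(*probs):    # top event occurs only if all inputs occur
    p_all = 1.0
    for p in probs:
        p_all *= p
    return p_all

p_engine_failure = 1 / 1000
p_debris_strike  = 1 / 400
p_abort_fails    = 1 / 20

# Loss of crew if (debris strike) OR (engine failure AND abort system fails)
p_loss_of_crew = or_gate(p_debris_strike, and_gate(p_engine_failure, p_abort_fails))
print(f"1 in {1 / p_loss_of_crew:.0f}")
```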
I don't get it. I imagine HN feels there's between 5% to 30% chance that humanity kills itself in the next 300 years.
If we accept those odds, isn't there a point where space travel becomes our best plan, regardless of dozens, hundreds, or even thousands of astronaut deaths?
Once upon a time, thousands of people died fishing for ambergris. Whale anus wax. People died all the time.
Do we really think people are going to die exploring space? Yeah, there very fucking well are going to be deaths exploring space. We can take precautions as much as practical, but this zero-risk-tolerance philosophy is not going to work. We haven't done anything of note when it comes to manned space exploration in half a century. Space is so dangerous that the only way to keep risk to a minimum is to not go. And that's basically what we've done.
It's just that now these things are televised and PR is such a disaster when you have somebody die on air. It's not like a plumber is going to be watched by the entire world as he works in some kind of risky situation. People die doing that, too. People die doing anything. But if it's in the limelight like space exploration, it's absolutely unacceptable for anyone to risk death. That's why. Because we're watching and we can bear it only if we don't have to know about it or watch it happen.
We've got to start taking a hell of a lot more risk. We're taking enormous risk by not furthering space exploration. But risk of inaction is a strange thing, psychologically speaking. People don't fret about it, even though it's a real killer. Our climate is about to be the biggest killer of all, and we only feel vaguely uncomfortable about doing absolutely nothing.
And when I talk about taking risk, I'm not talking about go fever by the way--dumb political risk that leads to ignoring engineers. I mean the kind of risk that got us to the moon in the 60s. People died doing that. Mostly Soviet people, but a few Americans, too. But it was important enough. And it got done. We really shouldn't have stopped there.
It's not really NASA's fault, but they haven't gotten shit done with manned spaceflight since then. Haven't done a damn thing. Yet people have still died doing it. I hope private industry at least gets somebody above the Van Allen belt in the coming years. We need to get back on the horse before we lose our chance for good.
> isn't there a point where space travel becomes our best plan
Depends on the scenario you are thinking about. But even if our goal is to try to preserve human life after a catastrophe that makes the surface of the earth uninhabitable and the atmosphere unbreathable, it's still probably easier and more rational to focus on deep sea colonies rather than space colonies.
It seems to me there is a reasonably low cost solution to demonstrate safety: SpaceX should load and unload the fuel multiple times for its human-qualification spaceflights. If SpaceX is correctly confident fuel loading is safe, the extra handling won't result in incidents. This would be a fairly minor expense given nothing is consumed or destroyed -- just handling costs and incidental losses due to evaporation and so on.
It's a good point that indeed there might be something different at actual-launch-time versus testing time, but I'd imagine that doing the whole procedure a few times (as complete as possible) would reveal any important unknown unknowns if present. (Where I define important as 'reasonably likely to occur': if they are indeed so likely, they'll probably occur with repeated tests.)
I'm no scientist, but I'd expect there is a long list of things SpaceX has done that violate the "last 50 years of national and international safety standards".
Wanna know why it's fifty years of safety standards, not sixty and not forty?
Because fifty years is how long it has been since we have done anything meaningful with manned space flight. We went back to the shallow end of the pool.
That's a good point. Closing the loop probably did more for the advancement of space travel than anything else because it allows you to find out what almost failed.
So, what I don’t understand is why NASA doesn’t make this really simple — allow SpaceX to use the “load-and-go” process on flights that don’t involve putting living beings into space, and when they have repeatedly and sufficiently proven how safe the system is, then start talking about whether or not it could be used to launch humans.
Don’t even talk about using the potentially more dangerous system until it has enough of a safety record.
SpaceX has been using supercooled propellants (and "load-and-go") for every launch since early 2016, including NASA cargo launches. It's been used over thirty times by now. Whatever additional assurance that NASA's advisory board wants, another few launches are unlikely to provide it.
Somewhat true, I'd say that risk-averse is more appropriate than safety-obsessed. For such organizations, being "safety-obsessed" usually means a massive amount of bureaucracy around certain known risks, but such organizations don't tend to be consistently good at finding and addressing unknown risks.
SpaceX has lost 2 F9s so far. The first exploded in flight, but the Dragon capsule could have been retrieved if the parachutes had not been disabled during the ascent phase. The second was lost during fueling for the static fire test. In both cases, no crew would have been in danger, and of course the causes of the failures have been addressed.
But it will at least take another 12 months till the first planned crew flight. If everything goes well, the Falcon 9 will have about 30 more flights till then. A lot of opportunity to build a safety record.
> the Dragon capsule could have been retrieved if the parachutes had not been disabled during the ascent phase
The chance of it being retrieved would have been much greater if the parachutes hadn't been disabled. I expect a forum of engineers to know better than to assume everything would have worked correctly.
Everyone expects the LAS (launch abort system) to not be very reliable. So you're a bit optimistic about the outcome of the second failure: big danger for the crew.
It was a static fire, so crew wouldn't be on board anyway. 100% no danger to crew in that case. LAS is likely to be very reliable for ground aborts. Design criteria insist on high (say, 95%) total reliability for LAS.
Part of the reason you might want to do such static fires is as a way to trigger (and thus reveal) such problems without endangering crew.
NASA has a proven record of putting lives at risk (Challenger and others). It's a dangerous job. As for hypothetical danger, that's more of a clickbait "analysis", timely after the recent lash-out by Musk on a Tesla conference call, IMHO.
I for one would rather sit on top of a rocket being loaded but have a LAS ready to go than go anywhere near a loaded one with no way to escape within a few seconds.
Just goes to show that it's way easier to change technology than culture. Even Congresscritters are still thinking they're calling the shots. If NASA doesn't want to use SpaceX, they're free to pass. They can go with Boeing (some of the very people pointing fingers) or the Russians.
The "culture of safety" at NASA had more to do with a "culture of protecting funding lines" by ensuring that parts of the shuttle were done in important congressional districts.
I mean, the only safe ship is a ship in a harbor, and that's not what ships are for... but I am skeptical that a for-profit company run by Elon Musk has a rigorous safety culture, particularly given he apparently has no concerns with libeling people killed by his firm's Autopilot software.
A few months ago I had a chat with one of the engineers working on the SLS during a NASA outreach presentation. Super defensive about it and dismissive of SpaceX and their accomplishments. The goal of the SLS missions before 2030, building an orbiting moon space station, isn't even that ambitious or scientifically valuable.
2 years ago, I was having a similar chat with someone who was a retired ULA veteran of ~30 launches (I forget their exact title). But they didn't want to entertain the idea that SpaceX even had a business model: 'The market that they're aiming for is only a few launches a year.'
> The market that they're aiming for is only a few launches a year
In 1958, the chairman of IBM stated 'I think there is a world market for about five computers'.
He wasn't wrong, because back then computers were extraordinarily expensive and occupied entire buildings. There was maybe a worldwide market of about 5 of those kinds of computers.
What happened though was that due to technological advances, computers became cheaper, smaller, more efficient and more reliable and it drastically changed the entire market, and now almost everyone has a computer in their pocket.
The market for rocket launches will also significantly take off if SpaceX can bring the cost down and the reliability up.
But is the densified fuel actually needed to fly the Dragon 2 to a LEO destination? Can SpaceX do their Falcon 9 HSF missions the old fashioned way, making the safety of load-and-go a moot point?
I had this same thought! Presumably, ISS missions do not need total fuel capacity so boosters could be fueled only enough for mission plus landing plus safety margin. Even if they did not wish to change loading procedures they could load densified fuel and then load the astronauts. The delay to load crew would allow the fuel to warm up and expand but so what?
Sending men into space is a vanity mission, not a science mission.[1] Unmanned missions solve the risk problem for SpaceX-type progress and yield more answers for a lot less money.
[1] With the exception of politically valuable missions such as the International Space Station, which employed Russian scientists in a bid to keep them from selling their skills to undesirable state actors.
That's just oversimplified and not true. Sending men into space in the sixties and seventies got so much more done than automated missions to the moon because it forced us to work harder and solve more important problems. We're still reaping the rewards gained from technology developed to solve those problems.
Sending probes to learn trivia about the universe is not as important as developing better ways to support and transport human life. At all.
Shouldn't this headline read "could put lives at unnecessary risk" ?
All rocket launches are going to carry some amount of mortal risk. The headline implies this isn't the case except for SpaceX's technology, which is hogwash.
Challenger, Columbia, and now NASA is concerned about astronaut safety? Hah.
There is always risk; simple rockets are (probably) less risky than the shuttle program. More unmanned launches will help quantify that risk; NASA can then determine whether the risk is acceptable, or not.
> Robert Lightfoot, the former acting NASA administrator, lamented in candid terms how the agency, with society as a whole, has become too risk-averse. He charged the agency with recapturing some of the youthful swagger that sent men to the moon during the Apollo era.
> "I worry, to be perfectly honest, if we would have ever launched Apollo in our environment here today," he said during a speech at the Space Symposium last month, "if Buzz [Aldrin] and Neil [Armstrong] would have ever been able to go to the moon in the risk environment we have today."
That is a really underhanded way to say, "Let's kill more astronauts so we can get to space faster."
The folks who risked their lives and, in some cases, gave their lives to get us to where we are today deserve better.
Disagree. We make trade offs wrt safety all the time. Driving in a car is pretty dangerous, yet many of us drive our cars every single day. Because the payoff is worth the risk. Those quotes speak to this trade off. They are advocating for a stance that is willing to take more risk, because they assume that there are tremendous benefits to be had by doing so. They are not advocating taking risks for the sake of killing astronauts.
I get what you're saying but in this case I have to ask, can we really not get those benefits without risking lives? Would it become impossible? Or would it just take longer?
Can robots do it?
I hate arguing this position because I want to go to space physically, in person, in my lifetime. But it doesn't make sense to send humans if robots can do the job, because space is inimical to life and life support is very hard and expensive and heavy.
Risk lives to stave off an asteroid but not to mine it.
- - - -
I don't want to get into a side-thing about driving, but I don't drive in part because I see it as a Faustian bargain: we get transportation that's so convenient at the cost of forcing people to participate in a death-and-mayhem lottery. Traffic is a meat-grinder.
We've sent many robots to space and to Mars, but they are very limited in their scope and ability, where a human can have near limitless scope and ability.
Watch some of the mars rover videos and it becomes clear how frustrating it must be for the scientists on earth to do research on mars remotely.
More frustrating than being blown up on the launch pad!?
Mark S. Miller had a site with a list of "adages", here's one I really liked:
"A Computer's Perspective on Moore's Law: Humans are getting more expensive at an exponential rate."
This is true, although slower, for robots.
Look, if you want to risk your life to go to space, I'm fine with that. I do too.
But if NASA or Musk or Bezos or whoever want to unnecessarily risk other peoples' lives to make a buck faster, or even just explore Mars, that's irresponsible and kinda fucked up, is my opinion.
For example, I think Mars One is awesome! (Stupid and doomed, but still awesome.) Colonizing Mars to have a "backup" Earth is important enough (in my opinion) that it's noble to risk your life to achieve it ASAP.
Mining an asteroid, although I personally would love to try it, is not quite so noble, eh?
The less "risk-averse" space program was happening in a very different context: the "Space Race" [1] which was in many ways a continuation or extension of the Cold War, which was in many ways a continuation or extension of WWII.
Space science is cool and useful, but is it so crucial to humanity's progress that we shouldn't be super extra careful with our astronaut's lives? Waiting an extra year or two or ten to learn about how fruit fly DNA mutates in space seems fine when the alternative is maybe killing astronauts with a decent probability.
> What about letting the astronauts decide what level of risk they are comfortable with?
That sounds like an easy solution and philosophically appealing in a libertarian sort of way.
However, it doesn't work. People can be put under enormous pressure to make the choice that satisfies others - their boss, their peers, their community, etc.; it's commonsense experience, there is a lot of research about that, and a lot of implemented practices to prevent it in safety situations. How many priests in the Catholic Church or employees at Penn State allowed child rape to continue - something which provided zero benefit - under pressure? Or remember the article posted here a few months ago about the sea captain who, apparently under pressure, sailed into a hurricane and killed his crew.
Also, people don't always make good decisions; they shouldn't die for it, we shouldn't exploit them, and we don't want a system that attracts and promotes people for making bad decisions.
> Also, people don't always make good decisions; they shouldn't die for it, we shouldn't exploit them, and we don't want a system that attracts and promotes people for making bad decisions.
So, your alternative to letting someone else make their own decision is you (or someone else) making a decision for them?
All of the examples you just listed cut against your argument as much as they cut for it.
Bosses, peers, communities, priests in catholic churches or penn state employees _all_ have a history of abusing power as much as anyone else.
So, since abuse of power exists, and people can make good and bad decisions, it seems reasonable to expect a certain amount of bad decisions.
It seems safer to let individuals make their own bad (and good) decisions, rather than centralizing decision-making in a group of people that we hope will make good ones, but that we must expect will make a certain number of bad ones.
Enshrining bad actors in a position of power is likely to have much worse outcomes than letting individuals occasionally make bad decisions.
Society seeks a balance, generally. Think of labor regulations: Factories can't risk employees' lives. They can't ask employees if they want to risk dying to do their jobs; it would be insane.
Astronauts are a bit different: Highly skilled, with relatively good knowledge of the risks (but they are not professional engineers, for the most part, and are not immersed in the engineering of their equipment), and sometimes the reward is much higher (e.g., being the first to walk on Mars).
But the same mechanisms of risk, organizations, and the human response to them apply. Think of it: the proposal is to risk their lives so we can save money on fuel. That's disgusting.
Actually it doesn't save them anything. In fact, using densified propellant where it's not actually needed to get sufficient performance for the mission (i.e. an ISS launch) will cost more in fuel, as SpaceX always fully fuels the rocket even if all that fuel is not needed (it provides margin in case of issues and for landing the rocket).
Arguably it'd be riskier to use non-densified propellants for these missions, as then they'd be using different procedures than the cargo missions they launch regularly, and would have less experience with them and fewer opportunities to find and fix any issues on a cargo mission before they affect a manned one.
The benefit is for satellite launches, the increased performance allows for launching heavier satellites.
It's highly debatable whether there is a higher risk to human life as well. Unlike with rockets where you fuel up before the astronauts board, you don't have people approaching a fueled-up rocket which could explode with no possibility of escape. Instead, if there is an issue while fueling up, the astronauts are in a capsule with an escape system which should pull them safely away from the rocket if there is a problem.
> Waiting an extra year or two or ten to learn about how fruit fly DNA mutates in space seems fine
We're beyond that. NASA recently (within the last couple years) concluded an experiment where an astronaut stayed an extended period in space so they could study this, by sending one of a pair of identical twin astronauts. [1]
The experiment I'm talking about is to understand parasite host dynamics in microgravity and also to validate some cool new life sciences hardware for the ISS (and I thought I remembered something about radiation exposure from talking to the PI, but I don't see it in the press release, so maybe not). It's pretty interesting, but I still question whether it'd be worth decreasing safety parameters.