I live several miles from a Minuteman silo in Montana, maintained by Malmstrom Air Force Base. The underground cabling between sites is also an interesting read (https://minutemanmissile.com/hics.html). Anytime I want to dig on my property, I have to make sure it won't interfere with their pressurized cables. I have heard a story from someone who accidentally cut a cable, and Malmstrom AFB was able to locate the break and respond rapidly. I am a volunteer firefighter, and our station has a VHS tape and a paper guide titled "Incident Guide for Missile Field Fire Response" provided to us by the DoD regarding our role in responding to fire incidents near or at a silo. A year or so ago, we did respond to a fire near a silo, but it occurred entirely outside the security fencing. My understanding is that the personnel at the silos also have their own ability to respond to fires.
But isn't it the case that there are typically no personnel at the silo (or Launch Facility, LF) itself? Instead, the missile wing commanders are at Launch Control Centers (LCCs) some distance away, and each LCC commands some number of LFs remotely.
Good question. They definitely do have launch control centers, and all available information online does seem to indicate that the silos themselves are unmanned. My understanding was that there was some security on site, but that is just based on secondhand stories I've heard, and may not be true.
I do see military vehicles traveling to and from the one that I am close to semi-regularly, perhaps once a month or so on average.
As for fire response, they likely have equipment for that at the control centers as well.
Given we are talking about nuclear missiles, I'd like to think that while unmanned and housing decades-old tech, the sites themselves have state-of-the-art, top-secret levels of security. Then again, I grew up in northern VA, and as kids we all assumed the Pentagon had missiles to protect it from attack. Then 9/11 happens, things like Jan 6, and you lose that confidence.
The missile silos and control facilities are connected in a sort of semi-mesh topology, so any one cable break is unlikely to cause a communications failure. There are backup launch control interfaces available via airborne platforms (specifically the E-6 today) as well.
Digging into one of the cables is going to get you a prompt and unpleasant visit from base security.
The idea behind inertial navigation is to keep track of the missile's position by constantly measuring its acceleration. By integrating the acceleration, you get the velocity. And by integrating the velocity, you get the position.
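A minimal sketch of that double integration, with made-up sample rates and noise levels, just to show the mechanics (and how sensor noise gets integrated right along with the signal):

```python
import numpy as np

# Dead-reckoning sketch: integrate measured acceleration once for velocity,
# twice for position. All numbers here are invented for illustration.
dt = 0.01                                   # assume a 100 Hz accelerometer
t = np.arange(0.0, 10.0, dt)
accel_true = 0.5 * np.sin(t)                # hypothetical 1-D acceleration (m/s^2)
accel_meas = accel_true + np.random.normal(0.0, 0.02, t.size)  # sensor noise

velocity = np.cumsum(accel_meas) * dt       # first integration  -> m/s
position = np.cumsum(velocity) * dt         # second integration -> m

# Note: the noise is integrated twice as well, so position error grows
# with time; that is the drift problem discussed below.
```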
This sounds like it couldn't possibly work (surely all the little errors compound?) but apparently it's how Apollo navigated
That is how all self-guided weapons systems worked before GPS was viable. Many still retain that capability as a fallback. Notably, the Tomahawks fired during Desert Storm had to transit over Iranian airspace because they needed the mountainous terrain to correct for their inertial drift before turning toward their targets over the flat Iraqi plains.
GPS can be jammed (see the Russia-Ukraine war), so inertial systems are still very important for rockets. For example, some HIMARS rockets start with GPS and then rely only on inertial guidance as they get close to the target.
HIMARS relies on inertial navigation for the entire flight and uses GPS updates to course correct. If GPS is blocked for a sufficient amount of flight time, even with the inertial navigation, the accuracy can become unusably low.
This is how the Russians have been throwing double-digit percentages of launches off course.
Terminal guidance since ~1995 on higher-end weapons has switched to hybrid inertial + scene matching (various sensor types).
F.ex. the 90s Tomahawk used terrain contour matching to orient itself
For more details see https://apps.dtic.mil/sti/tr/pdf/ADA315439.pdf (US translation of a mid-90s Chinese survey of the guidance space, but it covers the material and is publicly available)
Afaik, most modern systems use infrared target matching for final course correction. (Initially developed to allow anti-shipping missiles to autonomously prioritize targets, but now advanced enough to use in land scenarios as well)
I don't think ATACMS or GMLRS missiles have any terminal guidance apart from their aim point. The GMLRS missiles that carry German SMART munitions do, technically, since the SMART munition has its own targeting system.
It wouldn't make much sense to me, as most ATACMS warheads are area-based, not point-target based, so they wouldn't be expected to aim at a single target. Also, these systems are relatively cheap compared to things that DO have such guidance.
Almost all. Walleye television-guided glide bombs used edge detection on a television signal to aim themselves in. A human would designate a target at the start but then the bomb would autonomously track the target. An optical fire-and-forget system developed in the 1960s.
Sidewinders are another example. Both developed at China Lake.
When Nintendo Wiimotes first appeared, they were some of the few devices at the time with cheap MEMS accelerometers and gyroscopes that were programmer-friendly.
I remember taping two together back to back and integrating acceleration across them. That's when I learned Kalman filters. It was accurate enough that I could throw it across my desk and measure the desk length :)
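For anyone curious what that looks like, here's a toy 1-D Kalman filter in the same spirit (not the actual Wiimote code; the noise levels and the occasional position fix are invented):

```python
import numpy as np

# Toy 1-D Kalman filter: state is [position, velocity], the accelerometer
# reading acts as the control input, and a noisy position measurement
# (e.g. a known desk edge) occasionally corrects the drift.
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
B = np.array([[0.5 * dt**2], [dt]])        # how acceleration enters the state
H = np.array([[1.0, 0.0]])                 # we observe position only
Q = np.eye(2) * 1e-5                       # process noise (guessed)
R = np.array([[0.01]])                     # measurement noise (guessed)

x = np.zeros((2, 1))                       # initial [position; velocity]
P = np.eye(2)                              # initial uncertainty

def predict(x, P, accel):
    x = F @ x + B * accel                  # propagate state with the IMU
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

x, P = predict(x, P, accel=0.3)            # run per IMU sample
x, P = update(x, P, z=np.array([[0.0]]))   # run when a position fix arrives
```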
When the MacBook got its acceleration sensor, I hacked up a little program to estimate velocity, with a button to reset at stoplights. Some friends drove me around, and it worked poorly: pretty OK on the highway, but awful in the city.
I think if I kept messing with it, it'd get a lot better, but I sorta lost interest. This was more of a fun weekend toy.
I think all phones have them, and they might be reachable through Chrome/Safari. It is kinda fun to play with, but you'll probably hit sampling rate errors pretty quick; you gotta guess the shape of the curve between datapoints.
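The "shape of the curve between datapoints" point is easy to see numerically; this little comparison (toy signal, arbitrary rates) shows how the guess you make between samples changes the integration error:

```python
import numpy as np

# Integrate a sine wave sampled coarsely, guessing the curve between points
# two different ways: hold-last-value (rectangles) vs. straight lines
# (trapezoids). Signal and rates are arbitrary.
t_fine = np.linspace(0.0, 10.0, 10_001)
truth = np.trapz(np.sin(t_fine), t_fine)       # ~1.839 (= 1 - cos(10))

t = np.linspace(0.0, 10.0, 21)                 # only 21 coarse samples
y = np.sin(t)
rect = np.sum(y[:-1] * np.diff(t))             # rectangle rule
trap = np.trapz(y, t)                          # trapezoid rule
print(truth, rect, trap)                       # trapezoid lands much closer
```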
It is how Apollo navigated, although both the ground (via ground tracking) and the crew (by locating stars through a sextant, with the Apollo computer having a database of the positions of several dozen bright stars) could update the current position throughout the flight.
Apollo used star sightings to check the accuracy of the gyros that measured which way the spacecraft was pointed. The stars could not be used to determine position like a ship at sea could do.
Besides inertial navigation, they had a transponder that would echo back a continuous pseudorandom bit stream, and the delay gave a precise measurement of distance.
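A sketch of how that kind of pseudorandom ranging works in principle (chip rate and sequence length invented; the real Apollo ranging details differ): correlate the echoed stream against your local copy, and the correlation peak gives the round-trip delay.

```python
import numpy as np

# Pseudorandom ranging sketch: the receiver correlates the echoed bit stream
# against its own copy; the lag of the correlation peak is the delay.
rng = np.random.default_rng(0)
chip_rate = 1e6                            # 1 Mchip/s, assumed for illustration
prn = rng.integers(0, 2, 4096) * 2 - 1     # +/-1 pseudorandom chips

true_delay = 137                           # delay in chips, unknown to receiver
echo = np.roll(prn, true_delay)

# Circular cross-correlation via FFT peaks at the delay.
corr = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(prn))).real
measured = int(np.argmax(corr))            # -> 137

c = 299_792_458.0
distance_m = c * (measured / chip_rate) / 2   # one-way distance
print(measured, distance_m)
```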
Thank you for the correction, but are you sure that is accurate? I was definitely under the impression that although their position was normally updated by the ground (to the AGC, via their uplink capability), and the sextant was normally used to determine their orientation, the astronauts could use their optical equipment and calculations to determine their position as well as their orientation, albeit with less precision. This NASA website (https://www.nasa.gov/history/afj/compessay.html#:~:text=Opti...) seems to say as much:
"Optical navigation subsystem sightings of celestial bodies and landmarks on the Moon and Earth are used by the computer subsystem to determine the spacecraft's position and velocity and to establish proper alignment of the stable platform."
"The CM optical unit had a precision sextant (SXT) fixed to the IMU frame that could measure angles between stars and Earth or Moon landmarks or the horizon. It had two lines of sight, 28× magnification and a 1.8° field of view. The optical unit also included a low-magnification wide field of view (60°) scanning telescope (SCT) for star sightings. The optical unit could be used to determine CM position and orientation in space."
The errors would be less of a problem than Apollo's, when your longest possible flight is only 45 minutes or so. And I'm not sure, but I'd guess the ballistic portion of the flight is uncontrolled (since the steering comes from the rocket motors), so perhaps the first few minutes are all it needs to maintain accuracy for?
The little errors do compound, but the errors have been made progressively littler; a modern ring-laser gyro INS has a drift of one millidegree per hour or less.
Or you can add an external correcting factor, such as the Trident's astronav system that takes star-shots to recalibrate the INS.
Isn't it about finding the time difference between pseudorandom coded signals? Granted, the satellite positions and paths need to be known, which is another part of the puzzle. That involves some calculus, I'm sure.
Yes but measuring diffs in either the pseudocode itself or the underlying carrier wave is basically measuring relative velocities wrt each sat and the observer.
It's all summing dx_i/dt + dy_i/dt + dz_i/dt over the i paths between satellites and ground stations (or more receivers for differential, RTK, or VRS-style work). [2]
Which reduces, most of the time, to summing ΔX_i + ΔY_i + ΔZ_i + Δt_i (clock errors) for the i paths between each satellite and the ground receiver.
You should recognize that transformation if you've ever taken calculus, even if you don't integrate every time you get a fix.
Part of what I describe as math 'magic' is that you can cancel out most of the unknowns and most of the unsolved calculus if you add a second fixed receiver.
Google and Apple location services 'cheat' and do this by subbing in a nearby wifi MAC with known coordinates, which for them is good enough. Augmented GPS from the FAA or DOT or Coast Guard etc. works the same way, but with real GPS receivers on the ground in real time, obviously without having to substitute anything.
Either way- the extra known variable greatly simplifies the math via canceling-out terms.
Plus there are both closed and open form solutions developed since initial GPS deployment that allow solving without direct integration.
Chapter 12 of [0] Surveying gets into the math, including transformations, if you want to see the math details.
Or [1] GPS by van Sickle for a good overview of the various methods/ technologies. (Also survey-centric).
[2] despite wgs84 and lat/lon being associated as default 'GPS coordinates', the 'raw' gps system data is xyz Cartesian in feet, then transformed to lat lon or whatever else.
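For the curious, here's roughly what the standard single-receiver fix looks like once you skip the direct integration and go straight to the linearized solve: guess a position, compare predicted pseudoranges to measured ones, and iterate. All coordinates and ranges below are invented for illustration.

```python
import numpy as np

# Linearized GPS fix (Gauss-Newton): solve for receiver x, y, z plus clock
# bias from four or more pseudoranges. Satellite positions are made up.
c = 299_792_458.0
sats = np.array([                      # ECEF satellite positions, meters
    [15600e3,  7540e3, 20140e3],
    [18760e3,  2750e3, 18610e3],
    [17610e3, 14630e3, 13480e3],
    [19170e3,   610e3, 18390e3],
])
truth = np.array([1113e3, 4668e3, 4205e3])    # hypothetical receiver position
bias_m = 85.0                                  # receiver clock error, in meters
pranges = np.linalg.norm(sats - truth, axis=1) + bias_m

x = np.zeros(4)                                # [x, y, z, bias] initial guess
for _ in range(10):
    rho = np.linalg.norm(sats - x[:3], axis=1)
    residual = pranges - (rho + x[3])
    # Jacobian rows: negative unit line-of-sight vectors, plus 1 for the clock
    G = np.hstack([-(sats - x[:3]) / rho[:, None], np.ones((len(sats), 1))])
    dx, *_ = np.linalg.lstsq(G, residual, rcond=None)
    x += dx

print(x[:3], x[3])    # converges to the true position and the 85 m bias
```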
My assumption was that GPS doesn't use dead reckoning to get a fix (other than the satellite paths). Do receivers use the Doppler effect to directly measure velocity?
Dead reckoning isn't really the right term: there are broadcast and published ephemerides for the satellites, of various qualities (both predicted and observed, and then further levels of correction days or weeks later, for high-precision/accuracy or strictly static observation work).
The doppler mostly comes into play with the small delta-t errors, but again, more math magic cancels most of it out in most cases, or what remains is negligible.
It's more of a signals/sync thing that gets into antenna design and (to simplify) getting all signal cycles from the various satellites working within a single aligned synced cycle, if that makes sense.
One reason the old GPS units needed a long time to get an initial fix was waiting to download the broadcast, at ~bits/sec. This can now be downloaded much more quickly via the internet or other methods.
And there are dozens of other similar shortcuts possible depending on receiver capabilities/connectivity/observation methods.
Which is to say that there's no one 'right' way to get a fix, and the 'most correct' original design was the ~hour-long broadcast download. And no one does that anymore.
But just about every method (I'm aware of) is derived one way or another from the general eqns I gave above.
(But my exposure is almost entirely geodesy, engineering, and surveying, and my military (encrypted) knowledge comes from my PLS instructor being ex-Army intelligence, not hands-on. Which is also why I am at least aware of so many of the missile tie-in issues.)
And there are signals processing and CS tricks also, which I only barely grasp.
But if something says it starts with baseline (propagating signal path) lengths to get position, it's skipping the step of how it measures/ estimates those initial baseline lengths.
I also could not believe inertial navigation systems worked as well as they do when I first learned about them. At some point in time the most sophisticated IMUs were actually export-controlled!
Maybe this has changed or is ineffective now that smartphone/quadcopter IMUs have caught up.
Advanced IMUs are still export controlled and the state-of-the-art is classified. The US military considers this a cornerstone technology and has invested heavily in R&D over the years. The IMUs that are widely available commercially have improved significantly over the years but so have the military versions.
> Maybe this has changed or is ineffective now that smartphone/quadcopter IMUs have caught up.
They did not catch up. There are two kinds of IMUs: one where you have to account for the rotation of the Earth during signal processing, and one where there is no point because it will be lost in the noise anyway. The smartphone/quadcopter IMUs are the second kind. The first kind is still export controlled.
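A back-of-envelope version of the "two kinds" distinction, using rough, representative drift figures (not datasheet values):

```python
# Earth turns at ~15 deg/hour. Compare that to typical gyro bias drift:
earth_rate = 360.0 / 23.9345  # sidereal day in hours -> ~15.04 deg/hour
mems_bias  = 10.0             # deg/hour, rough guess for a phone/quadcopter gyro
nav_bias   = 0.01             # deg/hour, rough guess for navigation grade

print(earth_rate / mems_bias)   # ~1.5x: Earth's rotation is roughly the
                                # size of the drift, so why bother modeling it
print(earth_rate / nav_bias)    # ~1500x: rotation dominates and must be
                                # accounted for in the signal processing
```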
consumer-grade IMUs are still well below the performance of even much older military-grade IMUs (which tend to be impressive feats of precision engineering with pricetags to match, but also physically much larger). You'll still find anything that's useful for working out position over any time period is export-controlled (dual-use or stricter).
Modern ICBMs, at least, do a star sight to calibrate at the top of their trajectory, but yes, that's what inertial guidance is. Draper Labs basically pioneered it.
Ground-based launch sites can supply both position and orientation very accurately (few seconds of angle), more accurately than stellar correction can, and their gyro platform can keep it within a few seconds of angle as well.
Submarines can determine their position accurately enough, but their orientation data can be improved upon using stars.
The MIRV bus takes in the angular fix just before it starts giving the warheads their individual nudges.
I mean, it's a nuclear missile; millimeter accuracy isn't really necessary. Somewhere in the general vicinity is good enough for its purpose of going boom.
Well, accuracy makes a big difference if you're trying to hit a hardened target like a missile silo. Missile guidance has been a constant effort to squeeze out more and more accuracy. Minuteman I started with an accuracy of 2 km, but now Minuteman III is said to have an accuracy of 120 meters. The Peacekeeper (MX) missile, no longer in service, is said to have an accuracy of 40 meters. You can use a much, much smaller warhead if you're 40 meters away compared to 2 kilometers.
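The back-of-envelope reason accuracy matters so much: blast lethal radius scales roughly with the cube root of yield, so the yield needed to kill a point target scales with the miss distance cubed. Using the CEP figures above (and ignoring everything else that matters in real targeting):

```python
# Cube-root blast scaling: lethal radius ~ yield**(1/3), so to keep the
# lethal radius comparable to the miss distance, yield must scale as CEP**3.
cep_early = 2000.0    # meters, Minuteman I-era accuracy (from above)
cep_late  = 40.0      # meters, Peacekeeper-era accuracy (from above)

relative_yield = (cep_early / cep_late) ** 3
print(f"~{relative_yield:,.0f}x the yield needed at 2 km vs 40 m")  # 125,000x
```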
The START II treaty limited Russia's and the US's ICBMs to a single warhead each. The Peacekeeper was optimised as a platform to host multiple independently targetable re-entry vehicles (MIRVs), and when the US agreed to revert to a single warhead per missile, the Minuteman III was much cheaper to maintain than the Peacekeeper.
So even though Russia withdrew from START II almost immediately, the US continued to unilaterally remove the MIRV capability from its ICBM fleet and stick to single warhead Minuteman IIIs.
Random errors (i.e. noise) cancel out in the long run thanks to integration.
You're then only left with systematic offset errors which can presumably be calibrated out to a large extent.
We can assume the error will have a random (whether it's actually truly random or merely pseudo-random doesn't matter here, just assume it's indistinguishable from truly random for this discussion) and a non-random component.
The random component I assume to be gaussian (thermal noise, for example) and therefore symmetrical around the real value.
It's obvious we can remove this type of noise through averaging (of which the core operation is integration).
The non-random component I assume to be a skew that can be calibrated out.
With these two assumptions in mind you can see that yes, it's indeed a random walk, but a very well behaved one.
No, you can't remove the random walk error by integrating. The point is that after integrating, what you're left with is the random walk error. To make this concrete, if you buy a commercial-grade gyroscope for $10, it will have a random walk error of several º/√h. So after summing the errors for an hour, you're left with several degrees of random error, which is bad. If you spend $100,000 on a navigation-grade gyroscope, you'll get a random walk error < 0.002º/√h, which is much better.
As far as calibrating out the skew, of course you can do that to some extent, but it's not a magic bullet. The Minuteman periodically measures skew and even applies equations for the change in skew with acceleration. The problem is that skew is not constant; it changes with time, changes with temperature, changes with position, and changes randomly, so you can't just calibrate it out. That's one reason why missiles use strategic-grade IMUs for a million dollars rather than a commercial-grade gyro for $10: you're getting drift of .0001º/hour instead of .1º/second.
You are correct, I forgot to separate between long-term and short-term random effects.
Short-term random effects (as in, the part of the gyro's random walk error significantly higher in frequency than the inverse of the integration period) will get cancelled out by integration, assuming they're Gaussian.
Long-term random effects (mainly from time and temperature, like you mentioned) will instead tend to accumulate with integration, i.e. worsen with time.
P.S. great fan of your many ventures into retro tech, keep them coming!
that was not what you forgot and your summary is still wrong. kens is correct. i suggest programming some simple simulations using a random number generator to get a better feel for the space
try it: take gaussian white noise with zero mean and integrate it twice. You'll see the signal does not stay close to zero, in fact it will drift arbitrarily far away from it over time (it's only necessary to integrate once for this to be true, but doing it twice as an IMU needs to will make it more obvious).
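A quick version of that experiment (arbitrary noise level and length), showing the integral wandering away from zero even though the noise itself is zero-mean:

```python
import numpy as np

# Integrate zero-mean Gaussian white noise: the running sum is a random walk
# whose spread grows like sqrt(n); integrating again wanders even faster.
rng = np.random.default_rng(42)
noise = rng.normal(0.0, 1.0, 100_000)     # zero mean, unit variance

walk = np.cumsum(noise)                   # one integration: random walk
double = np.cumsum(walk)                  # two integrations: even worse

print(walk[-1], np.abs(walk).max())       # typically hundreds, not near 0
print(abs(double[-1]))                    # typically millions
# Averaging drives the per-sample *mean* to zero; it does nothing to pull
# the *integral* back toward zero.
```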
You are correct, I initially did not explicitly separate the noise according to its frequency (my mistake).
Integration only helps with high frequency error and can actually worsen low frequency error, more details in my second reply to kens.
One can't cancel out random errors by integrating. You should take kragen's suggestion and write a quick simulation. To make this concrete, flip a coin 10 times. Take a step to the left for heads and a step to the right for tails. Most of the time you won't end up where you started, i.e. you have residual error.
> One can't cancel out random errors by integrating.
An ideal integrator has a response of 1/s. That's just a 1st order low-pass filter with the pole at 0. Therefore, it will filter out high frequency noise.
> Take a step to the left for heads and a step to the right for tails. Most of the time you won't end up where you started, i.e. you have residual error.
I wrote a quick simulation based on your suggestion [1].
Started by generating 1e6 random points and then applied a high-pass filter.
Calculated the cumulative sum on both the original and the filtered version.
TL;DR: filtered version has small and very fast variations but doesn't feature the much larger amplitude swings seen in the original.
Integration indeed does not help with those large slow swings (I'd call that drift in the case of a gyroscope), but that's what I was trying to get at when I started to distinguish between short- and long-term random effects.
What I was trying to get across originally is that the "all the little errors" OP mentioned (which I read to mean tiny fast variations, forgetting that drift is a much bigger issue in gyroscopes) get filtered/canceled out.
I totally failed to explain that this will vary with frequency, which was my bad.
yes! but also keep in mind that 1/s is never 0 for any finite s, so even at high frequencies the error resulting from random noise is never zero, it's just strongly attenuated
> if you buy a commercial-grade gyroscope for [us]$10, it will have a random walk error of several º/√h. So after summing the errors for an hour, you're left with several degrees of random error, which is bad. If you spend [us]$100,000 on a navigation-grade gyroscope, you'll get a random walk error < 0.002º/√h, which is much better.
if the slope was anything else, the unit of °/√h wouldn't make sense; it would have to be °/h or °/∛h or something. similarly for noise figures given in nanovolts/√Hz
Even as someone who's actually done the math that video is trying to explain (in an actual class covering missile geodesy), it's still so absurd.
But honestly only about half as crazy as GPS would sound if you tried to put it in similar terms. And that's before considering the signal itself is way below the noise floor.
As much as I love these technical writeups, I wish more people who understand the slightly expanded bigger picture/implications of the missile technologies would write what they know before the generation goes extinct.
There's so many dots that are easily connected from articles like these... but I suspect some level of classification prevents those in the know from being able to publish.
The bulletin of atomic scientists is the only non-fringe source that ever comes close. (Their article on the new generation of warhead fuses is a great resource for those wanting to go down the rabbit hole. Even it now seems to have been scrubbed from their site. [0])
Wow, so this thing needs to be pointed directly at the target 8,000 miles away and will miss the target by the amount of error in aim.
"To target a Minuteman I missile, the missile had to be physically rotated in the silo to be aligned with the target, an angle called the launch azimuth. This angle had to be extremely precise, since even a tiny angle error will be greatly magnified over the missile's journey. " ... "The guidance platform was completely redesigned for Minuteman II and III, eliminating the time-consuming alignment that Minuteman I required. The new platform had an alignment block with rotating mirrors. Instead of rotating the missile, the autocollimator remained fixed in the East position and the mirror (and thus the stable platform) was rotated to the desired launch azimuth. "
There are two factors. First, any missile with inertial guidance needs to have a precise angle reference as a basis for the guidance system. If the guidance system starts off slightly wrong about which way is North, it's going to miss the target. Second, the guidance system in Minuteman I could only turn about 10 degrees from its initial angle before the wires would get tangled up. The solution in Minuteman I was to use the launch azimuth as the reference angle, so it was precisely lined up against this angle. Most of the alignment was physically rotating the missile, but the last bit of alignment was by constantly rotating the stable platform for alignment with the light beam from the autocollimator.
I noticed that too. That seemed odd at first read... after all, it has a guidance system; it's not relying on exact aim. I'm assuming it's more that its guidance system only has so much fuel at its disposal and so much ability to correct errors, and if it's aimed incorrectly it would exhaust its fuel before it corrects its trajectory.
Sometimes it's less work to engineer a hard problem into an easy one, than to solve the hard problem.
Most of the tech for the Minuteman I was developed in the mid-1950s.
With that level of processing, would you rather solve a 2d problem by precisely orienting the missile before launch? Or a 3d one by requiring it to orient during flight?
Keep in mind: any equipment to self-orient in-flight also needs to be carried on the missile itself, while being tolerant of launch, acceleration, and reentry forces.
Any precision machinery at the launch site has no such requirements.
This doesn't make sense to me. I would assume the engines starting by themselves would introduce enough error to throw the entire system off. Let alone natural seismic events in the ground, plus wind.
I would guess you must solve the 3D problem at least to some degree.
I haven't looked at submarine systems in detail, but my understanding is that the big problem is that an ICBM knows where it's starting, but the submarine travels. So submarines have super-accurate inertial navigation systems on board to determine their position.
Shooting a projectile, accounting for the Earth's rotation and wind, is essentially a solved problem (with computers). So I don't think this is that outlandish, and I imagine it gets pretty accurate. Creating an analytical solution by hand is a junior-level physics problem.
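The junior-physics version, for scale (flat Earth, vacuum, no rotation; real ICBM targeting adds a rotating oblate Earth, gravity harmonics, and reentry effects):

```python
import math

# Textbook ballistic range: R = v^2 * sin(2*theta) / g. Purely illustrative
# numbers; actual ICBM speeds and trajectories are far outside this model.
v = 3000.0                    # launch speed, m/s
theta = math.radians(45.0)    # launch angle
g = 9.81

range_m = v**2 * math.sin(2 * theta) / g
print(range_m / 1000.0)       # ~917 km for this toy case
```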
Just a bit of additional trivia: Jim Williams (somewhat famous EE at Linear Technology) had a Minuteman computer on his living room wall, known as "the tapestry": https://www.eetimes.com/photo-gallery-remembering-jim-willia... (last picture in the article). Not sure what revision.
The article states that the system was cooled using a solution of sodium chromate to inhibit corrosion. However the wiki page of sodium chromate states that it is very corrosive. Is it a typo or something?
It's also mentioned that the computer uses one of the first integrated circuits for miniaturization. Do you know if this can be definitely traced to advances in industrial/consumer products? It's a common trope that military research trickles down - so it's a "good" thing. It's not clear if this actually happens or if progress would have been made eventually without the need for these machines.
Sodium chromate is highly corrosive to humans (as well as carcinogenic, see the movie Erin Brockovich). However, it inhibits corrosion in metal, acting as a passivating inhibitor, forming some sort of protective oxide.
I've been doing a lot of research on the impact of Minuteman and Apollo on the IC industry (which led to the current post). The Air Force likes to take credit for the IC industry, as does NASA, but the actual influence is debatable. My take is that both projects had a large impact on the IC industry, more from Minuteman. However, even in the absence of both projects, there was a lot of interest and demand for ICs. If I had to take a quantitative guess, I'd say that those projects advanced ICs by maybe a year, but the basic trajectory would have remained the same.
Ken, the story I heard wasn't so much that the MM3 demand created the IC industry, but rather that it created the quality culture that then led to ICs being widely accepted, because they were reliable. The AF had such purchasing power due to the program that they were able to impose quality standards on the industry that hitherto were not expected.
Chromates are effective corrosion inhibitors for aluminum alloys and some other metals. Here's a brief article about how they work with aluminum:
"Inhibition of Aluminum Alloy Corrosion by Chromates"
When the Wikipedia entry's "Safety" section says that sodium chromate is corrosive, in context it means "destructive to human tissue by contact." That is, like sodium hydroxide (lye) and many other chemicals, in concentrated form it can destroy skin and eyes.
In a serial computer, you have a 1-bit ALU, say a full adder that generates a sum and carry. Each clock cycle you read two bits and feed them into the adder, and then you write back the sum. You hold the carry in a flip-flop to use in the next clock cycle. It's just like doing a binary addition with pencil and paper, one bit at a time.
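A sketch of that pencil-and-paper process in code (modeling the shift registers as bit lists, least significant bit first):

```python
# Bit-serial addition: one full adder, one carry flip-flop, one bit per
# "clock cycle", LSB first.
def serial_add(a_bits, b_bits):
    """Add two LSB-first bit lists the way a serial ALU would."""
    carry = 0                          # the carry flip-flop
    out = []
    for a, b in zip(a_bits, b_bits):
        total = a + b + carry
        out.append(total & 1)          # sum bit written back this cycle
        carry = total >> 1             # held for the next cycle
    out.append(carry)                  # final carry out
    return out

def to_bits(n, width):                 # LSB first, like a shift register
    return [(n >> i) & 1 for i in range(width)]

bits = serial_add(to_bits(46, 8), to_bits(118, 8))
assert sum(bit << i for i, bit in enumerate(bits)) == 46 + 118
```

Note that the adder has to see the low bit first, which is exactly the little-endian point below.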
Note that you need to start with the lowest bit with a serial computer, which explains why x86 is little-endian. It goes back to the Datapoint 2200, a desktop computer made from TTL chips and running serially. The Intel 8008 processor was a copy of the Datapoint 2200 (as was the Texas Instruments TMX 1795). Although the 8008 was parallel, it copied the little-endian architecture of the Datapoint 2200.
I've often wondered if serial computers could have a useful role again. At very high clock speeds and wide data paths, you hear about trouble controlling signal skews. In contrast, imagine a serial computer clocking data around at 8 GHz vs. an 8-bit computer clocking data at 1 GHz. You have to deal with faster speeds, but no skew, and it seems like a 1-bit ALU might be simpler (and faster) than a 64-bit one.
Hmm, I see. How do the opcodes and jumps work, then? Do you also read them bit-by-bit and reconfigure the ALU / codepaths? Is addressing also single-bit?
Here's a 16-bit bit-serial computer I made and tested on an FPGA: https://github.com/howerj/bit-serial. If you look at `bit.c` it looks like an ordinary 16-bit accumulator-based virtual machine with a few odd instructions that make more sense when you know how a bit-serial CPU works; nothing special about it. However, the VHDL in `bit.vhd` shows how all those instructions are processed in a bit-serial fashion, how data is fetched and stored in shift registers, etcetera.
The bit serial CPU in `bit.vhd` is actually customizable, you can make a 32-bit, a 14-bit, or a 27-bit CPU if you want from that VHDL quite easily.
if you write down two binary numbers on paper and add them, you will almost certainly do the computation bit-serially, adding each pair of corresponding bits with the carry from the previous operation. serial computers work the same way
I've got the book on my desk right now :-) It's a bit of an unusual book because it is full of technical details but it also has a fair bit of sociological content like "the construction of technical facts", "technological determinism", and "sociology of technological knowledge". This is in contrast to, say, "Minuteman: A Technical History", which is strictly facts and details. They are both good books, but it is interesting how they have completely different styles and focuses.
I agree with your observations about "Inventing Accuracy". Personally I found the sociology focus a bit unexpected, and maybe a little too strong in some chapters, but still a worthwhile point of view.
I will definitely be taking a look at "Minuteman: A Technical History". Books dealing at least in part with the history of IMUs are few and far between.
> The new guidance platform also added a gyrocompass under the alignment block, a special compass that could precisely align itself to North by precessing against the Earth's rotation. At first, the gyrocompass was used as a backup check against the autocollimator, but eventually the gyrocompass became the primary alignment. For calibration, the alignment block also includes electrolytic bubble levels to position the stable platform in known orientations with respect to local gravity.
Had never heard of gyrocompasses before. I worked on a small robot in the past and remember having to calibrate the magnetic compass, which was not very accurate (similar to smartphone compasses). I never thought about how they’d get super precise headings for ICBMs.
The Encyclopedia Britannica article on gyrocompasses is really good. Here it explains why you can't use a gyrocompass on a fast aircraft (and, I guess, small robots that are jostled around a lot):
> A major contribution by Schuler was the discovery that, when the period of oscillation is 2π√(Earth radius/gravity), the heading precession of the gyroscope spin-axis due to acceleration is exactly the rate of change of the angle between the apparent and true meridians seen on a moving vehicle. The gyrocompass will then read true north at all times if its indicating reference is offset by the angle between these two meridians. The angle, at ship speeds, is a direct function of the north-south speed and is easily set into the system. The need for accurate speed measurement for this offset is the main reason why a gyrocompass is not practical for use in aircraft.
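Plugging in numbers, that period works out to the famous ~84-minute Schuler period:

```python
import math

# Schuler period: 2 * pi * sqrt(Earth radius / gravity)
R = 6_371_000.0     # mean Earth radius, m
g = 9.81            # m/s^2
period = 2 * math.pi * math.sqrt(R / g)
print(period, period / 60.0)   # ~5060 s, about 84.4 minutes
```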
These systems are so vastly complicated, old, and rarely if ever launched. These aren't like data center generators which have testing schedules etc, and STILL there are failure points.
I really wonder what the failure rate would be if they were all actually launched today. And I mean failure, from not lifting off, to failure in flight, to misguided warheads etc.
But they do have testing schedules. They certainly test the electronics regularly. And every so often they randomly pick a missile and launch it from Vandenberg.
> I really wonder what the failure rate would be if they were all actually launched today
I hope we will never find out. But certainly there would be many duds. But this is calculated into the effectiveness of the system by simply having more missiles. That is how it achieves its goal of deterring a would-be attacker. (Not even talking about how there are two other totally separate legs of the nuclear triad with dissimilar personnel and technical solutions.)
That reminds me of the 2007 incident when the Air Force meant to transfer twelve unarmed cruise missiles from North Dakota to Louisiana but loaded up six nuclear-tipped missiles by mistake. The missiles remained unprotected on a B-52 for 36 hours before someone noticed the missiles were nuclear. A whole lot of people including 8 generals got in trouble over this.
Somewhat soberingly, earlier in the war with Ukraine, Russia managed to launch a ballistic missile with a concrete "Warhead simulator" installed, literally a mass designed to test the launch vehicle without using a nuclear device.
If that was an accident, that means they didn't properly know which warheads are where. That's.... upsetting.
They can test the missiles and reentry vehicles, everything except the nuclear warhead. The closest those come to a test is a supercomputer simulation, since those tests are forbidden by treaty.
Minuteman III has an excellent but not perfect [0] failure rate. Some other older systems, like the UK's submarine launched Trident missiles... not so much.
The UK Trident II missiles are literally the same missiles used by the US submarines operating in the Atlantic; both sets of submarines are supplied by a shared pool:
I had the same thought. They lovingly take Minutemen out of their silos and launch them after reassembly from Vandenberg AFB or someplace for a test. I'm pretty sure Trident tests happen at sea.
Strategic weapons are tested regularly, except for the warhead. The Trident II D5 has had about 200 tests in the last 35 years. That's about the same number of test missiles expended as are actively deployed at any given time. While there are a theoretical maximum of 344 deployed at any one time (14 Ohio-class, each with 20 tubes, plus 4 Vanguard-class with 16 tubes) some number of those boats are unarmed in refit at any given time.
They have operational models with statistical failures included so they understand what the interception and failure probabilities likely are for each item at component level. You can design that into a system and test for it.
Obviously you can't factor in unknown problems but that's what drills and test flights are for.
The Air Force takes one at random, transports it to Vandenberg sans warhead, and lights the fuse to see if it works. Or at least they used to. I also heard that the Soviet equivalent of this random testing process was to simply retarget a random missile and launch it straight out of the operational silo.
I have a morbid curiosity to know how much of all that old tech would actually work in a full scale nuclear war, launching all missiles. Seems so well thought-out, but also incredibly hard to test. Really fascinating article!
They did dozens of tests of the Minuteman missiles and reentry vehicles. The warheads were tested underground until the Comprehensive Test Ban Treaty of 1996. So it's pretty likely that the systems would work if needed. One risk is that something may have gone wrong with the warheads over 30 years. (Of course they maintain them, but without testing you can't be sure.) Another risk is that you don't know how the missiles would function in an environment with nuclear blasts and EMP all over the place. They put a whole lot of effort into mitigating these factors, but you can't be sure. Hopefully we never find out.
Note: While none of the Annex 2 countries that are signatories have conducted tests since 1996, the treaty never took effect because it was never ratified by all the required countries, most notably the US, China, and Russia (although all three signed). In 2023 Russia officially withdrew its ratification, citing the US's non-ratification. At least one political candidate for the presidency in the US has advocated for resuming testing. It is not inconceivable that testing could resume in the near future.
Opinion: I don't think the US would if Russia or China didn't first. China likely won't for the same reason the US doesn't need to: they have super-computers and the sims line up with the data from prior tests. Russia might however if only to saber rattle, although they likely don't need to either. Russia however is likely not in any hurry to have a test failure right now. So while testing could resume, I wouldn't put money on it.
There was an area of redundant symmetric electronic design that auto-compensated for component-level failures. I remember reading an "aerospace" manual all about it when I was a kid. It was necessitated when the tolerance and reliability of components were terrible by today's standards.
Note too that mil-spec silicon is different in that it is resistant to CMOS latch-up, has redundant CRC-protected self-correcting consensus register ops, and uses large gate sizes less sensitive to gamma radiation.
It was an interesting time, and a few people still think living under the Sword of Damocles builds character. =3
Why would it be hard to test? We have our own anti-missile technology, so it's ostensibly as simple as not putting a payload on the missile, then launching it at your own test range.
The physical environment these weapons were designed for is extreme and only possible to simulate piecemeal.
Each stage needs to function in the presence of nearby nuclear detonations, resulting from both adversary and friendly weapons.
These detonations are expected to cause severe shock, thermal, radiation, and electromagnetic transients.
In the case of the most important targets, it is guaranteed that numerous detonations near the target, from ABM systems and friendly impacts, will occur, and these systems have been engineered and are expected to perform reliably under such conditions.
This weapon is an ICBM. The payloads are delivered to orbit then launched at the target from there. You're already facing severe shock, thermal, radiation and EM transients just to get to orbit. Once there, you're ultimately dropping MIRVs, the design of which is considerably simpler.
The delivery vehicle and the reentry/payload vehicle have entirely different life cycles and deployment concerns.
A key operational requirement for any fixed US ICBM is that you can launch during and after an enemy nuclear attack on your silo fields (silos dispersed and hardened, redundant command links, etc) and penetrate defended areas protected by nuclear-armed ABM systems (e.g. A-135/ABM-4 Gorgon). That means resistance to high radiation flux from nearby detonations at both launch and reentry, as well as the need to survive flying through the expected debris clouds kicked up by previous detonations.
If the enemy is using ICBMs you are very likely to get yours launched well before their weapons make the first impact. Even if that weren't true, the high-flux conditions do not last for a substantial period, so you're describing a problem that would only occur if an enemy warhead hit at the precise moment yours was leaving the silo. Your enemy cannot possibly have this precision in timing.
> If the enemy is using ICBMs you are very likely to get yours launched well before their weapons make the first impact
This is a terrible assumption to make if you’re trying to deter nuclear war.
Unless any random outage or terrorist/conventional strike against one’s early-warning radars, or errant satellite launch by a low-grade nuclear power, is automatic grounds for a universal MAD offensive.
>Unless any random outage or terrorist/conventional strike against one’s early-warning radars, or errant satellite launch by a low-grade nuclear power, is automatic grounds for a universal MAD offensive.
They are, that's what Stanislav Petrov is famous for helping prevent. A single early warning satellite had an anomalous reading and everyone wanted to start nuclear Armageddon, only prevented because he didn't believe the US would send only a few warheads, to the point he was willing to bet his country on it.
Famously the US plans early on didn't really have any distinction as to who fired, and the only retaliatory option available was to go full send with everything on Soviet cities.
Specifically about satellite launches, yeah, that's a genuine concern, which is why they are talked about publicly, even when it's a classified spy satellite going up, why you are very loud and public about testing ballistic missiles anywhere you do so, and why both the US and USSR worked very very hard on making computers to quickly estimate the landing point of a ballistic launch.
Meanwhile the entire point of nuclear missile submarines is that ground-based missiles are an explicit target of known enemies, and thus not likely to survive a first strike. They are not intended or planned to survive such a first strike, which is why we had SAC flying B-52s 24/7 for like 40 years. Any missiles that aren't out of their tubes before enemy detonation are mostly assumed lost.
In fact, nuclear submarines have gone a long, long way to improve the situation, because now you don't have to rely on those ground based or air based warheads as much, so you can be more conservative in your judgement. If you're wrong and the soviets really did initiate nuclear war, oh well, Tridents will show them the error of their ways.
The biggest "eh, we should wait before we launch" cause is simply the lack of tension between world powers. Despite Russia's blustering, in official capacities they have not signaled that they are looking to launch nukes. They have not significantly increased their readiness, towards a large scale anti-NATO war, to the point that they are literally removing defenses from the border with Finland in order to divert those resources to invasion.
There has only been one "Oh shit oh god" moment that I know of; when an S300 missile (which I believe later turned out to be Ukrainian) landed in Poland and killed a farmer. There was a crisis meeting of NATO members.
Moreover, putting nuclear weapons in orbit would be a violation of the 1967 Outer Space Treaty. As an aside, Atlas and Titan were capable of reaching orbit, and were used for the Mercury and Gemini missions respectively. Minuteman, on the other hand, was not powerful enough to put a payload in orbit.
This YouTuber has had several very well-researched and thorough videos about topics that readers of Ken would enjoy, including Buran, the F-14 air data computer, Elite, and the birth of digital fly-by-wire out of the Apollo program.
Check him out and cut him some slack for his vocal fry. It's mildly annoying but the content is just so good.
I believe parts of the subsea industry uses a similar concept, i.e., gyroscope based for inertial navigation.
Obtaining position & velocity: I think it's even more interesting when one compares the difficulties of getting these fundamental navigation data in aerial, ground, and undersea platforms.
What's the actual color of the yellow paint? Goldish like the first pic, or lemonish like the latter pics? The contrast/aesthetics of the gold is just chef's kiss. There's something about the American MIC palette that rarely misses.
It depends. For Minuteman I, the missile needed to be physically rotated in the silo to be aligned with the target. Then the "Targeting Van" connected to the missile, downloaded the new targeting data to the disk, and checked that the guidance system was aligned. As for the targeting data, it was generated by a mainframe that determined the right trajectory and produced the optimized navigation polynomials that the targeting algorithm used. It was something like 740 words of data per target so only two targets could fit in the computer.
Minuteman III used a smarter targeting algorithm that only needed 70 words of data per target, so the missile could support something like 8 targets at once, selected by a knob on the launch console. (The launch officers didn't know what the targets were; they were just told to use target #3 for example.) The targeting data was read off punched tape for Minuteman II and a magnetic tape cartridge for Minuteman III.
It was organized primarily via wargame scenarios, such that one target group comprised targets for a given scenario.
Simplified launch orders via the football, etc.
One group scenario might have been silo coordinates for an offensive first-strike. One group city coordinates for launch on warning strategic counter-strike, etc.
Each missile got a target from each scenario list programmed into a 'memory slot' with some overlap.
The organization/optimization is mind-boggling.
But few understand that this is WHY the wargames and strikes had to be planned out ahead of time. It wasn't political hubris, but a technical requirement due to memory allocation.
> It wasn't political hubris, but a technical requirement due to memory allocation.
I don’t understand why it would be “political hubris”.
Proper targeting is hard work. You need to map your enemy territory to make optimal choices. Not just in a geographical “what are the coordinates” sense, but also in a “what are the important nodes to get the coordinates for” sense. At the same time your enemy doesn’t want to be mapped and resists your efforts.
Doing this properly takes time. On the order of months. But once you are under attack you don’t have that time. So you have to select your targets ahead of time.
It is not because the missiles have limited memory. If they would have needed more memory they would have added more memory. It is because the President doesn’t have time once under attack to name each enemy railway depot one by one and decide which ones are important, and which ones are better left unharmed. Instead what they have is a menu of options. Something like option 1 destroy all red military ports, military airports and military bases; option 2 destroy major military installations plus main industrial centers; option 3 destroy main population centers.
Thinking that memory allocation is why it is the way it is is super tech-centered and, quite frankly, putting the cart before the horse.
One of the great things about chemical propulsion rockets is that they can take off with little to no prep at all. Ready to go at the press of a button.
The scary thing is that, when left alone for a long time (and these rockets have been), "the plates" keeping the chemicals from meeting each other ahead of schedule corrode, just a tiny bit at a time, each time raising the possibility of premature ejaculation by a tiny little fraction.
I think you're talking about the Titan missiles, which use hypergolic propellants. The Minuteman missiles are solid fuel, so there are no separated chemicals.
There are separated chemicals; hypergolic just means no ignition source is required.
Which to his point would be even more scary, but just isn't the actual real world risk with the way the things were designed.
Plus hypergolics are usually toxic on their own, even without mixing and/or booming, in a quieter, more-deadly-to-technicians way.
Spills and defueling and meeting well-intentioned but bad safety guidelines that require abundant fiddling were the real source of danger. More fiddling == bad.
IIRC, the fuels/oxidizers/reagents/whatever-liquids mainly behaved like aluminum oxidizing, such that reaction with the tank components actually created an increased buffer layer of oxidation/protection.
Tank corrosion wasn't high on the list of risks after it was figured out on a per-chemical basis.
I think it's one of the aspects covered fairly well in (the great, often posted) Ignition! [0]
The Minuteman missiles are solid fueled. There are no liquids and no hypergolics involved in the stages which loft it towards the enemy. Structurally it is more similar to a candle than to a fuel tank. There are no spills or defueling with this system. This is a fact. In this system you won't find a separate oxidiser/fuel: the two components are mixed together and form a kind of rubber-like cylinder with a hole in the middle. The hole is shaped appropriately so the rocket motor burns at the right rate.
There are hypergolic fuels at the very end of the rocket in the payload. They are used for deorbiting and to control the return vehicles. But it is a much smaller part of the whole missile. (Both by mass, and by encapsulated energy.)
Yes, I was speaking to the historically overblown concern of fuel and oxidizer tank corrosion, which, yes, does not even apply to Minuteman or solid fuels.
Should have prefaced that response with a big IFF/WHEN old liquid rockets.
Interesting tech and all, but ultimately efforts like this are a waste. If we humans could instead get over our self-perceived need to engage in warfare for childish reasons, then we could dedicate such efforts to more productive things, like helping homeless people get housing and skills, or developing better psychological sciences to help drug addicts get free from their disease of addiction, you name it.
> O SON OF SPIRIT! The best beloved of all things in My sight is Justice; turn not away therefrom if thou desirest Me, and neglect it not that I may confide in thee. By its aid thou shalt see with thine own eyes and not through the eyes of others, and shalt know of thine own knowledge and not through the knowledge of thy neighbor. Ponder this in thy heart; how it behooveth thee to be. Verily justice is My gift to thee and the sign of My loving-kindness. Set it then before thine eyes.
>
> ~ Baha'i Teaching
It would be great if we could all share that sentiment. But ask the Ukrainians how unilateral disarmament worked out for them. Unfortunately, it seems that this particular waste of resources and intellect is still necessary.
> O rulers of the earth! Be reconciled among yourselves, that ye may need no more armaments save in a measure to safeguard your territories and dominions. Beware lest ye disregard the counsel of the All-Knowing, the Faithful.
> Be united, O kings of the earth, for thereby will the tempest of discord be stilled amongst you, and your peoples find rest, if ye be of them that comprehend. Should any one among you take up arms against another, rise ye all against him, for this is naught but manifest justice.
> (“Gleanings from the Writings of Bahá’u’lláh”, pp. 253-254)
Sorry, I don't mean anything against legitimate self defense, I'm talking rather about "self-perceived need to engage in warfare for childish reasons"
Examples of that would include ego, greed, petty revenge, etc...
But I guess the way I said it did come across that way, so yeah, my bad
I'm just saying that if we could evolve past such petty ego-based sentiments in the world, then the pressure to develop weaponry in such massive amounts would go away, and we could focus on actually making a functioning society.
>I'm just saying that if we could evolve past such petty ego-based sentiments in the world
That would be nice, but it seems that would make people no longer human. Happily following a petty ego-based Dear Leader's orders into warfare seems to be the norm for much of the human population, looking at history, so this tendency appears to be deeply-rooted into the human psyche.
why? the majority of humans are peaceful and friendly. the problem is that we have been raised to follow leaders instead of thinking for ourselves. that is what needs to change. when people learn to think for themselves then we won't have that problem anymore.
as the quote above says:
thou shalt see with thine own eyes and not through the eyes of others, and shalt know of thine own knowledge and not through the knowledge of thy neighbor
it all comes down to better education, specifically moral education, and learning to critically evaluate any information we get.
if people blindly follow a leader, then this is exactly what they are missing.
So, the problem with that is that not everybody is equally good at "thinking for themselves." A sufficiently-large group of people will always divide itself, as inevitably as if Maxwell's Demon himself were prodding them, into subsets of people who can be easily herded and people who can't be. By the same token there will always be a small minority, drawn from both subsets, who are unusually good at doing the herding.
Everything that goes wrong later can be traced to that inequality. Things that go right can usually be traced back to it as well. Any faith or philosophy that doesn't begin with an understanding of that aspect of human nature is a waste of time at best. Thank you for coming to my TED talk.
> not everybody is equally good at "thinking for themselves." A sufficiently-large group of people will always divide itself, as inevitably as if Maxwell's Demon himself were prodding them
true, but perhaps we do have capacity to learn to think well enough to pick leaders who are better people than the most of us overall, e.g. learning to not mistake charisma for pure intention or intellectual insight
with some self awareness, it takes less effort overall to know who is better than us or our friends in terms of ethics and intellect, than it is to be right on every issue
and that's probably more important than the specifics of each issue, especially since the lay person doesn't have time to delve into the finer details of everything
now if we can get in a cycle of electing leaders like that then that would set us on a trajectory of constant societal growth, but to get to that our system probably also needs to change so as to actually present people with realistic options
that is probably a lot harder, and the two are certainly locked into a feedback loop
and I don't have answers on that one, but I believe if its possible to get to where we are from pond scum, then perhaps we can get to that point in the future as well