The Webb Space Telescope’s profound data challenges (ieee.org)
318 points by MindGods on July 12, 2022 | 160 comments



If you were designing the JWST today, you would probably also put onboard a GPU. That could be programmed to do some of the scientific work in space to reduce the amount of data that needs to be downloaded.

This would allow new types of science (for example, far shorter exposure times and stacking to do super-resolution and get rid of vibrations in the spacecraft structure). It would also allow redundancy in case the data downlink malfunctions or is degraded - you can still get lots of useful results back over a much smaller engineering link if you have preprocessed the data.

Obviously, if that GPU malfunctions, or there isn't sufficient power or cooling for it due to other failures, data can still be directly downloaded as it is today.

Basically, it adds a lot of flexibility.
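To make the stacking idea concrete, here's a rough sketch of shift-and-add stacking (plain numpy; the frames, jitter estimates, and noise levels are all made up for illustration):

    import numpy as np

    def stack_frames(frames, shifts):
        # Shift-and-add: undo each frame's measured jitter, then average.
        # frames: list of 2D arrays (short exposures)
        # shifts: per-frame (dy, dx) offsets, e.g. from a guide-star centroid
        acc = np.zeros_like(frames[0], dtype=np.float64)
        for frame, (dy, dx) in zip(frames, shifts):
            acc += np.roll(frame, (-dy, -dx), axis=(0, 1))
        return acc / len(frames)

    # Toy usage: 100 noisy exposures of a point source with 1-pixel jitter
    rng = np.random.default_rng(0)
    truth = np.zeros((64, 64)); truth[32, 32] = 100.0
    frames, shifts = [], []
    for _ in range(100):
        dy, dx = rng.integers(-1, 2, size=2)
        frames.append(np.roll(truth, (dy, dx), axis=(0, 1)) + rng.normal(0, 5, truth.shape))
        shifts.append((dy, dx))
    stacked = stack_frames(frames, shifts)   # point-source SNR improves ~sqrt(N)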


It's hard to say -- I might disagree on whether to do something like that. Most often you want to keep the raw data as long as you can, in anticipation that some day a future technique or a different calibration/processing pipeline may improve on today's. Or that you might find (or be looking for) something you didn't expect.

Especially for a scientific instrument whose usage patterns, operating conditions, and discoveries may change over time (sensors too). Note that for an instrument like this, the amount of people/researcher time spent studying the data afterwards is many times more than the time spent taking the data. The value of the data is incredibly high ($/hour), so you want to keep it in as future-usable a state as possible.

Once you process something on board for a certain purpose and discard the raw data (except for very low-level integrity checks that are almost mandatory), you lose the chance to do that in the future.

So unless you are really transmission constrained, I think they would prefer not to do it -- also because of the additional complications involved. I think once you get into "higher functions" becoming an obligation of the telescope's operations, the satellite/defense contractors etc. who have to launch and operate the thing start imposing requirements that are very difficult to live with.


I don’t know… that’s a lot of power and heat that needs to be dealt with for an onboard GPU. Heat is probably the biggest factor, as it might be enough to affect the image sensors (speculation). Plus, needing a radiation hardened GPU might be an issue. Just for data reprocessing purposes, I’d want to have copies of the rawest data terrestrially.


> If you were designing the JWST today, you would probably also put onboard a GPU. That could be programmed to do some of the scientific work in space

That's just not how science is done in astronomy. People want the raw data to analyze it for decades in different contexts. There's not much that can be done onboard that would make you not want to copy that data back.


The article alludes to laser comms - NASA is developing[0] laser-based comms systems (as opposed to radio frequency) which would allow gigabit-speed downloading of data. Hardening this technology to send back more raw data is probably a lot more straightforward than trying to do image processing on board.

[0] https://www.nasa.gov/mission_pages/tdm/lcrd/index.html


Depends on if the GPU is "space grade": https://arstechnica.com/science/2019/11/space-grade-cpus-how...

You can't just throw any consumer microprocessor into a machine subject to extreme vibrations, heat, cold, and radiation.


Do rad hard GPUs exist? If not I think it's wise that they stuck with FPGAs


I'm curious for anyone who may know the answer... with no mention of encryption, are these streams free for anyone with the equipment to receive? Conversely, what kind of security is in place on JWST for command updates to ensure that some rogue group couldn't cause mischief and send it commands?


There is a decent amateur community for receiving satellite transmissions. I'm not super knowledgeable on it, but two resources that may interest you:

Scott Tilley, who gained a lot of recognition in the past year in analyzing radio signals to see how Russia was using satellites in Ukraine: https://twitter.com/coastal8049

An amateur group revived 2-way communications on an abandoned satellite: https://sservi.nasa.gov/articles/isee-3-reboot-project/#:~:t....



I think on scientific satellites the downlink is usually unencrypted, and only the command channel is encrypted, for obvious reasons.


Is 25 GHz something an amateur could practically expect to capture from Earth without ridiculous (or improbable) electronics? My understanding of higher frequencies is that something this high is likely to be almost completely absorbed by the atmosphere.


You'll probably need a pretty big dish...


I remember talking to someone who works on the Deep Space Network. The commands they send to their devices are definitely encrypted and have checksums so nobody can inject bogus commands. It was super important that the device receives the correct command at the right time. Not sure if their downstream is secured.
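For flavor, the general shape of an authenticated command frame might look something like this (purely illustrative: the key, layout, and algorithms are invented here, not what the DSN actually flies):

    import hmac, hashlib, struct, zlib

    SECRET = b"ground-segment-key"   # hypothetical pre-shared key

    def build_command(seq, payload):
        # Frame = sequence counter + payload + CRC + MAC. The counter
        # blocks replays, the CRC catches channel noise, and the MAC
        # proves the frame came from the ground segment.
        body = struct.pack(">I", seq) + payload
        crc = struct.pack(">I", zlib.crc32(body))
        mac = hmac.new(SECRET, body + crc, hashlib.sha256).digest()
        return body + crc + mac

    def verify_command(frame, expected_seq):
        body, crc, mac = frame[:-36], frame[-36:-32], frame[-32:]
        good = hmac.new(SECRET, body + crc, hashlib.sha256).digest()
        if not hmac.compare_digest(good, mac):
            raise ValueError("bad MAC: reject frame")
        if zlib.crc32(body) != struct.unpack(">I", crc)[0]:
            raise ValueError("bad CRC: channel corruption")
        (seq,) = struct.unpack(">I", body[:4])
        if seq != expected_seq:
            raise ValueError("replayed or out-of-order command")
        return body[4:]

    assert verify_command(build_command(7, b"SLEW +0.01DEG"), 7) == b"SLEW +0.01DEG"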


This is an exceptionally old document, referencing the 2013 launch expectation, but it contains a bunch of interesting information on the platform database and the communications segment. [1]

Apparently, they have to have accurate ranging to receive from JWST. The interesting portion:

"Ranging is required for JWST, using alternate ground stations in the southern and northern hemisphere. For LEO and L2 missions the accuracy of the ranging is dependent on the tracking of the spacecraft across the sky. For the JWSTs L2 orbit, 21 days of tracking equals about 15 minutes of tracking for a LEO spacecraft."

https://ntrs.nasa.gov/api/citations/20080030196/downloads/20...


Reading this article makes me realise that it would take only a relatively small let-up in funding such projects to permanently lose the knowledge and expertise it takes to build these fantastic machines.


Yes. That's part of my reason for supporting higher and higher defense spending. Defense and science are eternally inseparable.


How about just higher and higher R&D spending instead, so our choice of tech development is governed by public welfare rather than military utility? We've already had technologies like nuclear fusion develop in inferior directions because they benefit adjacent military technologies.


Why?


The general public is hard-pressed to support basic research with taxes, but they do support unlimited defense funding. This is one hack that the US system uses to do basic research under the umbrella of defense.


This is anecdotal, but I've not run into a single real life American who hasn't agreed with the general premise that we spend too much money on "the military industrial complex". Even the hawkish folks agree with this notion to some extent, usually arguing that specific projects are a waste of money and that's where the cuts should come from.

Cynically, I think our congresspeople don't actually take our feedback about the budget into account. They'll talk a big game about wasteful spending and the need to reduce deficits but the pork never stops flowing.


Folks can say whatever they want, but the votes show that "no one ever got fired for increasing defense spending", while the opposite isn't true.


Because defense spending funds science research. See DARPA.

Really good introduction to it all https://www.amazon.com/gp/product/B07BLKXV68


But so does science research spending. The point seems to be: why not let it follow directions independent of military utility?


Better science and tech than the other guy gives you a military advantage. Thus some fraction of your science funding is effectively military funding, and some fraction of your military funding is well spent on science.

Also, more science projects lead to more experienced scientists and engineers, which leads to a stronger pool for the defense industry.


This is the type of article we (data junkies) need to see and read! Highly interesting to see the transmission rates, storage capacity, and other data-related considerations that went into the design of the James Webb telescope. Now, if only there were more funds allocated to antennas/dishes on the DSN (Deep Space Network) [0] to be able to service all ongoing and future space missions, that would be great.

[0] https://eyes.nasa.gov/dsn/dsn.html


Not antennas - lasers and lenses. For intersatellite links, optical is much better, in particular if something is far away like JWST. Note that NASA is planning to have significant optical links as part of the DSN.


The Menzel guy kind of addresses that at the end, though I wish he went into more detail.

My guess is his attitude is: why use less-tested technology when the capacity of the Ka-band link does the job dependably?

As more probes go up and antenna time grows shorter, increasing link capacity will become more necessary. Until then, why experiment on a 10-billion-dollar project?


I agree that for JWST optics was not really an option; however, moving forward, many space missions will start to use optics. Last year NASA launched the Laser Communications Relay Demonstration, and there are extensive plans for integrating optical links into the DSN.


> In addition, according to Carl Hansen, a flight systems engineer at the Space Telescope Science Institute (the science operations center for JWST), a comparable X-band antenna would be so large that the spacecraft would have trouble remaining steady for imaging.

Why would a large antenna make the spacecraft less steady? What's the mechanism behind it?


At a guess: solar pressure.


The antenna is steered to point towards the DSN antenna on the earth. A larger moving mass would make it harder to maintain telescope pointing while the antenna is moving.

In reality, the antenna pointing is 'paused' during each science observation, unless pointing is needed due to the length of the observation.


Oh, the antenna doesn't need to just point in the general direction of Earth, it needs to point to somewhere on the surface. That makes sense - having a narrower beam would save power and achieve higher bitrates.

Does this mean it has only a 12-hour window to transmit? Or are there multiple antennas on Earth?


It uses NASA’s Deep Space Network. There are three stations around the globe (California, Australia, and Spain), spaced so that there is near-continuous coverage for deep space missions. JWST points its antenna at the station that is currently in view.

However, the ground stations are shared between many missions, so they are not available for JWST all the time. Expectation is that JWST gets 8-12 hours/day of DSN time.


Could also be more resonant modes in the structure that would need to be damped out.


It's because large, directive antennas are still 'dish' style and have to be mechanically pointed at the target (Earth based DSN receiving antennas). That pointing causes vibrations and potentially a shifting center of mass.

Phased arrays allow beam pointing without mechanical movement, but are very expensive for large high gain antennas.


I believe it is both due to solar wind and the fact that it would act as a tiny light sail.


Solar wind exposure perhaps?


The article didn't go into it, I think, but I recall that in many satellite missions of this type, there are not only data storage and transmission issues (normal issues you would expect), but also considerations that the antennas and transmission hardware themselves have a finite duty cycle or lifetime. As in, transmitting data consumes that margin.

So you have to be quite deliberate in considering how much data to send, which data, etc., because every GB eats a chunk of the satellite's expected life. (Again, I believe.)


Really? I've never heard of transmitter or antenna being considered a consumable (and I work in the space industry). Any idea where you got this idea from?


I thought about this some more and remembered that we do do "trending" of just about every subsystem on the spacecraft, and comms is one of them. Pretty sure there's a slide in a presentation every few months looking at how many times the radio has been turned on compared to the number of times it was designed to be turned on, but I assume the component in question is just the relay that switches it on. We have similar plots for everything that can be turned on and off. I think the point of these presentations is just to think about what's likely to die first and to make it obvious if we suddenly change how often we use things.


I will try to find a link, although it's of course quite specialized info that is not often written about.

But for example, I recall that for the Spitzer space telescope (I believe), every activation of the transmission hardware consumed its usable lifetime, or ate into the finite amount of liquid helium coolant needed for the operation of the telescope (which had an expected lifetime of only 2.5 years for the key instruments that relied on coolant).


I know I'm beating a dead horse here, but I thought I'd mention that the idea this isn't written about is incorrect. There are hundreds of papers and publicly available engineering documents about deep space transponder design.

I did a little more research and found that JWST is using the radios on its Raytheon ECLIPSE bus. There's a lot of conference papers and specs available. I haven't found any lifetime estimates yet, presumably because it's just not a concern.


On that, could you explain the term "transponder"? I would have thought "transceiver" would be a more apt description. From the SDST and Iris info out there, they handle up- and downlinks, telemetry, commands, etc., which all seem like transceiver functions.


Hm, yeah, I would have called it a "transceiver", but SDST uses "transponder" in the name so I started using that term without thinking. I guess the terms are used interchangeably in this context...

Edit: what's Iris? Also now I'm not sure SDST was sending the data -- was that on a different radio?


Ok https://trs.jpl.nasa.gov/bitstream/handle/2014/38449/04-1359... is pretty clear (telecom section) that SDST was how the data got down. Also interesting that it says the radios were designed to last 5.2 years. I'm guessing it just wasn't worth trying to prove they would last longer.


Iris is another transponder/transceiver package for smaller satellites. Supports Turbo codes too btw. https://en.wikipedia.org/wiki/Iris_(transponder)


Sorry, I meant the aspect that, for example, Spitzer only had so much margin to transmit data (or run other heat-causing activities) before its lifetime would be shortened. That was not much written about (outside of detailed technical circles).


??? helium cools the detector, not the radio...


BTW this is what Spitzer was using: https://en.wikipedia.org/wiki/Small_Deep_Space_Transponder . I haven't been able to find info on what Webb is using.


Helium, for sure. Coolant, propellant, battery charge cycles, flash write cycles are all consumables. Maybe even solar panels -- they wear out. The radio? I doubt it.


> All of the communications channels use the Reed-Solomonerror-correction protocol—the same error-correction standard as used in DVDs and Blu-ray discs as well as QR codes.

I find that somewhat hard to believe; LDPC codes are well established and much more suitable. I would have expected them to use a DVB-S2 standard code.


I suspect they know what they're doing.


Yeah, most likely.

Nonetheless, I'm also curious about the choice, but couldn't find a lot about it. There has to be some trade-off, I guess, to using LDPC instead of Reed-Solomon. I only found this paper, but haven't read through it, so no conclusion as of yet:

https://trs.jpl.nasa.gov/bitstream/handle/2014/45387/08-1056...

> Efforts are underway in National Aeronautics and Space Administration (NASA) to upgrade both the S-band (nominal data rate) and the K-band (high data rate) receivers in the Space Network (SN) and the Deep Space Network (DSN) in order to support upcoming missions such as the new Crew Exploration Vehicle (CEV) and the James Webb Space Telescope (JWST). These modernization efforts provide an opportunity to infuse modern forward error correcting (FEC) codes that were not available when the original receivers were built. Low-density parity-check (LDPC) codes are the state-of-the-art in FEC technology that exhibits capacity approaching performance. The Jet Propulsion Laboratory (JPL) has designed a family of LDPC codes that are similar in structure and therefore, leads to a single decoder implementation. The Accumulate-Repeat-by-4-Jagged-Accumulate (AR4JA) code design offers a family of codes with rates 1/2, 2/3, 4/5 and length 1024, 4096, 16384 information bits. Performance is less than one dB from capacity for all combinations.

My guess at this point is probably just "We've used Reed-Solomon a bunch, we know it works. We're working on newer techniques, but let's use what we know works."


Reed-Solomon is better at correcting longer runs of corrupted data (which could come from objects passing by, for example), and is a lot cheaper to decode - computation in space is very expensive.
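You can see the burst tolerance directly with, say, the third-party Python reedsolo package (just for illustration; flight hardware uses dedicated encoders, and CCSDS RS(255,223) carries 32 parity bytes, good for 16 byte errors per codeword):

    from reedsolo import RSCodec   # pip install reedsolo

    rsc = RSCodec(32)              # 32 parity bytes -> corrects up to 16 byte errors
    codeword = rsc.encode(b"telemetry frame from a hypothetical instrument")

    corrupted = bytearray(codeword)
    corrupted[5:15] = b"\x00" * 10   # a 10-byte burst, e.g. a brief dropout
    decoded = rsc.decode(bytes(corrupted))[0]   # recent reedsolo returns (msg, full, errata)
    assert decoded == b"telemetry frame from a hypothetical instrument"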


I suspect you're right, but it seems that the capacity advantage of convolutional codes only becomes significant at very low SNR, so maybe deep space probe applications. Also, unless interleaving is used, Reed-Solomon can do better against bursts of errors, though I'm not sure why the noise profile would be any different.

So, as you say, maybe it was just faster to integrate the already certified equipment at that stage of the development.


idk, the article also mentions they've been working on it for 20 years. I wouldn't be surprised if they just got to a point that was good enough and then didn't want to mess with things.

real tragedy is that they didn't use cutting edge web7.0 tech for their front end smh


In general, cutting-edge astrophysics does not necessarily use cutting-edge software engineering.

For example, the JWST also uses a proprietary version of JavaScript 3, made by a bankrupt company.

https://twitter.com/michael_nielsen/status/15469085323556577...

I think there's a pretty good chance that their data encoding scheme was working, and so they just left it in a working state, without upgrading it to use modern best practices.


Note that this mission was specced and designed ages ago too, so just as the observations it makes are views of the past, the engineering to do so is a time capsule too


They sure do, but I'm less confident that the statements made in the interview can be held to standards of scientific scrutiny.


Know how to maximize budget.


JWST started in 1996. According to Wikipedia that's largely when LDPCs were 'rediscovered'.

https://en.wikipedia.org/wiki/Low-density_parity-check_code#...


Why is this project on such a long timeline? I wonder what the longest critical path in this project plan was.


I can see a couple of factors:

* To be useful to science, the telescope needs a huge diameter, more than can be launched in one piece, so they had to make a folding telescope that could be unfolded later.

* We only get a single chance at success, so testing becomes critical. A lot of testing was needed to make sure everything would work out in the end.

* The technology needed pushed the limits of what we are capable of, so a lot of R&D was needed.


A misunderstanding of 'failure' by the general public.


Others have mentioned the advantage in terms of burst errors. That is fairly common for radio signals coming from space - think of an airplane flying through the signal path, for example.

I know in the NRO all of the radio downlink used BCH for error correction, which could correct up to 4 bits per byte, and DPCM for compression, which does particularly well with long runs of the same pixel value - something pretty common with space imagery (most of what you're looking at is black). BCH allows you to pick exactly how many bits per byte you want to be able to correct, which can be tuned based on the known error characteristics of your signal.

Part of it may just be that these systems have been around a long time and we already have extremely well-tuned and efficient implementations that are known to work quite well with very large volumes of data inbound from space.
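A toy version of the DPCM idea, just to show why mostly-black imagery compresses so well (illustrative only):

    import numpy as np

    def dpcm_encode(row):
        # Differential PCM: send deltas between neighbors. Long runs of
        # identical values (black sky) become runs of zeros, which an
        # entropy coder can squeeze down to almost nothing.
        return np.diff(row.astype(np.int32), prepend=0)

    def dpcm_decode(deltas):
        return np.cumsum(deltas)

    row = np.array([0, 0, 0, 0, 37, 40, 41, 0, 0, 0], dtype=np.uint16)
    deltas = dpcm_encode(row)   # [0, 0, 0, 0, 37, 3, 1, -41, 0, 0]
    assert np.array_equal(dpcm_decode(deltas), row)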


I like that the article had an error missing the space between "Reed-Solomon" and "error-correction". A bad subeditor, or an Easter Egg joke? :)


I’m excited to see the new database and data management and ingestion solutions that will come out of this.

Anyone know if ZFS is playing a role?


Latency is 1h 20m one way (1.5M km / 300_000 km/s)?

EDIT: got my units wrong, see further down the thread.


More like 5 seconds - (1.5e6 km)/(300_000 km/s) is 5.0 seconds. This makes sense as the Sun is 8 light-minutes away from Earth and JWST is closer to Earth than the Sun.


It only takes ~8 minutes for light to reach us from the sun, so that seems wrong.

1.5 * 10^9 meters / (3.0 * 10^8 meters/second) = 5.0 seconds.


Ah right, I got my units wrong! Good thing I'm not in middle school any more!


> Data gathered from its scientific instruments, once collected, is stored within the spacecraft’s 68-GB solid-state drive (3 percent is reserved for engineering and telemetry data)

Hope the SSD does not fail after 32768 or 40000 hours of operation.


All jokes aside, there's an article floating around about NASA's software development practices.

They are big TLA+/formal-specification fans (the article predates TLA+'s rise), with well-resourced and antagonistic QA engineers.

That drive will have been custom ordered, and the controller verified by hand, I imagine.


Also likely that they are already super experienced with that particular SSD. I read an article a long time ago that talked about a spacecraft with some camera on it. They said the camera was 10 years old when it was installed, and although there were better cameras out there, they picked this one specifically because of reliability.

I’m sure the same happened here.


Chances are they or the manufacturer have a room full of those cameras clicking away on a schedule for the last ten years, too, as an early warning system.


Could it perhaps be this? Interesting read about the level of effort and process invested in the Space Shuttle's software (~1996) [0]

For a TLDR, this answer [1] is great.

My favorite excerpt: The Shuttle software consists of ca. 420,000 lines. The total bug count hovers around 1. At one point around 1996, they built 11 versions of the code with a total of 17 bugs.

[0] https://www.fastcompany.com/28121/they-write-right-stuff

[1] https://space.stackexchange.com/questions/9260/how-often-if-...



I know this is a joke but...

JWST does not use a typical flash-based SSD. The mass storage is all SDRAM. There are layers of error correction and scrubbing to handle bit flips.


1365 to 1666 days, or 4-5 years of operations.

Maybe they should have chosen an HDD? Cosmic particles can flip bits on an SSD, can't they?


Data at-rest in SSDs isn't really at rest. The controller is constantly scrubbing and correcting errors.

The real hazard with SSDs is leaving them unpowered. I know of a story of several systems purchased a decade before they were needed and by the time they were used the boot drives were corrupted.
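A toy model of what a scrub pass does (real controllers use ECC in the flash that can also repair in place; this sketch just detects via CRC and restores from a redundant copy):

    import zlib

    class ScrubbedStore:
        def __init__(self, blocks):
            self.blocks = [bytearray(b) for b in blocks]
            self.mirror = [bytes(b) for b in blocks]   # stand-in for parity/ECC
            self.crcs = [zlib.crc32(b) for b in blocks]

        def scrub(self):
            # Background pass: re-read every block, repair any that rotted.
            repaired = 0
            for i, blk in enumerate(self.blocks):
                if zlib.crc32(blk) != self.crcs[i]:
                    self.blocks[i] = bytearray(self.mirror[i])
                    repaired += 1
            return repaired

    store = ScrubbedStore([b"science frame 0", b"science frame 1"])
    store.blocks[1][3] ^= 0x04   # simulate a cosmic-ray bit flip
    assert store.scrub() == 1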


Huh. Interesting. That explains why my SSD died at boot time. I thought it was just coincidence, and that it had died the night before or something, but that obviously doesn’t make sense.


This thing uses gyroscopes to orient itself. My guess is the fewer moving parts, the better.


Here's an interesting article about the gyroscopes / related tech used on the JWST.

TL;DR — it uses reaction/momentum wheels to change orientation, and it uses non-mechanical gyroscopes to detect changes in orientation.

https://www.universetoday.com/143152/spacecraft-gyroscopes-a...


I just went down a small but very interesting rabbit hole: I was not aware of non-mechanical gyros!

Apparently, light travelling along a fiber-optic cable will travel a slightly different distance when the device is rotating. That is COOL.

https://en.wikipedia.org/wiki/Sagnac_effect
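For reference, the textbook Sagnac expressions (A is the area enclosed by the light path, \Omega the rotation rate, N the number of fiber turns, \lambda the wavelength):

    \Delta t = \frac{4 A \Omega}{c^2}, \qquad
    \Delta\phi = \frac{8 \pi N A \Omega}{\lambda c}

So winding more fiber turns (bigger N) buys sensitivity without a physically larger device.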


Yep, ring laser gyro :)

They are pretty awesome. Once got some data that seemed to be bad out of one and realized that the entire error was exactly accounted for by the Coriolis effect from the rotation of the earth.


I believe the tech you mention is the same tech used by the flat-earthers in Behind The Curve in the $20k gyroscope they bought to try and prove that the Earth did not rotate (except they found it picked up a 15-degree-per-hour drift, debunking themselves, so then they doubted the technology, heh) [1].

But, aside from the laser/fibre-optic tech, there are also MEMS [2] gyroscopes, which are almost nanotech IMO, and which are used as sensors in modern phones/tablets, on drones (to assist with navigation), plus various other robotics uses, and more.

MEMS gyros — often with an accelerometer (aka IMU / inertial measurement unit) and maybe a magnetometer (compass) — can be bought from folk such as Adafruit [3], SparkFun, etc. (I've got a few different ones myself), and hooked up to e.g. an Arduino or similar MCU (or indeed anything else that can speak the appropriate protocol, e.g. I2C/SPI/etc. depending on the board in question).

Hmm, just looked for a good image of how a MEMS gyro works, and didn't come up with what I was looking for / recall seeing before (I'm a bit pushed for time), but there's a diagram on this WP page [4].

Edit: ah, here's a better diagram [5]

[1] https://www.triplem.com.au/story/flat-earthers-spend-20-000-...

[2] https://en.wikipedia.org/wiki/Microelectromechanical_systems

[3] https://www.adafruit.com/category/521

[4] https://en.wikipedia.org/wiki/Vibrating_structure_gyroscope#...

[5] https://www.researchgate.net/figure/Block-diagram-of-the-MEM...


I've used the MPU6050 to build a two-wheel self-balancing robot (roughly this: https://www.instructables.com/Arduino-Self-Balancing-Robot-1...). Basically, apply PID to the motor angle to keep the robot level with the gravity vector.

I still think MEMS sensors are very cool and extremely convenient/cheap for what you get.
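For anyone curious, the core of that balancing loop is just a PID controller. A minimal sketch (the gains, sensor, and motor functions here are invented placeholders):

    import time

    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, error, dt):
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    def control_loop(read_tilt_deg, set_motor_pwm, dt=0.01):
        # read_tilt_deg: fused gyro+accel tilt angle (e.g. MPU6050 through a
        # complementary filter); set_motor_pwm: drives both wheel motors.
        pid = PID(kp=40.0, ki=1.5, kd=0.8)
        while True:
            error = 0.0 - read_tilt_deg()   # setpoint: upright
            set_motor_pwm(pid.step(error, dt))
            time.sleep(dt)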


Nice!

I've done some projects myself with MPU6050 and some with its 9-axis "bigger brother" the MPU9250 (same as 6050, but with added 3-axis magnetometer/compass). I've also used the LSM9DS1 — another 9-axis IMU, just a different chip.

— Yeah, definitely amazing tech for the price. Cheap as chips! /me gets coat.

Ingenuity, the Mars helicopter/drone, apparently includes a bunch of off-the-shelf kit like this — IIRC a bunch of the parts are made by SparkFun. I don't recall any specifics though.


The FE link is awesome!

The others are interesting and informative. I am gonna buy me a few parts to play with.

Thanks!


No problem, have fun! — and if you enjoyed that FE article, you will likely enjoy the 'Behind The Curve' movie too, it's really quite amusing :)


I guess hard drives would be dead because of enormous acceleration on launch day.


If they're not powered on, HDDs can handle surprising amounts of acceleration - hundreds of g's.


Yeah. Fluid dynamic bearings and parked heads don't seem very vulnerable to shock and vibration.


You could almost invert your logic, too. Perhaps the spinning of the HDD throws off the telescope enough to be troublesome. I'm not sure how stable the Lagrange-point orbit is, but it can't be super stable.


Any moving mechanism is potentially a source of disturbance to the telescope. Not something that affects the stability of the orbit, but the small vibrations can translate into vibrations in the instruments and secondary mirror, which then cause distortion over the integration time of the image.

I don't know that that is the primary reason HDDs have been avoided. But any moving part is another source of failure, so my guess is an HDD's life is not as long.

As far as I'm aware, no spacecraft has flown an HDD (but there certainly could be one). However, some early spacecraft did use tape drives. Hubble originally used tape drives, which were replaced with solid-state memory during one of the servicing missions.


Since cosmic rays are charged particles I'd expect even more bit flips on the magnetic platter of a HDD.


I am completely guessing here: it is shielded as are most of the computing components. It probably also has error correction of some sort built in.


Shielding adds significant weight, which isn't good for space-bound components.

I would guess they just use RAID, do round-trip data verification before writes succeed, and then reinitialize and scrub any storage module behaving improperly.

For bits stored on SSDs which already rely on error correction, if they were willing to make custom hardware rather than use something off-the-shelf, they could add more chips and scale up the error correcting code to deal with more errors.
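The simplest version of "add more chips and outvote the errors" is triple modular redundancy. A toy illustration:

    def majority_vote(a, b, c):
        # Bitwise majority across three copies: any single flipped bit
        # in one copy is outvoted by the other two.
        return bytes((x & y) | (y & z) | (x & z) for x, y, z in zip(a, b, c))

    word = b"\x5a\x5a"
    copy1, copy2, copy3 = bytearray(word), bytearray(word), bytearray(word)
    copy2[0] ^= 0x10   # cosmic-ray bit flip in one copy
    assert majority_vote(copy1, copy2, copy3) == word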


That means no SanDisk or the like.


68 GB? Just that? Shouldn't that be some 10-15 1-2 TB SSDs in parallel?


Technology for space lags behind consumer technology by 10-20 years. There are a few reasons for this:

- Long lead time to test and certify hardware.

- Higher reliability requirements (e.g. must work non-stop for 10 years)

- Must be able to operate in a higher radiation environment with little to no cooling. In a vacuum, and in zero gravity, cooling works very differently to how it does on Earth.

- These missions often take a decade or more to come together, and changing requirements throughout that process is hard, costly, and risky, so often they stay the same from the beginning.

Notably, SpaceX are bucking this trend a bit with their avionics, which just run on standard Linux machines rather than specialist machines or a realtime OS, but they have mission lengths measured in minutes to hours, not decades.


They also simply do not store data locally.

The onboard storage is basically a buffer, they beam everything back to earth.


Yep. That means bandwidth is the limiting factor, and that maxes out at 28 megabits per second in ideal conditions. A multi-terabyte array would be a waste.


Sometimes the smartest people in the world still stink at their short game


IIRC the hard part with this thing is the mirrors. I wonder how hard it would be for astronauts to swap an SSD if it came to that.


The JWST is about three times further than the furthest a human has ever been from Earth, which was on the Moon.

So, it would be __really__ hard for astronauts to swap an SSD.


It's much easier to get there and fix JWST than it is to land on the Moon, though.

The landing part was the hardest and takes more delta-v than actually getting to JWST. Not to mention coming back from the lunar surface.


Didn’t JWST use some kind of gravitational braking to stop at the Lagrange point? A manned mission would require much more fuel to shorten the trip time, plus fuel for returning.


I think the engineering that would go into a maintenance/repair mission would be quite valuable and would help with many other missions. It's far enough to be far, but not so far that factors like communication delay become extreme. Getting that far out of LEO brings a huge set of new challenges that, I assume, we should be able to learn from.


Even if a crewed spacecraft could be sent to it (honestly not that crazy given JWST's predicted lifespan of 20 years. Starship should be crew rated in 10 years at most and if not, Orion could be sent with some sort of expanded service module for the trip) the issue would be that docking to or even approaching the telescope is extremely risky.

You don't want to fire any thrusters in its direction to avoid damaging the sunshield or the mirrors, but of course to slow down at it you would need to do that at some point.

If the SSD failed and a constant connection to Earth cannot be maintained, the more realistic solution would probably be to launch a satellite to a high orbit to maintain permanent connectivity with JWST which can act as a store-and-forward relay, effectively replacing the SSD without having to actually go to the telescope.


JWST is not serviceable. If the ssd dies, then it's dead. There's no replacing it.


It isn't meant to need servicing, but it does have a docking port if it ever does need it and we have the desire to do so.


Yes, but a servicing mission for replacing the memory would still be an unlikely operation. The most likely use for the docking port is if the onboard electronics outlast their design life but JWST runs out of fuel (needed to maintain its position at the Lagrange point), in which case a "jet pack" could be strapped onto the satellite to keep it in operation. This has been the business case of some companies trying to do this for GEO satellites:

https://astroscale.com/astroscale-u-s-enters-the-geo-satelli...


> If the ssd dies, then it's dead.

Presumably they could operate in a reduced mode where it does live transmission of the data when it's in contact with Earth?


Depends if the sensor readout is slower than 28 Mb/s.


If it generates, at most, 57 GB per day[1], assuming 22 h of operation (2 h for transmission), the sensors are generating 2.6 GB/h, or about 750 kB/s, which is just about 6 Mb/s (unless my math is wonky).

[1] "JWST can produce up to 57 GB each day (although that amount is dependent on what observations are scheduled)."


That is the average rate over a day. If storage is not available to buffer it, then a sensor's peak readout rate could easily exceed the transmission rate.


Edit: Which apparently it does, which is why the SSD can ingest up to 48 Mb/s to be read out more slowly later (https://jwst-docs.stsci.edu/jwst-observatory-hardware/jwst-s...).
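Back-of-the-envelope on why that works (toy numbers: a 10-minute burst at peak readout, then 10 minutes of downlink-only; in reality downlink only runs during DSN contacts):

    def recorder_delta_mb(burst_s, idle_s, in_mbps=48, out_mbps=28):
        # Fill while reading out faster than downlinking, drain afterwards.
        fill = burst_s * (in_mbps - out_mbps) / 8    # MB gained during burst
        drain = idle_s * out_mbps / 8                # MB shed while idle
        return fill, drain

    fill, drain = recorder_delta_mb(600, 600)
    # fill = 1500 MB, drain = 2100 MB: bursts are sustainable on average,
    # which is exactly what the recorder is there to absorb.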


But also "The actual data rate depends on the number of detectors simultaneously in use, their exposure parameters, and the precise timing of when their exposure readouts arrive in the ICDH for processing" - in a reduced operation mode, they can turn down the number of sensors, etc., to keep the data rate below the live transmission rate.


No currently available spacecraft can make the trip and service JWST; it's also not really intended to be serviced.


Spitballing here ... could we de-orbit it back to Earth with what propellant remains? Then fix whatever and refill the propellant?


The sunshield had a complicated unfurling sequence, I very much doubt they could fold it back up while still in orbit.


Spaceship might (Elon time) fly this August.


It's "Starship", and it's unlikely to be able to get to where the JWST is in the next few years due to needing to be refuelled in orbit. Plus they have no payload bay design. Then there's the robotics necessary for doing a repair, or human-rating Starship, either of which is years worth of time.


Oh yea. Much less memorable than BFS.


Maybe we could send up a data buffer with a high-bandwidth, short-range radio.


This kind of communication seems like an interesting problem for Starlink (or other satellite internet constellations) to solve. What if the satellites had extra antennas pointed outwards and relayed the data back to the ground at high bandwidth?

Seems like it would avoid a lot of the issues like atmospheric interference, frequency congestion, and careful placement of receiver infrastructure.


The ginormous size and sensitivity of ground antennas dwarf those other factors. Otherwise, don't you think that an agency known for sending things to space would have considered sending things to space?

However, when using laser comms, sometimes a delay does make sense.


>What if the satellites had extra antennas pointed outwards

Pointed outwards towards what?

And I don't understand how that solves atmospheric interference issues. Still gotta go through the atmosphere at least twice for ground to ground.

In actual fact, I believe the long term Starlink plan does include satellites in higher orbits, but I don't know their role

Lowest long distance latency is potentially a big competitive advantage for Starlink, so they'll probably try to get the shortest ground to ground path for some high priority customer data, and higher orbits on the signal path will detract from that goal


I never understood why the SpaceX satellites don't have shitty webcams on their "dark side".

Imagine the resolution of that array as it spins around Earth...


There is no fixed or computable phase relationship at optical wavelengths. But the constellation could operate as an Earth-sized radio telescope pointing in all directions at once, probably for a few seconds a day to avoid exceeding downlink bandwidth. There, the phase relationship is at least computable, in principle.

Probably they have not, yet, because they will be lofting new ones all the time, so have time to get it right. And, it might not really be computable, in practice.


The article isn't as interesting as I thought it would be.

> JWST can produce up to 57 GB each day (although that amount is dependent on what observations are scheduled).

Just use a lot of hard drives? And delete junk data when it's no longer needed.


The problem is how to get the data back to earth, no?


> JWST is transmitting data back to Earth on a 25.9-gigahertz channel at up to 28 megabits per second

0.028*3600 = 100 Gbit/hour

No problem here.


This is discussed in the article, as the sibling (Mike) comments, JWST uses the Deep Space Network (DSN) for comms.

The issue isn't so much blackout (the segments are 120 degrees separated^), but that contact time is expensive and you don't get 100% of the uptime to yourself. The DSN is basically the only infrastructure we have for this sort of long range communication. There are only three stations and JWST is sharing time with a bunch of other missions including everything that's currently on Mars. Scheduling takes place far in advance.

The problem also isn't absolute storage space onboard (even if capacity degrades at a gig a year); it's whether you can drain it faster than you fill it. That said, there is likely to be some limit on storage space depending on what the current rad-hardened solutions are. JWST is built on very robust and well-known hardware that has good provenance in space. For a LEO mission you might be willing to risk less tolerant components to get terabytes of capacity, but out there you want extremely resilient components, and 68 GB is probably the balance. Sentinel-2 (an ESA Earth observation satellite) has 2.4 Tb (300 GB) onboard, for example.

Ultimately the missions planners budget for how much data they're allowed to stream back and they've figured out an amount that is acceptable.

But in general, yes. If you can write to a disk and ship it, that's a good solution. IceCube (South Pole) generates TBs of data. We send back 0.5 TB of critical data every month over a fast satellite link, crucial alerts and telemetry are sent over Iridium (24/7 availability), and the rest gets sent back to Madison (WI) on a plane each summer.

^ In theory this means that you can service three missions at a time though, if they're not all in the same direction.


> There are only three stations and JWST is sharing time with a bunch of other missions including everything that's currently on Mars. Scheduling takes place far in advance.

There are 3 DSN complexes, but each complex has multiple dishes. One station is very frequently talking to multiple spacecraft at once.


It seems weird to spend $10 billion on a telescope that lasts 4-5 years and then not spend the money on a communication system that isn't limited by traffic and "expensive", or on a spare drive to double its lifespan.


It's kind of important to remember that they didn't intend to spend $10B on the telescope and that it's going to last for ~20 years. The original price was expected to be $500M and it would have never been approved if the original price was $10B. It only ended up being able to get to such a cost because of gradual overruns over 20+ years and sunk cost.

That said, NASA have been expanding the DSN for the past decade, with four new antennas added so far (the remaining two expected by 2025); it's just slow progress, as usual with most of what they do lately. DSN antennas don't bring in jobs (and thus votes) for senators the way decades-long flagship projects like JWST or SLS do.


Lifespan is not dictated by hard disk space; it's dictated by consumables like liquid helium, other cryosystems, and fuel for stationkeeping thrusters. Most big NASA projects are specced to promise just enough (in terms of science) to get funded, and are overengineered, so what gets built will often be stretched longer than the initial deliverables demanded anyway (see basically all the recent Mars rovers).


Ground comms is a construction project, which, as a federal govt thing, means it'd take even longer and cost billions more.


I don't think you understand how much "billions of dollars" is for construction projects. We're talking about a communication system, not the Large Hadron Collider.


Having worked federal construction projects in...adjacent venues, I'm exactly aware of what is involved and the scope.


How many multi-billion dollar, federally constructed ground segments are there?


More than you would expect.


Note that's a peak rate, and that should be 100 Gb (12.5 GB). Without knowing the variability it's impossible to say if that's a problem or not.


Cool, that does seem like plenty of overhead even if JWST spends the bulk of its time imaging.

Reading the article further, I was sort of surprised there’s only 68 GB of solid state storage on board. Does anyone know if that’s considered very large for space-grade systems?


There are only 8 hours of downlink time per day. DSN resources are limited and shared among many missions.

Everything was sized based on the expected volume of data that the telescope would be able to take. The instruments only generate data at a certain rate, and there are inefficiencies involved with slewing between targets. Having a larger recorder or faster downlink would not mean that JWST could take more science data.


The article mentions that there are blackout windows, I guess when the US is facing away from JWST. Only the low-bandwidth DSN has coverage all the time.


That's ~100 gigabits/hour, or 12.6 gigabytes/hour, or 11.7 gibibytes/hour.


> 0.028 * 3600 = 100 GB/hour

You forgot the units.

0.028 Mb/s *3600s = 100 Gb/hour = 12.6 GB/h


0.028 Gb/s * ...


> All of the communications channels use the Reed-Solomonerror-correction protocol.

I think you don't want to use Reed-Solomon for this, but turbo codes or, most likely, online codes.

https://en.wikipedia.org/wiki/Online_codes


Maybe it's because of RS codes' superior performance during burst errors, while the SNR involved isn't so low as to give convolutional codes a net advantage. Maybe it's the error floor.

Or maybe we should just assume that some of the world's most qualified RF and comms engineers behind these decisions did their homework.


Sigh. I might not be top notch anymore, but I am the co-author of some patents in the field. Questioning RS for this use case is legit.


> Or maybe we should just assume that some of the world's most qualified RF and comms engineers behind these decisions did their homework.

I'd rather read an interesting discussion of the tradeoffs than a shallow dismissal of curiosity because "they know what they're doing".


Of course, if it's phrased like a discussion. Assuming these guys "know what they're doing" is a pretty good starting point.


Maybe because online codes only became available after 2004, and the choice was already made; the chosen solution sufficed, so it was never revisited. So the question becomes: when did they do their homework?


Yes, there is a thread to that effect. Development was heavily delayed, and certification and conservatism around space systems can slow deployment to the point where changing things just isn't worth it.

I am sorry to have been presumptuous. There's been a spate of recent armchair experts with some arrogant "they should just use a warp drive" type contributions lately.


- The thing with online codes is that while they are quite efficient in data overhead (a few %), they only start working when you have enough data (> X MB). (check!)

- Another problem is that the system you need to solve to get your data back is large and random, so in-place updates of the data are impossible. In other words: they only work for immutable data. (check!)

- The extra information to counter lost messages is scattered over all messages, so the client doesn't need to figure out which ones are missing (and doesn't need to tell the server, in this case the telescope). This makes them highly suited for high-latency communication (and storage, which is in essence the same thing, but that's another story). (check!)

They (online codes) were designed for this exact use case.
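To make that concrete, here's a toy "random XOR" fountain, not the actual online-codes construction (which uses a tuned degree distribution and a cheap peeling decoder), but it shows the core property: any sufficiently large bag of packets, arriving in any order with any subset lost, recovers the data:

    import random

    def encode_packet(blocks, rng):
        # One fountain packet: XOR of a random subset of the k source
        # blocks, tagged with the subset mask. (Real codes ship a seed
        # and degree instead of an explicit mask.)
        k, n = len(blocks), len(blocks[0])
        mask = rng.getrandbits(k) or 1
        payload = bytearray(n)
        for i in range(k):
            if mask >> i & 1:
                for j in range(n):
                    payload[j] ^= blocks[i][j]
        return mask, bytes(payload)

    def decode(packets, k):
        # Gaussian elimination over GF(2): keep one row per pivot bit.
        basis = {}
        for mask, payload in packets:
            payload = bytearray(payload)
            while mask:
                low = mask & -mask                  # lowest set bit
                if low not in basis:
                    basis[low] = (mask, payload)
                    break
                bmask, bpay = basis[low]
                mask ^= bmask
                for j in range(len(payload)):
                    payload[j] ^= bpay[j]
        if len(basis) < k:
            raise ValueError("need more packets")
        for p in sorted(basis, reverse=True):       # back-substitution
            mask, pay = basis[p]
            m = mask ^ p
            while m:
                q = m & -m
                for j in range(len(pay)):
                    pay[j] ^= basis[q][1][j]
                m ^= q
            basis[p] = (p, pay)
        return [bytes(basis[1 << i][1]) for i in range(k)]

    rng = random.Random(42)
    blocks = [bytes([i]) * 4 for i in range(8)]     # 8 source blocks
    packets = []
    while True:                                     # collect until decodable
        packets.append(encode_packet(blocks, rng))
        try:
            assert decode(packets, 8) == blocks
            break
        except ValueError:
            pass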



