
> All of the communications channels use the Reed-Solomonerror-correction protocol—the same error-correction standard as used in DVDs and Blu-ray discs as well as QR codes.

I find that somewhat hard to believe; LDPC codes are well established and much more suitable. I would have expected them to use a DVB-S2 standard code.




I suspect they know what they're doing.


Yeah, most likely.

Nonetheless, I'm also curious about the choice, but couldn't find much about it. There has to be some trade-off, I guess, to using LDPC instead of Reed-Solomon. I only found this paper, but haven't read through it, so no conclusions yet:

https://trs.jpl.nasa.gov/bitstream/handle/2014/45387/08-1056...

> Efforts are underway in National Aeronautics and Space Administration (NASA) to upgrade both the S-band (nominal data rate) and the K-band (high data rate) receivers in the Space Network (SN) and the Deep Space Network (DSN) in order to support upcoming missions such as the new Crew Exploration Vehicle (CEV) and the James Webb Space Telescope (JWST). These modernization efforts provide an opportunity to infuse modern forward error correcting (FEC) codes that were not available when the original receivers were built. Low-density parity-check (LDPC) codes are the state-of-the-art in FEC technology that exhibits capacity approaching performance. The Jet Propulsion Laboratory (JPL) has designed a family of LDPC codes that are similar in structure and therefore, leads to a single decoder implementation. The Accumulate-Repeat-by-4-Jagged-Accumulate (AR4JA) code design offers a family of codes with rates 1/2, 2/3, 4/5 and length 1024, 4096, 16384 information bits. Performance is less than one dB from capacity for all combinations.
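For a sense of scale, here is a quick arithmetic sketch of the block sizes those quoted parameters work out to, assuming the transmitted codeword length is simply n = k / rate (my assumption, not something stated in the abstract):

    # Codeword sizes implied by the quoted AR4JA rates and information lengths,
    # assuming transmitted block length n = k / rate (assumption, not from the paper).
    from fractions import Fraction

    rates = [Fraction(1, 2), Fraction(2, 3), Fraction(4, 5)]
    info_lengths = [1024, 4096, 16384]   # information bits k, as quoted

    for k in info_lengths:
        for r in rates:
            n = int(Fraction(k) / r)     # total transmitted bits
            print(f"k={k:5d}  rate={r}  n={n:5d}  parity={n - k:5d}")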

My guess at this point is just: "We've used Reed-Solomon a bunch, we know it works. We're working on newer techniques, but let's use what we know works."


Reed-Solomon is better at handling longer runs of missing data (which could come from objects passing through the signal path, for example), and is a lot cheaper to decode - computation in space is very expensive.
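To put a rough number on the burst point, here's a minimal back-of-the-envelope sketch using the classic CCSDS RS(255,223) code over GF(2^8) as a stand-in (the article doesn't say which RS parameters JWST actually uses):

    # Burst tolerance of a Reed-Solomon code, using RS(255,223) over GF(2^8)
    # as an illustrative example (not necessarily the JWST parameters).
    # RS corrects up to t = (n - k) // 2 *symbol* errors per codeword, and a
    # symbol counts as one error whether 1 or all 8 of its bits are wrong.
    n, k, m = 255, 223, 8                    # codeword symbols, data symbols, bits/symbol
    t = (n - k) // 2                         # correctable symbol errors per codeword
    burst_any_alignment = (t - 1) * m + 1    # longest bit burst guaranteed to fit in t symbols
    burst_if_aligned = t * m                 # longest bit burst if it starts on a symbol boundary
    print(t, burst_any_alignment, burst_if_aligned)   # 16 121 128

That symbol-level accounting is what makes RS naturally forgiving of contiguous dropouts, as long as the rest of the codeword is clean.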


I suspect you're right, but it seems that the capacity advantage of convolutional codes only becomes significant at very low SNR, so maybe for deep space probe applications. Also, unless interleaving is used, Reed-Solomon can do better against bursts of errors, though I'm not sure why the noise profile would be any different.
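For what it's worth, here is a toy illustration of the interleaving point (hypothetical depth and length, not tied to any real mission): a block interleaver spreads a burst of channel errors across several codewords, so each codeword only has to correct a little.

    # Minimal block-interleaver sketch: lay `depth` codewords of `length`
    # symbols out as rows, transmit column by column. A burst of `depth`
    # consecutive channel errors then hits each codeword only once.

    def interleave(symbols, depth, length):
        rows = [symbols[i * length:(i + 1) * length] for i in range(depth)]
        return [rows[r][c] for c in range(length) for r in range(depth)]

    def deinterleave(symbols, depth, length):
        cols = [symbols[c * depth:(c + 1) * depth] for c in range(length)]
        return [cols[c][r] for r in range(depth) for c in range(length)]

    # usage: 3 codewords of 5 symbols each, burst of 3 errors on the channel
    data = list(range(15))
    tx = interleave(data, depth=3, length=5)
    tx[6:9] = ['X'] * 3                      # a 3-symbol burst on the wire
    rx = deinterleave(tx, depth=3, length=5)
    print(rx)   # the three 'X' errors land in three different codewords, one each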

So, as you say, maybe it was just faster to integrate the already certified equipment at that stage of the development.


idk, the article also mentions they've been working on it for 20 years. I wouldn't be surprised if they just got to a point that was good enough and then didn't want to mess with things.

real tragedy is that they didn't use cutting edge web7.0 tech for their front end smh


In general, cutting-edge astrophysics does not necessarily use cutting-edge software engineering.

For example, the JWST also uses a proprietary version of JavaScript 3, made by a bankrupt company.

https://twitter.com/michael_nielsen/status/15469085323556577...

I think there's a pretty good chance that their data encoding scheme was working, and so they just left it in a working state, without upgrading it to use modern best practices.


Note that this mission was specced and designed ages ago too, so just as the observations it makes are views of the past, the engineering behind it is a time capsule too.


They sure do, but I'm less confident that the statements made in the interview hold up to the standards of scientific scrutiny.


Know how to maximize budget.


JWST started in 1996. According to Wikipedia, that's roughly when LDPC codes were 'rediscovered'.

https://en.wikipedia.org/wiki/Low-density_parity-check_code#...


Why is this project on such a long timeline? I wonder what the longest critical path in the project plan was.


I can see a couple of factors:

* To be useful for science, the telescope needs a huge diameter, more than can be launched in one piece, so they had to make a folding telescope that unfolds after launch.

* We only get a single chance at success, so testing becomes critical. A lot of testing was needed to make sure everything would work out in the end.

* The technology needed was pushing the limits of what we are capable of, so a lot of R&D was needed.


A misunderstanding of 'failure' by the general public.


Others have mentioned the advantage in terms of burst errors. That is fairly common for radio signals coming from space - think of an airplane flying through the signal path, for example. I know that in the NRO, all of the radio downlinks used BCH for error correction, which could correct up to 4 bits per byte, and DPCM for compression, which does particularly well with long runs of the same pixel value - something pretty common with space imagery (most of what you're looking at is black). BCH lets you pick exactly how many bits per byte you want to be able to correct, which can be tuned to the known error characteristics of your signal.

Part of it may just be that these systems have been around a long time and we already have extremely well-tuned, efficient implementations that are known to work well with very large volumes of data coming in from space.
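As a concrete illustration of the DPCM point (a generic, lossless 1-D sketch, not the NRO's actual scheme): transmitting the difference between each sample and its predecessor turns long runs of identical pixel values into long runs of zeros, which downstream run-length or entropy coding compresses very well.

    # Minimal 1-D DPCM sketch: encode prediction residuals against the
    # previous sample; decode by accumulating them back.

    def dpcm_encode(samples):
        prev, out = 0, []
        for s in samples:
            out.append(s - prev)   # prediction residual
            prev = s
        return out

    def dpcm_decode(residuals):
        prev, out = 0, []
        for d in residuals:
            prev += d
            out.append(prev)
        return out

    scanline = [0, 0, 0, 0, 12, 14, 14, 13, 0, 0, 0]   # mostly-black row
    residuals = dpcm_encode(scanline)
    print(residuals)               # [0, 0, 0, 0, 12, 2, 0, -1, -13, 0, 0]
    assert dpcm_decode(residuals) == scanline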


I like that the article itself has an error: it's missing the space between "Reed-Solomon" and "error-correction". A bad subeditor, or an Easter egg joke? :)



