Hacker News
James Webb Telescope pictures didn’t begin as images (thestar.com)
77 points by tontonius on Oct 18, 2022 | 30 comments




I have friends/former colleagues who work on these pipelines, and I can tell you that it's not a stretch to say that there are dozens if not hundreds of people whose entire working lives are about characterizing the sensors, noise, electronics, so that after images are taken, they can be processed well / automatically / with high precision.

(and after all, if these instruments/telescope were 30 years and $10B in the works, you would hope there's a fairly well developed function to make the data as useful as it can be)

The goal is to get the "true" physical measurement of the light that arrives at the telescope. After those photons arrive, the measurements get contaminated by everything to do with the hardware, sensors, electronics, processing artifacts, and there's a whole organization that exists to study and remove these effects to get that true signal out.

Every filter, sensor, and system has been studied for thousands of person-hours, and there are libraries upon libraries of files to calibrate/correct the image that gets taken. How do you add up exposures that are shifted by sub-pixel movements to effectively increase the resolution of the image? How do you identify when certain periodic events on the telescope add patterns of noise that you want to remove? What pattern should a single point of light be spread into after traveling through the mirror/telescope/instrument/sensor system, and how do you use that to improve the image quality? (That's the six-pointed star you see.)
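The sub-pixel stacking idea can be sketched in a few lines of Python. This is a toy shift-and-add, not the actual drizzle algorithm used in the real pipelines; the upsampling factor and the idea of quantizing shifts onto a finer grid are simplifying assumptions:

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Toy shift-and-add: place each frame on a grid `factor` times finer,
    offset by its sub-pixel shift, and average the contributions."""
    h, w = frames[0].shape
    # Margin of `factor` fine pixels so shifts up to one pixel stay in bounds.
    acc = np.zeros((h * factor + factor, w * factor + factor))
    wgt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Quantize each sub-pixel shift onto the fine grid.
        oy, ox = int(round(dy * factor)), int(round(dx * factor))
        up = frame.repeat(factor, axis=0).repeat(factor, axis=1)
        acc[oy:oy + h * factor, ox:ox + w * factor] += up
        wgt[oy:oy + h * factor, ox:ox + w * factor] += 1
    # Average wherever at least one frame contributed.
    return np.where(wgt > 0, acc / np.maximum(wgt, 1), 0.0)
```

With many frames dithered by different fractions of a pixel, detail below the native pixel pitch starts to survive on the finer grid; the real algorithms also weight by drop size and handle geometric distortion.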

Most fascinating to me is when some natural phenomenon you thought was a discovery turns out to be a really subtle effect of noise in the instrument. (An ADC readout spike that subtly correlates with a high value having passed by during readout of a previous pixel, which makes your supernova discovery actually a fluke? I'm trying to recall the paper that found the pixel value on one chip of an instrument was related to the bitwise encoding of the readout of a neighboring chip's pixel...)

Then there's even a whole industry of how to archive data, make it useful to the field, across telescopes, across projects, and over time.

Lots of science and work here over decades.


It's an art of its own where you need to account for absolutely everything. Like, is the 0.1 °C fluctuation on your sensor an actual change, or just noise flipping the last bit or two of the ADC? Hell, in the right conditions you can use that noise to extract extra resolution over time.
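That noise trick (dithering) is easy to demonstrate: round a value the way an ADC quantizes it, but with ~1 LSB of noise added, and the long-run average recovers resolution below the quantization step. A toy sketch, with a made-up signal value and noise level:

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 100.3  # "real" signal, sitting between two ADC codes

# A noiseless ideal ADC always returns the same code: round(100.3) = 100.
# With ~1 LSB of Gaussian noise, individual readings straddle 100 and 101
# in proportion to where the true value sits between them...
readings = np.round(true_value + rng.normal(0.0, 1.0, 100_000))

# ...so the average of many readings converges back toward 100.3.
estimate = readings.mean()
```

This only works because the noise decorrelates the quantization error from the signal; with zero noise you would get 100 forever, no matter how long you average.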

Or how vibration on a cable can induce sensor noise: two conductors running in a long line form a capacitor, the vibration modulates its capacitance, and that change in capacitance induces a current.

Or how every sensor is a temperature sensor and anything else it senses is just extra.


Just to further your comment, according to this overview paper

  http://ircamera.as.arizona.edu/MIRI/paper8.pdf
they can control the MIRI focal plane array temperature to within 10 mK using some clever tricks with electronic components that can operate as heaters.

> "This temperature sensor performance supports controlling the temperature of the SCAs to 10 mK, peak-to-peak."


Once ByteDance puts their filters on, they'll be so much sexier!

More seriously, I think so many people are familiar only with image processing techniques used to make things look subjectively "better" that they find it hard to believe scientists don't do the same to their research images. That's a bit corrosive to society, but real today.


> Most fascinating to me is when someone discovers or imagines that some natural phenomenon that you thought was a discovery, turns out to be a really subtle effect of the noise in the instrument?

This happens a lot in radio astronomy, too - you can get interference from all sorts of electronic devices. Nowadays there are more and more passing satellites, as well. Or even something mundane like a microwave.

https://www.theguardian.com/science/2015/may/05/microwave-ov...


Quick question if anyone knows. One of the examples there shows the "linear" images compared to the "stretched" images. I'm assuming that stretched means 0-255 RGB greyscale. But what is the range of "linear", and why is it so dark? Are those floating-point values of 0.0-1.0? Are they 12.0-18.0, as shown in the Rosolowsky dataset?


Amateur astrophotographer here. What I'm going to talk about is true for my rig. The JWST is astronomically a better telescope than what I have, but the same basic principles apply.

The cameras used here are more than 8-bit cameras, so there has to be some way to map the higher bit-depth color channels to 8 bits for publishing. The term for the pixel values coming off the camera is ADU. For an 8-bit camera, the ADU range is 0-255. For 16-bit cameras (like what mine outputs) it's 0-65535. That's not really what stretching is about, though.

A lot of the time, the signal for the nebula in an image might be in the 1k-2k range (for a 16-bit camera), while the stars will be in the 30k to 65k range. If you were to compress the pixel values to an 8-bit range linearly (i.e., 0 ADU = 0, 65535 ADU = 255), you'd miss out on a ton of detail in the 1k-2k range of the nebula. If you were to say 'OK, let's have 1k ADU = 0 in the final image and 2k ADU = 255', then you might be able to see some of that detail, but a lot of the frame would be clipped to white, which is kind of awful. That would be a linear remapping of ADU to pixel values.

The solution is to apply a power rule (i.e., raise the normalized ADU to an exponent, creating a non-linear stretch). That way you can compress the high ADU values, where large differences aren't very interesting, and stretch the low ADU values that have all the visually interesting signal. In the software this is done via a histogram tool that has three sliders: one to set the zero point, one to set the max point, and a middle one to set the curve.

It's kinda like a gamma correction.
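A minimal sketch of that kind of stretch in Python. The black point, white point, and gamma here are made-up illustration values, not anyone's actual pipeline settings:

```python
import numpy as np

def stretch(adu, black=1_000, white=65_535, gamma=0.4):
    """Map 16-bit ADU to 8-bit display values with a power-law stretch.
    `black`/`white` play the role of the zero-point and max-point sliders;
    `gamma` < 1 is the middle slider, brightening the faint end."""
    x = np.clip((adu.astype(float) - black) / (white - black), 0.0, 1.0)
    return np.round(255 * x ** gamma).astype(np.uint8)
```

With these numbers, a 2,000 ADU nebula pixel lands at about 48 instead of the handful of counts a purely linear mapping would give it, while 65,535 still maps to 255.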


Also related: μ-law[1] and A-law[2] companding in telecoms.

[1]: https://en.wikipedia.org/wiki/%CE%9C-law_algorithm

[2]: https://en.wikipedia.org/wiki/A-law_algorithm
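For reference, a tiny Python sketch of μ-law companding with the standard telephony μ = 255 (the function names are mine, not from any particular library):

```python
import math

MU = 255  # standard mu for 8-bit telephony

def mu_law_compress(x):
    """Compress a sample in [-1, 1] non-linearly: fine resolution
    for quiet signals, coarse for loud ones."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    """Inverse of mu_law_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)
```

Same spirit as the astrophotography stretch: a quiet sample like 0.01 gets boosted to roughly 0.23 before quantization, so it survives being squeezed into 8 bits.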


I think the answer is that the raw data has too much dynamic range. The stars are so much brighter than anything else that a naive linear scaling from the native depth to 8 bit results in all the shadows getting washed out and only the highlights showing. Instead, the "stretched" seems to be compressing the highlights to allow the shadow data to become brighter.


Probably true, and analogous to gamma correction (https://en.wikipedia.org/wiki/Gamma_correction) although they don't specifically say whether the range-compressing transformation that they are using is a power law.


N00b astrophotographer here. I can't see the images but this sounds correct. (Edit: paste the two images into an image editor and look at the histogram. You'll see it)


No digital "images" begin as images.


Not so; in digital photography, they all do. The headline is entirely wrong. JWST's mirrors reflect an image onto its image sensors, whose tiny light-sensitive diodes convert light into electrical charges that are then translated into digital information and recorded as pixels. The entire thing would be pointless if there weren't an image to begin with. The fact is we can never see anything directly; we only see an image. But since this is part of how seeing is defined, we take it for granted that the thing itself is not pressing on our retinas: only a reflection of light, aka an image, is.


Agreed 100% - the telescope is an image-forming device.


Actually, unless you're doing something exotic, like coded-aperture or light-field stuff, most digital images do begin as a real image[1] focused on a sensor.

1. https://en.wikipedia.org/wiki/Real_image


Thanks for the link. My thinking about "image" was too narrow.


Ugh, I mean yes, and yes, when published these images should always say how they were colorized / how the spectrum was compressed,

but that title

this is like saying 'if someone pumped raw H.264 into your optic nerve you would dance like the guy in the Avicii Levels video'

like yes, we all know that's how that video got made, but nobody does that to their eye


Your eye is an imperfect sensor, film is an imperfect sensor, the color sensor in your camera is an imperfect sensor, and astronomers do things in particular ways because we're doing science with the data.

The paper always says how the image was made. If you want to know, look there, not at a press release.


Launch Pad Astronomy has two videos about this:

How NASA created Webb's image of the Carina Nebula with Alyssa Pagan (who was also mentioned in the article): https://www.youtube.com/watch?v=1QPJd2Fl6i4

How NASA imaged Webb's First Deep Field with Joe DePasquale https://www.youtube.com/watch?v=lLVqERtcdmw



Apologies if off-topic, but does anyone have a good source for downloading the JWST images in good quality? They are inspiring.



How am I supposed to read this if it’s behind a paywall?


I could read it after turning off javascript. Maybe try that.


lol the best kind of paywall is one that doesn't appear if you prevent RCE


Odd, I didn't hit one.

https://archive.ph/ZijsH


Money can be exchanged for goods and services.


Pay them money to get over the wall?




