
Quick question if anyone knows. One of the examples there is showing the "Linear" images compared to the "Stretched" images. I'm assuming that stretched means 0-255 RGB greyscale. But what are the ranges of "Linear" and why is it so dark? Are those floating point values of 0.0 - 1.0? Are they 12.0-18.0 like is shown in the Rosolowsky dataset?


Amateur astrophotographer here. What I'm going to talk about is true for my rig. The JWST is an astronomically better telescope than what I have, but the same basic principles apply.

The cameras used here are more than 8-bit cameras, so there has to be some way to map the higher bit-depth color channels down to 8 bits for publishing. The term for the raw pixel values coming off the camera is ADU (analog-to-digital units). For an 8-bit camera, the ADU range is 0-255; for a 16-bit camera (like what mine outputs), it's 0-65535. That's not really what stretching is about, though.

A lot of the time, the signal for the nebula in an image might be in the 1k-2k ADU range (for a 16-bit camera), while the stars will be in the 30k to 65k range. If you compress the pixel values to an 8-bit range linearly (i.e., 0 ADU = 0, 65535 ADU = 255), you're missing out on a ton of detail in the nebula's 1k-2k range. If you were to say 'ok, let's have 1k ADU = 0 in the final image, and 2k ADU = 255', then you might be able to see some of that detail, but a lot of the frame will be clipped to white, which is kind of awful. Either way, that's a linear remapping of ADU to pixel values.
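A minimal sketch of that linear remapping with clipping (NumPy; the function name and ADU values are illustrative, not from any particular software):

```python
import numpy as np

def linear_stretch(adu, black=1000, white=2000):
    """Linearly map [black, white] ADU to [0, 255]; clip everything outside."""
    x = (adu.astype(np.float64) - black) / (white - black)
    return np.clip(x, 0.0, 1.0) * 255.0

frame = np.array([0, 1000, 1500, 2000, 40000])
out = linear_stretch(frame)  # stars at 40k ADU clip to 255 (pure white)
```

With black/white set around the nebula's range, everything brighter than 2k ADU (all the stars) saturates to white, which is exactly the clipping described above.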

The solution is to apply a non-linear stretch, e.g. a power rule (raising the normalized ADU to an exponent). (EDIT: The specific math is probably wrong here.) That way you can compress the high-ADU values, where large differences in ADU aren't very interesting, and stretch the low-ADU values that carry all the visually interesting signal. In the software this is done via a histogram tool with three sliders: one to set the zero point, one to set the max point, and a middle one to set the curve.
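A sketch of one common three-slider implementation: the midtones transfer function used by some astro software (e.g. PixInsight). This is an assumption about what "the software" does; the function names are mine. It fixes 0 and 1 in place and maps the chosen midtone value to 0.5, so a low midtone slider brightens the faint signal:

```python
import numpy as np

def mtf(x, m):
    """Midtones transfer function: maps x=0 to 0, x=1 to 1, x=m to 0.5."""
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

def stretch(adu, black, white, midtone):
    """Three-slider stretch: zero point, max point, and midtone curve."""
    x = np.clip((adu - black) / float(white - black), 0.0, 1.0)
    return mtf(x, midtone) * 255.0
```

With midtone = 0.25, a pixel halfway between the black and white points comes out at 0.75 instead of 0.5, i.e. the shadows get pushed up while the highlights are compressed.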

It's kinda like a gamma correction.


Also related: μ-law[1] and A-law[2] companding in telecoms.

[1]: https://en.wikipedia.org/wiki/%CE%9C-law_algorithm

[2]: https://en.wikipedia.org/wiki/A-law_algorithm
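Same idea applied to audio: μ-law companding uses a logarithmic curve that expands quiet samples and compresses loud ones before quantization. A minimal sketch of the compression step (standard μ = 255, input normalized to [-1, 1]; function name is mine):

```python
import math

def mu_law_compress(x, mu=255):
    """mu-law companding: log curve that boosts small amplitudes."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)
```

A sample at 1% of full scale comes out above 20% after companding, so it survives 8-bit quantization that would otherwise flatten it to near-zero.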


I think the answer is that the raw data has too much dynamic range. The stars are so much brighter than anything else that a naive linear scaling from the native depth to 8 bit results in all the shadows getting washed out and only the highlights showing. Instead, the "stretched" seems to be compressing the highlights to allow the shadow data to become brighter.


Probably true, and analogous to gamma correction (https://en.wikipedia.org/wiki/Gamma_correction) although they don't specifically say whether the range-compressing transformation that they are using is a power law.
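If it is a power law, the transform would look like standard gamma encoding (a sketch; γ = 2.2 is the common display value, input normalized to [0, 1]):

```python
def gamma_encode(linear, gamma=2.2):
    """Power-law (gamma) encoding: brightens shadows, compresses highlights."""
    return linear ** (1.0 / gamma)
```

A shadow pixel at 1% of full scale maps to roughly 12%, while the bright end barely moves, which matches the "stretched" images looking much brighter in the dark regions.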


N00b astrophotographer here. I can't see the images but this sounds correct. (Edit: paste the two images into an image editor and look at the histogram. You'll see it)



