You absolutely are seeing individual stars in the Triangulum Galaxy (M33) in the zoomable image. The brighter "blobs" are a combination of foreground stars (i.e., stars in our own galaxy) and star clusters in M33, but the fainter points are almost all stars in M33 (a few may be faint foreground stars or background galaxies and quasars).

This is no more mysterious than the fact that you can see individual (nearby, bright) stars with your naked eye when you go outside at night. Even the nearest stars have angular diameters far too small for your eyes (or small telescopes) to resolve; nonetheless, you can still see them.

The key thing to understand is that objects which are too small to resolve will be blurred by the telescope/lens/eye (plus atmospheric turbulence if you're not in space) into a point spread function [https://en.wikipedia.org/wiki/Point_spread_function]. The light from each star is thus spread out over multiple pixels in a pattern which is basically the same for all the stars, varying only in total brightness and thus detectability; for really bright stars, the light in this pattern can be traced out over a significant fraction of the entire image. This includes features like the diffraction spikes; we don't see these for stars in M33 only because they're too faint to register.
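
A toy sketch of that, if it helps (numbers invented; a Gaussian standing in for the real Airy-pattern PSF):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Two point sources on a fine grid: one bright, one faint.
    sky = np.zeros((201, 201))
    sky[100, 60] = 1e6    # bright foreground star
    sky[100, 140] = 1e2   # faint star in M33

    # Same PSF for both (Gaussian as a stand-in for an Airy pattern):
    # the pattern's shape is identical, only its total brightness differs.
    image = gaussian_filter(sky, sigma=3.0)

    # Against a detection floor (say, sky noise ~ 1 count), the bright
    # star's pattern is traceable much farther out than the faint one's.
    floor = 1.0
    print("bright star footprint:", int((image[:, :100] > floor).sum()), "px")
    print("faint star footprint: ", int((image[:, 100:] > floor).sum()), "px")

The bright star's wings stay above the floor over hundreds of pixels; the faint star's identical (but scaled-down) pattern sinks below it within a few pixels of its center.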



This, exactly. And it should be noted that even with a theoretical ideal PSF, point sources would still have non-zero-sized images simply because the recording surface (film, retina, CMOS sensor) has discrete sensor elements. A point source will always be at least a single "pixel" wide in the image, it just needs to be bright enough to stand out against whatever else is projected onto that pixel.
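
Concretely, a minimal sketch (the int() truncation standing in for the finite sensel area):

    import numpy as np

    def record(x, y, grid=8):
        """A true point source hitting finite-area sensels: all of its
        flux lands in whichever sensel contains the point."""
        img = np.zeros((grid, grid))
        img[int(y), int(x)] += 1.0
        return img

    # Anywhere inside pixel (3, 5) produces the identical one-pixel image;
    # the sub-pixel position is simply not recorded.
    assert np.array_equal(record(5.1, 3.2), record(5.9, 3.8))
    print(np.count_nonzero(record(5.1, 3.2)))   # -> 1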


If I'm not mistaken, ideal optics would focus a point source onto a single pixel. A blurred image would spread that point source over more pixels, and a sharpening filter might be able not only to fix the blur but also to locate the point source within that single pixel. Yes? No?

I'm imagining this being similar to how adding the right kind of noise and oversampling can tease out smaller signals than one might think possible in a noise-free sampling system.
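
Something like this toy dither sketch is what I have in mind (made-up numbers; uniform noise one quantization step wide):

    import numpy as np

    rng = np.random.default_rng(0)
    signal = 0.3      # true level, below the quantization step of 1.0

    # Without dither, quantizing always gives 0 -- the signal is invisible.
    print(np.round(signal))                              # -> 0.0

    # With dither (uniform noise one step wide) plus heavy oversampling,
    # the average of the quantized samples recovers the sub-step level.
    samples = np.round(signal + rng.uniform(-0.5, 0.5, 100_000))
    print(samples.mean())                                # -> ~0.3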

Am I off base here?


Optics don't know about pixels. To a first approximation, they are continuous. Whatever discretization happens after that, it's not relevant to the optics in any way. A sharp lens can greatly outresolve a low-resolution sensor; the bottleneck is the sensor in that case. Conversely, a high-resolution sensor may not be of much use if the lens is soft - but in this case you can at least in theory use deconvolution to recover detail if you know a good approximation of the PSF of the lens. This is not uncommon when working with scientific instruments, actually, but pretty rare in the case of consumer photography equipment.
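
A bare-bones sketch of that kind of deconvolution (assuming a known, purely Gaussian PSF and negligible noise; real pipelines use measured PSFs and more careful regularization):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Two point sources 6 px apart, merged into one blob by a sigma-3 PSF.
    n = 128
    scene = np.zeros((n, n))
    scene[64, 61] = scene[64, 67] = 1.0
    blurred = gaussian_filter(scene, 3.0)

    # Known PSF -> optical transfer function (blur a centered impulse).
    impulse = np.zeros((n, n))
    impulse[64, 64] = 1.0
    otf = np.fft.fft2(np.fft.ifftshift(gaussian_filter(impulse, 3.0)))

    # Wiener deconvolution; k keeps frequencies the PSF wiped out from blowing up.
    k = 1e-6
    deconv = np.fft.ifft2(np.fft.fft2(blurred) * otf.conj() / (np.abs(otf)**2 + k)).real

    peaks = lambda im: np.flatnonzero(im[64] > 0.5 * im[64].max())
    print("blurred row:    ", peaks(blurred))   # one merged blob
    print("deconvolved row:", peaks(deconv))    # two separated peaks

With the PSF known, the two sources that the blur merged into a single blob come back apart; with noise in the data, k has to grow and the recoverable detail shrinks accordingly.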


No, because a pixel is not a point. Even if imperfect optics blur the point a bit, it can still be one pixel or less. Ideal optics might be able to focus the light to a spot much smaller than one pixel, but that won't change the resulting image, because the pixel sensor will pick up the same amount of light.
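
A toy sketch of that, with a fine subgrid standing in for the continuous optics (the sensel just sums whatever lands on its area):

    import numpy as np

    def center_pixel_reading(spot_sigma, oversample=100):
        """Light collected by the central pixel of a 3x3-pixel patch,
        with a fine subgrid standing in for the continuous image."""
        x = np.linspace(-1.5, 1.5, 3 * oversample)    # 3 pixels across
        xx, yy = np.meshgrid(x, x)
        spot = np.exp(-(xx**2 + yy**2) / (2 * spot_sigma**2))
        spot /= spot.sum()                            # unit total flux
        c = slice(oversample, 2 * oversample)         # the central pixel
        return spot[c, c].sum()

    print(center_pixel_reading(0.05))   # ~1.00
    print(center_pixel_reading(0.15))   # ~1.00: 3x bigger spot, still sub-pixel, same reading
    print(center_pixel_reading(1.00))   # ~0.15: blur wider than the pixel leaks into neighbors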



And elementary photosensitive units (whether silver halide grains on photographic film, individual sensor elements in a CMOS or CCD image sensor, or photoreceptive cells in a mammalian retina) are not points. They have a nonzero area and turn photons into signal irrespective of where they hit. That article, while very informative, is not relevant. Sensels are not pixels.



