I never said filter after rasterizing. Pixel-sized squares have the same issue if you filter before rasterizing.
The only reason pixel-sized squares survive supersampling intact is because supersampling uses an imperfect low-pass filter (block-based averaging). Here is a good example showing the shortcoming of straight supersampling: https://people.cs.clemson.edu/~tadavis/cs809/aa/aliasing5.pn...
You NEED to apply a true low-pass filter before rasterizing to completely eliminate Moiré patterns. Whether a round of supersampling occurs before this filtering is irrelevant. And pixel-sized squares don't survive low-pass filters intact, see https://en.wikipedia.org/wiki/Gibbs_phenomenon#Signal_proces...
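To make the "imperfect low-pass" point concrete, here's a rough 1D sketch (my own numbers and code, not taken from either link): a sinusoid above the output Nyquist rate, downsampled once by block averaging and once through a windowed-sinc pre-filter. The block average lets a sizeable aliased component through; the sinc filter suppresses it.

```python
import numpy as np

oversample = 16                  # supersamples per output pixel (arbitrary choice)
n_pixels = 256
n_fine = n_pixels * oversample

# "Scene": a sinusoid at 1.6x the output Nyquist rate, so it is guaranteed to alias
x = np.arange(n_fine) / oversample           # position in units of output pixels
freq = 0.5 * 1.6                             # cycles per output pixel (Nyquist is 0.5)
scene = np.sin(2 * np.pi * freq * x)

# 1) Straight supersampling: average each block of supersamples (a box filter)
box = scene.reshape(n_pixels, oversample).mean(axis=1)

# 2) True low-pass first: windowed sinc cut off at the output Nyquist rate, then decimate
taps = np.arange(-6 * oversample, 6 * oversample + 1)
kernel = np.sinc(taps / oversample) * np.hamming(len(taps))
kernel /= kernel.sum()                       # normalize to unit DC gain
lowpassed = np.convolve(scene, kernel, mode='same')[::oversample]

# Ignore the filter's edge transients and compare what survives downsampling
interior = slice(16, -16)
print("aliased residual after box averaging :", np.abs(box[interior]).max())
print("aliased residual after sinc filtering:", np.abs(lowpassed[interior]).max())
```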
Your goal is to take a pixel-sized square with sharp edges and render it on a screen composed of squares. If it displays wrong, you used the wrong method. You can't blame signal theory.
In the case of your second link, the problem is inappropriately applying a low-pass filter. This makes the edge of the square non-sharp, and adds distortion effects multiple pixels away.
This portion of signal theory only applies to a world made entirely out of frequencies. This causes problems when you try to apply it to realistic shapes. It's great for audio, not so great for rendering. The use of point samples does not automatically imply you should be using it.
If you don't like supersampling, that's fine. But you need to pick an antialiasing method that's compatible with the concept of 'edges'.
Edit:
You added "You NEED to apply a true low-pass filter before rasterizing to completely eliminate Moiré patterns."
I don't think that's right. It should look fine if you use brute force to calculate how much of every texel on every polygon shows up in every pixel. And it should look fine with much more efficient approximations of that. At no point should it be necessary to use filters that can cause ring effects multiple pixels away.
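For illustration, here's my own toy version of that (just the single-square case, not every texel on every polygon): for each pixel, compute how much of the square's area falls inside that pixel's footprint and use the area as the value. Coverage stays in [0, 1] and nothing leaks beyond the pixels the square actually touches.

```python
import numpy as np

def overlap_1d(a0, a1, b0, b1):
    """Length of the overlap between intervals [a0, a1] and [b0, b1]."""
    return max(0.0, min(a1, b1) - max(a0, b0))

def rasterize_square(x0, y0, size, width, height):
    """Coverage-based rasterization of an axis-aligned square of side `size`
    whose lower-left corner is at (x0, y0), in pixel units."""
    img = np.zeros((height, width))
    for py in range(height):
        for px in range(width):
            # pixel footprint is the unit square [px, px+1] x [py, py+1]
            cov_x = overlap_1d(px, px + 1, x0, x0 + size)
            cov_y = overlap_1d(py, py + 1, y0, y0 + size)
            img[py, px] = cov_x * cov_y
    return img

# A pixel-sized square aligned to the grid fills exactly one pixel...
print(rasterize_square(2.0, 2.0, 1.0, 5, 5))
# ...and shifted by half a pixel it splits its area across four pixels,
# with values always in [0, 1] and nothing bleeding further away.
print(rasterize_square(2.5, 2.5, 1.0, 5, 5))
```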
> This portion of signal theory only applies to a world made entirely out of frequencies.
That's not precisely true: discrete blocks or edges can be approximated arbitrarily closely in Fourier frequency space -- you just need to admit higher-frequency components.
'Pixel-sized square' is properly an oxymoron if you view pixels purely as samples -- a 'true' square can only be represented by an infinitely dense set of pixel samples.
But this is a feature, not a bug, because neither computer monitors, nor the human eye, can render nor see a 'true' square either, only successively better approximations to them.
> That's not precisely true: discrete blocks or edges can be approximated arbitrarily closely in Fourier frequency space -- you just need to admit higher-frequency components.
You can approximate those shapes, but it's only ever an approximation. It's not impossible to do the math that way, but there are stumbling blocks to dodge, like weird artifacts when you low-pass.
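If you want to see that stumbling block concretely, here's a quick numerical sketch (parameters are mine): low-pass a square wave by truncating its Fourier series, and the reconstruction overshoots near every edge by roughly 9% no matter how many harmonics you keep -- the Gibbs ringing being argued about above.

```python
import numpy as np

t = np.linspace(0, 1, 20000, endpoint=False)
square = np.sign(np.sin(2 * np.pi * t))    # ideal square wave, values in {-1, +1}

for n_harmonics in (7, 31, 127):
    # Fourier series of a square wave uses only odd harmonics: (4/pi) * sin(2*pi*k*t) / k
    k = np.arange(1, 2 * n_harmonics, 2)
    approx = (4 / np.pi) * np.sum(np.sin(2 * np.pi * np.outer(k, t)) / k[:, None], axis=0)
    print(f"{n_harmonics:4d} harmonics: peak value {approx.max():.3f} (ideal is 1.0)")
```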
> 'Pixel-sized square' is properly an oxymoron if you view pixels purely as samples -- a 'true' square can only be represented by an infinitely dense set of pixel samples.
It's a square equal in size to the square-grid pixel spacing. It's not wrong, it's just being non-pedantic.
> But this is a feature, not a bug, because neither computer monitors, nor the human eye, can render nor see a 'true' square either, only successively better approximations to them.
In fact, CRT phosphors didn't even correspond to logical computer pixels. If your CRT resolution was lower than the maximum phosphor grid resolution of the CRT mask, then a single pixel would indeed spread across more than one tri-color phosphor group.
Signal theory reminds us that 'square pixels' are just an arbitrary shortcut. We could equally well describe them as hexagons, ovals, or rectangles, or make the grid positions random, and monitors would work just as well. That's why, when you blow up a pixel and render it as a giant square with exact edges, it's a very misleading and arbitrary choice.
But at no point, either in the CRT or in the human eye, do the ringing artifacts actually show up. So any pipeline that renders those when you blow the pixel up is worse than rendering just a giant square.
They do. If you try to draw a white pixel in the middle of a black background on a CRT, the white bleeds around in a small circle, and if you look at a point light source in a totally dark room, you will see a small halo around it. These are both ringing artifacts.
You see faint light around the pixel/light source. You don't see the edges of the pixel/light source as brighter than the center. Do you? (Can't check right now but it's not how I remember seeing things)