Well, for very small intros (4k/8k) distance fields are still hard to beat due to their compactness. A couple of examples not based on distfields, off the top of my head:
For 64k intros and size unlimited demos the possibilities are too many to list. Procedural mesh generation is a classic approach which has found great modern use recently:
I think the most elegant thing about this method is that it describes a scene in terms of its basic mathematical 3D objects and transformations on them (list here: http://www.iquilezles.org/www/articles/distfunctions/distfun... ) and then exploits the massive parallelism of the GPU for rendering all the pixels.
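To make that concrete, here's a small GLSL sketch of what a distance-field "scene" looks like (the `sdSphere`/`sdBox`/`map` names follow the conventions on that page; this is an illustrative sketch, not code from the talk):

```glsl
// Signed distance from point p to a sphere of radius r at the origin:
// negative inside, zero on the surface, positive outside.
float sdSphere(vec3 p, float r) {
    return length(p) - r;
}

// Signed distance to an axis-aligned box with half-extents b.
float sdBox(vec3 p, vec3 b) {
    vec3 d = abs(p) - b;
    return length(max(d, 0.0)) + min(max(d.x, max(d.y, d.z)), 0.0);
}

// A "scene" is just a function combining primitives:
// min() is union, and transformations are applied to p before the call.
float map(vec3 p) {
    float sphere = sdSphere(p - vec3(0.0, 1.0, 0.0), 1.0); // translated sphere
    float ground = p.y;                                    // plane y = 0
    return min(sphere, ground);
}
```

The whole scene is literally one function from 3D position to "distance to nearest surface", which is why it compresses so well in 4k intros.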
You can download it from here http://www.pouet.net/prod.php?which=51074 It's been updated to run more reliably (edit: on Vista) but I can't find a version that will run on Win7.
Is the demoscene a good place to get into graphics programming? The prevalence of older methods makes me think one could learn in a progression similar to that of today's graphics gurus: starting with simpler, older methods plus performance and size optimization, then moving on to modern techniques.
This is a really impressive presentation -- after looking on from afar at the seemingly magical works of the demoscene, this finally helped me understand a little bit of how the magic happens. I've only got a bit of GLSL experience so far but now I want to learn a lot more.
Two triangles make a flat rectangle (a 'quad'), which is sized to fill your actual screen. When you run a pixel shader over the quad, it ends up running for every pixel on your screen. The result is that you get the visual effect of the pixel shader producing a very detailed-looking scene, when the actual geometry is as simple as it gets.
Here's an old experiment I did where the pixel shader is running on the faces of a cube http://dgholz.github.io/GLSLSphere/ The cube edges are highlighted in red so you can see them. I like green.
Basically you draw a single quad (2 triangles) covering the entire screen using OpenGL (or DirectX).
A pixel shader is run when rendering each pixel of the quad. Its only inputs are often just `time` and `resolution`.
At least in GLSL there's a built-in variable, `gl_FragCoord`, which provides the window position of the pixel currently being drawn (at pixel centers). So for example the pixel at the bottom left is gl_FragCoord.xy = vec2(0.5, 0.5). The one directly to the right of that is gl_FragCoord.xy = vec2(1.5, 0.5).
Given that you're also passed the resolution, you can get a value that goes from 0 to 1 across the screen with

vec2 zeroToOne = gl_FragCoord.xy / resolution;
If you were to dump that value directly to the screen you'd get a red gradient going from black to red from left to right, and a green gradient from black to green going from bottom to top. See http://glsl.heroku.com/e#18516.0
Now it's up to you: given just gl_FragCoord, resolution, and time, use more creative math to write a function that generates an image.
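Putting those pieces together, a complete fragment shader along those lines might look like this (a hedged sketch; the `resolution` and `time` uniform names are assumptions matching the description above):

```glsl
uniform vec2 resolution;  // screen size in pixels (assumed uniform name)
uniform float time;       // seconds since start (assumed uniform name)

void main() {
    // Map the pixel position to the 0..1 range, as described above.
    vec2 uv = gl_FragCoord.xy / resolution;

    // Draw a circle whose radius pulses over time,
    // on top of the red/green gradient from the example link.
    float d = distance(uv, vec2(0.5));
    float radius = 0.25 + 0.1 * sin(time);
    vec3 color = (d < radius) ? vec3(1.0, 0.5, 0.0) : vec3(uv, 0.0);

    gl_FragColor = vec4(color, 1.0);
}
```

Everything on screen is computed per-pixel from those three inputs; there's no geometry beyond the quad.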
Wouldn't it be even easier to draw one triangle that extends beyond the screen so that it covers it entirely and let the pipeline clip it to the screen size?
In most standard 3D graphics, the CPU passes a description of the scene as polygons to the GPU, which then does two[1] shader steps: vertex and fragment[2] shading. The vertex shader works at the level of triangle vertices, effectively translating and transforming the vertices, and then the fragment shader colors in each individual pixel.
So for a standard scene, the CPU tells the GPU: 'Right, we've got a room, with some pillars, and a monster, and a few lights, positioned like this', and then the GPU calculates what that looks like.
In Inigo's approach, the CPU only knows about two triangles (a quad covering the screen), so it just tells the GPU to draw a flat rectangle. The vertex shader does nothing but pass the flat rectangle through. However, because the fragment shader can run arbitrary logic, rather than just painting the quad with a solid color or even a texture, it runs its own simulation that draws an entire scene.
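That 'simulation' is typically sphere tracing a distance field: march a ray from the camera, repeatedly stepping forward by the distance to the nearest surface until you hit something. A minimal sketch in GLSL (assuming a `map(p)` scene-distance function in the style of Inigo's articles):

```glsl
// Sphere tracing: step along the ray by the distance to the
// nearest surface; when that distance gets tiny, we've hit it.
float raymarch(vec3 ro, vec3 rd) {      // ray origin, ray direction
    float t = 0.0;
    for (int i = 0; i < 64; i++) {
        float d = map(ro + t * rd);     // distance to closest object
        if (d < 0.001) return t;        // close enough: report hit distance
        t += d;                         // safe step: can't overshoot a surface
        if (t > 100.0) break;           // marched past the scene
    }
    return -1.0;                        // no hit
}
```

The fragment shader then runs this once per pixel, with `rd` derived from the pixel's position, and shades the hit point.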
----
[1] More these days with Geometry shaders, but that's another topic
[2] Sometimes called a pixel shader, although really that's incorrect - Fragment is a more accurate term
It basically means everything happens in the shaders, not in the geometry. There have to be some vertices though, and the minimum you can have to cover the screen is two triangles.
That is incorrect. You can cover the screen with a single triangle ... just make it big enough. The corners get clipped and the middle region of the big triangle will cover the screen.
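For reference, a common GLSL idiom for the single-triangle trick generates the oversized triangle in the vertex shader from `gl_VertexID`, with no vertex buffer at all (a sketch of the well-known idiom, not code from the talk):

```glsl
#version 330
// Vertex shader: emit a triangle with clip-space corners
// (-1,-1), (3,-1), (-1,3). It covers the whole screen and the
// excess outside the viewport is clipped away by the rasterizer.
void main() {
    // gl_VertexID 0,1,2 -> (0,0), (2,0), (0,2)
    vec2 pos = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);
    gl_Position = vec4(pos * 2.0 - 1.0, 0.0, 1.0);
}
```

You then issue a draw call for three vertices and the fragment shader runs exactly once per screen pixel.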
True, both approaches are equally valid and being used. Every once in a while some gfx haxors also spend baffling amounts of effort on humorously "benchmarking" both approaches against each other.. ;)
The Google cache version of the pdf doesn't include any images, so I put up a copy over here:
https://dl.dropboxusercontent.com/u/2173295/rwwtt.pdf