Rendering Worlds with Two Triangles on the GPU [pdf] (iquilezles.org)
109 points by muyyatin on July 22, 2014 | 25 comments


Ah, the classic presentation that got me started in the demoscene a couple of years back.

The Google cache version of the pdf doesn't include any images, so I put up a copy over here:

https://dl.dropboxusercontent.com/u/2173295/rwwtt.pdf


Thanks for the mirror.

Know of any novel techniques/rehashing of old techniques that have been developed since?


Well, for very small intros (4k/8k) distance fields are still hard to beat due to their compactness. A couple of examples not based on distfields, off the top of my head:

http://www.pouet.net/prod.php?which=62027 (No idea what this is, but it's great)

http://www.pouet.net/prod.php?which=62974 (reverse fluid simulation)

http://www.pouet.net/prod.php?which=59613 (particles)

For 64k intros and size-unlimited demos the possibilities are too many to list. Procedural mesh generation is a classic approach that has recently found great use:

http://www.pouet.net/prod.php?which=61204 (if you follow 1 link in this comment make it this one)


Awesome, thanks!

What kind of mathemagical trickery is that reverse fluid simulation?!


Seven held a short seminar about it; it's the first one in this video:

http://www.youtube.com/watch?v=DQ8eB_FORLo


I think the most elegant thing about this method is that it describes a scene in terms of its basic mathematical 3D objects and transformations on them (list here: http://www.iquilezles.org/www/articles/distfunctions/distfun... ) and then exploits the massive parallelism of the GPU for rendering all the pixels.
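
To make that concrete, here's a minimal GLSL sketch of the idea; the function names and constants are my own illustration, not code from the slides:

    // signed distance from point p to a sphere of radius r at the origin
    float sdSphere(vec3 p, float r) {
        return length(p) - r;
    }

    // the scene: one sphere translated to (0,1,6); the translation is
    // applied by moving the query point in the opposite direction
    float map(vec3 p) {
        return sdSphere(p - vec3(0.0, 1.0, 6.0), 1.0);
    }

    // sphere tracing: each pixel marches a ray, stepping by the distance
    // bound, since nothing in the scene can be closer than map(p)
    float raymarch(vec3 ro, vec3 rd) {
        float t = 0.0;
        for (int i = 0; i < 64; i++) {
            float d = map(ro + rd * t);
            if (d < 0.001) return t;  // hit
            t += d;
        }
        return -1.0;                  // miss
    }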

Here's a demo of someone playing around with it, complete with a Slisesix-inspired scene: http://www.rpenalva.com/blog/?p=254

This set of slides is also related: http://www.iquilezles.org/www/material/function2009/function...


And if you want to try things out yourself, iq has created a playground for you here: https://www.shadertoy.com/


and one similar to Elevated https://www.shadertoy.com/view/4slGD4


It is disingenuous for the author to not cite Elevated.


I love these.

People might enjoy noodling around the Geisswerks pages, which have many code snippets on ray tracing, graphics demos, and so on.

http://www.geisswerks.com/


You can download it from here: http://www.pouet.net/prod.php?which=51074 It's been updated to run more reliably (edit: on Vista), but I can't find a version that will run on Win7.

Edit: I found a similar one on Shadertoy https://www.shadertoy.com/view/lsf3zr


I just ran it on my Mac successfully with Wine!


I was working with distance fields back in 2008, and the idea of inverting the process blew my mind.

I had no idea Iñigo Quilez's image was produced this way and I'm so glad I had the chance to see how it was made.

Thanks for posting!!


Is the demoscene a good place to get into graphics programming? The prevalence of older methods makes me think one could learn in a progression similar to that of today's graphics gurus: starting with simple old methods and their performance and size optimizations, then moving on to modern techniques.


This is a really impressive presentation -- after looking on from afar at the seemingly magical works of the demoscene, this finally helped me understand a little bit of how the magic happens. I've only got a bit of GLSL experience so far but now I want to learn a lot more.


Could someone explain to me what the "two triangles that cover the entire screen area" have to do with anything?


Two triangles make a flat rectangle (a 'quad'), which is sized to fill your actual screen. When you run a pixel shader over the quad, it ends up running for every pixel on your screen. The result is the visual effect of a very detailed-looking scene, while the actual geometry is as simple as it gets.
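
(For the curious: the quad is just four clip-space corners split into two triangles. Illustrative vertex data, written here as a GLSL constant array rather than anyone's actual setup code, might look like this.)

    // two triangles spanning clip space [-1,1] x [-1,1]
    const vec2 quad[6] = vec2[](
        vec2(-1.0, -1.0), vec2(1.0, -1.0), vec2(-1.0, 1.0),   // triangle 1
        vec2(-1.0,  1.0), vec2(1.0, -1.0), vec2( 1.0, 1.0)    // triangle 2
    );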

Here's an old experiment I did where the pixel shader is running on the faces of a cube: http://dgholz.github.io/GLSLSphere/ The cube edges are highlighted in red so you can see them. I like green.


Basically you draw a single quad (2 triangles) covering the entire screen using OpenGL (or DirectX).

A pixel shader is run when rendering each pixel of the quad. Often its only inputs are `time` and `resolution`.

At least in GLSL there's a built-in variable, `gl_FragCoord`, that provides the window position of the pixel currently being drawn (strictly the pixel's center, so the bottom-left pixel is gl_FragCoord.xy = vec2(0.5, 0.5), and the one directly to its right is vec2(1.5, 0.5)).

Given that you're also passed the resolution, you can get a value that goes from 0 to 1 across the screen with

   vec2 zeroToOne = gl_FragCoord.xy / resolution;
If you were to dump that value directly to the screen you'd get a gradient going from black to red left to right, and from black to green bottom to top. See http://glsl.heroku.com/e#18516.0
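
Put together, the whole gradient shader is just a few lines (a sketch assuming `resolution` is supplied as a uniform, as on glsl.heroku.com):

    uniform vec2 resolution;  // viewport size in pixels

    void main() {
        // normalize the pixel position to the [0,1] range on both axes
        vec2 zeroToOne = gl_FragCoord.xy / resolution;
        // red grows left to right, green grows bottom to top
        gl_FragColor = vec4(zeroToOne, 0.0, 1.0);
    }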

Now it's up to you to use more creative math: given just gl_FragCoord, resolution, and time, write a function that generates an image.
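
As a tiny example of that creative math (my own illustration, again assuming `time` and `resolution` uniforms): a circle whose radius pulses over time.

    uniform float time;
    uniform vec2 resolution;

    void main() {
        // center the coordinates and correct for aspect ratio
        vec2 uv = (gl_FragCoord.xy - 0.5 * resolution) / resolution.y;
        // signed distance to a circle whose radius oscillates with time
        float d = length(uv) - (0.25 + 0.05 * sin(time));
        // white inside the circle, black outside
        gl_FragColor = vec4(vec3(d < 0.0 ? 1.0 : 0.0), 1.0);
    }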

You can play with that in your browser here, http://glsl.heroku.com and here http://shadertoy.com


Wouldn't it be even easier to draw one triangle that extends beyond the screen so that it covers it entirely and let the pipeline clip it to the screen size?


So the whole point of using a shader is that it's the GPU that's doing all the work?


Yes, that's what this trick is for.

In most standard 3D graphics, the CPU passes a description of the scene as polygons to the GPU, which then does two[1] shader steps - vertex and fragment[2] shading. The vertex shader works at the level of triangle vertices, effectively translating and transforming them, and then the fragment shader colors in each individual pixel.

So for a standard scene, the CPU tells the GPU: 'Right, we've got a room, with some pillars, and a monster, and a few lights, positioned like this', and then the GPU calculates what that looks like.

What Inigo is doing is that the CPU only knows there are two triangles - a quad covering the screen - so it just tells the GPU to draw a flat rectangle. The vertex shader does nothing but maintain the flat rectangle. However, because the fragment shader can run arbitrary logic, rather than just painting the rectangle with a solid color or even a texture, it runs its own simulation that draws an entire scene.
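
The do-nothing vertex shader can be as short as this (a sketch in older-style GLSL, assuming the quad's clip-space corners arrive as an attribute):

    attribute vec2 position;  // clip-space corner of the fullscreen quad

    void main() {
        // pass the vertex straight through: no camera, no transforms
        gl_Position = vec4(position, 0.0, 1.0);
    }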

----

[1] More these days with Geometry shaders, but that's another topic

[2] Sometimes called a pixel shader, although really that's incorrect; fragment is the more accurate term


It basically means everything is happening in the shaders, not in geometry. There have to be some vertices, though, and the minimum you can have to cover the screen is two triangles.


That is incorrect. You can cover the screen with a single triangle ... just make it big enough. The corners get clipped and the middle region of the big triangle will cover the screen.
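
The usual trick is three clip-space vertices such as (-1,-1), (3,-1) and (-1,3); everything outside [-1,1] is clipped away. In GLSL 1.30+ you can even generate them from gl_VertexID with no vertex buffer at all (a sketch):

    // one oversized triangle covering the whole screen
    void main() {
        vec2 pos = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);
        gl_Position = vec4(pos * 2.0 - 1.0, 0.0, 1.0);
    }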


You can get around the clipping problem by displaying it on an old Interocitor.


True, both approaches are equally valid and both are in use. Every once in a while some gfx haxors also spend baffling amounts of effort on humorously "benchmarking" the two against each other.. ;)



