
As a hobbyist game dev, one thing I've (recently) learned is that optimizing performance is paramount in video games. Running at 60 FPS you have a budget of about 16.7 ms per frame (VR typically requires 90 FPS minimum or you risk VR sickness; ~11 ms per frame). Any scripts and anything you want on screen need to fit in that amount of time. It's a big exercise in smoke and mirrors.
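
A minimal sketch of that budget math (my own illustration in C++, not tied to any engine): derive the per-frame budget from a target frame rate and check whether a frame blew it.

    #include <chrono>
    #include <cstdio>

    int main() {
        constexpr double kTargetFps = 60.0;                 // 90+ for VR
        constexpr double kBudgetMs  = 1000.0 / kTargetFps;  // ~16.7 ms (~11.1 ms at 90 FPS)

        auto start = std::chrono::steady_clock::now();
        // ... simulate and render one frame here ...
        auto end = std::chrono::steady_clock::now();

        double frameMs = std::chrono::duration<double, std::milli>(end - start).count();
        if (frameMs > kBudgetMs)
            std::printf("over budget: %.2f ms (budget %.2f ms)\n", frameMs, kBudgetMs);
    }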

I'd love to know what optimization techniques are used so that those precious CPU/GPU cycles can go toward the good-looking things.

Side note: being forced to optimize the heck out of games as an essential practice gives me a much better sense of performance characteristics in my "real" job and also in leetcode.



Modern renderers:

Use the GPU to decide what to render and, in a growing number of cases, cull triangles in compute (software) instead of letting the fixed-function pipeline do it. The vertex shader still feeds parameters to the fragment stage even if the triangle is culled (depth-failed or backfacing), so with vertex counts climbing there is a lot of dead time spent feeding fragment shaders that never run.

Render at a lower resolution and temporally upsample to a higher resolution.

Use low-overhead APIs (DX12, Vulkan) that optimize command-recording performance and transfer / memory access.

Multithread all the things.

Obsess over data locality and compression.
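
To make the data-locality point concrete, here is a toy structure-of-arrays sketch (my own example, not from any particular engine): a hot loop touches only the fields it needs, so cache lines stay full of useful data.

    #include <cstddef>
    #include <vector>

    // Structure-of-arrays: each attribute is stored contiguously.
    struct ParticlesSoA {
        std::vector<float> posX, posY, posZ;
        std::vector<float> velX, velY, velZ;
    };

    // Integration reads only positions and velocities; the contiguous
    // float arrays prefetch well and are easy for the compiler to vectorize.
    void integrate(ParticlesSoA& p, float dt) {
        for (std::size_t i = 0; i < p.posX.size(); ++i) {
            p.posX[i] += p.velX[i] * dt;
            p.posY[i] += p.velY[i] * dt;
            p.posZ[i] += p.velZ[i] * dt;
        }
    }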


Culling triangles seems to be going away though, especially with the rise of ray tracing. The expectation is that even aspects of the scene we can’t see influence lighting. How are they managing that?


IANAGD, but most graphically intensive games use a hybrid approach where RT informs lighting, shadows, and reflections while still relying heavily on traditional rasterization. That gives you a good bit of room to fudge the ray tracing, since the rasterization gives you a baseline.

That said, it seems like the culling advice for RT is "expanded camera frustum" culling, distance culling dependent on size, and choosing the right geometry LOD. Beyond that, they want you to aggressively mark geometry as opaque and carefully arrange objects into groups (BLASes) according to a couple of rules so the GPU can skip as much work as possible. (A rough sketch of the distance/LOD part follows the link below.)

https://developer.nvidia.com/blog/best-practices-using-nvidi...
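
Not from the linked article, but roughly what "distance culling dependent on size" plus LOD selection can look like (helper names are mine):

    #include <cmath>

    struct Instance { float distance; float radius; };  // bounding-sphere radius

    // Returns -1 if the object should be culled, otherwise an LOD index (0 = finest).
    int selectLod(const Instance& obj, float fovY, int screenHeight) {
        // Approximate height of the bounding sphere in pixels.
        float projected = (obj.radius / (obj.distance * std::tan(fovY * 0.5f))) * screenHeight;
        if (projected < 2.0f)   return -1;  // too small to matter: distance-cull
        if (projected > 250.0f) return 0;   // large on screen: full detail
        if (projected > 60.0f)  return 1;
        return 2;                           // far away: coarsest LOD
    }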


What makes you think culling triangles is going away? Ray tracing is usually done with a highly approximated volume or planar representation of the scene, which is then applied to surface triangles.

Rendering objects that 'we can't see' has been done for as long as shadow-casting light sources have been around. Even though we cannot see the mesh directly, the light can, and the viewer can see the shadow. These indirections all play their role in the greater "rendering equation", and the specific solution depends on the constraints of the application and the resources of the development team.
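
A sketch (mine, heavily simplified) of the depth comparison at the heart of shadow mapping, which is how the light "sees" meshes the camera cannot:

    #include <vector>

    struct ShadowMap {
        int size;                   // square map, size x size texels
        std::vector<float> depth;   // closest surface seen by the light, per texel
    };

    // lightU, lightV: the shaded point projected into the light's view, in [0, 1].
    // lightDepth: the point's depth in that same projection.
    bool inShadow(const ShadowMap& map, float lightU, float lightV, float lightDepth) {
        int x = static_cast<int>(lightU * (map.size - 1));
        int y = static_cast<int>(lightV * (map.size - 1));
        float closest = map.depth[y * map.size + x];
        const float bias = 1e-3f;            // avoids self-shadowing "acne"
        return lightDepth - bias > closest;  // something sits between the point and the light
    }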

Some renderers have abandoned triangles altogether for signed distance fields, but this involves re-creating the entire art pipeline from scratch.
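
For flavor, a toy version of the signed-distance-field idea (again my own illustration): each shape is a function returning the distance to its surface, and a ray is marched by repeatedly stepping that distance ("sphere tracing").

    #include <cmath>

    struct Vec3 { float x, y, z; };

    float length(const Vec3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

    // Signed distance from p to a sphere at the origin: negative inside,
    // zero on the surface, positive outside.
    float sdfSphere(const Vec3& p, float radius) { return length(p) - radius; }

    // March along a ray until we hit the surface or give up.
    bool raymarch(const Vec3& origin, const Vec3& dir, float radius) {
        float t = 0.0f;
        for (int i = 0; i < 128; ++i) {
            Vec3 p{origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t};
            float d = sdfSphere(p, radius);
            if (d < 1e-3f) return true;  // close enough: hit
            t += d;                      // safe step: nothing is closer than d
            if (t > 100.0f) break;       // marched past the far plane
        }
        return false;
    }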


Sorry, I just assumed that you'd need to construct the geometry in order to produce an accurate scene (with any complex lighting, be it shadows or ray tracing). I'll be reading up more on the techniques you've mentioned. I'm not a graphics developer, so my suppositions are entirely naive.


It's more that "construct the geometry" is a highly subjective exercise. :)


At a high level it's about doing as little as possible.

Don't allocate memory. Don't re-layout the UI. And so on.
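
A tiny sketch of the "don't allocate" point (illustrative names, not from the comment): reserve once at startup and reuse the storage every frame instead of hitting the heap in the hot loop.

    #include <cstddef>
    #include <vector>

    struct DrawCommand { int mesh; int material; };

    class FrameScratch {
    public:
        explicit FrameScratch(std::size_t capacity) { commands_.reserve(capacity); }

        void beginFrame() { commands_.clear(); }  // keeps capacity; frees nothing

        // No heap churn as long as we stay under the reserved capacity.
        void push(const DrawCommand& cmd) { commands_.push_back(cmd); }

        const std::vector<DrawCommand>& commands() const { return commands_; }

    private:
        std::vector<DrawCommand> commands_;
    };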


You can get by with 72 fps in VR and avoid sickness.


That's good; you get more of a frame budget that way. I've seen 90 and even 144 cited as target frame rates to avoid VR sickness. It's good to know 72 fps can work too.


Frame rate plays a role, especially when it drops below target and stutters (which does happen in real life), but designing the experience from the ground up to avoid motion sickness is what makes the difference. Keeping something stable in view that moves along with the head pose, like a HUD element, while everything else around you is moving makes a huge difference. Seeing the world in motion from inside a car, with its stable windscreen, will be much easier for most people than zooming around through free space, even at 120 Hz, until they get comfortable with the sensory mismatch. I have heard several interesting approaches to acclimatization and can recommend this [1] if you are affected; I'm also told that positioning a real fan blowing air on you while in the headset orients your proprioception in a way that helps.

[1] https://medium.com/@ThisIsMeIn360VR/motion-sickness-and-the-...


Also, the game Space Salvage claims to help players develop VR sea legs through its design principles:

https://www.oculus.com/experiences/quest/4028646407203318/



