
About garbage collection:

Are there a lot of Unity/Godot devs unaware that their engines are using GC? I would assume they'd have accepted the cost of GC already.

Unreal devs I can understand having an issue with it though.


GDScript in Godot doesn't use a tracing GC; it uses reference counting and doesn't "stop the world".

Other languages that bind into the engine do this too (C++, SwiftGodot, Rust-Godot).

C# obviously does use GC. Miguel de Icaza actually started SwiftGodot because he (ironically) ended up hating GC pauses after promoting C# for so long.


Go does surprisingly well at keeping GC freezes to a minimum, in a way that you're unlikely to notice... C# has gotten a lot better since the .NET Core split as well. That said, a lot comes down to how a developer structures their game.

I was added late to a project working on a training simulation engine, similar to games, where each avatar in the simulation was a separate thread... man, GC pauses would sometimes freeze the server for literally 10-15s, and it was not good at all. I refactored it to use an event-loop model and only 2 other threads, which ran much better overall. Even though it wasn't strictly a game itself, the techniques still matter. Funny how running through a list of a few hundred things is significantly better than a few hundred threads each with their own timers, etc.
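
A minimal sketch of the shape of that refactor, in C# (the Avatar type and the timings here are hypothetical stand-ins, not the actual simulation's code):

    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Threading;

    class Avatar
    {
        public double NextActionAt;                 // seconds on the shared clock
        public void Act(double now) { NextActionAt = now + 0.5; /* do work */ }
    }

    class Simulation
    {
        readonly List<Avatar> _avatars = new();
        readonly Stopwatch _clock = Stopwatch.StartNew();

        public void Run()
        {
            while (true)
            {
                double now = _clock.Elapsed.TotalSeconds;
                foreach (var a in _avatars)         // one pass over a few hundred items
                    if (now >= a.NextActionAt)
                        a.Act(now);
                Thread.Sleep(10);                   // one loop; no per-avatar threads or timers
            }
        }
    }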


> C# has gotten a lot better since the .NET Core split as well.

It has improved, but the majority of games using C# are using Unity, which does not use .NET (Core). It uses Mono or IL2CPP, specifically with the Boehm GC, so it performs significantly worse than .NET and even standalone Mono (SGen GC).


> Funny how running through a list of a few hundred things is significantly better than a few hundred threads each with their own timers, etc.

State machines are not in fashion. Exposed event loops are not in fashion. Most frameworks do their damnedest to hide those components.

As for GC freezes, if you're doing a game-like project you can always just allocate a few huge arrays at startup and reuse those, with no deallocation/allocation, in most garbage-collected environments.
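
A rough sketch of that pattern in C# (Particle and the capacity are made up for illustration):

    public struct Particle { public float X, Y, VX, VY, Life; }

    public sealed class ParticleSystem
    {
        const int MaxParticles = 4096;
        readonly Particle[] _particles = new Particle[MaxParticles]; // allocated once, at startup
        int _count;

        public void Spawn(float x, float y)
        {
            if (_count == MaxParticles) return;      // drop rather than allocate
            _particles[_count++] = new Particle { X = x, Y = y, Life = 1f };
        }

        public void Update(float dt)
        {
            for (int i = _count - 1; i >= 0; i--)
            {
                ref Particle p = ref _particles[i];
                p.X += p.VX * dt; p.Y += p.VY * dt; p.Life -= dt;
                if (p.Life <= 0f)
                    _particles[i] = _particles[--_count]; // swap-remove; nothing is freed
            }
        }
    }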


Reference counting is a GC algorithm from a CS point of view, as looking into any worthwhile reference will show.

It's not what people mean when they say GC though, especially in reference to games, where you care about your peak frame time more than about your average frame time.

Reference counting can also have very bursty performance. Consider what happens when you drop the last reference to an object that is the sole remaining owner of an entire large tree of other objects. This will trigger a whole cascade of subsequent decrements and deallocations, which can be arbitrarily large.

Of course, you might say, "Well, sure, but your reference counting implementation doesn't need to eagerly deallocate when the last reference is dropped." That's true! You can write a ref counter that defers some of those deallocations or amortizes them across multiple operations.

And when you do that, now you really do have a garbage collector.
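
A hand-rolled sketch of that amortized idea in C# (illustrative only, not any engine's implementation): when a count hits zero the object goes on a queue, and only a fixed budget of objects is actually released per frame, so a big tree is torn down over several frames instead of one.

    using System.Collections.Generic;

    abstract class RcObject
    {
        public int RefCount = 1;
        public abstract IEnumerable<RcObject> Children { get; }
    }

    static class DeferredRc
    {
        static readonly Queue<RcObject> _dead = new();

        public static void Release(RcObject o)
        {
            if (--o.RefCount == 0)
                _dead.Enqueue(o);                    // defer; no cascade right now
        }

        // Call once per frame: bounds the pause instead of freeing a whole tree at once.
        public static void Collect(int budget)
        {
            while (budget-- > 0 && _dead.Count > 0)
            {
                RcObject o = _dead.Dequeue();
                foreach (RcObject child in o.Children)
                    Release(child);                  // the cascade still happens, amortized
            }
        }
    }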

See: https://web.eecs.umich.edu/~weimerw/2008-415/reading/bacon-g...


People should learn their subjects properly, not from street knowledge.

You should watch some of the more recent Gamers Nexus videos... the average frame pacing counts for a lot, and they're making a concerted effort to show this, as it does represent the level of "jank" in games very well.

Got a link? I can't work out which ones you're referring to.

most recently https://www.youtube.com/watch?v=qDnXe6N8h_c on why FPS is flawed specifically for GPU benchmarking

most specifically, an ongoing attempt to understand and debunk frame generation (DLSS, etc.) as a performance gain, since it introduces latency despite high FPS: https://www.youtube.com/watch?v=Nh1FHR9fkJk, https://www.youtube.com/watch?v=GDvfIbRIb3U

More broadly than frame pacing, https://www.youtube.com/watch?v=Fj-wZ_KGcsg is a recent example of one of _many_ interviews going back years on why frame times and frame rates are both flawed for explaining why some games feel smoother or laggier than others (there are GN videos dating back to 2016 on the subject)


I haven't dug deep enough into C# to say this with certainty, but I believe later C# versions allow you to do enough manual allocation to "almost" get around the garbage collector, as well as new calls to try and nudge the GC away from hot paths.

You need to be very disciplined to pull this off, though. LINQ is basically off limits, for example. And of course, Godot's C# is likely much older than these modern techniques to begin with.
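
For the curious, a sketch of the kind of thing I mean, using APIs that do exist in modern .NET (stack allocation, pooled buffers, and the best-effort no-GC region); the surrounding class and the sizes are made up:

    using System;
    using System.Buffers;

    class HotPath
    {
        void Update()
        {
            Span<float> scratch = stackalloc float[64];          // lives on the stack, never GC'd

            float[] buffer = ArrayPool<float>.Shared.Rent(1024); // reused buffer, not collected
            try { /* fill and process buffer */ }
            finally { ArrayPool<float>.Shared.Return(buffer); }
        }

        void LoadLevel()
        {
            // Ask the runtime not to collect during a critical section (best effort;
            // the region ends early if the requested budget is exceeded).
            if (GC.TryStartNoGCRegion(16 * 1024 * 1024))
            {
                try { /* allocate level data */ }
                finally { GC.EndNoGCRegion(); }
            }
        }
    }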


Godot's C# is fairly recent: C# 12 / .NET 8.

Yes, as long as you're not using Godot 3.x. Some still use 3.x (Mono) because 4.x (.NET) does not support web exports.

That's good to know. So it probably has the capability if you really wanted to dig in.

But that effort on an active engine would take quite a long time to comb through. It really comes down to whether a highly invested contributor wants to push it through and gets the go-ahead.


Unreal devs have the Unreal C++ dialect with GC, Blueprints, and soon Verse to worry about.

The days of purely manual memory management in game engines, across all the layers it takes to draw a frame, are long gone.

Naturally someone is going to point out some game engine using compiled C dynamic libraries for scripting; those are the exception.


> The days of purely manual memory management in game engines, across all the layers it takes to draw a frame, are long gone.

That's what makes me curious about Rust engines like Bevy. Could it truly pull it off and bring back that kind of thought to game development? It's not "pure manual memory management", but the mindset of Rust requires that kind of thinking.

It will definitely be niche for some time to come, since most (non-AAA) games simply aren't going to worry about performance. But it might carve out a solid community for those concerned with optimization.


Thing is, FPS doesn't make a game fun; what makes games fun is great design, delivered in a way that overall performance doesn't hinder the experience.

That is why games like Minecraft, Roblox, Celeste, and Balatro make it big. None of them would have happened if their creators had followed the advice that 100% C coding (or C++, for that matter) was the only way, and yet their design is what made them shine.


You're not wrong. But consider a different lens:

Celeste isn't a game that would need to worry about performance in 2018. It's 2D sprites with box collisions and relatively minimal particle effects. Your toaster can run Celeste.

But a game like Factorio, with heavy simulation, complex interactions, and pathing, absolutely needs to consider performance to pull off a seamless experience.

Those are the kinds of games I'd hope engines like Bevy could enable farther down the line. Design is still key, but some game types are a larger technical challenge than others.


The issue isn't about game devs; it's about non-game devs backseat programming.

If you spend a week in these engines you're well aware of the garbage collector.


In my experience, when using Unity, I became acutely aware of creating garbage and how to avoid it. I used to see a lot of object pooling, which is basically a makeshift arena allocator.
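
For anyone unfamiliar, a minimal generic pool of the kind I mean (illustrative only; Bullet is a hypothetical type):

    using System.Collections.Generic;

    public sealed class Pool<T> where T : new()
    {
        readonly Stack<T> _free = new();

        public T Get() => _free.Count > 0 ? _free.Pop() : new T(); // allocate only on a miss
        public void Return(T item) => _free.Push(item);            // recycle; no garbage created
    }

    // Usage: var bullets = new Pool<Bullet>();
    //        var b = bullets.Get(); /* ... */ bullets.Return(b);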

Can you explain? AFAIK Godot uses C++ under the hood which does not have garbage collection. Other languages such as C# and GDScript use bindings.

Most people using Godot will be using GDScript or C# to make their games.

Funnily enough, whilst trying to Google GDScript and Godot, I found this post I wrote in 2018 (subcomments mention GDScript and GC).

https://news.ycombinator.com/item?id=16673751