
This! We don't use it for mission-critical crypto or the like. It's there to figure out whether a bot goes left or right. Given the enormous complexity of the League game state and input surface area, it likely won't be a concern unless someone devises a way to exploit it for RNG manipulation (this seems unlikely.)

The reason we didn't use C's rand() was just code ergonomics. C-style rand has global state (by default), and we wanted our own interface to be explicit in the code so you know when you're using a gameplay-impacting random number.
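
For anyone curious what that looks like in practice, here's a minimal sketch (my own illustration, not Riot's actual code) of a gameplay RNG whose state lives in the object rather than in rand()'s hidden global:

  // Minimal sketch, not Riot's code: a gameplay RNG with explicit per-instance
  // state instead of C rand()'s hidden global state. Names are hypothetical.
  #include <cstdint>

  class GameplayRandom {
  public:
      explicit GameplayRandom(uint64_t seed)
          : state_(seed ? seed : 0x9E3779B97F4A7C15ull) {}

      // xorshift64* -- any deterministic PRNG works; the point is the state
      // travels with the object, so call sites are explicit about consuming
      // a gameplay-impacting random number.
      uint64_t NextU64() {
          state_ ^= state_ >> 12;
          state_ ^= state_ << 25;
          state_ ^= state_ >> 27;
          return state_ * 0x2545F4914F6CDD1Dull;
      }

      // e.g. "does the bot go left or right?"
      bool CoinFlip() { return (NextU64() & 1u) != 0; }

  private:
      uint64_t state_;
  };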


Thought I heard my ears ringing... Blast from the past here.

That's the beauty of recording server state and playing it back -- we include a recording of the most accurate integer-based OS time I could get on each platform. On Windows, for example, I record the raw QPC tick values and the tick rate into the recording. Then on playback, I play back the clocks themselves as if the hardware had the tick rate and tick counts of the recorded computer. So if frame 10 took 33.3ms and frame 11 took 66.7ms, we'd play them back that way.
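
A rough sketch of that idea, with invented names (not the actual League code): capture the raw tick count and tick rate each frame, then on playback hand those recorded integers back to the game in place of the live clock:

  // Rough sketch with invented names, not the actual League implementation:
  // record the raw integer clock (QPC ticks + tick rate) per frame, then on
  // playback feed those recorded integers to the game instead of the live clock.
  #include <cstdint>
  #include <vector>
  #include <windows.h>   // QueryPerformanceCounter / QueryPerformanceFrequency

  struct RecordedClock {
      int64_t ticksPerSecond = 0;        // QPC frequency on the recording machine
      std::vector<int64_t> frameTicks;   // raw QPC value captured at each frame
  };

  // Live mode: sample the real counter and append it to the recording.
  int64_t SampleLive(RecordedClock& rec) {
      LARGE_INTEGER now;
      QueryPerformanceCounter(&now);
      rec.frameTicks.push_back(now.QuadPart);
      return now.QuadPart;
  }

  // Playback mode: return the recorded tick for this frame, so frame 10 still
  // "takes" 33.3ms and frame 11 still "takes" 66.7ms, exactly as recorded.
  int64_t SamplePlayback(const RecordedClock& rec, size_t frameIndex) {
      return rec.frameTicks[frameIndex];
  }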

Note that playbacks are almost never in real-time. The reason a typical server frame takes 33ms is that we might be processing for 5ms and then yielding the thread (sleeping) for the remaining time during real-time gameplay. On playback, we don't yield the thread, so we play back faster than real-time when doing something like a live Chronobreak (disaster recovery.)
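
In loop terms, the difference is roughly this (hypothetical names, just to illustrate the pacing difference):

  // Pacing sketch (hypothetical names): live frames sleep off the unused part
  // of the ~33ms budget; playback (e.g. a Chronobreak) skips the sleep entirely.
  #include <chrono>
  #include <thread>

  void RunFrame(bool isPlayback) {
      using namespace std::chrono;
      constexpr auto kFrameBudget = milliseconds(33);

      const auto start = steady_clock::now();
      // SimulateOneFrame();   // ~5ms of actual work in a typical live frame
      const auto elapsed = steady_clock::now() - start;

      if (!isPlayback && elapsed < kFrameBudget) {
          std::this_thread::sleep_for(kFrameBudget - elapsed);   // yield the thread
      }
      // Playback falls through immediately, so recorded games replay much
      // faster than real time.
  }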

Strictly speaking, you are right. Let's say we had a game with no player inputs (like a bots-only game.) It's possible that the outcomes might vary between two games with the same random seeds if they run different numbers of frames over the same time period. That said, we've actually tried this a few times, and our game code is surprisingly resilient to this -- only in a few instances have we found issues where subtle variations in frame time could impact player outcomes (and those are generally corrected when we find them -- this is an esports game after all.)

A lot of this comes down to tradeoffs. Yes, we might have those subtle issues, but if we fixed our timestep, then we'd have the other problem of the game potentially going into slow motion during (the admittedly rare) periods of long frame times. In practice, we decided time dilation is worse for player performance than a loss of interactive resolution; humans are great predictors of motion with limited information, but they're not so great when the rules of localized reality bend.

Edit: typo


> It's possible that the outcomes might vary between two games

So just to clarify: in League, the 'dt' passed into each 'simulation step' is NOT constant? Isn't this kinda crazy? In your later articles, floating point imprecision is talked about. Couldn't this variance in dt create odd behaviour and ultimately contribute to weird things like 'character movement speed' not being _exactly_ the same between games? (like really small, but still...)

And beyond that, how do the client and the server synchronize with each other if the frame #s represent different positions in time? My mind is blown right now...

Note: I've worked on many networked games, and have written rollback/resimulate/replay code. I don't really understand _why_ League wouldn't use a fixed time step. What's the advantage? In our games, the rendering of course uses the real dt passed in from the OS, but the simulation step is always fixed (roughly the accumulator pattern sketched below). This means in our games during a replay, your computer could render the outcome differently if your frame rate is different, but the raw simulation is always the same.
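
For concreteness, the pattern I mean is the standard fixed-timestep accumulator loop (an illustrative sketch, not our shipping code):

  // Illustrative fixed-timestep accumulator loop, not actual shipping code:
  // the simulation always steps by a constant dt; rendering interpolates.
  #include <chrono>

  void GameLoop() {
      using clock = std::chrono::steady_clock;
      constexpr double kFixedDt = 1.0 / 60.0;   // simulation step, in seconds

      double accumulator = 0.0;
      auto previous = clock::now();

      for (;;) {
          const auto now = clock::now();
          accumulator += std::chrono::duration<double>(now - previous).count();
          previous = now;

          while (accumulator >= kFixedDt) {
              // StepSimulation(kFixedDt);   // identical dt every step -> replays match
              accumulator -= kFixedDt;
          }

          // Render(accumulator / kFixedDt);  // interpolation factor; visuals may differ per machine
      }
  }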

For context, to show I at least have some idea what I'm talking about, I made this replay system (and the game has rollback multiplayer):

https://gooberdash.winterpixel.io/?play=5b74f7c0-8591-40dc-b...

I haven't played a lot of League, but I always assumed it would use deterministic lockstep networking (like its predecessor, Warcraft 3)


We're recording the integer clocks though, and those don't change between runs. While game code converts things like (QPC ticks over tick rate) to floating point, we don't sum those numbers directly. Instead, we internally store times as the raw integers, then convert them to floats on demand (typically when an Update function is asking for "elapsedTime" or a timer is asking for a "timeSpan", like time since the start of the game).
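
Something like this, in spirit (names invented for illustration): the integers are the source of truth, and floating point only appears at the call site:

  // Sketch in the spirit described (names invented): times are stored as raw
  // integer ticks, and floats only appear on demand when an Update asks for
  // an elapsed time or a timer asks for a time span.
  #include <cstdint>

  struct GameTime {
      int64_t startTicks = 0;       // raw counter at game start (recorded)
      int64_t prevTicks = 0;        // raw counter last frame (recorded)
      int64_t currentTicks = 0;     // raw counter this frame (recorded)
      int64_t ticksPerSecond = 1;   // recorded tick rate

      // Conversion happens here, on demand -- the floats are never summed frame to frame.
      float ElapsedTime() const {
          return float(currentTicks - prevTicks) / float(ticksPerSecond);
      }
      float TimeSpanSinceGameStart() const {
          return float(currentTicks - startTicks) / float(ticksPerSecond);
      }
  };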

LoL and TFT don't use a synchronized-simulation (lockstep, sometimes called peer-to-peer) networking model. LoL is Client-Server, meaning the replication is explicit and not based purely on playing back client inputs. This gives us more control over things like network visibility, LODs, and latency compensation at a feature-by-feature level at the cost of increased complexity. Most of the games I've built over the years use this model and the LoL team is super comfortable with it.
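
To make the distinction concrete (message layouts invented purely for illustration): lockstep ships inputs and every peer re-simulates, while client-server ships explicit state that the server filters per client:

  // Invented message layouts, just to illustrate the two models.
  #include <cstdint>
  #include <vector>

  struct LockstepInputMsg {          // synchronized-simulation model
      uint32_t frame;
      uint8_t  playerId;
      uint16_t encodedCommand;       // e.g. a move/attack order
  };

  struct ReplicatedUnitState {       // client-server model
      uint32_t unitId;
      float    x, y;                 // position as this client is allowed to see it
      uint16_t healthPercent;        // could be coarser for far-away / low-LOD units
  };

  struct StateUpdateMsg {
      uint32_t serverFrame;
      std::vector<ReplicatedUnitState> units;   // filtered per client by visibility rules
  };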

The GameClients are not deterministic in the way that the GameServer is, though they're naturally closer to that ideal since the GameServer itself is typically predictable.

Don't get me wrong, there's a time and place for lockstep replication, and LoL probably could have gone that way. I wasn't there when that direction was picked, but I would have likely made the same choice as my predecessors, knowing what I do about our approach to competitive integrity.


All this stuff predates ECS and a fully specified definition of what a live-service, continent-spanning MOBA is. All the tradeoffs make sense to me. The real question is: would it have been possible to define an engine solution that looks more like Overwatch's, in the absence of a fully specified game? I feel like that is ECS's greatest weakness.


I feel like you're skipping a step needed to make this comment make sense: explaining why using the Overwatch model would be better, and why an ECS would need to be introduced at all.

