Hi HN,
I recently noticed a recurring visual artifact in the "Most Replayed" heatmap on the YouTube player. The highest peaks were always surrounded by two dips.
I got curious about why they were there, so I decided to reverse engineer the feature to find out.
This post documents the deep dive. It starts with a system design recreation, moves on to reverse engineering the rendering code, and ends with the mathematics.
This is also my first attempt at writing an interactive article. I would love to hear your thoughts on the investigation and the format.
I recently noticed I'd crossed 100 commits on my personal site and realized it's been over three years since the initial commit. I wrote down some thoughts on the journey.
Curious to hear if others here still maintain a personal site/blog and what your experience has been.
Last weekend, I dove into CMU’s [15-445/645](https://15445.courses.cs.cmu.edu/) database course and got hit with a deceptively simple problem: count the number of unique users visiting a website per day. Easy, right? Just throw user IDs into an unordered_set and return its size—classic LeetCode.
But what happens when you’re at Facebook scale? Tracking a billion unique users means burning through GBs of memory just to count. And in the real world, users are streaming in constantly, not sitting in a neat, static list. Storing every ID? Not happening.
I explored practical workarounds (like “last seen” timestamps and full table scans), but they’re either inefficient or put massive strain on your DB. Then the assignment introduces HyperLogLog: a probabilistic algorithm that estimates cardinality with just 1.5KB of memory—accurate to within 2% for billions of users.
The magic? Pure mathematics. It's distributable, and it powers real-world systems like Redis and Google Analytics. I break down how it works (with illustrations!) in my deep dive; check it out.
Curious to hear from HN: who's using HyperLogLog in production? Have you run into accuracy issues, and if so, how did you handle them?
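For anyone who wants to poke at the idea before reading the full write-up, here is a minimal, illustrative HyperLogLog sketch in Python. It is not the course's or Redis's implementation, and the choice of SHA-1 as the hash and b=10 index bits are my own assumptions for the demo; the core trick (route each hash to one of 2^b registers, keep only the max leading-zero rank per register, then take a bias-corrected harmonic mean) is the standard construction.

```python
import hashlib
import math

class HyperLogLog:
    """Toy HyperLogLog counter (illustrative sketch, not a production implementation)."""

    def __init__(self, b=10):
        self.b = b
        self.m = 1 << b                    # m = 2^b registers
        self.registers = [0] * self.m      # each holds a small integer rank
        # Bias-correction constant for m >= 128 (standard HLL formula).
        self.alpha = 0.7213 / (1 + 1.079 / self.m)

    def add(self, item):
        # 64-bit hash: the first b bits pick a register, the remaining
        # bits estimate "rarity" via the position of the first 1-bit.
        h = int.from_bytes(hashlib.sha1(str(item).encode()).digest()[:8], "big")
        j = h >> (64 - self.b)                      # register index
        w = h & ((1 << (64 - self.b)) - 1)          # remaining bits
        rank = (64 - self.b) - w.bit_length() + 1   # leading zeros + 1
        self.registers[j] = max(self.registers[j], rank)

    def count(self):
        # Bias-corrected harmonic mean of 2^rank across all registers.
        est = self.alpha * self.m ** 2 / sum(2.0 ** -r for r in self.registers)
        if est <= 2.5 * self.m:                     # small-range correction
            zeros = self.registers.count(0)
            if zeros:
                est = self.m * math.log(self.m / zeros)
        return int(est)

# Feed in 100,000 distinct "user IDs"; memory stays at m registers
# no matter how many items stream through.
hll = HyperLogLog(b=10)
for uid in range(100_000):
    hll.add(uid)
print(hll.count())  # close to 100,000 (typical error ~1.04/sqrt(m), a few percent here)
```

With b=10 the sketch uses 1,024 registers of a few bits each, which is the same ballpark as the ~1.5KB figure the assignment quotes; trading b up or down moves memory against accuracy.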
Alt URL: https://priyavr.at/blog/reversing-most-replayed/