Reinertsen's books are very, very good. The most recent and most up-to-date is Principles of Product Development Flow, which is where that chart came from. The previous one, Managing the Design Factory, is pretty similar, and I think it's a better read.
Another great takeaway from it is to prioritize things by cost of delay, or even better, what he calls Weighted Shortest Job First (WSJF), where you divide the cost of delay by the expected duration of the task. The only problem is that the same people who want to get oddly formal and inappropriately rigorous with things like story points will want to turn cost of delay into an accounting exercise - or object that it's impossible because they don't have the accounting system for it - which misses the point entirely.
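To make the arithmetic concrete, here's a rough sketch of the idea; the backlog items, dollar figures, and durations are all made up for illustration, and in practice the numbers only need to be rough:

```python
# Rank backlog items by WSJF = cost of delay / expected duration.
# Items and figures below are invented for illustration only.
backlog = [
    # (name, cost_of_delay_per_week, estimated_weeks_of_work)
    ("Checkout redesign", 50_000, 8),
    ("Billing bug fix",   20_000, 1),
    ("New reporting API", 80_000, 12),
]

for name, cod, weeks in sorted(backlog, key=lambda i: i[1] / i[2], reverse=True):
    print(f"{name}: WSJF = {cod / weeks:,.0f} per week of work")
```

The small bug fix jumps ahead of the bigger-ticket items even though its absolute cost of delay is lower, which is the whole point of weighting by job size.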
That is very true. I'm going to write something up about WSJF in practice in the future, but it goes a long way toward fixing that issue.
Particularly when you get the decision makers in a room and get them all to agree on the estimated value of each item. Not only does it remove the numerous priority #1's, it also gets everybody aligned on what the real priority #1 is and why.
I've seen it done where people are surveyed separately, and it only works well when the people involved are forced to have a conversation, put real numbers to their assumptions, and come out in agreement. Another side effect is that it solves the squeaky wheel problem.
The shape is about right for the variance of the M/M/1 queue (what is an M/M/1/∞ queue??), but it's a little disingenuous since variance is a higher-order moment. Standard deviation is probably more sensible to think about here, since that brings it back down to the same scale as the queue length.
> Thus the average number of customers in the system is ρ/(1 − ρ) and the variance of number of customers in the system is ρ/(1 − ρ)². This result holds for any work conserving service regime, such as processor sharing.
In the cited example, going from 75% to 95% utilization drags your standard deviation from a modest 3.5 items in the queue all the way up to 19.5 items. And in both cases, that's not far off from your average queue length, either.
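For anyone who wants to check the arithmetic, it falls straight out of the formulas in the quote (utilizations chosen to match the example):

```python
from math import sqrt

# Mean and standard deviation of the number in an M/M/1 system,
# using the formulas quoted above:
#   mean = rho / (1 - rho),  variance = rho / (1 - rho)^2
for rho in (0.75, 0.95):
    mean = rho / (1 - rho)
    std = sqrt(rho) / (1 - rho)   # square root of the variance
    print(f"utilization {rho:.0%}: mean {mean:.1f}, std dev {std:.1f}")
```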
Infinite queue length. M/M/1/k would be a system where, once k items are enqueued, some work-shedding policy takes over (either rejecting new work or, often preferably, dropping items from the head of the queue).
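To make that concrete, here's a rough sketch of the head-drop flavor; the class and method names are made up for illustration, not from any particular library:

```python
from collections import deque

class BoundedWorkQueue:
    """Toy M/M/1/k-style queue: at most k items waiting, with a shedding policy."""

    def __init__(self, k, drop_head=True):
        self.k = k
        self.drop_head = drop_head
        self.items = deque()

    def enqueue(self, item):
        """Add an item; returns whatever got shed (None if nothing was)."""
        if len(self.items) < self.k:
            self.items.append(item)
            return None
        if self.drop_head:
            shed = self.items.popleft()   # drop the stalest work at the head
            self.items.append(item)
            return shed
        return item                       # reject the new arrival instead
```

The intuition behind dropping at the head is that the oldest item is the one most likely to already be stale by the time it would get served.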
It's intuitively obvious, but I never realized it was that bad.
I'm going to have to run that down and see if it's actually backed by real data. Too many business books are complete flimflam.