> "If a request is canceled after the request is pushed onto the incoming queue, but before the response popped from the outgoing queue, we see our bug: the connection thus becomes corrupted and the next response that’s dequeued for an unrelated request can receive data left behind in the connection."
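The failure mode in the quote can be sketched in a few lines. This is a toy model (not the actual implementation from the post): a pipelined connection where responses come back in send order, so a task cancelled after sending but before reading leaves its response in the stream for the next caller.

```python
import asyncio


class PipelinedConnection:
    """Toy pipelined connection: requests go out in order and
    responses come back in the same order on the shared stream."""

    def __init__(self):
        self._responses = asyncio.Queue()

    async def send(self, request):
        # Pretend the server immediately echoes a response for each request.
        await self._responses.put(f"response-for-{request}")

    async def recv(self):
        # Reads whatever response is next in the stream -- no request IDs.
        return await self._responses.get()


async def do_request(conn, request, cancel_before_recv=False):
    await conn.send(request)              # request is on the wire
    if cancel_before_recv:
        raise asyncio.CancelledError      # caller gives up before reading
    return await conn.recv()


async def main():
    conn = PipelinedConnection()
    # Request A is cancelled after send but before its response is read,
    # so the connection still holds A's unread response.
    try:
        await do_request(conn, "A", cancel_before_recv=True)
    except asyncio.CancelledError:
        pass
    # Request B now reads the first thing in the stream: A's response.
    got = await do_request(conn, "B")
    print(got)  # prints "response-for-A" -- B receives A's leftover data


asyncio.run(main())
```

The fix sketched in such systems is usually to tear down (or drain) the connection on cancellation instead of returning it to the pool with an unread response still in flight.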
The OpenAI API was incredibly slow for some days, and lots of requests probably got cancelled (I was certainly cancelling mine). I imagine someone could write a whole blog post about how that played out; it would be interesting reading.