or hell, making barriers that aren't just concrete!
The truth is, if the "moral cost" is high enough, we'll just keep solving the problem of people dying in X% of crashes until people and companies feel good about X versus what they pay to reduce it.
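For what it's worth, that tradeoff can be sketched as a simple expected-cost comparison. This is just a hypothetical illustration in Python, with made-up numbers rather than real crash or valuation data; the point is only that you keep buying safety until the next increment costs more than the "moral cost" it removes.

```python
def worth_reducing(crashes_per_year: float,
                   current_rate: float,
                   improved_rate: float,
                   cost_of_improvement: float,
                   moral_cost_per_fatality: float) -> bool:
    """Return True if moving the fatality rate from current_rate to
    improved_rate is worth cost_of_improvement, given an assumed
    "moral cost" assigned to each fatality."""
    fatalities_avoided = crashes_per_year * (current_rate - improved_rate)
    return fatalities_avoided * moral_cost_per_fatality > cost_of_improvement

# Hypothetical: 1,000,000 crashes/year, cutting X from 1.0% to 0.9%
# avoids 1,000 deaths; at a $10M moral cost each, any improvement
# costing under $10B looks justified.
print(worth_reducing(1_000_000, 0.01, 0.009, 5e9, 1e7))  # True
```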
I was careful not to challenge the usefulness of the exercise, mostly because I took it as a personal challenge. However, you make an important point: the exercise assumes the problem set will be identical in the future. That sounds like a sensible assumption, but it ends up discounting all of the technological improvements a self-driving vehicle will possess, things like redundant control systems, V2V/V2I/V2C networking, run-flat tires, and others. The self-driving car will be closer to an airplane with a robotic agent as the air traffic controller than anything else.
Precisely. If it's a problem of trying to get the vehicle to stop, perhaps focus first on areas that can increase that probability (e.g., run-flat tires would be a great way to get the car to slow down immediately).
Thought experiments are supposed to tell us something interesting by simplifying details while preserving the crux of the matter. Otherwise their value is questionable.
I could have asked, "If I could dip my head into a black hole and take it out again, what would I see?" That is also a thought experiment, just not a useful one.