I followed a similar algorithm, considering all lives equal. Injury versus death: prevent death. Uncertainty versus death: prevent certain death, and assume passengers are more likely to survive an accident because they're better protected. Certain pedestrian death versus certain pedestrian death: prefer non-intervention over intervention. Certain passenger death versus certain pedestrian death: protect the passengers.
Justification for that last one: self-driving cars will be far safer than human drivers, so getting more people into them sooner will save many lives. A self-driving car that doesn't prioritize its passengers will rightfully be considered defective by potential passengers, and many of them will refuse to ride in one, sticking with human-driven cars instead. A self-driving car that chooses not to prioritize its passengers will therefore delay the adoption of self-driving cars, and result in more deaths and injuries overall.
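To make that priority ordering concrete, here's a minimal sketch in Python. Everything in it (the toy Outcome type, severity, choose) is illustrative, not from any real vehicle software, and the "assume passengers are more likely to survive" point is collapsed into a single severity rank rather than modeled as a probability.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    harm: str                    # "none", "injury", or "death"
    victim: str = "pedestrian"   # "pedestrian" or "passenger"
    certain: bool = True         # False if the harm might not happen

def severity(outcome: Outcome) -> int:
    """Higher is worse: death outranks injury, certain death outranks uncertain
    death, and a certain passenger death outranks a certain pedestrian death
    (the passenger-protection rule)."""
    if outcome.harm == "none":
        return 0
    if outcome.harm == "injury":
        return 1
    if not outcome.certain:      # possible, but not certain, death
        return 2
    return 4 if outcome.victim == "passenger" else 3

def choose(stay: Outcome, swerve: Outcome) -> str:
    """Swerve only if it strictly reduces severity; on a tie, prefer non-intervention."""
    return "swerve" if severity(swerve) < severity(stay) else "stay"

# Certain pedestrian death either way: prefer non-intervention.
print(choose(Outcome("death"), Outcome("death")))                      # "stay"
# Staying kills the passenger, swerving kills a pedestrian: protect the passenger.
print(choose(Outcome("death", victim="passenger"), Outcome("death")))  # "swerve"
# Injury versus death: prevent the death.
print(choose(Outcome("death"), Outcome("injury")))                     # "swerve"
```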
"Certain passenger death versus certain pedestrian death: protect the passengers."
Pondering that question made me imagine some bad sci-fi future where self-driving cars end up being killer robots, dangerous to everyone but their passengers.
If pedestrians have to fear these things because they're programmed to "protect the pilot above all else!", it might hamper adoption just as badly.