Would it? In other words, can humans handle a new set of sensors without feeling overloaded? (IDK - at the very least, the peripheral-vision cameras could help a lot)
For that, you have to look at aviation. IFR pilots rely on instruments to fly; these are essentially a "new set of sensors". Using the instruments properly requires some intense training. You need to know what to look at and when; it is almost like music: you follow a rhythm and do it consciously for a while until it becomes second nature. You also need to resolve conflicts between your gut feelings and the instrument readings: your gut is wrong and the instruments are right, but your brain won't accept that easily.
With planes becoming more and more complex, how information is presented to the pilot is critical. Simple planes have a set of gauges the pilot needs to check periodically. Then it became too much, so a flight engineer was added to deal with the ever-increasing number of gauges. Now computer systems synthesize the sensor data and show the pilot only what he needs to fly the plane.
Back to cars: you cannot expect every driver to be trained like an IFR-certified pilot, so showing all sorts of sensor data is going to be counterproductive.
That's a great analogy. Also, the older a car, the more gauges are on the dashboard: new cars have speed, fuel, engine oil temp, maybe RPM; my oldest car had a battery level indicator and what I think was a coolant temperature gauge. Perhaps this is the way cars are already going: only show a measure when it's outside the normal range.
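To make that concrete, here's a minimal sketch of the "only show a reading when it's outside the normal range" rule; the gauge names and ranges are invented for illustration, not taken from any real car:

    # Hypothetical operating bands for a few dashboard readings.
    NORMAL_RANGES = {
        "oil_temp_c": (80, 120),
        "coolant_c":  (75, 105),
        "battery_v":  (12.0, 14.8),
    }

    def gauges_to_display(readings):
        """Return only the readings that have left their normal band."""
        alerts = {}
        for name, value in readings.items():
            low, high = NORMAL_RANGES[name]
            if not (low <= value <= high):
                alerts[name] = value
        return alerts

    # Only coolant is shown; the in-range gauges stay hidden.
    print(gauges_to_display({"oil_temp_c": 95, "coolant_c": 112, "battery_v": 13.9}))
    # -> {'coolant_c': 112}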
I kinda meant how Tesla shows the 360° view around the car on the dashboard, fused from different sensors.
I imagine Waymo has an even better visualization: all objects, their past trajectories, their possible future trajectories, car lanes, traffic signals, etc.
Surely one would be able to just look at that and decide whether to merge or not. It's a way better view than the side mirrors.
Basically I just want a 360° object view around the car on a heads-up display as I'm driving. That would augment me as a human to be a better driver. It should also alert me when a situation is likely to be dangerous.
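The "alert me when it's likely to be dangerous" part could be as simple as a time-to-collision check over the fused object tracks. A toy sketch, where the Track fields and the 3-second threshold are my assumptions, not anything Tesla or Waymo actually use:

    from dataclasses import dataclass

    @dataclass
    class Track:
        distance_m: float   # gap to the object along our path
        closing_mps: float  # positive = object and car are converging

    def time_to_collision(track):
        if track.closing_mps <= 0:
            return float("inf")  # gap is opening; no collision course
        return track.distance_m / track.closing_mps

    def merge_alert(tracks, threshold_s=3.0):
        """Flag any fused track whose TTC falls under the threshold."""
        return [t for t in tracks if time_to_collision(t) < threshold_s]

    # A car closing at 10 m/s from 25 m away (TTC = 2.5 s) triggers the
    # alert; one closing at 2 m/s from 60 m away (TTC = 30 s) does not.
    risky = merge_alert([Track(25.0, 10.0), Track(60.0, 2.0)])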
Basically, the blog is saying the Waymo driver has superhuman eyes.
I understand what the blog says; I just know from experience that learning to park via a HUD is a very different game from looking directly, and that's a low-speed, low-object-count endeavor. Looking at possible intents might be overwhelming, but I trust that much can be achieved with training.