There's a whole lot of information that is completely natural and intuitive for a human to understand but fiendishly difficult for an ML algorithm to figure out. There are some cues I suspect we won't be able to use until we create a genuine artificial general intelligence. If you're driving and you see someone standing at a crosswalk, it's intuitive just by looking at them whether they're waiting for you to pass, about to step into the road, panhandling, etc. You can put yourself in their shoes and make a reasonable prediction about what they intend to do.

On a previous HN thread, a commenter described a run-in with a Waymo car while riding a bike. He was coming up to a four-way stop and yielding to the Waymo car. As long as he stayed balanced on his pedals, still stationary, the Waymo car would stop and not proceed through the intersection, apparently interpreting his pose as a sign that he was about to ride through it himself. A human driver wouldn't have any trouble with that, but you can imagine the training data would show that when a bicycle is stopped, the rider puts a foot on the ground.
Generous helpings of lidar and radar to augment cameras are a crutch to help compensate for the lack of 500 million years of unsupervised learning that went into our visual cortex.
It might look intuitive just by looking at them, but good drivers won't trust that intuition, because humans are a very unpredictable species. So good drivers slow down to give themselves enough time to react if necessary. You should not trust a "reasonable prediction" when the stakes are someone's death.
Which means "do not drive, ever." The actual risk appetite for (human) driving is very different from the advertised one; the heated discussions around self-driving vehicles just bring this to light.
For those specific cases, no, but lidar makes a great source for identifying physical objects and their placement in the world. In principle that can be done to a sufficient degree with just stereoscopic vision, but look at all of the Autopilot fatalities from Tesla. All too often vision and radar data get misinterpreted as "that must be an object to the side of the road" right up until the car crashes into it. With lidar you can be sure you're not just looking at a radar reflection or getting the perspective wrong on a camera. Almost all of the large objects you want to make sure you never hit show up well on lidar.
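The cross-check described above can be sketched in a few lines: only trust a radar return as a real obstacle when lidar points actually cluster around it. Everything here (function name, coordinate layout, thresholds) is invented for illustration, not how any production stack actually works.

```python
import math

def confirm_with_lidar(radar_xy, lidar_points, radius=1.0, min_points=5):
    """Treat a radar return as a real obstacle only if at least
    `min_points` lidar hits fall within `radius` metres of it.
    All thresholds are made-up illustration values."""
    rx, ry = radar_xy
    near = sum(1 for (x, y) in lidar_points
               if math.hypot(x - rx, y - ry) < radius)
    return near >= min_points

# A dense cluster of lidar hits around the radar blip -> confirmed obstacle.
cluster = [(20.0 + 0.1 * i, 0.05 * i) for i in range(-4, 5)]
print(confirm_with_lidar((20.0, 0.0), cluster))  # True

# A radar blip with no lidar support (e.g. a multipath ghost) -> rejected.
print(confirm_with_lidar((20.0, 0.0), []))       # False
```

The point of the sketch is the asymmetry: a camera or radar can hallucinate an object, but a solid wall of lidar returns at a consistent range is very hard to explain away.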
Just about every one of those fatalities can be summed up as "Tesla ignores large stationary object directly ahead." Lidar would have detected all of those objects and most likely prevented every one of those accidents. I think Tesla currently has the best vision-and-radar-only system out there, so either the state of the art doesn't quite cut it yet without lidar, or there's an ML engineer at Tesla who really, really hates fire trucks.
Or Tesla doesn't actually provide Full Self-Driving yet, and people shouldn't be watching their phones.
It is noticing stationary objects, because sometimes it brakes when the car approaches an overhead bridge, which is also bad when the car following too close behind doesn't.
You have to ignore some objects in front of you (even ones heading directly toward you) because you're going around corners, so it's never cut and dried.
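A toy sketch of why "directly ahead" isn't the same as "in my path": check whether an object lies inside a corridor along the predicted (curved) trajectory rather than along the sensor's boresight. The function name, the arc approximation, and every number below are assumptions for illustration only.

```python
def in_path_corridor(obj_x, obj_y, curvature, half_width=1.5):
    """Rough check: does an object at (obj_x, obj_y) in vehicle
    coordinates (x forward, y left, metres) lie inside a corridor
    along the predicted circular path? `curvature` is 1/radius
    (0 = driving straight). Illustrative numbers only."""
    # Lateral offset of the predicted path at longitudinal distance
    # obj_x, using a small-angle arc approximation: y = 0.5 * k * x^2.
    path_y = 0.5 * curvature * obj_x ** 2
    return abs(obj_y - path_y) <= half_width

# Driving straight: an object 40 m dead ahead is in our path.
print(in_path_corridor(40.0, 0.0, curvature=0.0))   # True

# Steering through a bend of ~100 m radius: our path curves away,
# so the same "dead ahead" return can be safely ignored.
print(in_path_corridor(40.0, 0.0, curvature=0.01))  # False
```

This is exactly the filter that bites both ways: it lets you drive through a curve past roadside barriers without phantom braking, but tuned too aggressively it's also how a stopped fire truck on a bend gets discarded.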