Licensed pilot here (learned to fly in 1987). People need to look at self-driving cars in the way pilots use an auto-pilot.
Auto-pilot is most useful in two situations. 1) long cross country legs where there is not much flying to do (just maintain heading and altitude), so A/P frees the pilot up to manage other systems, enjoy the view, etc, and alleviates fatigue, 2) flying a precision instrument approach, reducing the risk that the pilot will succumb to spatial disorientation in the setup for (usually manual) landing.
With cars, auto-drive capability will be useful in reducing accidents in two modes: 1) long duration highway driving where fatigue is a big issue, 2) intervening to prevent a distracted driver from causing an accident (rear end collision etc).
I'd be perfectly happy with a car that can drive itself in cruise mode on the interstate, but requires an alert driver on local roads (with the added bonus that if I am about to slam into something, it will brake to avoid or lessen the impact).
Something for the liability crowd to consider: self-driving cars won't be able to avoid every potential mishap, but they will be able to reduce their severity. A car that can automatically brake to cut its speed by 25% just before impact will reduce its kinetic energy by roughly half, and the potential for injury by more still.
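A quick back-of-the-envelope check, since kinetic energy scales with the square of speed (a minimal sketch in Python; the 25% braking figure is just the assumption from above):

    # KE = 0.5 * m * v^2, so mass cancels when comparing before/after speeds.
    v_ratio = 0.75            # fraction of speed retained after braking 25% (assumed)
    ke_ratio = v_ratio ** 2   # fraction of kinetic energy remaining
    print(f"energy remaining: {ke_ratio:.0%}")    # -> 56%
    print(f"energy removed: {1 - ke_ratio:.0%}")  # -> 44%, i.e. roughly half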
This is a very insightful comment, from a person who has first-hand user experience with self-navigation systems. Sadly, if you dare utter blasphemy against the Almighty God of Technology, bad karma will happen to you.
This is even against the hacker spirit. We loathe IDEs and cherish emacs/vi close to our hearts because what we want from automation is augmentation of human capabilities, not self-mutilation + prosthetics. How can people not accept that the same formula can work best for cars?
Your comment made me think: Could driverless cars kill the domestic airline industry?
If I could get from NY to Florida in less than a day in my driverless car, and at sub-$2-gas, who would want to do the security check-in, luggage ripoff, flight delay thing when going less than 1000 miles?
The fact is that auto-pilot cars, as you describe them, won't be nearly as revolutionary as self-driving cars. The latter would allow people to put their commute time to good use; they would improve car-sharing to the point that people would not need a personal car, and parking could become a problem of the past; they would allow you to get back home safely after some drinks; eventually people would not need to learn to drive at all. The mere augmentation of human drivers, as useful as it would be, would not change much in our habits.
I made a similar comment elsewhere in the thread before reading yours.
I think the evolution of self-driving cars will be through the gradual improvement of auto-pilot cars until the human just has to sit there and do nothing 99% of the time (but still be legally in control).
I think it's pretty clear that the bar for safety must be much higher (10x) than human-level for these to be accepted by consumers. Auto accidents are very common, and the first time someone has an accident with a car like this everyone they know will hear about it. It will be terrifying. If these are only as safe on average as a human driver then nearly everyone is going to have at the very least a 2nd-hand negative experience.
The emotional response to these accidents is not going to be entirely irrational either. If I have a minor accident with my traditional truck, I'm going to probably have a good understanding of what went wrong and how I can prevent future collisions. With an autonomous vehicle... software upgrade? I'd rather take responsibility for my own safety if that's the case.
Which one will learn faster: each individual driver, each time they have a minor accident (or a close call)? Or the autonomous vehicle software, updated with data collected from each and every accident involving an autonomous vehicle? The result will probably be software updates like the ones Tesla cars are getting today.
I suspect that if the attitude towards autonomous vehicle accidents is similar to that of aircraft accidents, they will become extremely safe very quickly.
The point is not which is safer on a bottom line level, the point is consumers are terrified of the idea of surrendering control to a computer. Lots of people are terrified of flying for example even though it's much safer. Unless it's 10x safer people will be scared of automated cars and possibly even push for short-sighted laws.
Millions of people commute by subway every day without suffering panic attacks. Tens of thousands (at the very least) ride Paris's completely automated Line 14 every day without suffering panic attacks, as do riders of the many other automated subways/monorails all around the world.
And yet a subset of them are probably terrified of flying. Because flying, in a way, is scary. Unlike birds we are not wired to fly. But that's fine: autonomous cars won't be flying. If train/subways/tramway/bus can be successful, so can autonomous cars.
> I suspect that if the attitude towards autonomous vehicle accidents is similar to that of aircraft accidents, they will become extremely safe very quickly.
On the other hand, if every autonomous vehicle accident is investigated to the same standard as every aircraft accident, it seems likely that the entire system will become so overloaded and cost-ineffective that it is completely impractical to operate, before we ever get over the hurdle and achieve the consistently safer performance that might otherwise be available.
I agree with the risk of overload. Sanity needs to prevail.
But by similar attitude, I was mostly thinking about the rigorous and systematic efforts to find root causes of accidents, rather than the toxic attitude of attempting to deflect blame (and liability) that is so pervasive in other industries.
Safety starts with the culture: it's not an add-on or an afterthought. If Google starts not fixing problems with its cars in order to avoid acknowledging a problem (and the liabilities that go with it), then we have a problem...
Of course I'd learn from each fender-bender, and I'm clearly a better driver than average anyways! These fancy autonomous vehicles are too dangerous for my family.
Premiums go up because information is revealed to the insurance company that you are the type of person who gets into accidents. You were always this kind of person, but the insurance company didn't know until it happened.
From this point on, you might become less accident-prone than your previous self, but you established that you're more likely to get into accidents than the average 0-accident driver.
That doesn't mean that people don't become safer drivers after an accident! It just means that the variance between drivers is more significant than how much a single driver improves.
Insurance companies raise premiums after accidents because there is an excuse to do so. It has nothing to do with your likelihood of having another accident, and everything to do with charging you more so that they can attract new customers with a lower price. Premiums go up even if the driver is found to be not at fault.
It is no more in an insurance company's interest to raise rates arbitrarily than it is for any other company. If one company's premiums are significantly more expensive than other insurers', they'll lose customers – and if what you say were true, any insurance company that realized this could cease the practice and gain many new customers.
The whole point is that in the post-accident customer's mind it is NOT an arbitrary price increase--the customer feels that they "earned it" by getting into an accident.
But all the insurance company cares about is revenue vs. expenses across the entire pool. If they can charge one customer more, it allows them to charge another customer less--like, by advertising a low fee for new signups.
Yes, this is my point: rate increases after accidents are done by choice of the insurance company, not because their risk management requires them to do so, as the grandparent post implied:
> Premiums go up after an accident because it is more likely that you are a poor driver and likely to have another accident.
No, premiums go up in that situation when the insurance company thinks they can do so and not lose too many customers. As you point out, not every insurance company operates this way.
An insurance company only cares about managing risk and revenue across the entire pool. They plan to pay out a certain number of claims, so any given accident might simply be fulfilling the actuarial expectations and not altering their risk calculations at all.
Same. I've also had one at-fault accident and my premium went up in the sense that my "accident free discount" was removed for about 24 months. But I eventually got that back too.
Anecdote not evidence, but still interesting - I know someone who was no-fault in an accident, and her premiums still increased because the insurance company claimed that statistically anyone who has an accident is more likely to have another, even if it's not their fault.
> I think it's pretty clear that the bar for safety must be much higher (10x) than human-level for these to be accepted by consumers.
Will it? Think about the recent Toyota sticking accelerator pedals or the GM faulty ignitions. Both of these caused a number of deaths, and the Toyota one was especially scary because it didn't even need accident-like events to cause a problem: the car would just accelerate on its own.
It'll hit the news (like Tesla battery fires being vastly disproportionately reported on compared to other, much more common auto fires), but it's not clear at all that it will cause some massive panic or backlash.
About Toyota's unintended acceleration (UA) incidents:
From Wikipedia:
"On February 8, 2011, the NHTSA, in collaboration with NASA, released its findings into the investigation on the Toyota drive-by-wire throttle system. After a 10-month search, NASA and NHTSA scientists found no electronic defect in Toyota vehicles.[27] Driver error or pedal misapplication was found responsible for most of the incidents.[28] The report ended stating, "Our conclusion is Toyota's problems were mechanical, not electrical." This included sticking accelerator pedals, and pedals caught under floor mats.[29]"
If this is true, then a self-driving car may have prevented the accidents. If it was really a software problem, then fix the software. It may be much easier to fix software than to fix human behavior.
NASA didn't find a software defect, this is not the same as NASA finding that there is no software defect.
Toyota absolutely built and sold dangerous vehicles with software and hardware defects. This was not only a matter of gas pedals sticking to floor mats.
Importantly in this context Toyota did not put customer safety first and decided to suppress the truth of the problem rather than own the solution. This is cause for concern as we move toward self-driving systems which have to be perceived to work in order for the product to remain economically viable.
I am already aware of the Barr Group team and I know their claim.
>NASA didn't find a software defect, this is not the same as NASA finding that there is no software defect.
Read my post carefully. I haven't claimed such a thing and I offered two possibilities.
>Toyota absolutely built and sold dangerous vehicles with software and hardware defects.
You have to be careful with this. The Barr Group team hasn't proven the relationship between the accidents and the software defect they claim to have found.
The important thing is that the software, hardware, systems, and the whole design for car safety can improve significantly beyond the current situation, which is much better than just relying on human driving.
I find it unlikely that consumers are as safety conscious as you imagine them to be. Think about how many people currently text or do other things to distract themselves while driving. They willingly choose less safety in order to be able to do something else while in transit. Pushing this behavior to its natural conclusion is what will allow self driving cars to be accepted even if they are 10% less safe than normal drivers.
People text while driving because they think they are better at doing two things at once than they actually are. For the same reason, they will expect that they can out-perform a robot unless those robots are so overwhelmingly safe that conclusion can't be rationalized. No consumer will be thinking about the big picture.
I agree that the bar for safety should be ~10x human-level. But I also find it interesting, because that should be a moving target. E.g., if we realize humans often rear-end the car in front of them, and fix that with "auto braking", then we'll improve how safe human drivers are.
Thus, hopefully, both humans and self-driving cars should have improving records [although I don't have any data to back this up].
You've raised an important point here: the 'human vs self-driving car' is something of a false dichotomy.
The human driving experience will become more and more augmented by technology, with auto-braking, lane-assist, smart cruise control and (eventually) auto-navigation, until we have almost-self-driving cars which are still legally required to have a qualified human in the driver's seat. Many of these features already exist in high-end cars.
People will be much happier to hit the 'auto' button and climb into the back seat when they've already had several years of 'almost-auto' driving.
> I think it's pretty clear that the bar for safety must be much higher (10x) than human-level for these to be accepted by consumers.
I think the safety level just has to exceed that of an average teenage driver for widespread adoption by a certain segment of the car-buying population: parents.
I think in this case, insurance companies will help drive adoption. If an insurance company believes that accidents are say half as likely, you would have to imagine one of them would bite on changing premiums by a bit. Cold hard cash will facilitate adoption, assuming basic regulatory guardrails are in place.
"There were 272 instances in which the software detected an anomaly somewhere in the system that could have had possible safety implications; in these cases it immediately handed control of the vehicle to our test driver. We’ve recently been driving ~5300 autonomous miles between these events, which is a nearly 7-fold improvement since the start of the reporting period, when we logged only ~785 autonomous miles between them. We’re pleased."
785 miles between anomalous events that require manual control is bad enough. 5300 is worse! That honestly needs to be 0 events (impossible as that might be), or only events for which advance notice can be provided, to be market-acceptable.
It's the TSA screening problem all over again: 13-88 hours of mind-numbing normal operation, with a single incident of something potentially exploding that you have to catch.
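For what it's worth, that 13-88 hour range looks like the two disengagement intervals converted to time at an assumed ~60 mph average speed (my assumption; Google reports miles, not hours):

    # Assuming ~60 mph average speed -- a guess, not a figure from the report.
    for miles_between_events in (785, 5300):
        hours = miles_between_events / 60
        print(f"{miles_between_events} mi between events ~= {hours:.0f} h of monitoring")
    # -> 785 mi ~= 13 h, 5300 mi ~= 88 h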
You mean that 5300 is worse because people will pay less attention?
Well, would it be better if it had a random chance of (artificially) needing human intervention, averaging once every 785 miles? (Not at exact intervals, because that would make it too easy to predict, and therefore might not keep the person attentive when a real problem happens.)
That doesn't seem like it would be too hard to add, if it was better by keeping people on their toes.
But I doubt the problem gets monotonically worse as the average distance between events increases. I would expect that at some point, increasing the distance no longer significantly reduces attention, so safety would increase a bit again. Maybe not above the level of some shorter distance, but above nearby distances at least.
> I think it's pretty clear that the bar for safety must be much higher (10x) than human-level for these to be accepted by consumers.
That's a really low bar.
Take a look at Google Maps in San Diego when it rains. Accidents everywhere. A self-driving car won't get into stupid accidents because it's following too closely, driving too fast, hydroplaning, etc.
Toss in the people who are distracted, tired, or drugged out on medication, and your numbers are even worse.
We're looking at the end of humans driving cars within 10 years.
> I think it's pretty clear that the bar for safety must be much higher (10x) than human-level for these to be accepted by consumers.
You might be right about that, I just don't see why it should be the case. Humans are not very good at driving safely, it seems like an easy bar for robots to clear. Even if robots are "only" twice as safe, we should all jump at the opportunity to halve injuries and deaths from auto accidents.
I think it could be the unpredictability of a self-driving car in an accident that is making people uncomfortable, and accident rate does not capture that. For example, an accident might mean hitting a barrier; a human realizes the terrible situation and will swerve away from the cliff behind the barrier. But what would a self-driving car do? Its sensors might be too damaged to realize the dire consequences.
What about manufacturer liability? That's the critical question here.
It can't just be 2X or even 10X safer than humans if the manufacturer is liable for the accident. That would EXPLODE the cost of these cars and bankrupt companies -- even if just a small fraction of today's fatal accidents due to driver error became Google's liability.
Consider that even single-vehicle accidents, which today might be the driver's fault, could result in a manufacturer lawsuit.
This is why I think fully autonomous without any human to intervene is a pipe dream, because the safety level required for that is pretty close to perfectly bugfree in an almost infinitely complex world of roads and conditions. Just for liability reasons.
These test cars of course have human copilots. But it remains to be seen if that really will grant manufacturers immunity for accidents. If the car is perfect 99.999% of the time, wouldn't that train the human to trust it and not pay as close attention? And then miss that 0.001% of the time when it hurts someone? Would a judge find it reasonable for a human to stay vigilant and responsible for that fatal corner case bug that cropped up after two years of perfect autonomous driving?
There is no explosion. The cost of collisions today is reflected in the car insurance, so about $100/mo. If the manufacturer were to absorb this liability at current incident rates, it would make cars about 30% more expensive (assuming $300/mo loan payments). If Google reduces the (incidence * impact) of collision by 10x, the cost will come down to $10/mo, or about 3% of the total car cost.
And in return for the modest price increase you get a self-driving car!
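Spelling out that arithmetic (both dollar figures are the parent's assumptions, not industry data):

    monthly_liability = 100  # assumed per-driver cost of collision liability, $/mo
    monthly_payment = 300    # assumed car loan payment, $/mo
    print(f"liability share of car cost: {monthly_liability / monthly_payment:.0%}")  # ~33%
    # If autonomy cuts (incidence * impact) of collisions by 10x:
    print(f"after a 10x reduction: {monthly_liability / 10 / monthly_payment:.0%}")   # ~3%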
You're forgetting about tort damages. Insurance does not typically cover the cost of killing or seriously injuring someone. Those damages go into the millions, far exceeding the maximum coverage. Cases of serious injury and death often end up bankrupting the responsible individual.
Also consider that there are more people who can sue per accident, because humans aren't responsible anymore. Today if you're driving the car and get in an accident and it's your fault, you can't sue anyone. Now you can. That's one more party per accident, plus a whole category of single-vehicle accidents that would now result in lawsuits.
> You're forgetting about tort damages. Insurance does not typically cover the cost of killing or seriously injuring someone. Those damages go into the millions, far exceeding the maximum coverage. Cases of serious injury and death often end up bankrupting the responsible individual.
And you're forgetting that, to the extent that such injuries can be blamed on manufacturers' defects, manufacturers are already liable for them; and since plaintiffs are unlikely to be able to fully collect from drivers, manufacturers are often sued now, so self-driving cars don't actually change things that much.
> Also consider that there are more people who can sue per accident, because humans aren't responsible anymore.
Incorrect.
> Today if you're driving the car and get in an accident and it's your fault, you can't sue anyone.
Incorrect. If you get in an accident and suffer damages and you allege that this is due to a manufacturer's defect, this is a basis for suit against the manufacturer. Now, you won't win if the manufacturer can show by a preponderance of the evidence that the accident was your fault rather than theirs. [0]
Similarly, quite likely, in a self-driving car regime, if your car crashes you can sue the manufacturer and claim a defect was responsible, but if they can prove it's the fault of something else (such as your own failure to properly maintain the vehicle), they won't be liable to you.
I think your logic is correct, but your implied outcome is probably wrong.
>"If you get in an accident and suffer damages and you allege that this is due to a manufacturer's defect, this is a basis for suit against the manufacturer."
Right now, this is an edge case. The self driving car may cause it to become common.
> The self driving car may cause it to become common.
Yes, because exactly those cases where the driver (as operator, as distinct from the owner-as-maintainer -- often the same person) is responsible now become the manufacturer's responsibility. But, again, that's exactly what insurance covers.
But, again, we know what the cost of that liability is -- it's the cost of driver's insurance.
Driver's insurance is the cost of liability to drivers, whose ability to pay damages is practically capped by their own insurance coverage.
A major corporation does not have the same practical ability to cap their damages. I am not convinced it's a straight comparable cost as you keep saying. The two risks are not exactly comparable, thus I would not expect the insurance costs to be the same.
> Insurance does not typically cover the cost of killing or seriously injuring someone.
Umm... That's pretty much exactly what liability insurance covers. Even with maximums, insurance companies frequently have to pay the whole amount regardless of what the policy says.
Anyone who is collectible (i.e., who can't just declare bankruptcy in the case of a multi-million dollar settlement) should carry umbrella coverage, though, because the insurance company may try to come back to you to collect.
It's cheap compared to car insurance, and it can cover into the millions.
>and a whole category of single-vehicle accidents that would now result in lawsuits.
If you're talking about single-vehicle accidents where the human is at fault, then I wouldn't be surprised if those dropped to almost zero for self-driving cars. The place where self-driving cars have the most trouble is in complex situations involving multiple moving objects; most situations where the car just runs into something, harming its occupants, would rightfully be considered a defect.
And as others have mentioned, manufacturers already factor in the cost of defects into the total cost of producing a car.
No, it's not uncommon in the US. Almost everyone who owns a house has a $1mm+ umbrella policy as part of the standard homeowner's policy.
Edit: And I rent, so I bought an explicit $2mm umbrella policy, it's $267/year. That's a lot less than my car insurance, and a bit more than my renter's insurance.
1. Airplanes have highly trained human pilots who are highly vigilant and often in semi-manual control during takeoff and landing. Not a valid comparison.
2. Typical auto insurance policies have liability limits of around $100K. In cases of serious injury or death, tort damages often exceed that tremendously and bankrupt the responsible person. The solution is to get the legislature to cap tort damages for killing someone at ~$100K? Not going to happen.
I am not too sure it won’t happen (most legislators are rather fond of large donations). There is plenty of potential to play states off against each other to get the cap through.
I do agree with you that until the manufacturers liability is solved there won’t be autonomous cars sold.
> There is no explosion. The cost of collisions today is reflected in the car insurance, so about $100/mo. If the manufacturer were to absorb this liability at current incident rates, it would make cars about 30% more expensive (assuming $300/mo loan payments).
Except juries invariably award much larger damages against companies than against individuals, particularly when the individual is dead or permanently disabled by the accident. People will be reluctant to levy a multi-million dollar award against a reckless driver's widow, but will be perfectly happy to award it against a multi-billion dollar corporation.
> The cost of collisions today is reflected in the car insurance, so about $100/mo.
Not true--consumer auto insurance premiums do not encompass the cost of consumer lawsuits against manufacturers, because of course manufacturers do not buy consumer auto insurance.
For example, Toyota settled their unintended acceleration lawsuits for $1.2 billion or so. GM's ignition switch problems cost them $900 million to settle. How much will Volkswagen's diesel fraud cost them?
None of these costs are covered by consumer auto insurance. Instead they're covered by the capital of the corporation, which ultimately comes from investors or customers.
Now imagine a self-driving world in which every accident opens up the possibility of such class action settlements. In the absence of legislation to cap damages, companies would have to set aside huge pools of money to defend and settle such lawsuits. That means less investment, lower returns, or higher prices--any of which tend to hurt the long-term viability of a company.
> Not true--consumer auto insurance premiums do not encompass the cost of consumer lawsuits against manufacturers, because of course manufacturers do not buy consumer auto insurance.
They encompass the cost of the liability that would be transferred from drivers to manufacturers if all liability that drivers have now was transferred. They don't cover the liabilities manufacturers already have, but those liabilities are an existing part of the cost of doing business in the auto industry, not something new with automated vehicles.
> those liabilities are an existing part of the cost of doing business in the auto industry, not something new with automated vehicles.
You've written this in several posts and I think it's wrong. Consumer class actions are typically done on contingency, or with 3rd party financing, which means that cases must be somewhat likely to win before they even get filed.
Thus while it's true in theory that anyone can sue an auto manufacturer in any given accident, most accidents do not result in such suits. Consider that the GM ignition switch problem caused a number of deaths over the course of years before a suit was even filed on it. Auto manufacturers self-insure based on the expected rate of such lawsuits, and they obviously have very good data about that, as the movie Fight Club so memorably illustrated.
In a fully autonomous car a driver is not making any decisions at all, so how can they be liable? The balance would certainly shift toward more suits against manufacturers.
> In a fully autonomous car a driver is not making any decisions at all, so how can they be liable?
In a fully autonomous car, the owner is making decisions about maintenance, and in almost any case of claimed manufacturer's defect, distinguishing whether the cause is, in fact, a manufacturer's defect rather than a failure of maintenance will be important.
> The balance would certainly shift toward more suits against manufacturers.
Sure, because exactly the things that drivers-as-operators (rather than owners-as-maintainers) are now liable for would usually be the responsibility of the manufacturer. The amount of liability that represents is, to a close approximation, the cost of insurance coverage for the vehicle, so it's essentially transferring the cost of driver's insurance to the manufacturer (who will roll it into the purchase price).
Not only can consumers not maintain the software in their self-driving car, under the DMCA it is criminal for them to even attempt to do so! And it is the software that is the distinguishing characteristic of a fully autonomous self-driving car.
> Not only can consumers not maintain the software in their self-driving car
(1) Maintaining a self-driving car is more than maintaining software,
(2) Even in the case of software, an owner might have responsibilities with regard to maintenance (not in the sense of "maintaining software" in the programming sense, but in the sense of ensuring that, e.g., updates released by the manufacturer are downloaded and installed -- even if this is automated, the owner may be responsible for making sure the vehicle is kept where that process can successfully complete, and for not interfering with it.)
> And it is the software that is the distinguishing characteristic of a fully autonomous self-driving car.
The "distinguishing characteristic" of a self-driving car is not the only thing relevant to liability.
In many cases, the manufacturer could include software that prevents the car from operating if maintenance is out of date.
There are a lot of ways the manufacturer can increase safety that might have negative consequences. For example, what happens if a manufacturer decides that all models over 10 years old are too risky? How do you separate their legitimate desire to ensure safety in their own self interest from their desire to sell more cars?
You might need to bring a lawyer when buying a car in the future.
there can't be a class action for each accident. If a company makes a self-driving car that keeps getting into a particular kind of accident, then sure - class action city. Same as if they create a non-self-driving car that keeps getting in a particular kind of accident. What's different?
The difference is that in a fully autonomous car, every type of accident creates liability for the manufacturer.
In cars today only certain problems rise to the level of class action suits. Let's say a driver thinks there's a cat in the road and swerves and hits a tree. No lawyer is going to sue a manufacturer on contingency for that accident.
Now let's say a self-driving car does the same thing. Yes, a lawyer will take that case.
More than likely, insurance will continue to be mandatory and paid by the vehicle owner ... but seeing it drop to $10/mo would be pretty sweet! Of course, you would then be prohibited from driving the vehicle. Self-driving only if you want to see the savings.
> It can't just be 2X or even 10X safer than humans if the manufacturer is liable for the accident. That would EXPLODE the cost of these cars and bankrupt companies -- even if just a small fraction of today's fatal accidents due to driver error became Google's liability.
If the owner/operator bears no liability and all liability is on the manufacturer, it transfers the liability cost onto the manufacturer, but even if it is as safe (not 2× or 10× as safe), that just means the manufacturer rolls the cost to (self-, likely) insure for that liability into the purchase price (but the purchaser doesn't need to get their own insurance, so the total cost of operation to the purchaser is unaffected.)
And, of course, if a company like Google is both the manufacturer and the owner/operator (e.g., using the vehicles in an Uber-like service tied in with Google Maps and Google Now, or using them for Google Express delivery vehicles with smaller robots onboard to deliver packages to the door, or using them for Google Street View camera vehicles, etc.), operator vs. manufacturer liability makes no difference.
1. Air bags sometimes kill people who would otherwise have survived in an accident, but on balance they save lives.
2. Making air bags mandatory hasn't caused manufacturers of cars or of air bags to go bankrupt.
3. If self-driving cars cause deaths, but on balance save lives, the situation is directly analogous.
4. There will be a way to make self-driving cars work. Q.E.D.
I seem to remember that laws exist to grant immunity to manufacturers of airbags, at least if they were functioning properly. A quick Google finds lots of ambulance-chasers who talk about "defective airbags," so manufacturers aren't immune to liability if they screwed up.
Similarly, you could imagine that No Fault laws could be passed for self-driving cars that were operating within reasonable parameters. A gross error on the part of a self-driving car could still leave manufacturers liable, but as history has shown, car companies are willing to write off even gross negligence claims rather than voluntarily add safety devices to cars, calculating cost-to-fix vs cost-to-pay-off-family tradeoffs. [1]
In fact, I'd go as far as to say that any company that doesn't offer self-driving cars in the next 15 years may as well plan to shut down or be acquired. This is going to be a must-have feature. They will figure out how to make it work for manufacturers.
The liability question will be solved. They wouldn't even let car companies go bankrupt when they (arguably) deserved to; they certainly won't let them get into a situation that kills them, and as others have rightfully pointed out, the numbers aren't even that prohibitive if they needed to self-insure.
That is a very poor analogy. Airbags have only killed a few hundred people and in many of those cases the manufacturer was still not liable because of human error (victim was not wearing a seatbelt, was a child improperly seated in the front, etc.)
The manufacturer liability exposure for fully autonomous driving is literally several orders of magnitude greater than for airbags.
> quick Google finds lots of ambulance-chasers who talk about "defective airbags,"
That is the Takata airbag case - they manufactured defective airbags with malfunctioning ammonium nitrate inflators which exploded supersonically instead of inflating, killing 8 people. They were fined $200m just recently and have to recall and replace 400,000 airbags.
This case might actually be analogous to self-driving car issues - it was a case of a defect potentially caused through negligence in engineering, rather than airbags working as intended.
Are you saying this because manufacturers, being gigantic unsympathetic companies, would be sued for much greater damages than at-fault drivers get hit with today? Otherwise it makes no sense, TCO will come out the same whether drivers or manufacturers are liable.
Or we could learn to stop suing people over everything. I mean, gross neglect is one thing, but if it's a safer driver than I am, and it screws up, we call that an accident.
...
Nah, I'm dreaming, it's far more likely that insurance companies will set up a wide insurance program ala taxi services than it is that we'll learn to stop being litigious.
> gross neglect is one thing, but if it's a safer driver than I am, and it screws up, we call that an accident.
Exactly my thoughts. You typically have very little to go on if a company did something to you and they were not negligent. At least in Europe/the Netherlands; I don't really know about the US. (The stories we hear are that in the US, negligence means not warning that coffee is indeed hot, or that you should not put pets in a microwave to warm them up... I don't really believe that, so I'll just assume it's similar to here.)
You are assuming that, if liability in the US were a critical question, Google would not launch its project in a country with more hospitable weather and more compatible liability rules, and from there lobby for more reasonable principles in the US.
In theory this just shifts the insurance coverage from the driver to the manufacturer. The real unknown is if any insurance company will cover the manufacturer or not. I suspect that like the airline industry there will need to be a legislated cap on damages to make this work. Provided that there is insurance then there should not be any issue.
More fundamentally we should not assume that the people working on this at Google are idiots. They already know all of this and I am sure they are working with insurance companies and governments to solve this problem. Compared to the technical difficulty of making a reliable and safe autonomous vehicle this is an easy problem.
Liability of all things is "The. Critical. Question." when it comes to autonomous vehicles?
That's such an amazingly American thing to say. Do you really think that the uptake of autonomous vehicles is going to be hindered to any significant degree by what are largely U.S.-specific legal issues?
Can you imagine a world in 2060 where the U.S. decides to leave something like 10-20% of GDP on the table because of the liability aspects of their legal environment? It might hinder uptake a bit but I don't see it stopping it in the long run.
In any case regardless of what the U.S. does at that time the rest of the world is going to be driving autonomous vehicles.
New Zealand. Personal injury suits are barred, and instead handled by ACC, the state accident insurance corporation [0]. Punishing negligence is taken care of in the criminal law, and product safety is regulated.
That's just hiding the liability under "product safety regulations". Undoubtedly when there is a manufacturer defect that causes a serious injury or death, the NZ government prosecutes them.
There's liability in a lot of countries, but punitive damages aren't a thing in many of them.
So if your autonomous car kills someone, you could just say "yes, shit happens, we'll fix it, but look at how the death toll from vehicles has gone down drastically due to this technology; maybe the courts shouldn't let us be sued out of existence when the alternative is a drunk/distracted/incompetent human behind the wheel?"
The OP is suggesting that that's going to happen in the U.S., maybe it will, but that's not the reality in many other countries.
Punitive damages are extra compensation to discourage future behavior. Yes, these are not used in every country, but compensatory damages certainly are, and those can get pretty high.
Certainly. But for the OP's point of liability being "The. Critical. Question." to stand the question of who gets paid by whom in the event of a death by autonomous vehicle, and how much, is somehow going to be more important than all the other economic and societal benefits of those vehicles.
The U.S. has around 25% of the world's personal vehicles, what it does is certainly going to play a big part, but if it does something crazy to restrict the technology it's just going to be developed elsewhere.
Volvo's CEO: "We are the suppliers of this technology and we are liable for everything the car is doing in autonomous mode. If you are not ready to make such a statement, you shouldn't try to develop an autonomous system."
So there.
At first, self-driving cars will probably be leased on operating leases, with maintenance and insurance bundled into the payments. Or maybe you own the vehicle, but there's a maintenance and insurance contract required to enable self-driving.
cost of damages = # of accidents * cost per accident
- Self-driving cars should reduce the number of accidents
- Cost per accident seems independent of the kind of car you drive. If a human-driven taxi hits me, I can sue the taxi company. If a self-driven taxi hits me, I can sue Google. I should be able to sue for the same amount in either case. (Sketched below.)
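Here's that equation as a toy model (every number below is invented purely for illustration): total damages scale with the accident count, not with which party gets sued.

    def expected_damages(accidents_per_mile, cost_per_accident, miles):
        # cost of damages = # of accidents * cost per accident
        return accidents_per_mile * miles * cost_per_accident

    # Hypothetical figures: same cost per accident, 10x fewer accidents.
    human = expected_damages(1 / 500_000, 20_000, 1_000_000_000)    # -> $40M
    robot = expected_damages(1 / 5_000_000, 20_000, 1_000_000_000)  # -> $4M
    print(human, robot)  # the liability pool shrinks 10x, whoever pays it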
> It can't just be 2X or even 10X safer than humans if the manufacturer is liable for the accident. That would EXPLODE the cost of these cars and bankrupt companies -- even if just a small fraction of today's fatal accidents due to driver error became Google's liability.
Right now, the insurance company is liable for the accident followed by the consumer. It'll just shift the insurance market towards manufacturers buying it [or likely self-insuring] and the cost of insuring a car will be baked into the price instead of bought aftermarket.
> Right now, the insurance company is liable for the accident followed by the consumer.
Actually, the driver is liable, but in most states (in order to be legally permitted to drive on public roads) must either have purchased insurance to cover some minimum amount of that liability or have posted a liability bond. But the insurer's obligation to pay is triggered by the driver's liability, not ahead of the driver's liability.
If it's only 10x better than humans, and considering that professional drivers are about 10x better than the average, maybe we're better off with something like shared Ubers?
Or maybe just add self-driving for the highway, especially if it's possible for the driver to get out of the vehicle before it enters the highway and hop into another car entering the city?
Minor crashes are even more frequent than the article estimates, so the real human accident rate is even higher: probably between 1 in every 24,000 miles and 1 in every 87,000 miles.
The VTI driving study[1] equipped 100 cars with sensors and was therefore able to measure all crashes experienced. It directly measured 1 crash per 24,000 miles. If we instead extrapolate from the article's police-reported rate using the study's 17.4% report rate, that suggests 1 per 87,000 miles.
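Reproducing that extrapolation (the 17.4% report rate is the study's figure; the 1-per-500,000 reported rate is the article's):

    reported_rate = 1 / 500_000  # police-reported crashes per mile (article)
    report_share = 0.174         # fraction of crashes reported to police (VTI)
    real_rate = reported_rate / report_share
    print(f"implied: 1 crash per {1 / real_rate:,.0f} miles")  # -> 1 per 87,000 miles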
That's an interesting study. I'd be cautious at trying to create a one-line conclusion from it. It is, however, a fascinating full read, and not very long and not very technical, so I'd encourage people to read the whole thing.
Note the narrow demographic data and geographic data.
I live in Mountain View and I see Google self-driving cars around all the time (day and night). My personal feel is that they look "different" and my attention will deliberately focus on them (which may or may not affect my driving when I'm around them).
Why not test with cars that look "normal" (think early 2000s Corolla). Wouldn't this further decrease the chance of possible accidents?
I guess what I'm saying is, whose decision was it to make it look like a toy car and not just an actual regular car that doesn't divert my attention off the road?
They want to get people used to the idea of sharing the road with self-driving cars. People will be a lot more comfortable with them and have a lot fewer misconceptions if they've seen and recognized enough of them for it to stop seeming noteworthy.
I believe you're correct in that marketing did have a strong sway in the decision.
But the fundamental principle behind Google has always been engineering, and especially for something as crucial as safety, they should have stuck with something bland and nonchalant, imo.
The little Google cars (not the Lexus SUVs) were specifically designed for pedestrian safety in case of a collision, which can hardly be said about any other car, aside from Volvos.
That cars are designed with pedestrian safety taken into account? :)
I don't know if you've noticed but cars tend not to have noticeable front bumpers these days. Some cars also now have deformable bonnets, or bonnets that pop up if they detect somebody rolling on to them. (If you want to know how this works, you'll have to ask somebody else. I have no idea.) This is all to limit the damage cars cause to people when they hit them.
Euro NCAP has a section in its current ratings where they assess how good the car is at hitting people without injuring them too much, giving manufacturers of vehicles in many sectors an incentive to design with this in mind. See http://www.euroncap.com/en/vehicle-safety/the-ratings-explai....
Interesting; safety is usually not mentioned in the articles I've read about the driverless car design. The two articles below kind of skimp on safety. Well, hopefully you're right, and safety was (and in my opinion should be) the first goal of the design.
The CA dmv also publicly reports all accidents involving autonomous vehicles [1]. Most accidents are caused by drivers taking over control manually, or by other actors on the road driving erratically (or rear-ending the self-driving vehicles). The entries on the page report Google, Delphi, and Cruise automation as having incidents, though they're all minor damages, if there are any at all.
>How does that number compare with humans? Well, regular people in the USA have about 6 million accidents per year reported to the police, which means about once every 500,000 miles.
Aren't accidents only reported to police if there is a hit & run, or some other criminal activity?
Seems like the wrong metric for comparison, given the way they define the self-driving "accidents" and that the majority of human fender benders are not reported.
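As a rough sanity check of the article's number (the ~3 trillion annual US vehicle-miles figure below is my own approximation, not from the article):

    reported_accidents_per_year = 6_000_000         # article's figure
    us_vehicle_miles_per_year = 3_000_000_000_000   # ~3 trillion, approximate
    print(f"1 reported accident per "
          f"{us_vehicle_miles_per_year / reported_accidents_per_year:,.0f} miles")
    # -> ~500,000 miles, matching the article -- but only for *reported* accidents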
Not necessarily. Major accidents will have the police involved, small fender benders might be handled with just an exchange of insurance information. If there's substantial damage though, the police will probably get involved since a police report makes collecting on insurance easier.
It's not uncommon to contact the police so that a quick statement can be made and the paperwork logged. It seems to help with the insurance stuff and getting a third-party's eye on the scene.
When it comes down to differentiation what do I care about if a robot is driving me around?
Safety. Period.
The equation to get to safety is sensors + computing + map / map geometry.
The sensor differentiation will matter for a while, some OEMs will do very well, but ultimately it will just boil down to the software.
So how is the future of this space not just "everybody licenses Google's platform"?
If life/death safety is the differentiator (and measurable) I feel like the writing is on the wall. Google is going to dominate this.
Maybe Musk sneaks in there with his brash approach of using his drivers as test subjects but his position as a competing auto manufacturer seems less compelling than supporting an "Android-like" effort from Google.
> So how is the future of this space not just "everybody licenses Google's platform"?
I feel like what we've already seen in the DARPA grand challenge and in the self driving car research world will be what happens in the future: trailblazing is really hard work, but once others can see it can be done, even without a lot of details of how it was done, doing the same thing is usually much easier, or at least more obviously viable and so worth the investment.
Not to mention all the published research, suppliers with established manufacturing, and experienced engineers looking for other jobs that would now be out there.
Starting a self driving car company ten years from now, say, still would not be remotely easy, but you won't need anywhere near twenty years to catch up.
I can see this market playing out to be an oligopoly not a monopoly.
- Very high barrier to entry. You need $X of development costs, and $Y on traffic/mapping data. Anyone with money can get those.
- Self driving cars have limited economies of scale.
* Self driving taxis get better (less wait time) as their density goes up, to a point. Today Uber and Lyft are both dense enough in SF for me to not care which one I use.
* Self driving cars get safer as you collect more driving data. Safer self-driving cars are better, up to a point. Safety matters a lot now, since cars are unsafe. On the other hand, I don't care if Boeings are 2x as safe as Airbuses; both are safe enough for me. If you can make your self-driving car safe enough, I'll buy it. To make your self-driving car safe enough, you need to spend $X on development and testing and $Y on really good maps.
> The equation to get to safety is sensors + computing + map / map geometry.
The equation to get to safety is actually what you mentioned + roads where 100% of vehicles are tied in to a common network orchestrating each vehicle's movement in relation to the others and the environment. Without that, differentiation in software is likely to lead to unanticipated dangerous scenarios, just as the HFT market is susceptible to flash crashes... except on the road, they will be real crashes with the potential to be more destructive than human-caused accidents.
Hopefully when that network becomes a reality, we can build it on top of some kind of open, distributed platform (perhaps related to blockchain technology) that no single entity can abuse.
How does a single network 100% of vehicles are tied into avoid the problem of an attacker only having to subvert one node on that network to compromise everybody? (Not to mention the difficulty/impossibility of getting to 100% in a reasonable timeframe.) Seems like that's been shown time and again to be practically impossible.
That seems really brittle. The baseline for every vehicle should be safety in every situation. (Sometimes that might mean: pull over to the side of the road and stop.) Only once every vehicle is proved safe, can certain combinations of vehicles negotiate compromises to safety in pursuit of other goals like speed.
(responding to your comment and the other comment about a possibility of attackers)
I agree that the baseline should be independent, autonomous, safe operation of each vehicle without any network/cooperation, but in time it's going to go beyond that. I have no doubt that it will happen.
Once independent operation is proven to be reasonably safe and dependable for everyday use, it will have achieved a certain degree of safety and efficiency - significantly better (at least in safety) than humans, but still prone to occasional accidents (regardless of who/what is at fault), especially when human drivers are still allowed to drive on the same roads.
At that point, people will demand even higher levels of safety (similar to the nines of high availability), and greater efficiency (i.e., speed). Unfortunately, with human drivers sharing the same road, those two demands are at odds with each other; greater efficiency inherently reduces safety. At some point though, the demand will be great enough that we'll have to adopt something better - something with a higher level of coordination between vehicles, and where only computer-controlled vehicles can participate.
It certainly won't be pushed or adopted universally in one step, and it'll likely never cover all road infrastructure. It will be adopted first on high throughput routes, much like you have private toll roads today. Instead of paying a toll (or perhaps in addition), your car will enter the roadway by joining the network. Without network compatibility, you'll be unable to use the road.
Once you've joined the network, however, your vehicle can literally drive at breakneck speeds, avoiding other traffic by incredibly small margins, and all with significantly higher safety than non-networked roads.
Of course having a single network controlling vehicles opens up a possibility for attacks, just as modern commercial aircraft can be vulnerable to attack. It will be an issue, but not one without reasonable mitigation strategies and best practices. The "network" in question may not even be centralized in nature, but rather a decentralized, local mesh network with safeguards in place for each vehicle to detect and respond appropriately to situations where the network seems to be guiding vehicles outside safe bounds. There are a lot of ways that it could be implemented, but it'll be important (at least to me) that it's not all controlled by one government/corporation/entity, but instead rolled out as a set of interoperable industry standards.
Why such confidence in the future adoption of this system that you stipulate will be both dangerous and inconvenient? This might work as a scifi plot device, but it's not predictive.
> Maybe Musk sneaks in there with his brash approach of using his drivers as test subjects but his position as a competing auto manufacturer seems less compelling than supporting an "Android-like" effort from Google.
I'd rather support Musk; this is core to his product. For Google, it's just another moonshot.
Tesla also has the advantage that Tesla owners drive 40 million miles/month. That's far beyond what Google's autonomous vehicles are covering (<75K miles/month).
What kind of info are the Teslas beaming back to HQ? It can't be the ultra-high-volume data Google's cars collect, can it? This [0] says Google's cars collect almost 1GB/s. Presumably they keep a good chunk of that. Does anyone know how much data Tesla is collecting on its cars?
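Some rough arithmetic on the scale gap (the 1GB/s rate is the linked article's claim; the mileage figures are from the parent comments):

    tesla_miles_per_month = 40_000_000  # parent comment's figure
    google_miles_per_month = 75_000     # "<75K miles/month"
    print(f"Tesla logs ~{tesla_miles_per_month / google_miles_per_month:.0f}x the miles")  # ~533x
    gb_per_second = 1                   # Google's raw sensor rate, per the link
    print(f"raw sensor data: ~{gb_per_second * 3600 / 1000:.1f} TB per car-hour")  # ~3.6 TB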
A big point in Tesla's favor is that they're putting it out incrementally. Cruise control is a very simple form of 'self-driving' that everyone is comfortable with the existence of.
For the population to truly embrace this trend, it's going to be a long road of incremental steps. Lane assists have started to become commonplace, as have tools to automatically brake to prevent collisions. As these technologies become commonplace, people will accept them. Eventually, fully autonomous driving won't be much of a stretch.
Also there is the argument that driver assist can lure drivers into a false sense of security where they don't pay attention when they should. The worry is that driver assist systems could cause accidents by seeming safer than they actually are. I really like Tesla, though, and hope for everyone's sake that that doesn't happen.
I'm reminded of, "The person who says it cannot be done, should not interrupt the person who is doing it."
Maybe they're right, but so far the driver-assistance angle seems to be advancing quite a bit faster and is certainly doing more to improve safety in the real world.
It's advancing faster in terms of being available to the consumer, that is true. But I'm not sure I would say it's advancing faster in terms of technology.
If you watch the TED talk you'll see that Google's cars today are responding to situations far beyond what Tesla's current offerings are designed to do. Construction flaggers, school buses which must not be passed, bicycles blowing through red lights in intersections, cars randomly pulling into the middle of traffic, bicycles hand-signaling a turn, a woman in a wheelchair chasing a duck: Google's cars are negotiating these situations without driver intervention. It's a totally different class of problem than lane tracking, emergency braking, auto-park, etc.
At the risk of invoking the ire of Google's marketing team further (I just got -6'd on this thread in less than two minutes; it happens as predictably as clockwork any time I point out Google isn't quite as amazing as it seems):
So, I'm sitting here watching this, and nothing he's saying is a particularly solid logical argument. He's pointing out that an always-running self-driving car has to make more decisions or take more actions than a driver assistance device, but that's irrelevant: 99% of the time, the car is basically just moving forward. And arguably, lane assist devices make decisions or corrections just as often as an autonomous car; same for automatic braking. They constantly need to check the conditions on the road and decide whether or not to adjust. I'm not seeing any real reasoning for why driver assistance and full autonomy are so shockingly different... short of the fact that this guy is marketing a Google product.
If anything, it seems Urmson is heavily pigeonholing what a "driver assistance system" is, so that he can discredit the way all of their competitors are developing the same types of technology. There's nothing particularly special about what Google is tracking that couldn't be similarly developed as part of a driver assistance system.
In actuality, his final argument is the most ludicrous of them all... that gradually developing the technology through driver assistance is too slow. That this needs to be rushed out the door in the next couple years.
I didn't downvote your first comment, but in my opinion the reason it was downvoted was that it amounted to ad-hominem ("Don't listen to him because he's from Google.") rather than a reasoned argument like you've posted here. Sure, it's wise to take into account the source of an argument, but you can't reasonably stop there.
I do disagree with your first point, that an always-running self-driving car having to make more decisions is irrelevant because a car spends the majority of its time moving forward. I think the point is that a driver assistance system basically only handles that moving forward case. It may cover 99% of the time, but it only covers a small fraction of the decisions and actions the car has to take. It's the other 1% (probably higher than 1% actually, but it doesn't matter) that's the tricky part, and it is difficult to iterate from handling the special (if very common) case of basically just moving forward, to handling all the other situations a car can find itself in. You would quickly lose any easily-understood distinction between what situations the car is and is not able to handle, making it difficult for the human driver to know when their input will be required.
But we have other types of driver assistance for that as well. For instance, parking assist has to handle the complex maneuvering of a vehicle into a narrow space, presumably while avoiding both the parked cars near it and any cars that might come across yours while it is parking.
I could see driver assistance eventually move to a point where it's tracking and simulating just as much as a fully autonomous vehicle, but allowing the driver to remain in control until it believes a collision is imminent, or that the user is making an unsafe judgment call.
Features like automatic emergency braking will eventually learn to detect different types of objects as well, say, to avoid slamming on the brakes because the wind pushed a plastic bag past a camera or sensor.
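To make that concrete, here's a minimal sketch of what confidence-gated braking could look like. Everything in it (the Detection type, the class names, the thresholds) is hypothetical, not any manufacturer's actual logic:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str                   # e.g. "car", "pedestrian", "plastic_bag"
    confidence: float            # classifier confidence in [0, 1]
    time_to_collision_s: float   # estimated seconds until impact

# Object classes the brake controller may safely ignore, if confident.
SOFT_CLASSES = {"plastic_bag", "leaves"}

def should_brake(detections, ttc_threshold_s=1.5, min_confidence=0.9):
    """Brake only for imminent threats that are not confidently soft debris."""
    for obj in detections:
        if obj.time_to_collision_s > ttc_threshold_s:
            continue  # not on an imminent collision course
        if obj.label in SOFT_CLASSES and obj.confidence >= min_confidence:
            continue  # confidently classified as harmless; don't slam the brakes
        return True   # solid object, or too uncertain to gamble: brake
    return False

# A windblown bag recognized with high confidence is ignored...
print(should_brake([Detection("plastic_bag", 0.97, 0.8)]))  # False
# ...but the same shape at low confidence still triggers braking.
print(should_brake([Detection("plastic_bag", 0.40, 0.8)]))  # True
```

The design choice worth noting is the fail-safe default: an object that can't be confidently classified as harmless still triggers braking.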
Presumably, these currently independent assistance features will converge over time, into your car simply being aware of its surroundings and intervening to protect you. And then someday it'll just drive itself.
All of this can (and probably will) be done gradually, but it sounds like that isn't fast enough to arrive before Urmson's son reaches driving age.
Completely agree. It's not going to happen overnight. It'll happen piece by piece, until one day you wake up and you review your Tesla OTA update release notes and it lists "Level 4 Autonomous Mode" as one of the updated features.
"Android-like?" You mean allowing manufacturers to ship crapware-infested versions; having manufacturers handle deployment of critical updates, meaning they simply don't; having core features overridable via aftermarket appstore downloads that provide a vector for malware?
> having manufacturers handle deployment of critical updates, meaning they simply don't;
The consequences for a manufacturer of failing to deploy a critical update for self-driving cars is substantially different than failing to deploy a critical update to a mobile handset. So even with a similar set of formal, contractual requirements, there are reasons to expect that the resulting behavior could be quite different.
>At this rate how long until Google re-brands itself as Omni Consumer Products
You mean like Alphabet? I only lived in Mountain View for a few months last summer, but I gathered at the time that the city of Mountain View is already fighting tooth and nail to avoid becoming Google City.
The Mountain View city council is largely controlled by homeowners. As a result, they are against anything that would impact home prices negatively, even if it would serve a greater good. And they've also been trying to reduce the influence of Google employees on the city, since they fear that too many Google employees living in town could change the power structure of the city council itself and cause it to make decisions that don't optimize for home prices.
That means they actively fight projects that might actually be good for the city, like building high density housing on the eastern side of 101.
If everyone's going to accept a single standard (which would be helpful for inter-car telemetry), it can't be controlled by Google. We can't let our cars become another one of Google's monopolies.
We should also strongly consider the merits of open source with many vendors contributing and a good governance model. Ideally there'd be some sort of foundation (or government-sponsored group) in charge of such an effort.
The last thing we need is to have our screaming metal death traps buzzing around, controlled by black-box software from a single vendor.
Next time you pull up to a stop light, look to the left and then right. I'm willing to bet the drivers to your left and right are looking at their phones.
It's getting worse... the phone, Bluetooth, the dashboard computer, traffic, impatient drivers, and so on.
I for one can't wait for these self-driving cars; they have got to be better than the distracted drivers we have now.
>Next time you pull up to a stop light, look to the left and then right. I'm willing to bet the drivers to your left and right are looking at their phones.
And the dude between them is busy staring at his neighboring drivers rather than the road!
Is Google only testing in California? It seems that the data for accidents was given for the whole of the US, but it should be compared to California only. I don't know if it's noticeably different or not, but I'd think that snow and ice would increase the accident rate in some states.
Snow and ice, I think, would pose a real challenge for self-driving cars.
I am surprised that Google has not started running their software passively on cars that are driven by humans, to learn from them. They could increasingly get feedback like "our software would not drive that fast / that close to that curb," and progressively encounter far more dangerous situations, and learn from tougher ones, than they currently can.
This is a harder problem than you think it is. This is a classic reinforcement learning problem, and it is incredibly difficult in the real world. It's easy in discrete state-space, perfect information, turn-based games like Go (the recent advance there). Very difficult in robotics.
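For what it's worth, the "record human drivers and learn" idea maps onto behavioral cloning, which looks deceptively easy because it's plain supervised learning. A toy sketch (all data synthetic, nobody's real pipeline) shows both the appeal and the catch:

```python
import numpy as np

# Toy behavioral-cloning sketch: fit a policy that maps logged sensor
# features to the human driver's steering command, as pure supervised
# learning on recorded data. Illustrative only.

rng = np.random.default_rng(0)
n_frames, n_features = 10_000, 8
X = rng.normal(size=(n_frames, n_features))            # hypothetical sensor features
true_w = rng.normal(size=n_features)
y = X @ true_w + rng.normal(scale=0.1, size=n_frames)  # logged steering angles

w, *_ = np.linalg.lstsq(X, y, rcond=None)              # least-squares "policy"
rmse = np.sqrt(np.mean((X @ w - y) ** 2))
print(f"training RMSE: {rmse:.3f}")

# The catch: the model is only ever graded on states a human actually
# visited. Once the car's own small errors drift it into states absent
# from the logs, predictions degrade and errors compound (covariate
# shift). That's one reason this is so much harder than a closed,
# perfect-information game like Go.
```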
How safe does a self-driving car have to be before it is allowed on the road?
I would say that's going to be a societal decision rather than an engineering decision.
An interesting point is that the introduction of the private automobile was itself an imposition on the social space of its time, and a wildly unsafe one, with automobile accidents still being one of the leading causes of death in this country.
If it's determined that self-driving cars will be allowed, ordinary drivers will be forced to adjust to their presence and will have to learn their quirks.
Certainly, cellular phones are realistically being used by a significant fraction of drivers today, and if the accident rate has gone up at worst only slightly because of that (not enough to outweigh other safety measures), it's because non-users have adapted to the presence of the cell-using driver, however annoying that might be.
>How safe does a self-driving car have to be before it is allowed on the road?
I'd argue that self-driving cars shouldn't merely have to be safer than average drivers; they should have to be safer than average drivers with computer driving assistance. It's already happening: we can take the self-driving programs and use them to assist human drivers, getting a best-of-both-worlds scenario. Of course, we'd have to mandate that all new cars come with computer driving assistance.
The counter argument is that we accept the risk of cars now, why not let self driving cars have the same level of risk. My answer is that the risk of cars is so great we either shouldn't be accepting it or are only accepting it because there isn't an easy alternative. A lot of Americans die on the roads.
You make an incredibly good point - whilst any safety improvement at all is obviously and unquestionably a good thing, we can take this opportunity to step back and analyse the situation we've found ourselves in, and decide that the status quo is far from good enough and we should in fact be aiming to do much, much better.
A convenient advantage of regulating for self-driving vehicles is that we can put minimum safety standards into law quite easily, and enforce them meaningfully, because the prize for reaching and adhering to those standards is so great for the players involved, and because the clear step-change in the technology gives us a relatively clean regulatory slate.
As well as the most important factor, safety, it could also be possible to step back and consider the other disadvantageous side-effects of the current status quo with cars, particularly in cities - noise, traffic congestion, wide roads with thin pavements/sidewalks for pedestrians, indeed a general culture of cars having priority over pedestrians in various situations where the number of pedestrians is much greater than the number of people in the cars.
These are all things which, depending on the city and the culture, can be really significant problems and which I'm sure would never be tolerated had they not crept up on us over decades. Regulating both for and with the capabilities of self-driving vehicles might give us opportunity to vastly improve the status quo in these areas, too, in a relatively short period of time.
It could be quite important to take advantage of the opportunity to realise such optimistic targets now, starting green field, as it can only get more difficult and become much slower to implement after the first set of regulations has been put into effect.
It depends on the distribution of drivers that cause incidents. If the typical driver is involved in an average (mean) number of incidents, then fine, base the safety estimate on them. But if the typical driver is involved in a below-average number of incidents, because a small minority causes most of them, the safety standard should be tuned to the drivers who actually cause the incidents.
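One way to see the point: if incident rates are heavy-tailed, the mean is dragged up by the worst offenders, so most drivers are already safer than "average." A quick simulation, with an entirely made-up distribution:

```python
import numpy as np

# Illustrative only: per-driver incident rates drawn from a skewed
# distribution in which a small minority of drivers causes most of
# the incidents. The distribution and parameters are invented.

rng = np.random.default_rng(1)
rates = rng.lognormal(mean=-1.0, sigma=1.2, size=100_000)

mean_rate = rates.mean()
share_safer_than_mean = (rates < mean_rate).mean()
print(f"mean incident rate: {mean_rate:.3f}")
print(f"drivers safer than the mean: {share_safer_than_mean:.0%}")  # ~73%
# With a long right tail, far more than half of drivers beat the mean,
# so "as safe as the average driver" is a weaker bar than it sounds.
```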
When we calculate averages, are we including the number of drunk driving accidents, for example? 10,000 Americans are killed every year due to that. If we calculate equivalent accident rates that include impaired humans, I'm not sure that's good enough. Computer + human should be able to address this problem, right? Reduce speed, pull over and stop, if someone can't drive between the lines or they are driving the wrong direction on a road.
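A back-of-envelope calculation makes the gap concrete. The 10,000 figure is from the comment above; the ~35,000 total is my rough assumption, in the right ballpark for recent years:

```python
# Round numbers only: 10,000 drunk-driving deaths (from the comment)
# out of an assumed ~35,000 total annual US road deaths.

total_deaths = 35_000
impaired_deaths = 10_000

sober_baseline = total_deaths - impaired_deaths
print(f"deaths excluding impaired driving: {sober_baseline} "
      f"({sober_baseline / total_deaths:.0%} of the raw total)")
# A computer never drives drunk, so a fleet that merely matched the raw
# human average would be underperforming sober humans by a wide margin.
```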
I think there is a point where assisted driving is no longer practical, and the evolution of autonomous cars either has to stop, to let the driver keep a say in how the car reacts to its environment, or move on to where there is no place for time-sensitive human input at all. As drivers trust their cars more they will pay attention less (not to mention practice less, an even deeper problem for post-autonomous-car generations than those before), and their ability to meaningfully react to stimuli will be roughly nil.
If we choose the former, we are probably arbitrarily limiting our safety in order to save the ego of a human driver. If we choose the latter, we will reach a point where we won't have a steering wheel at all, removing the chance for a driver to become a hazard rather than a help.
Do we expect computer assistance to make much of a difference? Right now they handle the easy tasks like highway driving which don't account for many accidents, as far as I know.
All of the brouhaha about self-driving cars is moot until real user testing begins. I'm not talking about driving a Google-mobile around the streets of Mountain View with an engineer in the driver's seat.
These vehicles (I would hope) are designed to drive around people in different circumstances. This is software for use by people, and until we have realistic tests with different kinds of people (ages, driving experience levels, etc.), it's all academic.
I'm somewhat wondering if all the self driving car stuff by Google and Tesla is primarily a vehicle (no pun intended) for marketing rather than actual tech that will yield a real product.
It seems, from previous reports by Google, that their ability to understand discrepancies between sensors relies on knowing the road very well: not just the map, with nuances like "cars can go there, but it's mainly a pedestrian street," but also local undocumented habits. They mentioned details that show they would have to drive a lot in new cities to learn enough to make their car safe.
The one that I remember is that they started driving in Austin and came across a very local species of vehicle with a unique habit: the hipster fixie rider doing a track stand at red lights. Committed fixie riders don't have freewheels, and they stay upright at a stop not by putting their feet down but by rocking back and forth; that motion was new and not interpreted clearly by the car, which hesitated over whether each rock was a false start. It's since been fixed. The article didn't specify whether the shirt pattern carried any meaningful weight in the interpretation.
If the humans always take the wheel in dangerous situations, how do you know the cars are safe? The cases where they are most likely to get into trouble are the cases where the AI is not in operation!
>we have to figure out just how to test these vehicles so we can know when a safety goal has been met. We also have to figure out what the safety goal is.
Or you can just crack on and try them with users, like Tesla does.
In a world of massive un- and under- employment, mechanical turk and massive connectivity, increasing income inequality, and generationally declining standards of living, a handful of millionaires will sit in their unimaginably expensive self driving car, and five mechanical turk remote drivers will cooperate on a 3 of 5 vote basis to drive the millionaire.
Most brainpower is underutilized, the quantity is increasing, and it's manufactured by unskilled labor. There eventually comes a peak-automation point where it's better and cheaper for the economy and culture to just hire a dude to do it.
The world of the future is one where only Ivy League grads have real jobs and all the money. What's so inherently awful about someone in the favelas getting a job as a driver, especially if it's cheaper and safer?
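For what it's worth, the 3-of-5 scheme above is just a small consensus problem. A toy median-vote sketch (function name and numbers invented):

```python
from statistics import median

# Toy sketch of the "3 of 5" idea: fuse five remote operators' steering
# inputs by taking the median, which tolerates up to two dropped links
# or wildly wrong inputs. Purely illustrative.

def fused_steering(operator_angles, min_quorum=3):
    """Return a consensus steering angle, or None if no quorum."""
    live = [a for a in operator_angles if a is not None]  # drop lost links
    if len(live) < min_quorum:
        return None  # no quorum: presumably fall back to a safe stop
    return median(live)

# One outlier and one dropped link still yield a sane consensus.
print(fused_steering([2.0, 2.1, 35.0, None, 1.9]))  # -> 2.05
```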
The world unemployment rate is 8% according to the CIA World Factbook. What's going to make this increase so dramatically? We've had industrial automation, computers, and the internet for decades now, and people still have jobs.
Also, why would you want someone remote-controlling your car instead of sitting in the driver's seat? Wireless connectivity is way too flaky: tunnels block the signal, rural areas have poor coverage, and equipment breaks and requires maintenance. It would be insanity to bet your life on a cellular signal.