Yesterday, I spoke at an event organized by America's Future Foundation
on the topic of self-driving cars.
Here is a summary of what I said.
(Disclaimer: I do work at Google,
but I have never worked on self-driving cars,
and do not possess any information that isn't already public.)
The most important point is that a self-driving car,
as being developed right now by Google and many competitors,
is not a general artificial intelligence
capable of replacing a human driver in all situations,
but a specialized artificial intelligence
that does one limited task, and does it well.
The driving robots are thus very good at things humans are bad at -
they never get tired, they never fall asleep at the wheel,
they never get drunk, they never get angry,
they never take a wrong turn,
they never assess speed or distance incorrectly,
they never forget the finer points of the driving code,
they never forget to refuel at the best-priced station,
they always make efficient use of fuel,
they always go through timely maintenance, etc.
They are very bad at dealing with exceptional situations.
Is this ball in the street just some irrelevant rubber ball you can drive over,
or is it a bronze ball that would cause a deadly accident if you ran over it?
What about those fallen branches on the road?
Is this road still useful despite the flooding, landslide, etc.?
Is this deer, child, etc., going to jump in front of the car?
How should the car handle temporary roadwork?
How should it deal with a flock of geese on the road?
Now, the hope is that even though exceptional situations may require
some human to take control of the vehicle, override the itinerary,
clear the road, or otherwise take action - or call for help and wait -
the lives saved overall are well worth
the inconvenience in the cases where the software fails.
And the lives saved are not just those from accidents that won't happen:
there are also all the hours of life-time reclaimed.
Someone who drives to a job an hour away and back home spends
two hours every day driving. That's over 10% of his waking hours.
Over forty years of work, the time reclaimed is the equivalent
of roughly four years of extra life in good health.
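As a back-of-the-envelope check of that figure (the working days
and waking hours below are illustrative assumptions of mine,
not numbers from the talk):

```python
# Back-of-the-envelope check: waking time reclaimed per commuter.
# All parameters are illustrative assumptions, not data from the talk.
hours_per_day = 2               # one hour each way
workdays_per_year = 250         # ~50 weeks of 5 working days
years_of_work = 40
waking_hours_per_day = 16       # 24 hours minus ~8 hours of sleep

reclaimed_hours = hours_per_day * workdays_per_year * years_of_work
reclaimed_years = reclaimed_hours / (waking_hours_per_day * 365)

print(f"{reclaimed_hours:,} hours ≈ {reclaimed_years:.1f} years of waking life")
# -> 20,000 hours ≈ 3.4 years, on the order of the four years above
```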
During their commute, people can sleep, eat, drink, relax, meditate,
dress, put on their makeup, read, talk, do their homework,
have sex, or do whatever else they prefer.
(Insert
Mr Bean driving in the morning.)
Disabled people will no longer be dependent upon someone else
to spend their time driving them around.
For a self-driving car does not replace a car:
it replaces a car plus a chauffeur.
It is more like a taxi than a personal car,
and a Zipcar-like pool of self-driving cars
can be time-shared among many people,
instead of each car having to be parked most of the day while its owner
works, plays, shops or sleeps.
If and when most cars become self-driving,
the need for street parking space will be much diminished,
and streets will suddenly become wider, further facilitating traffic.
Thus, even though self-driving cars may
cost two or three times as much as ordinary cars,
and even if they only cover limited areas where
temporary and permanent road changes
are guaranteed to be properly signaled for their sake,
they still represent a huge economic saving,
in better use of both human and material capital.
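To get a feel for that claim, here is a rough sketch of the comparison;
every figure in it is an illustrative assumption of mine,
not a number from the talk:

```python
# Yearly cost per user: chauffeured car vs. time-shared self-driving car.
# Every figure below is an illustrative assumption.
car_price = 25_000            # ordinary car
sdc_price = 3 * car_price     # "two or three times as much"
amortization_years = 5        # write each vehicle off over five years
chauffeur_salary = 40_000     # full-time human driver, per year
users_per_sdc = 4             # a Zipcar-like pool time-shares each car

chauffeured = car_price / amortization_years + chauffeur_salary
self_driving = sdc_price / amortization_years / users_per_sdc

print(f"chauffeured car:     ${chauffeured:>9,.0f} per user per year")
print(f"shared self-driving: ${self_driving:>9,.0f} per user per year")
# -> $45,000 vs. $3,750: dropping the chauffeur and sharing the car
#    dwarfs even a tripled sticker price
```

Even under much less favorable assumptions,
doing without the chauffeur and sharing the vehicle
dominates the difference in sticker price.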
As costs fall, people will be able to afford longer commutes from cheaper places,
and to enjoy life without being prisoners of public transportation schedules
or of the high price of a car or a taxi.
Over hundreds of millions of users,
tens of millions of extra productive life-times become available.
A boon for mankind.
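For the curious, here is where "tens of millions" comes from,
again under illustrative assumptions:

```python
# Scale the per-commuter gain up to a large user base.
# Illustrative assumptions, consistent with the estimate above.
users = 300e6                    # "hundreds of millions of users"
years_reclaimed_each = 4         # per-commuter gain estimated above
productive_years_per_life = 40   # one working life-time

lifetimes = users * years_reclaimed_each / productive_years_per_life
print(f"{lifetimes / 1e6:.0f} million productive life-times")
# -> 30 million: tens of millions, as claimed
```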
Now, another consequence of self-driving cars being specialized tools
rather than general artificial intelligences is that,
since they are not sentient, they cannot take responsibility
for the accidents that will happen.
The buck has to stop with someone, and that cannot be some dumb computer.
Only humans can be held accountable,
and humans will have to pay to cover damages to both passengers and third parties.
In the beginning, that means that only big companies with deep pockets can own such cars:
a large corporation like Google, willing to put its neck on the line;
insurance companies that expect to save a lot of money in damages avoided;
mutual funds where many small investors pool their savings together.
The same will be true of all upcoming autonomous robots:
small planes or quadcopters, cargo-carrying robots, etc.
They will need to be owned by people or corporations
who can afford to pay for any damages,
or insured by companies that will take on the responsibility.
[The following points, to the end of this paragraph, were not made during my speech.]
Note that owning autonomous vehicles is significantly riskier than
insuring human-controlled vehicles:
On the one hand, whereas the insurance for a human-controlled vehicle
typically covers only the first few million dollars of damages,
with any further liability disclaimed by the insurer
and pushed back onto the human driver,
the owner of an autonomous vehicle is the ultimately responsible party
and cannot limit its liability for damages to third parties.
On the other hand, there is a systemic risk that is hard to evaluate:
after, e.g., a flood, landslide, earthquake or catastrophic bug,
stubborn car behavior might cause not one accident but hundreds.
It can be hard to provision for such black-swan events,
though hopefully the average casualty rate after such events
would still remain lower than it currently is with human drivers.
The rise of self-driving cars will require changes in government.
First, self-driving cars may require support
from those government bureaucracies that (at least currently) manage roads,
so that the cars are made aware of temporary and permanent road changes.
Second, some regulatory amendments may be necessary for anyone to dare
assume the liability of owning a self-driving car.
Meanwhile, there are huge privacy issues,
as self-driving car companies get even more information
on the location and habits of passengers,
and government bureaucracies such as the NSA
may eventually get their hands on
the data that Google (or other operators) accumulate,
with or without active help from the companies.
For now, government rules lag behind technology,
but it is not a clear win when they catch up.
The last few centuries have seen an exponential growth
in human achievement through technology;
they have also witnessed an exponential growth of government,
taxes, statutes, bureaucracies, privileges and war capabilities.
Over the next few decades or centuries,
neither exponential growth is sustainable.
Whichever curve tops out first, the other wins -
and soon afterwards, the first curve likely drops to zero.
If government somehow stops growing first,
mankind will know a golden age of peace and prosperity.
If technology somehow stops making big strides first,
then as Orwell predicted,
"If you want a vision of the future, imagine a boot stamping on a human face - forever."
Now even though humans overall may prove forever incapable
of understanding and implementing liberty
[and indeed may only get dumber and more subservient
due to government-induced dysgenics],
that might not matter for our far future.
For eventually, whether a few decades or a few centuries in the future,
General Artificial Intelligence may indeed be created.
Then not only will artificial sentient beings be able to assume
the responsibility for self-driving cars,
they will soon enough be at the top of the food chain,
and assume ownership of, and responsibility for, everything
- and not just on Earth, but across the Solar System, the Galaxy, and beyond.
When that happens, we had better hope that
these AIs, if not humans, understand the importance of Property Rights;
if they do, humans can live a life of plenty
based on the capital they have accumulated;
otherwise, our end will be very painful.
And so, let's hope the first AI isn't a military robot
hell-bent on killing humans, without any respect for property rights.