RE: Self-Driving Cars No Longer Fear Inclement Weather at Night [VIDEO]

We talked a lot about the ethics: how is the car going to react if there is an accident or incident? E.g., a kid runs out into the street. To avoid the kid, the car runs into something and the driver dies. So how is the AI going to be programmed to decide what to do? Lots of stuff like that :) And what about when there is no power? Are people still going to be able to drive, or will they become even more helpless....

Those are really excellent questions. And Big Tech has been fairly silent so far about how to answer them. I'll share with you what I know.

One ethics thought experiment goes like this: the car is presented with a sudden emergency situation in which there are three possible courses of action. Action 1 steers the car into a child who had been walking across the road. Action 2 steers the car into a small crowd of 10 people who had been waiting at a bus stop on the side of the road. Action 3 steers the car into a tree, killing the driver.

The first problem is that none of the current AI implementations (e.g., convolutional neural networks for computer vision and object recognition, or reinforcement learning for path planning) has achieved anything close to machine cognition. So when the machine makes its decision, it will have no idea what the consequences of that action are (killing one person versus killing a crowd, or killing a crowd versus killing a child). It will make its decision based on whether it saw something like this during training, and then act according to whatever was the "right answer" in that training scenario.
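To make that concrete, here's a toy sketch of "pick the action from the closest training scenario." All the names, feature vectors, and actions are my own illustration, not any real autopilot code; the point is just that nothing in this decision procedure reasons about consequences:

```python
# Hypothetical training scenarios: (obstacle_size, obstacle_speed, distance) -> action
training_scenarios = [
    ((0.3, 1.2, 5.0), "brake_hard"),   # small, fast, close
    ((1.8, 0.0, 20.0), "steer_left"),  # large, static, far
    ((0.5, 0.8, 12.0), "brake_soft"),
]

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def decide(current):
    # No model of outcomes, no ethics: just the nearest training example.
    _, action = min(
        (squared_distance(current, feats), act) for feats, act in training_scenarios
    )
    return action

print(decide((0.4, 1.1, 6.0)))  # resembles the first scenario -> "brake_hard"
```

A real network replaces the explicit distance lookup with learned weights, but the flavor is the same: similarity to past examples, not understanding.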

But it gets worse. Training an AI is notoriously inefficient. You have to show a CNN something like 5,000 photos of dogs before it can recognize a dog, and even then it probably won't recognize a puppy. The reason has to do with backpropagation and batch gradient descent. As the machine evaluates each batch of training cases, it computes the value of a cost function (imagine all possible settings of the network's weights as forming a surface hanging in some weird multidimensional space ... a strangely shaped "bowl"), and the goal of training is to keep moving "downhill" on this surface (by downhill, I mean in the direction of lower cost). But mathematically, you can only take tiny steps in any direction: the gradient only tells you which way is downhill right where you are standing, so a big step can overshoot the bottom of the bowl and land you somewhere with an even higher cost than where you started.
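You can see the "tiny steps" problem in a minimal 1-D sketch. The cost bowl here is just f(w) = w², and the step size (learning rate) values are my own illustrative picks:

```python
def gradient(w):
    return 2 * w  # derivative of the cost f(w) = w**2

def descend(w, learning_rate, steps=20):
    # Repeatedly step "downhill" against the gradient.
    for _ in range(steps):
        w = w - learning_rate * gradient(w)
    return w

print(descend(5.0, 0.1))  # tiny steps: w shrinks toward the minimum at 0
print(descend(5.0, 1.1))  # step too big: every update overshoots, w blows up
```

With the small step size, each update lands a bit further down the bowl; with the big one, each update jumps clear across the bottom and ends up higher on the opposite wall, forever.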

The consequence? There are millions of combinations of real-world emergency conditions, and to learn them all, you would need to train the car on thousands of examples of each. Good luck with that. Self-driving cars today work fairly well under normal conditions, because normal conditions are fewer in number and thus trainable. But the number of edge cases shoots up combinatorially, and as things stand, we have no way to train for all of them.
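Some back-of-the-envelope arithmetic shows how fast this blows up. The condition dimensions and counts below are made-up illustrative numbers, not from any real training pipeline:

```python
import math

# Hypothetical independent condition dimensions for an emergency scenario
factors = {
    "weather": 6,          # clear, rain, snow, fog, hail, glare
    "lighting": 4,         # day, dusk, night, tunnel
    "road_type": 5,
    "obstacle_kind": 10,
    "obstacle_motion": 5,
}

combinations = math.prod(factors.values())
examples_per_case = 1000   # assumed training examples needed per scenario

print(combinations)                      # 6,000 distinct scenarios
print(combinations * examples_per_case)  # 6,000,000 training examples
```

And that's with only five coarse dimensions; add a few more (vehicle speed, road surface, sensor degradation...) and the product multiplies again with each one.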

Then there are the legal problems. If the car kills the driver, the driver's family can sue the manufacturer for wrongful death on a products liability claim. If the car kills the kid or the crowd, who can their families sue?

Could they sue the driver? Maybe not. How was the driver negligent (especially if the car doesn't have a steering wheel)? There is nothing that a reasonably prudent driver could have done to avoid the accident. And what is the duty of care, and how was it breached, if the driver could do nothing but sit there and watch the carnage unfold?

Could they sue the manufacturer of the robot car? Maybe not. A products liability case is usually between a manufacturer of a product and a user of that product, not an innocent bystander. Some courts are opening this up, but that means the answer changes from state to state. And we've never seen a machine being sued for negligence. How does a machine owe a duty of care to a person? How could a court possibly enforce that?

And for another twist: lobbying by manufacturers. You can expect them to beat everyone to the punch by bribing (um, lobbying) Congress to pass a law protecting them from lawsuits. Now what?

As far as I know, all of these ethical, legal, and technological issues remain unresolved.

And this, my friend, you should post as another post :). Well, one of our thought scenarios was that the expensive cars will always save the driver...

Thanks for the kind words. And good point. No one would ever pay a luxury premium for a Mercedes-Benz or BMW if there were even a remote possibility that the car would decide to kill its wealthy owner. ;)