Self-Driving Cars No Longer Fear Inclement Weather at Night [VIDEO]

in #machine-learning · 7 years ago (edited)

Previously, self-driving cars didn't handle nighttime drives in the rain very well. But all that has changed.

In this breakthrough video from the team at drive.ai, a test car roams the streets of Mountain View, CA near sundown as a light rain hits the windshield. As the sun sets, and the sky darkens, the rain gets a bit worse. But the robot car fearlessly glides along, handling the challenging environment without a hitch.

Sit back and enjoy the ride. And who knows, a robotic car may be in your near future.


Credit: drive.ai video on YouTube.

For more information, see the company's web site at https://www.drive.ai.


The @OriginalWorks bot has determined this post by @terenceplizga to be original material and upvoted it!



Ha! We just talked at length about self-driving cars today at the dinner table.

Now that's a coincidence, @mariannewest. How did that discussion go?

We talked a lot about the ethics: how is the car going to react if there is an accident or incident? Say a kid runs out into the street. To avoid the kid, the car is going to run into something and the driver will be dead. So how is the AI going to be programmed to decide what to do? Lots of stuff like that :) And what about when there is no power? Are people still going to be able to drive, or will they become even more helpless...

Those are really excellent questions. And Big Tech has been fairly silent so far about how to answer them. I'll share with you what I know.

One ethics thought experiment goes like this: the car is presented with a sudden emergency situation in which there are three possible courses of action. Action 1 steers the car into a child who had been walking across the road. Action 2 steers the car into a small crowd of 10 people who had been waiting at a bus stop on the side of the road. Action 3 steers the car into a tree, killing the driver.

The first problem is that none of the current implementations of AI (e.g., convolutional neural network for computer vision and object recognition, and reinforcement learning for path planning) has achieved anything close to machine cognition. So when the machine makes its decision, it will have no idea what the consequences of that action are (killing one person versus killing a crowd, or killing a crowd versus killing a child). It will make its decision based on whether it saw something like this in its training before, and then act according to what was the "right answer" in the training scenario.

But it gets worse. Training an AI is notoriously inefficient. You have to show a CNN something like 5,000 photos of dogs before it can recognize a dog. And even then, it probably won't recognize a puppy. The inefficiency has to do with the use of backpropagation and batch gradient descent. As the machine evaluates each batch of training cases, it computes the value of a cost function (imagine all the values this function can take as forming a weirdly shaped, bowl-like surface hanging in a high-dimensional space), and the goal of training is to keep moving "downhill" on this surface (by downhill, I mean in the direction of lower cost). But mathematically, it's only safe to take a tiny step in any one direction at a time; too large a step overshoots the bottom of the bowl and can send the cost climbing instead of falling, which is like lurching toward a solution that's worse, not better.
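The "tiny step downhill" idea can be sketched in a few lines of Python. This is a toy one-parameter cost function standing in for a real CNN loss (the function, learning rates, and step counts are all illustrative, not drawn from any actual self-driving system); it shows both the slow convergence of small steps and the divergence that a too-large step causes.

```python
def cost(w):
    # Toy "bowl shaped" cost surface with its minimum at w = 3.
    return (w - 3.0) ** 2

def grad(w):
    # Derivative of the cost with respect to the parameter w.
    return 2.0 * (w - 3.0)

def descend(w, lr, steps):
    # Gradient descent: repeatedly step "downhill" by lr times the slope.
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

# A small learning rate creeps toward the minimum at w = 3...
print(round(descend(w=0.0, lr=0.1, steps=50), 3))  # ~3.0

# ...but a learning rate that is too large overshoots the bowl on every
# step and the parameter blows up instead of settling.
print(abs(descend(w=0.0, lr=1.1, steps=50)) > 1000)  # True
```

The same trade-off holds in a CNN with millions of parameters instead of one, which is part of why training needs so many passes over so many examples.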

The consequence? There are millions of combinations of real world emergency conditions, and to learn them all, you would need the car to train on thousands of examples of each. Good luck with that. Self-driving cars today work fairly well under normal conditions, because normal conditions are fewer in number and are thus trainable. But the number of error cases shoots up exponentially. As things stand, we have no way to train for all of them.
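To make the scale of the problem concrete, here is a back-of-the-envelope sketch with made-up category counts (the condition names and numbers are assumptions for illustration, not real taxonomy from any manufacturer). Even a handful of independent driving conditions multiplies into hundreds of distinct scenarios, and pairing that with the 5,000-examples-per-concept figure cited above gives an enormous data requirement:

```python
# Hypothetical, illustrative counts of independent driving conditions.
conditions = {
    "weather": 5,    # e.g., clear, rain, snow, fog, ice
    "lighting": 3,   # day, dusk, night
    "road_type": 4,  # highway, urban, rural, parking lot
    "obstacle": 6,   # pedestrian, cyclist, animal, debris, vehicle, none
}

# The scenario space is the product of the category sizes.
combos = 1
for n in conditions.values():
    combos *= n
print(combos)  # 360 distinct scenarios

# At roughly 5,000 training examples per scenario:
print(combos * 5000)  # 1,800,000 labeled examples
```

And that is with only four coarse dimensions; every extra dimension (traffic density, sensor degradation, construction zones, ...) multiplies the total again.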

Then there are the legal problems. If the car kills the driver, the driver's family can sue the manufacturer for wrongful death on a products liability claim. If the car kills the kid or the crowd, who can their families sue?

Could they sue the driver? Maybe not. How was the driver negligent (especially if the car doesn't have a steering wheel)? There is nothing that a reasonably prudent driver could have done to avoid the accident. And what is the duty of care, and how was it breached, if the driver could do nothing but sit there and watch the carnage unfold?

Could they sue the manufacturer of the robot car? Maybe not. A products liability case is usually between a manufacturer of a product and a user of that product, not an innocent bystander. Some courts are opening this up, but that means the answer changes from state to state. And we've never seen a machine being sued for negligence. How does a machine owe a duty of care to a person? How could a court possibly enforce that?

And for another twist: lobbying by manufacturers. You can expect them to beat everyone to the punch by bribing (um, lobbying) Congress to pass a law protecting them from lawsuits. Now what?

As far as I know, all of these ethical, legal, and technological issues remain unresolved.

And this, my friend, you should post as another post :). Well, one of our thought scenarios was that the expensive cars will always save the driver...

Thanks for the kind words. And good point. No one would ever pay a luxury premium for a Mercedes-Benz or BMW if there were even a remote possibility that the car would decide to kill its wealthy owner. ;)


Oh, I wish I had seen it when I was there about a week ago.