Four lidar detectors. Four fisheye cameras. Two narrow field-of-view cameras. GPS.

We may be cruising down a sun-dappled Silicon Valley expressway within the posted speed limit. But every second, untold quantities of data are being scooped up by our specially tricked-out luxury sports sedan.

Yet without the ability to bring the data from these sensors together, it's all kind of useless.

We can see all around the car as we buzz through Silicon Valley's office parks - thanks to a display our engineers have tacked onto the center console that lets us see through our Audi A6's four fisheye lenses. It's not enough, however, to avoid a toot from another driver as we swing through a turn.

A Look at the Road Ahead (and Behind)

There couldn't be a better example of why we're hustling deep learning out of data centers and into the thick of traffic.

For our NVIDIA DRIVE PX 2 launch and demonstration at CES 2016 this week in Las Vegas, we've set up a giant version of a car dashboard powered by the system.

The car's 'windshield' shows the video we captured while on the road with our Audi. Beneath it, our demo of a next-gen instrument cluster lets you instantly see it all - the roadway, other cars and a calculated path forward for the vehicle - as if you were seeing the car from above and behind.

So you can orient yourself with a glance. And be far more confident than if you'd taken it all in by swiveling your head between the windshield and the side and rear-view mirrors.

A Digital Engine for Next-Gen Driving

Our automotive deep learning story starts with NVIDIA DRIVE PX 2. Built around powerful next-gen Tegra processors and discrete GPUs based on our Pascal architecture, DRIVE PX 2 packs more than 24 trillion deep learning operations per second into a system that could fit in a glove box.

That kind of power is the only way to process - in real time - the huge volumes of data flowing into the car from all of its sensors. The big idea is to take the bulky racks of servers that researchers have jerry-rigged into sensor fusion systems of their own, as science projects, and replace them with a compact system that can slide inside production cars.

Those 24 trillion deep learning operations per second are enough to weave the data collected by our complement of sensors into a wide range of advanced driver assistance features in real time. Features like surround view, collision avoidance, pedestrian detection, cross-traffic monitoring and driver-state monitoring are all possible if this data can be processed - and made available to drivers - quickly enough.
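To make "quickly enough" concrete: a camera running at 30 frames per second leaves roughly 33 milliseconds to fuse the sensor data and run detection on each frame. Below is a minimal, hypothetical sketch of that per-frame budget in Python. The frame grabber, stitching step and detector are dummy placeholders standing in for the real sensor fusion and neural network stages; this is not NVIDIA's actual software stack.

```python
import time
import numpy as np

FRAME_BUDGET_S = 1.0 / 30   # ~33 ms per frame at a 30 fps camera rate

def grab_frames():
    """Placeholder for reading the four fisheye cameras (dummy images here)."""
    return {name: np.zeros((720, 1280, 3), dtype=np.uint8)
            for name in ("front", "rear", "left", "right")}

def stitch_surround_view(frames):
    """Placeholder fusion step: a real system would warp and blend the views."""
    return np.concatenate(list(frames.values()), axis=1)

def detect_objects(image):
    """Placeholder detector: a real system would run a trained neural network."""
    return []   # e.g. a list of (label, bounding_box) tuples

for _ in range(100):                 # bounded loop for the sketch; a car runs continuously
    start = time.monotonic()
    frames = grab_frames()
    surround = stitch_surround_view(frames)
    detections = detect_objects(surround)
    elapsed = time.monotonic() - start
    if elapsed > FRAME_BUDGET_S:
        print(f"frame overran the real-time budget: {elapsed * 1000:.1f} ms")
```

The point of the sketch is simply the arithmetic: every stage of the pipeline has to fit inside that per-frame budget, for every sensor, all at once.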

Going Deeper

There's more coming. We built DRIVE PX 2 to tap deep learning, letting it learn from vast quantities of information. DRIVE PX 2 includes a deep neural network software development kit we call DIGITS, as well as video capture and video processing libraries.

Much as a human learns through experience, so do deep neural networks. This is why our sensor-equipped Audi has logged so many hours of driving up and down Silicon Valley's Highway 101 over the past few months. The more data we collect, the smarter our system becomes.
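As a loose illustration of what that training looks like, here is a minimal sketch in Python, using PyTorch as a stand-in for the GPU-accelerated tools. The tiny network and the random "logged frames" are placeholders, not the models or data NVIDIA actually trains on; the only point is that each pass over more driving data nudges the network's weights.

```python
import torch
import torch.nn as nn

# Toy stand-ins for logged camera frames and labels (e.g. "pedestrian present").
images = torch.randn(256, 3, 64, 64)          # 256 fake 64x64 RGB frames
labels = torch.randint(0, 2, (256,))          # fake binary labels

# A deliberately tiny convolutional network; production networks are far larger.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Every pass over the logged data adjusts the weights; more (and more varied)
# driving data is what makes the resulting model better.
for epoch in range(5):
    for i in range(0, len(images), 32):
        batch, target = images[i:i + 32], labels[i:i + 32]
        optimizer.zero_grad()
        loss = loss_fn(model(batch), target)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```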

Automakers can then load the model created by these GPU-powered deep learning systems into vehicles. Once there, it can run in real time on DRIVE PX 2. It's a system developers can train - and retrain - with more data.
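Conceptually, deployment is just exporting the trained weights once and running the network's forward pass on every new frame. Here is a hedged companion to the training sketch above, again in Python with PyTorch and placeholder data; the real in-vehicle runtime uses NVIDIA's optimized libraries rather than anything like this.

```python
import torch
import torch.nn as nn

def build_model():
    """Same tiny placeholder network as in the training sketch above."""
    return nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 2),
    )

# In the data center: save the trained weights to a file the vehicle can load.
trained = build_model()                       # in practice, the model trained on logged drives
torch.save(trained.state_dict(), "driving_model.pt")

# In the vehicle: load the shipped weights and run inference on each new frame.
deployed = build_model()
deployed.load_state_dict(torch.load("driving_model.pt"))
deployed.eval()

frame = torch.randn(1, 3, 64, 64)             # one incoming camera frame (dummy data)
with torch.no_grad():
    scores = deployed(frame)
print("predicted class:", scores.argmax(dim=1).item())

# Retraining with more data and shipping a new weights file is, conceptually,
# all an over-the-air model update amounts to.
```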

So your smart car can get smarter with over-the-air software updates. Talk about acceleration.
