
Nvidia announces ‘DRIVE Map’, a high-definition mapping platform for autonomous vehicles using crowdsourced data – FutureCar.com

Author: Eric Walz

What if you could create 3D road maps with centimeter precision? That’s what chipmaker Nvidia Corp has done with its new NVIDIA DRIVE Map platform. The company plans to map hundreds of thousands of miles of roads in the United States, China and Europe to help autonomous vehicles navigate safely with an accuracy of 5 centimeters or better.

Nvidia CEO and Founder Jensen Huang announced the new DRIVE Map platform during his keynote at Nvidia’s annual GTC technology conference on Tuesday.

DRIVE Map is a scalable, multimodal mapping engine designed to accelerate the deployment of Level 3 and Level 4 autonomous vehicles, which are designed to operate with little or no human intervention. To achieve these higher levels of autonomy, vehicles require much more detailed maps in order to navigate safely without human assistance.

Nvidia accelerated development of DRIVE Map after acquiring HD mapping startup DeepMap last year. The startup’s core mapping technologies have been integrated into DRIVE Map, which combines the precision of DeepMap’s survey mapping with the scale of AI-powered crowdsourced mapping.

Prior to Nvidia’s acquisition of the company, DeepMap specialized in fusing images from digital cameras, radar and 3D lidar data collected from passenger vehicles to create its high-definition maps for autonomous vehicles.

Ride-sharing company Lyft Inc has also experimented with using crowdsourced data to create highly accurate HD maps. In 2020, some drivers on Lyft’s ridesharing platform began using small, low-cost dashcams to collect footage of intersections, cyclists, pedestrians and the behavior of other drivers while they were on the road picking up passengers.

HD maps used by self-driving vehicles include semantic detail not found on the standard 2D maps that millions of drivers use every day for turn-by-turn directions. These highly detailed 3D maps record the exact positions of road markings, traffic signs, crosswalks, curbs and other infrastructure.

DeepMap’s other work has focused on updating these maps with crowdsourced data collected from vehicles and on making them available in real time. Keeping HD maps up-to-date and accessible in real time has been a major challenge for self-driving vehicle developers.

Nvidia’s DRIVE Map is designed to support autonomous vehicles anywhere in the world. Nvidia is creating HD maps of major highways in North America, Europe and Asia, and says it will provide survey-level ground truth map coverage of 500,000 kilometers (about 310,000 miles) of roads across the three regions by 2024. The maps will be continuously updated and expanded with data collected from millions of passenger vehicles.


DRIVE Map details generated using radar scans.

Three map layers: camera, radar and lidar

The multi-layered DRIVE Map contains location layers for camera, radar and lidar sensors, so an autonomous vehicle can pinpoint exactly where it is on the map using one or more of the three layers. The AI driver can localize itself independently on each layer of the map, providing the additional redundancy required for SAE Level 3 and Level 4 autonomous driving.
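
To make the redundancy idea concrete, here is a minimal, hypothetical Python sketch (this is not Nvidia’s API; every name is invented for illustration) of how independent pose estimates from the camera, radar and lidar layers might be cross-checked and fused:

```python
# Hypothetical sketch only -- not Nvidia's implementation. Each map layer yields
# its own pose estimate; the estimates are fused and cross-checked so a single
# failing layer does not go unnoticed.
from dataclasses import dataclass
from statistics import median

@dataclass
class Pose:
    x: float        # metres in the map frame
    y: float        # metres in the map frame
    heading: float  # radians

def fuse_layer_poses(poses: dict[str, Pose], tolerance_m: float = 0.05) -> Pose:
    """Fuse independent camera/radar/lidar estimates and flag any layer that
    disagrees by more than the ~5 cm accuracy the map is said to provide."""
    fused = Pose(
        x=median(p.x for p in poses.values()),
        y=median(p.y for p in poses.values()),
        heading=median(p.heading for p in poses.values()),
    )
    for layer, p in poses.items():
        if abs(p.x - fused.x) > tolerance_m or abs(p.y - fused.y) > tolerance_m:
            raise RuntimeError(f"{layer} layer disagrees with the others -- degrade safely")
    return fused

if __name__ == "__main__":
    estimates = {
        "camera": Pose(120.03, 45.01, 0.002),
        "radar":  Pose(120.01, 45.02, 0.001),
        "lidar":  Pose(120.02, 45.00, 0.001),
    }
    print(fuse_layer_poses(estimates))
```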

The camera location layer consists of the same details that human drivers see while navigating, such as lane dividers, road markings, road boundaries, traffic lights, signs and utility poles.

The map’s radar location layer is an aggregated point cloud of radar returns that is also used to determine the precise location of an autonomous vehicle. Radar data is especially useful in low light and poor weather conditions such as rain or fog, where cameras and lidar don’t work as well.

Using radar for localization is also useful in suburban areas where typical map attributes are not always available, allowing the AI driver to localize itself based on surrounding objects picked up by radar scans.

The lidar voxel layer provides the most accurate and reliable representation of the environment. It builds a 3D representation of the world at a resolution of 5 centimeters, according to Nvidia. The company says this level of accuracy is impossible to achieve with data from cameras and radar alone.
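
As a toy illustration of what a voxel layer means in practice (this is not Nvidia’s actual data format), a lidar point cloud can be snapped to a 5-centimeter grid and stored as a set of occupied cells:

```python
# Toy illustration of a voxel map layer, not Nvidia's format: lidar points are
# quantized to a 5 cm grid and kept as a set of occupied voxel indices.
import numpy as np

VOXEL_SIZE_M = 0.05  # 5 cm resolution, as cited for the lidar layer

def voxelize(points: np.ndarray) -> set[tuple[int, int, int]]:
    """points: (N, 3) array of x, y, z in metres -> set of occupied voxel indices."""
    idx = np.floor(points / VOXEL_SIZE_M).astype(int)
    return {tuple(v) for v in idx}

if __name__ == "__main__":
    cloud = np.random.rand(1000, 3) * 10.0  # fake 10 m x 10 m x 10 m scan
    occupied = voxelize(cloud)
    print(f"{len(occupied)} occupied 5 cm voxels")
```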

Once a vehicle is precisely localized on the map, the AI can use detailed semantic information from the map to center the vehicle in its lane and drive in a way that other road users expect.

Semantic map data includes features such as road layouts, turn lanes, crosswalks, and traffic lights, as well as how all the features interconnect for navigation. It’s similar to how human drivers navigate using GPS with turn-by-turn directions.
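
Here is a small, purely illustrative sketch of what such interconnected semantic data could look like (the schema is an assumption for the example, not DRIVE Map’s actual data model):

```python
# Purely illustrative semantic-map schema -- not DRIVE Map's actual data model.
# Lanes point to their legal successors and to the crosswalks and traffic lights
# that govern them, so a planner can walk the graph the way a driver follows
# turn-by-turn directions.
from dataclasses import dataclass, field

@dataclass
class Lane:
    lane_id: str
    successors: list[str] = field(default_factory=list)  # lanes reachable next
    crosswalks: list[str] = field(default_factory=list)  # crosswalk ids on this lane
    traffic_light: str | None = None                      # governing light, if any

# A small T-junction: lane "a" continues straight into "b" or turns into "c".
lanes = {
    "a": Lane("a", successors=["b", "c"], traffic_light="tl_1"),
    "b": Lane("b", crosswalks=["cw_1"]),
    "c": Lane("c"),
}

def next_lanes(lane_id: str) -> list[str]:
    """Lanes the vehicle may legally enter from lane_id."""
    return lanes[lane_id].successors

print(next_lanes("a"))  # ['b', 'c']
```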

DRIVE Map is actually built using two separate map engines: a ground truth survey map engine and a crowdsourced map engine. Ground truth data is collected from dedicated survey vehicles, while the crowdsourced map engine is built from data collected from passenger vehicles that travel through the mapped areas.

This approach delivers the centimeter-level precision of dedicated survey vehicles, along with the freshness and scale that can only be achieved by millions of passenger vehicles continuously updating and expanding the map with real-world data.
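
As a rough illustration of that idea, with the update rule assumed purely for the example (Nvidia has not published its actual logic), a surveyed feature might accept a crowdsourced correction only when enough recent observations consistently disagree with the survey:

```python
# Assumed-for-illustration freshness logic; Nvidia's real update rules are not public.
def updated_position(surveyed: tuple[float, float],
                     observations: list[tuple[float, float]],
                     min_reports: int = 20,
                     drift_threshold_m: float = 0.10) -> tuple[float, float]:
    """Return the crowdsourced mean position only if many recent observations
    consistently disagree with the survey; otherwise keep the surveyed value."""
    if len(observations) < min_reports:
        return surveyed
    mean_x = sum(x for x, _ in observations) / len(observations)
    mean_y = sum(y for _, y in observations) / len(observations)
    drift = ((mean_x - surveyed[0]) ** 2 + (mean_y - surveyed[1]) ** 2) ** 0.5
    return (mean_x, mean_y) if drift > drift_threshold_m else surveyed

# 25 fresh observations place a sign about 20 cm from its surveyed position,
# so the crowdsourced mean wins.
print(updated_position((10.0, 5.0), [(10.2, 5.0)] * 25))
```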

The ground truth engine is based on DeepMap survey map engine technology, which has been developed and verified over the past six years.

The AI-based crowdsourced engine gathers map updates from millions of cars, constantly uploading new data to the cloud as the vehicles drive. The data is then aggregated at full fidelity in NVIDIA Omniverse and used to update the map, delivering live map updates back to the real-world fleet within hours.

DRIVE Map also provides a data interface called “DRIVE MapStream,” which allows any car that meets the DRIVE Map requirements to continuously update the map using camera, radar and lidar data collected by the vehicle.
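
Nvidia has not published the MapStream schema, but a hypothetical payload (field names assumed purely for illustration) gives a sense of what a qualifying car might continuously upload:

```python
# Hypothetical example of the kind of payload a MapStream-style interface might
# accept; the structure and field names are assumptions, not Nvidia's schema.
import json
import time

def build_map_update(vehicle_id: str, pose: dict, detections: list[dict]) -> str:
    """Bundle one localized set of camera/radar/lidar detections for upload."""
    return json.dumps({
        "vehicle_id": vehicle_id,
        "timestamp": time.time(),
        "pose": pose,              # where the car was when it observed the features
        "detections": detections,  # e.g. signs, lane markings, poles
    })

payload = build_map_update(
    "car-0042",
    pose={"lat": 37.3861, "lon": -122.0839, "heading_deg": 92.5},
    detections=[{"type": "speed_limit_sign", "value_kph": 50,
                 "offset_m": {"x": 14.2, "y": -3.1}}],
)
print(payload)
```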

An Earth-scale “digital twin”

In addition to helping AI-powered autonomous driving systems make better driving decisions, DRIVE Map will help accelerate the deployment of autonomous vehicles by generating training data for deep neural networks (DNNs), as well as for testing and validation purposes.

These workflows are centered on NVIDIA Omniverse, where real-world map data is loaded and stored. Omniverse maintains this Earth-scale digital twin representation, which will be continually updated using survey mapping vehicles as well as millions of passenger vehicles.

For autonomous vehicle developers, Omniverse includes automated content generation tools, so the detailed map can be converted into a drivable simulation environment for use with NVIDIA DRIVE Sim to improve AI-powered self-driving software. According to Nvidia, features such as road elevation, road markings, islands, traffic lights, signs and utility poles are reproduced in the simulation environment with centimeter accuracy.

Nvidia has developed a powerful computer simulation environment that provides developers of autonomous technologies with an “artificial universe” in which to train robotaxis and autonomous vehicles for real-world driving, built from real-world data.

Autonomous vehicle developers can also use the simulated environment to generate edge case training scenarios that are not available from real data or are difficult to obtain using survey vehicles.

AV developers can also test their software in the safety of the digital twin environment before deploying autonomous vehicles in the real world.

The digital twin gives fleet operators a complete virtual view of where vehicles are driving around the world, assisting in remote operation if needed.

The new DRIVE Map is a highly versatile and scalable platform from Nvidia. It equips autonomous vehicles with a deep understanding of the real world, which can help developers improve a vehicle’s AI-powered autonomous driving capabilities.

Nvidia said DRIVE Map will be available for the entire autonomous vehicle industry.