NVIDIA is at the forefront of the autonomous vehicle ( AV ) revolution, bringing more than 20 years of automotive safety, software, and computing expertise to power innovation from the cloud to the car.
Mobility leaders are showcasing their latest developments built on NVIDIA technologies that power passenger cars, trucks, commercial vehicles, and more at GTC, a global AI conference taking place this week in San Jose, California.
Mobility leaders are increasingly using NVIDIA’s three core accelerated compute platforms: NVIDIA DGX systems for training the AI-based stack in the data center, NVIDIA Omniverse for simulation and synthetic data generation, and NVIDIA DRIVE AGX in-vehicle computers.
This opens up new avenues for automakers and developers in the multitrillion-dollar automotive market to design, build, and deploy safe, intelligent mobility solutions, delivering safer, smarter, and more enjoyable driving experiences.
Transforming Transportation Systems
General Motors ( GM ), the largest automaker in the United States, is working with NVIDIA to build and manufacture its next generation of vehicles, factories, and robots using NVIDIA accelerated compute platforms. GM has invested in NVIDIA GPU platforms for AI model training.
The companies’ expanded collaboration includes using NVIDIA Omniverse with Cosmos to optimize factory planning, and deploying next-generation vehicles at scale, accelerated by NVIDIA DRIVE AGX. This will enable GM to build physical AI systems tailored to the company’s vision, craft, and know-how, ultimately making mobility safer, smarter, and more accessible than ever.
Volvo Cars and its subsidiary Zenseact use the NVIDIA DGX platform to analyze and contextualize sensor data, unlock new insights, and train future safety models that will improve overall vehicle performance and safety. Volvo’s next-generation electric vehicles use the NVIDIA DRIVE AGX in-vehicle computer.
Lenovo and robotics company Nuro have developed a robust end-to-end system for autonomous vehicles that prioritizes safety, reliability, and convenience. The system is built on NVIDIA DRIVE AGX in-vehicle compute.
Advancements in Trucking
NVIDIA’s AI-driven systems are also advancing trucking, addressing pressing industry challenges such as driver shortages, growing e-commerce demand, and high operating costs. The computational capabilities of NVIDIA DRIVE AGX enable significant improvements to road safety and freight transport at scale.
Gatik is integrating DRIVE AGX for the onboard AI processing required for its class 6 and 7 trucks, which are made by Isuzu Motors and provide Fortune 500 customers, including Tyson Foods, Kroger, and Loblaw, with driverless middle-mile delivery of a wide range of goods.
Uber Freight is also incorporating DRIVE AGX into its current and future carrier fleets, thereby sustainably enhancing efficiency and lowering costs for shippers.
Torc is developing a scalable, physical AI compute system for autonomous trucks. The system uses NVIDIA DRIVE AGX in-vehicle compute and the NVIDIA DriveOS operating system, along with Flex’s Jupiter platform and manufacturing capabilities, to support Torc’s product development and scaled market entry in 2027.
Growing Demand for DRIVE AGX
The NVIDIA DRIVE AGX Orin platform is the AI engine behind today’s intelligent fleets, and production vehicles built using the central car computer are already hitting the roads.
Magna, a major global automotive supplier, is helping to meet the growing demand for the DRIVE AGX Thor platform, which is built on the NVIDIA Blackwell architecture and designed for the most demanding processing workloads, including vision language models and large language models ( LLMs ). Magna will develop driving systems built on DRIVE AGX Thor for integration into automakers’ vehicle roadmaps, delivering active safety and comfort functions as well as immersive AI experiences in the cabin.
Simulation and Data: The Backbone of AV Development
Earlier this year, NVIDIA released the Omniverse Blueprint for AV simulation, a reference workflow for creating rich 3D environments for autonomous vehicle testing, validation, and training. The blueprint is expanding to include NVIDIA Cosmos world foundation models ( WFMs ) to increase photoreal data variation.
Cosmos, first introduced at the CES trade show in January, is already being adopted in automotive. Plus is incorporating Cosmos physical AI models into its SuperDrive technology, accelerating the development of level 4 self-driving trucks.
Foretellix is extending its use of the Cosmos Transfer WFM to vary conditions such as weather and lighting in its sensor simulation scenarios, increasing scenario diversity. Mcity is integrating the blueprint into the digital twin of its AV testing facility to enable physics-based modeling of camera, lidar, radar, and ultrasonic sensor data.
CARLA, provider of an open-source AV simulator, has integrated the blueprint to deliver high-fidelity sensor simulation. Capgemini, a global systems integrator, will be the first to use CARLA’s Omniverse integration in its AV development platform for enhanced sensor simulation.
NVIDIA is training and fine-tuning NVIDIA Cosmos’ simulation capabilities using Nexar’s extensive, high-quality edge-case data. Nexar, in turn, is using Cosmos, neural infrastructure models, and the NVIDIA DGX Cloud platform to improve its AI development and refine AV training, high-definition mapping, and predictive modeling.
Enhancing In-Vehicle Experiences With NVIDIA AI Enterprise
Mobility leaders are integrating the NVIDIA AI Enterprise software platform, running on DRIVE AGX, to enhance in-vehicle experiences.
Cerence AI will present Cerence xUI, its new LLM-based AI assistant platform, at GTC to advance the development of agentic in-vehicle user experiences. The hybrid Cerence xUI platform, first developed for NVIDIA DRIVE AGX Orin, runs both onboard the vehicle and in the cloud.
The CaLLM family of language models, built on open-source foundation models and fine-tuned on Cerence AI’s automotive dataset, serves as the foundation for Cerence xUI. Cerence AI has optimized CaLLM to act as the central agentic orchestrator, facilitating enriched driver experiences at the edge and in the cloud with boosted inference performance.
Additionally, SoundHound will be demonstrating its upcoming in-vehicle voice assistant, which utilizes generative AI at the edge with NVIDIA DRIVE AGX, improving the in-car experience by bringing cloud-based LLM intelligence directly to vehicles.
The Complexity of Autonomy and NVIDIA’s Safety-First Solution
Safety is the key to deploying highly automated and autonomous vehicles to the roads at scale. Yet building AVs is one of today’s most complex computing challenges, demanding immense precision and an unwavering commitment to safety.
AVs and highly automated vehicles aim to expand mobility to those who need it most, reducing accidents and saving lives. To fulfill this promise, NVIDIA has developed NVIDIA Halos, a comprehensive full-stack safety system that unifies vehicle architecture, AI models, chips, software, tools, and services to ensure the safe development of AVs from the cloud to the car.
Today at GTC, NVIDIA will host its inaugural AV Safety Day, featuring in-depth discussions on the implementation and standards of automotive safety.
NVIDIA will host Automotive Developer Day on March 20 to provide information on the most recent developments in end-to-end AV development and beyond.
New Tools for AV Developers
NVIDIA also released new NIM microservices for automotive, designed to speed the development and deployment of end-to-end stacks from the cloud to the car. The new NIM microservices for in-vehicle applications, which leverage Motional’s nuScenes dataset, include:
- BEVFormer, a state-of-the-art transformer-based model that fuses multi-frame camera data into a unified bird’s-eye-view representation for 3D perception.
- SparseDrive, an end-to-end autonomous driving model that simultaneously performs motion prediction and generates a safe planning path.
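To give a sense of what a bird’s-eye-view ( BEV ) representation is, the sketch below back-projects ground-plane pixels from a single camera into a shared top-down grid using a homography. This is only an illustration of the geometry underlying BEV perception, not the BEVFormer model itself, which learns this multi-camera fusion with transformers; the camera parameters and grid dimensions are made-up assumptions.

```python
# Conceptual sketch (NOT BEVFormer): map ground-plane pixels from one
# camera into a bird's-eye-view occupancy grid via a homography.
import numpy as np

def ground_homography(K, R, t):
    """Homography mapping ground-plane points (z = 0) to image pixels.

    K: 3x3 camera intrinsics, R: 3x3 rotation, t: 3-vector translation.
    For z = 0, [R | t] acting on (x, y, 0, 1) reduces to [r1 r2 t].
    """
    return K @ np.column_stack((R[:, 0], R[:, 1], t))

def pixels_to_bev(pixels, H, grid_size=100, cell_m=0.5):
    """Back-project image pixels onto a BEV occupancy grid.

    pixels: iterable of (u, v) image coordinates.
    cell_m: size of one BEV grid cell in meters (assumed value).
    """
    Hinv = np.linalg.inv(H)
    grid = np.zeros((grid_size, grid_size), dtype=np.uint8)
    for u, v in pixels:
        x, y, w = Hinv @ np.array([u, v, 1.0])
        if abs(w) < 1e-9:  # point at infinity, skip
            continue
        # Convert metric ground coordinates to grid cells, ego at center.
        gx = int(x / w / cell_m) + grid_size // 2
        gy = int(y / w / cell_m) + grid_size // 2
        if 0 <= gx < grid_size and 0 <= gy < grid_size:
            grid[gy, gx] = 1
    return grid
```

A real BEV perception stack repeats this fusion across all surround cameras and lets the network, rather than a fixed ground-plane assumption, decide where image features land in the grid.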
NVIDIA also offers a variety of models for automotive enterprise applications, including Cosmos Nemotron, a vision language model that queries and summarizes images and videos for multimodal understanding and AI-powered perception, and NV-CLIP, a multimodal transformer model that generates embeddings from images and text, among others.
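The typical use of a joint image-text embedding model such as NV-CLIP is to place images and captions in one vector space and rank matches by cosine similarity. The sketch below shows that comparison step only; the embedding vectors are made-up stand-ins, not output from NV-CLIP or any real model.

```python
# Illustration of CLIP-style embedding comparison. The vectors used in
# practice would come from an embedding model; here they are arbitrary.
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(image_embedding, text_embeddings):
    """Return the index of the caption embedding closest to the image."""
    scores = [cosine_similarity(image_embedding, t) for t in text_embeddings]
    return int(np.argmax(scores)), scores
```

Because both modalities share one space, the same routine works in either direction: ranking captions for an image, or retrieving images for a text query.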
Watch the GTC keynote and sign up for sessions from NVIDIA and industry leaders at the show, which runs through March 21, to learn more about NVIDIA’s latest automotive news.