Nvidia has announced the Alpamayo family of open-source artificial intelligence (AI) models, simulation tools and datasets for reasoning-based autonomous vehicle development. Alpamayo brings together open models, simulation frameworks and datasets into one ecosystem that any automotive developer or research team can build upon, said the firm.
The Alpamayo line introduces chain-of-thought reasoning vision-language-action (VLA) models that "bring humanlike thinking to autonomous vehicle decision-making," enabling systems to think through unusual scenarios step by step in order to improve autonomous driving capability, said Nvidia.
At CES 2026, Nvidia is releasing Alpamayo 1, a chain-of-thought reasoning VLA model that uses video input to generate courses of action and the reasons behind them. This is joined by AlpaSim, an open-source end-to-end simulation framework that provides realistic sensor modelling, configurable traffic dynamics and scalable closed‑loop testing environments.
Nvidia is also releasing physical AI open datasets containing over 1,700 hours of driving data collected from "the widest range of geographies and conditions", including real-world edge cases, which are essential for training advanced reasoning architectures, the firm said.
Companies including Lucid, Jaguar Land Rover, Uber and others are showing interest in Alpamayo for the development of reasoning-based autonomous driving stacks that will enable Level 4 autonomous driving, the technology company has stated. Lucid unveiled its Gravity six-seater robotaxi at the ongoing CES 2026.
“The ChatGPT moment for physical AI is here — when machines begin to understand, reason and act in the real world. Robotaxis are among the first to benefit. Alpamayo brings reasoning to autonomous vehicles, allowing them to think through rare scenarios, drive safely in complex environments and explain their driving decisions — it’s the foundation for safe, scalable autonomy,” said Nvidia founder and CEO Jensen Huang.
The technology firm also announced the debut of its Nvidia Drive AV software with “enhanced Level 2” point-to-point driving assistance, demonstrated in the battery-electric, C174-generation Mercedes-Benz CLA that made its debut in March 2025.
The point-to-point driving assistance enables an equipped vehicle to navigate with lane selection, turns and route-following in congested or unfamiliar areas; to understand the movements of vulnerable road users such as pedestrians, cyclists and scooter riders, yielding or stopping to avoid a collision; and to assist drivers in travelling safely between any addresses of their choosing, said Nvidia.
The post Nvidia announces Alpamayo AI autonomous vehicle open source toolkit, with “humanlike” decision-making appeared first on Paul Tan’s Automotive News.