
Helm.ai Introduces WorldGen-1 Multi-Sensor Generative AI Foundation Model for Autonomous Driving


Helm.ai, a provider of AI software for high-end ADAS, Level 4 autonomous driving, and robotics, has launched WorldGen-1, a multi-sensor generative AI foundation model for simulating the entire autonomous vehicle stack.

WorldGen-1 synthesizes highly realistic sensor and perception data across multiple modalities and perspectives simultaneously, extrapolates sensor data from one modality to another, and predicts the behavior of the ego-vehicle and other agents in the driving environment. These AI-based simulation capabilities streamline the development and validation of autonomous driving systems.

Leveraging innovations in generative DNN architectures and Deep Teaching, a highly efficient unsupervised training technology, WorldGen-1 is trained on thousands of hours of diverse driving data covering every layer of the autonomous driving stack, including vision, perception, lidar, and odometry.

WorldGen-1 simultaneously generates highly realistic sensor data for surround-view cameras, semantic segmentation at the perception layer, lidar front-view, lidar bird’s-eye-view, and the ego-vehicle path in physical coordinates. By generating sensor, perception, and path data consistently across the entire AV stack, WorldGen-1 accurately replicates potential real-world situations from the perspective of the self-driving vehicle. This comprehensive simulation capability enables the generation of high-fidelity, multi-sensor labeled data for resolving and validating a wide range of challenging corner cases.
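
The article does not specify a data format, but a minimal sketch of what one jointly generated, stack-consistent sample might look like as a data structure can make the idea concrete. All class, field, and shape choices below are illustrative assumptions, not a published Helm.ai interface:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class SimulatedFrame:
    """One jointly generated timestep across the AV stack (hypothetical layout)."""
    surround_cameras: dict            # camera name -> HxWx3 RGB array
    semantic_segmentation: dict       # camera name -> HxW class-ID array
    lidar_front_view: np.ndarray      # range-image-style front-view projection
    lidar_birds_eye_view: np.ndarray  # top-down grid (occupancy / intensity)
    ego_path: np.ndarray              # Nx2 future waypoints in physical (x, y) coordinates


def check_consistency(frame: SimulatedFrame) -> None:
    # "Consistent across the entire AV stack" implies, at minimum, that every
    # camera view has a segmentation map of matching resolution.
    for name, image in frame.surround_cameras.items():
        seg = frame.semantic_segmentation[name]
        assert image.shape[:2] == seg.shape[:2], f"mismatch for camera {name}"
```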

Furthermore, WorldGen-1 can extrapolate from real camera data to multiple other modalities, including semantic segmentation, lidar front-view, lidar bird’s-eye-view, and the path of the ego-vehicle. This capability allows existing camera-only datasets to be augmented into synthetic multi-sensor datasets, enriching them while reducing data collection costs.
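
As a rough sketch of how such an augmentation workflow might be driven, assuming a hypothetical model object with an `extrapolate` method (neither name comes from a published Helm.ai API):

```python
# Hypothetical sketch: augmenting a camera-only dataset into a multi-sensor one.
# `model` and its `extrapolate` method are assumed names, not a published API.
def augment_camera_dataset(model, camera_frames):
    """Attach synthetic segmentation, lidar views, and an ego path to each frame."""
    augmented = []
    for frame in camera_frames:
        synthetic = model.extrapolate(
            frame,
            targets=(
                "semantic_segmentation",
                "lidar_front_view",
                "lidar_birds_eye_view",
                "ego_path",
            ),
        )
        # Keep the real camera frame alongside the extrapolated modalities.
        augmented.append({"camera": frame, **synthetic})
    return augmented
```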

Beyond sensor simulation and extrapolation, WorldGen-1 can predict, from an observed input sequence, the behaviors of pedestrians, vehicles, and the ego-vehicle in relation to the surrounding environment, generating realistic temporal sequences up to minutes in length. This enables the AI-based generation of a wide range of potential scenarios, including rare corner cases.
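
In generative world models of this kind, such prediction is typically framed as conditioning on observed frames and rolling the model forward step by step. A minimal sketch of that pattern, with `predict_next` as an assumed method name rather than a documented interface:

```python
# Hypothetical rollout sketch: condition on an observed driving sequence and
# autoregressively generate a continuation. `predict_next` is an assumed name.
def generate_rollout(model, observed_frames, horizon_steps):
    """Extend an observed sequence by `horizon_steps` generated frames."""
    sequence = list(observed_frames)
    for _ in range(horizon_steps):
        next_frame = model.predict_next(sequence)
        sequence.append(next_frame)
    # Return only the newly generated continuation.
    return sequence[len(observed_frames):]
```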

WorldGen-1 can model multiple potential outcomes from the same observed input data, demonstrating its capacity for advanced multi-agent planning and prediction. WorldGen-1’s understanding of the driving environment and its predictive capability make it a valuable tool for intent prediction and path planning, both as a means of development and validation and as the core technology for making real-time driving decisions.
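
One way to read “multiple potential outcomes” is as sampling several independent futures from the same observation and letting a planner rank them. A hedged sketch of that idea, where `sample_rollout` and the risk function are assumptions, not Helm.ai’s actual interface:

```python
# Hypothetical multi-hypothesis sketch: draw several plausible futures from one
# observation, then rank them for planning. All names here are assumptions.
def sample_futures(model, observed_frames, num_futures=5, horizon_steps=100):
    """Sample independent future rollouts conditioned on the same observation."""
    return [
        model.sample_rollout(observed_frames, steps=horizon_steps)
        for _ in range(num_futures)
    ]


def pick_safest(futures, risk):
    # A planner can rank sampled futures by a domain-specific risk score.
    return min(futures, key=risk)
```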

Source: Green Car Congress

