Brief description: The Visibility Estimator enhances safety by providing reliable visibility range estimates in harsh weather. Using onboard sensors, especially LiDAR, it detects visibility reductions due to fog, rain, and snow by analysing sensor point-cloud data.
Expected impact: This technology helps automated vehicles assess and adapt their perception in real time, ensuring safer operation in adverse conditions and reducing road accidents.
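As a concrete illustration of the principle, the sketch below (Python with NumPy; names and thresholds are illustrative, not the project's estimator) derives a crude visibility proxy from a point cloud: in fog or heavy precipitation, returns cluster near the sensor, so a high percentile of return ranges drops with visibility.

```python
import numpy as np

def visibility_proxy(points: np.ndarray, percentile: float = 95.0) -> float:
    """Crude visibility proxy from a LiDAR point cloud (N x 3, metres).

    In fog, rain, or snow, backscatter pulls returns close to the
    sensor and the usable range collapses, so a high percentile of
    return distances falls with visibility. Illustrative only.
    """
    ranges = np.linalg.norm(points, axis=1)
    return float(np.percentile(ranges, percentile))

# Synthetic comparison: clear-weather returns spread tens of metres out,
# foggy returns dominated by near-field backscatter.
rng = np.random.default_rng(0)
clear = rng.uniform(-80, 80, size=(10_000, 3))
foggy = rng.normal(0.0, 8.0, size=(10_000, 3))
print(f"clear proxy: {visibility_proxy(clear):.1f} m")
print(f"foggy proxy: {visibility_proxy(foggy):.1f} m")
```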
Brief description: The Weather Conditional Velocity Controller adjusts automated driving to prevent traction loss during acceleration and braking on low-friction roads. It limits speed, acceleration, and deceleration based on real-time friction and road inclination estimates.
Expected impact: By enabling automated vehicles to adapt to poor road conditions, this technology reduces traffic accidents and extends their operational range – i.e. Operational Design Domain – while ensuring safe driving.
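The physics behind such limits can be sketched with a point-mass model; the friction coefficient and inclination would come from the controller's real-time estimators, and the numbers below are illustrative assumptions.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def accel_limits(mu: float, incline_rad: float) -> tuple[float, float]:
    """Friction-limited longitudinal acceleration on an inclined road.

    The tyres transmit at most mu * g * cos(theta) per unit mass, while
    gravity contributes g * sin(theta) along the road (theta > 0 means
    uphill). Simplified point-mass physics, not the project's controller.
    """
    traction = mu * G * math.cos(incline_rad)
    grade = G * math.sin(incline_rad)
    max_accel = max(0.0, traction - grade)  # speeding up in travel direction
    max_decel = max(0.0, traction + grade)  # braking against travel direction
    return max_accel, max_decel

# Example: icy road (mu ~ 0.15) on a 5 % downhill grade; braking, not
# accelerating, is the binding constraint.
a_max, d_max = accel_limits(0.15, math.atan(-0.05))
print(f"max accel {a_max:.2f} m/s^2, max decel {d_max:.2f} m/s^2")
```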
Brief description: This solution integrates data from RGB cameras, LiDARs, and RADARs to improve 3D object detection and semantic scene segmentation. Using advanced AI models, it learns sensor-specific and shared features for efficient data fusion.
Expected impact: By enhancing perception, especially for vulnerable road users in poor weather, it increases detection robustness and helps reduce accidents.
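The division into sensor-specific and shared features can be illustrated with a minimal late-fusion skeleton; the layer sizes, feature dimensions, and fusion layout below are placeholder assumptions, not the ROADVIEW model.

```python
import torch
import torch.nn as nn

class LateFusionDetector(nn.Module):
    """Minimal sketch: per-sensor encoders feeding a shared fusion head."""

    def __init__(self, feat: int = 64, classes: int = 4):
        super().__init__()
        self.cam = nn.Sequential(nn.Linear(128, feat), nn.ReLU())    # camera-specific
        self.lidar = nn.Sequential(nn.Linear(128, feat), nn.ReLU())  # LiDAR-specific
        self.radar = nn.Sequential(nn.Linear(128, feat), nn.ReLU())  # RADAR-specific
        self.head = nn.Linear(3 * feat, classes)                     # shared features

    def forward(self, cam, lidar, radar):
        fused = torch.cat([self.cam(cam), self.lidar(lidar), self.radar(radar)], dim=-1)
        return self.head(fused)

model = LateFusionDetector()
scores = model(torch.randn(2, 128), torch.randn(2, 128), torch.randn(2, 128))
print(scores.shape)  # torch.Size([2, 4])
```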
Brief description: This AI-based outlier removal technology filters raindrops and snowflakes from 3D LiDAR point clouds, enhancing perception algorithms for autonomous vehicles. Using a novel convolution operation, it efficiently captures relevant points, reducing memory use and processing time while maintaining competitive performance.
Expected impact: By cleaning noisy sensor data, the ROADVIEW innovation improves key tasks like object detection and semantic scene segmentation, leading to more reliable perception systems and safer autonomous driving.
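For orientation, a classical radius-outlier baseline for the same task is sketched below; the project's learned convolution filter replaces exactly this kind of hand-tuned rule, so the radius and neighbour thresholds here are illustrative assumptions.

```python
import numpy as np

def radius_outlier_filter(points: np.ndarray, radius: float = 0.5,
                          min_neighbours: int = 3) -> np.ndarray:
    """Keep points with enough neighbours within `radius`; isolated
    returns (typical of snowflakes and raindrops) are dropped.
    Brute-force O(N^2) for clarity; use a KD-tree in practice."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbours = (d < radius).sum(axis=1) - 1  # exclude the point itself
    return points[neighbours >= min_neighbours]

# Dense structure survives, scattered clutter is removed.
rng = np.random.default_rng(1)
wall = rng.normal([10.0, 0.0, 1.0], 0.1, size=(200, 3))  # solid surface
snow = rng.uniform(-15, 15, size=(40, 3))                # airborne clutter
cloud = np.vstack([wall, snow])
print(len(cloud), "->", len(radius_outlier_filter(cloud)))
```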
Brief description: This innovative solution identifies drivable areas using LiDAR sensors alone, employing advanced AI techniques for data analysis. A deep neural network processes 3D LiDAR data to distinguish safe paths from obstacles such as buildings and vehicles.
Expected impact: By enhancing autonomous navigation without additional sensors, it improves safety and reduces collisions with all road users, especially in challenging weather conditions, helping reduce the overall number of accidents.
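A purely geometric stand-in conveys the input/output contract of such a system (3D points in, per-point drivable mask out); the RANSAC plane fit below only illustrates that contract and is not the learned method.

```python
import numpy as np

def ransac_ground(points: np.ndarray, iters: int = 200,
                  thresh: float = 0.15) -> np.ndarray:
    """Toy RANSAC plane fit labelling near-planar points as ground."""
    rng = np.random.default_rng(0)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        if np.linalg.norm(n) < 1e-9:
            continue  # degenerate sample
        n = n / np.linalg.norm(n)
        inliers = np.abs((points - p[0]) @ n) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best

# Flat ground plus a box-like obstacle above it.
rng = np.random.default_rng(2)
ground = np.column_stack([rng.uniform(0, 20, (500, 2)),
                          0.02 * rng.standard_normal(500)])
obstacle = np.column_stack([5 + rng.random((50, 2)), 1 + rng.random(50)])
mask = ransac_ground(np.vstack([ground, obstacle]))
print(f"ground points found: {mask.sum()} of {500 + 50}")
```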
Brief description: This innovation enhances 3D object detection using LiDAR-only data by generating dense pseudo-LiDAR point clouds enriched with scene semantics. It eliminates the need for cameras by using a segmentation model and multi-modal domain translator to create synthetic depth cues. A semantically guided projection method ensures only relevant points are retained, improving detection accuracy.
Expected impact: By increasing LiDAR data density, this solution enhances the detection of vehicles and vulnerable road users, even in adverse weather, ultimately reducing accidents.
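The semantically guided projection step can be sketched as follows: project candidate 3D points into a segmentation map and keep only those landing on relevant classes. The intrinsics, class ids, and class set below are assumptions for illustration.

```python
import numpy as np

def semantic_filter(points: np.ndarray, K: np.ndarray,
                    seg: np.ndarray, keep: set) -> np.ndarray:
    """Keep 3D points (camera frame, N x 3) whose pinhole projection
    into the semantic map `seg` (H x W of class ids) hits a class in
    `keep`. Illustrative sketch of the projection idea only."""
    z = points[:, 2]
    front = z > 0.1                       # points in front of the camera
    uvw = points[front] @ K.T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
    h, w = seg.shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    pts, uv = points[front][inside], uv[inside]
    return pts[np.isin(seg[uv[:, 1], uv[:, 0]], list(keep))]

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
seg = np.zeros((480, 640), dtype=int)
seg[200:300, 300:400] = 1                 # hypothetical class 1 = "vehicle"
pts = np.array([[0.0, 0.0, 10.0], [2.0, 0.0, 10.0]])
print(semantic_filter(pts, K, seg, keep={1}))  # only the first point survives
```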
Brief description: This AI-based solution detects weather conditions – clear, rainy, foggy, or snowy – using RGB cameras, LiDARs, and RADARs. By fusing multimodal sensor data and applying AI techniques, it enhances weather detection for autonomous systems.
Expected impact: The system can integrate with vehicle controllers to adjust speed in adverse weather, reducing collisions, accidents, and CO2 emissions while improving road safety and efficiency.
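One way such a detector can feed a velocity controller is a simple class-to-cap mapping, as sketched below; the classes come from the description above, while the speed caps are invented illustrative values.

```python
# Hypothetical speed caps per detected weather class (km/h); a deployed
# system would use validated, vehicle-specific values.
SPEED_CAP_KPH = {"clear": 110, "rainy": 90, "foggy": 60, "snowy": 50}

def capped_speed(detected: str, planned_kph: float) -> float:
    """Clamp the planned speed to the cap for the detected weather,
    defaulting to the most cautious cap for unknown classes."""
    return min(planned_kph, SPEED_CAP_KPH.get(detected, min(SPEED_CAP_KPH.values())))

print(capped_speed("foggy", 100.0))  # 60.0
```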
Brief description: This solution uses vehicle-to-everything (V2X) communication to share local weather conditions and detected objects from roadside sensors with nearby vehicles. It assesses whether the sensors operate within their Operational Design Domain (ODD) to ensure data quality.
Expected impact: By sharing object attributes and confidence levels via the V2X Collective Perception Service (CPS), the system enhances vehicle situational awareness. This improves ODD evaluation, particularly in harsh weather, and helps reduce accidents.
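The message content described above can be modelled with a simple structure like the one below; the field names are illustrative stand-ins, not the ETSI CPS/CPM schema.

```python
from dataclasses import dataclass, field

@dataclass
class PerceivedObject:
    object_id: int
    position_m: tuple[float, float]  # relative to the roadside unit
    speed_mps: float
    classification: str              # e.g. "pedestrian", "car"
    confidence: float                # detection confidence, 0..1

@dataclass
class RoadsidePerceptionMessage:
    station_id: int
    weather: str                     # locally sensed condition
    sensors_within_odd: bool         # data-quality flag from the ODD check
    objects: list[PerceivedObject] = field(default_factory=list)

msg = RoadsidePerceptionMessage(
    station_id=42, weather="snowy", sensors_within_odd=True,
    objects=[PerceivedObject(1, (12.5, -3.0), 1.4, "pedestrian", 0.87)])
print(msg.objects[0].classification, msg.objects[0].confidence)
```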
Brief description: This innovation ensures that, in harsh weather conditions, infrastructure-based systems adjust manoeuvre cooperation requests sent to autonomous vehicles via vehicle-to-everything (V2X) communication. They do so by providing weather-aware driving advice or requesting a Minimum Risk Manoeuvre (MRM).
Expected impact: The V2X messages generated by the roadside system will help connected automated vehicles either safely deactivate automated driving functions when Operational Design Domain (ODD) limits are reached or extend their ODD in complex conditions. This will enhance traffic management and reduce accidents in challenging situations.
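A toy version of the roadside decision rule reads as follows; the thresholds are invented for illustration, whereas a real system would apply validated ODD boundaries per vehicle type.

```python
def manoeuvre_advice(visibility_m: float, friction_mu: float) -> dict:
    """Choose between weather-aware advice and an MRM request.
    Thresholds below are illustrative assumptions only."""
    if visibility_m < 30 or friction_mu < 0.1:
        return {"type": "MRM_REQUEST", "reason": "conditions outside ODD"}
    if visibility_m < 100 or friction_mu < 0.3:
        return {"type": "ADVICE", "max_speed_kph": 50}
    return {"type": "ADVICE", "max_speed_kph": None}  # no restriction

print(manoeuvre_advice(visibility_m=80.0, friction_mu=0.25))
```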
Brief description: Autonomous driving depends on data and machine learning, and in harsh weather poor data quality degrades vehicle perception. ROADVIEW introduced the concept of Data Readiness Levels (DRLs): akin to Technology Readiness Levels (TRLs), DRLs use a 1-9 scale to quantify data quality, validated with the project datasets and a cross-verification tool.
Expected impact: The ROADVIEW outcomes address the Sustainable, Smart, and Resilient priority areas. Autonomous driving demands extensive data collection and processing, and training and real-time object detection consume significant computing power. Better data quality reduces reruns, optimises processing, and improves efficiency.
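Purely to illustrate the TRL analogy, a 1-9 readiness level could be derived from automated quality checks as below; the formula and checks are invented, and the actual DRL criteria are defined by the project.

```python
def data_readiness_level(checks_passed: int, total_checks: int) -> int:
    """Map the fraction of passed data-quality checks onto a 1-9 scale.
    Hypothetical aggregation, not the project's DRL definition."""
    fraction = checks_passed / total_checks
    return max(1, min(9, round(1 + 8 * fraction)))

# e.g. a dataset passing 6 of 8 checks (completeness, calibration, ...)
print(data_readiness_level(6, 8))  # 7
```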
Brief description: This innovation introduces a model that predicts how a vehicle behaves in snowy and icy conditions. The model incorporates realistic tyre parameters measured at VTI’s tyre testing facility.
Expected impact: By improving the accuracy of vehicle behaviour simulations, this innovation is expected to enhance transport safety and security. It also provides a structured approach to evaluating system performance throughout the development process.
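For context, the classic Pacejka "magic formula" shows the kind of tyre curve such a model reproduces; the coefficients below are invented to mimic a low-friction surface, whereas the project's model uses parameters measured at VTI's facility.

```python
import math

def magic_formula(slip: float, B: float = 10.0, C: float = 1.9,
                  D: float = 0.3, E: float = 0.97) -> float:
    """Pacejka magic formula: normalised longitudinal tyre force as a
    function of slip ratio. Coefficients are illustrative snow/ice-like
    values, not VTI-measured parameters."""
    bs = B * slip
    return D * math.sin(C * math.atan(bs - E * (bs - math.atan(bs))))

# Force peaks at a small slip ratio and falls off as the tyre lets go.
for s in (0.02, 0.05, 0.10, 0.20, 0.50):
    print(f"slip {s:.2f} -> normalised force {magic_formula(s):.3f}")
```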
Brief description: Various metrics can be used to assess the performance of the overall motion planning algorithm in relation to ROADVIEW use cases. These metrics are designed for application in X-in-the-loop test environments, ranging from simulations to real-world tests.
Expected impact: The goal is to enhance transport safety and security by improving the fidelity of models used in testing. Additionally, the metrics will help define a structured approach to evaluating system performance throughout the development process.
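Two representative examples of such metrics are sketched below, one for comfort and one for safety; they are generic illustrations, not the project's definitive metric set.

```python
import numpy as np

def max_abs_jerk(accel: np.ndarray, dt: float) -> float:
    """Comfort metric: peak jerk (m/s^3) from a sampled acceleration trace."""
    return float(np.max(np.abs(np.diff(accel) / dt)))

def min_gap(ego_xy: np.ndarray, obj_xy: np.ndarray) -> float:
    """Safety metric: minimum ego-to-object distance (m) over a scenario."""
    return float(np.min(np.linalg.norm(ego_xy - obj_xy, axis=1)))

t = np.arange(0.0, 5.0, 0.1)
print(max_abs_jerk(np.sin(t), dt=0.1))                  # ~1.0 m/s^3
ego = np.column_stack([t * 10.0, np.zeros_like(t)])     # ego driving along x
obj = np.column_stack([30.0 + 0.0 * t, 2.0 + 0.0 * t])  # stationary object
print(min_gap(ego, obj))                                # ~2.0 m
```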
Brief description: The project developed test systems that integrate real and simulated environments to reduce reliance on physical testing through simulation-supported methods (XiL). While regulations endorse simulation, a clear validation methodology is lacking.
Expected impact: ROADVIEW aims to assess approaches for verifying virtual environments, clarifying simulation’s role in testing automated vehicles and accelerating the deployment of safe, reliable driving functions.
Brief description: Automated vehicles must adapt their behaviour in adverse weather for safe operation. Road-side traffic cameras, enhanced with AI, can function as weather sensors, detecting conditions from images. Using V2X communication, this data is relayed to vehicles.
Expected impact: As many cameras are already deployed, this approach reduces costs while providing precise local weather information for automated driving.
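The pipeline can be illustrated end to end in a few lines; the tiny network below is randomly initialised and stands in for a trained classifier, so everything here is an assumption apart from the idea of classifying weather from a camera frame and relaying it via V2X.

```python
import torch
import torch.nn as nn

WEATHER = ["clear", "rainy", "foggy", "snowy"]

# Stand-in for a trained roadside weather classifier (random weights).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, len(WEATHER)))

frame = torch.rand(1, 3, 224, 224)        # one RGB frame from the camera
label = WEATHER[model(frame).argmax()]
print(f"relay via V2X: weather={label}")  # broadcast to nearby vehicles
```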
Brief description: This innovative method enables precise vehicle positioning relative to a prebuilt map, even in adverse weather conditions, by utilising a vehicle-mounted LiDAR sensor. Unlike conventional positioning systems, it operates effectively in various lighting conditions and does not rely on GNSS or external infrastructure. The method remains robust against challenging weather effects, including heavy snowfall and rain, ensuring reliable performance in diverse environments.
Expected impact: By determining the vehicle’s full 6D pose – comprising position, height, and 3D orientation – this technology provides autonomous vehicles with accurate situational awareness, enhancing their ability to navigate safely and efficiently.
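At the heart of most scan-to-map localisers is a rigid alignment step; the generic Kabsch/SVD sketch below illustrates only that building block, while the project's method additionally has to stay robust against weather clutter.

```python
import numpy as np

def align_step(scan: np.ndarray, map_pts: np.ndarray):
    """One ICP-style alignment step: match each scan point to its nearest
    map point, then solve for the optimal rigid transform (Kabsch/SVD).
    Brute-force matching for clarity; generic sketch, not the project code."""
    d = np.linalg.norm(scan[:, None] - map_pts[None, :], axis=-1)
    corr = map_pts[d.argmin(axis=1)]  # nearest-neighbour matches
    ps, pm = scan.mean(axis=0), corr.mean(axis=0)
    H = (scan - ps).T @ (corr - pm)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = pm - R @ ps
    return R, t                       # rotation and translation
```

Iterating this step refines the full 6D pose (3D position plus 3D orientation) of the vehicle relative to the prebuilt map.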
Brief description: This method measures road surface conditions ahead of the vehicle before it drives over them, producing a dense grip estimate for the road surface in front. The estimate is computed by fusing forward-facing camera and LiDAR data, covering the visible forward field of view from zero to a few tens of metres ahead.
Expected impact: Overall, the proposed technology allows automated vehicles to drive safely even in conditions where road grip is suboptimal, increasing the safety and reliability of the vehicles.
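A toy fusion of the two modalities conveys the output format; the surface classes, friction lookup, and intensity heuristic below are all illustrative assumptions, not the project's trained estimator.

```python
import numpy as np

# Hypothetical nominal friction per camera-classified surface type.
SURFACE_MU = {"dry": 0.9, "wet": 0.6, "snow": 0.3, "ice": 0.1}

def grip_grid(surface: np.ndarray, lidar_intensity: np.ndarray) -> np.ndarray:
    """Fuse per-cell camera surface classes (strings) and normalised
    LiDAR intensity (0..1) into a dense grip map for the road ahead.
    Illustrative heuristic only."""
    mu = np.vectorize(SURFACE_MU.get)(surface).astype(float)
    # Low return intensity can hint at specular (icier) patches; nudge
    # the estimate down accordingly.
    return np.clip(mu * (0.8 + 0.2 * lidar_intensity), 0.05, 1.0)

cells = np.array([["dry", "wet"], ["snow", "ice"]])
intensity = np.array([[0.9, 0.5], [0.4, 0.1]])
print(grip_grid(cells, intensity))
```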