Abstract
Automated mobility is growing rapidly, and significant investments have been made globally in autonomous vehicle technology over the last decade. Numerous companies and academic institutes are devoting substantial financial, scientific, and infrastructural resources to developing and testing advanced AI-based automated driving algorithms. Companies operate test fleets that collect data over millions of kilometers on public roads. However, not every kilometer driven is equal: most automated vehicles have so far been trained and tested primarily in clear weather. This bias poses a significant challenge when testing automated driving functions in harsh weather conditions such as dense fog, heavy rain, and snowfall, which severely degrade sensor performance and the reliability of perception and control algorithms. ADAW aims to bring together academia and industry to discuss how to develop robust sense-perceive-control pipelines and weather-aware decision-making systems that work reliably under extreme weather conditions. In the context of automated mobility, the main objective of ADAW is to address weather-related challenges and reconcile various hardware (e.g., sensor combinations) and software (e.g., AI-based algorithms) approaches for downstream perception tasks (e.g., semantic scene segmentation and object detection). The workshop will also compare state-of-the-art solutions that allow vehicles to navigate autonomously in all weather conditions, opening new doors to go from autonomous to snowtonomous vehicles.
Workshop objectives
- Highlight and discuss within the robotics community the latest research and development trends in industry and academia related to AI-based perception and control algorithms that work efficiently and reliably in challenging weather conditions such as fog, rain, and snow.
- Define robust processing pipelines and architectures, reconciling and integrating various hardware (e.g., sensor combinations) and software (e.g., AI-based algorithms) approaches to address weather-related challenges in terms of both computational services and hardware robustness.
- Compare state-of-the-art downstream perception tasks (e.g., semantic scene segmentation and object detection) in both computer vision and robotics, using datasets logged in adverse weather conditions. With this, we hope not only to discuss the maturity of already available synthetic and real-world datasets, but also to find common ground for combining seemingly different approaches to autonomous capability and reliability.
- Discuss methodologies to advance the state of the art in the field, including benchmarking procedures, robustness through data fusion from different sensors, relevant datasets, and a set of critical test cases.
Organisers

Eren Erdal Aksoy
(Main organiser), GoogleScholar
Associate Professor, School of Information Technology, Halmstad University, Halmstad, Sweden

Carlo A. Avizzano
(Co-organiser), GoogleScholar
Professor of Robotics and Automation – Intelligent Automation Systems, Scuola Superiore Sant'Anna, Pisa, Italy

Christos Sakaridis
(Co-organiser), GoogleScholar
Postdoctoral Researcher, Computer Vision Lab, ETH Zurich, Switzerland

Mohit Mehndiratta
(Co-organiser), GoogleScholar
Technical Lead, Planning and Control Team, Sensible 4 Oy, Espoo, Finland