Abstract
Significant investments have been made globally in the development of autonomous vehicle technology over the last decade. Companies and academic institutes devote substantial financial and infrastructural resources to collecting data over millions of kilometres on public roads. However, not every kilometre driven is equal: most automated vehicles have, so far, been trained and tested primarily in clear weather. This bias poses a major challenge when testing automated driving functions in harsh weather conditions such as dense fog, heavy rain, and snowfall, which severely degrade sensor performance and, in turn, the performance of perception and control algorithms. ADAW aims to bring together academia and industry to discuss how to develop robust sense-perceive-control pipelines and weather-aware decision-making systems that work reliably under extreme weather conditions. In the context of automated mobility, the main objective of ADAW is to address weather-related challenges and reconcile various hardware (e.g., sensor combinations) and software (e.g., AI-based algorithms) approaches for downstream perception (e.g., semantic scene segmentation and object detection) and control tasks (e.g., velocity control). The workshop will also compare state-of-the-art solutions that allow vehicles to localise and navigate autonomously in all weather conditions.
Workshop objectives
- Highlight and discuss within the robotics community the latest industrial and academic research and development trends in AI-based perception and control algorithms that work efficiently and reliably in challenging weather conditions such as fog, rain, and snow.
- Define robust processing pipelines and architectures, and reconcile and integrate various hardware (e.g., sensor combinations) and software (e.g., AI-based algorithms) approaches to address weather-related challenges, in terms of both computational services and hardware robustness.
- Compare state-of-the-art methods for downstream perception tasks (e.g., semantic scene segmentation and object detection) in both computer vision and robotics, using datasets logged in adverse weather conditions. With this, we hope not only to discuss the maturity of available synthetic and real-world datasets, but also to find common ground for combining seemingly different approaches to autonomous capability and reliability.
- Discuss methodologies to progress the state of the art in the field, including benchmarking procedures, robustness through multi-sensor data fusion, relevant datasets, and a set of critical test cases.
Organisers
Eren Erdal Aksoy
(Main organiser), Google Scholar
Associate Professor, School of Information Technology, Halmstad University, Sweden
Ngo Thien Thu
(Co-organiser), Google Scholar
Postdoctoral Researcher, School of Information Technology, Halmstad University, Sweden
Christos Sakaridis
(Co-organiser), Google Scholar
Postdoctoral Researcher, Computer Vision Lab, ETH Zurich, Switzerland
Leila Ghasemzadeh
(Co-organiser), Google Scholar
Autonomous Vehicles Senior Engineer, Ford Otosan, Türkiye
Joonwoo Son
(Co-organiser), LinkedIn
Founder & CTO, Sonnet.AI; Principal Research Engineer, DGIST, South Korea