
On 22 September 2025, the 1st Workshop on “Exploring the Potential of XAI and HMI to Alleviate Ethical, Legal, and Social Conflicts in Automated Vehicles” took place as part of the 17th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI 2025), held in Brisbane, Australia.
With highly automated vehicles becoming increasingly present on public roads, addressing conflicts around ethical, legal, and social implications (ELSI) in AV decision-making remains a complex challenge—particularly in ambiguous or uncertain situations. The workshop explored how Explainable AI (XAI) and Human–Machine Interfaces (HMIs) can improve transparency and support appropriate trust by helping users understand and interpret AV decisions. Using a scenario-based format, participants reflected on AV decision-making under conflicting conditions, on the types of explanations users need, and on how such explanations could be effectively communicated through HMIs. The workshop outcomes represent an initial step toward the meaningful integration of XAI to support human-centred automated driving decisions.
The workshop was organised by Krishna Sahithi Karur, Andreas Riener, Ignacio Alvarez, Philipp Wintersberger, Jeongeun Park, and Seul Chan Lee.
The HIDDEN project was also introduced during the workshop discussions by THI colleagues, giving participants an overview of its objectives and its relevance to trustworthy, human-centric automated driving, and linking HIDDEN’s work on hybrid and ethics-aware intelligence to the workshop themes of XAI and HMI.
A follow-up workshop to further elaborate on the developed concepts took place on 23 November 2025 at Hanyang University, Ansan, Korea. Participants included Andreas Riener, Krishna Sahithi Karur, Jeongeun Park, Seul Chan Lee, and approximately ten additional postdoctoral researchers and PhD students from the greater Seoul area.
Photo Gallery



