By Florian Buettner, Ulli Waltinger
The symposium on Human-centered AI: Trustworthiness of AI Models & Data was held as part of the AAAI Fall Symposium Series in Arlington, Virginia, on November 7–9, 2019. The focus of the symposium was on the quality of data and the technical robustness and safety of AI systems, with additional discussion centered on explainable models, human trust, and ethical aspects of AI.
For AI systems that guide decision-making in real-world applications to gain widespread acceptance, it is essential that solutions comprise trustworthy, integrated human-AI systems. A core requirement for deploying AI at enterprise scale is integrating human-centered design into AI systems, so that humans can use the systems effectively, understand their results and output, and explain findings to oversight committees.
The goal of the symposium was to bridge theory and practice by bringing together participants from industry, government, and academia. These included, on the one hand, academic researchers with long-standing expertise in devising interpretable and uncertainty-aware predictive models and, on the other hand, applied researchers experienced not only in deploying human-centered machine learning methods in complex industrial and government systems, but also in addressing how these models communicate uncertainty and limitations transparently to stakeholders and how they interact with humans accordingly.
We set the stage for intensive discussion throughout the symposium with a lively panel discussion in which researchers and data scientists from Accenture Federal, Telefonica, and Conexus shared their viewpoints on how to implement responsible and trustworthy AI and how to address security and privacy concerns in industrial settings. This theme of trustworthy AI in industrial applications was complemented by two invited talks on the challenges of applying AI in industry and government: Markus Kaiser (Siemens AG) outlined the challenges of machine learning in the physical world, while Gil Alterovitz (U.S. Department of Veterans Affairs) focused on aspects of human-centered AI in government.
Another theme of the symposium was data quality, with topics ranging from metrics for assessing the quality of annotations and of adversarially generated time series to the presentation of new benchmark datasets.
The third major theme of the symposium was explainable AI and interpretable machine learning. Contributed talks discussed interpretability in the context of reinforcement learning and knowledge graphs, as well as interpretability via causal inference. The theme was complemented by an invited talk by Daniel Sonntag (German Research Center for Artificial Intelligence) on the links between interactive machine learning and explainability, particularly in the context of medical applications.
Finally, we turned to human trust and the ethics of AI. Participants discussed what makes an AI-based decision process trustworthy (for example, keeping a human in the loop) and examined scenarios for ethical AI. An invited talk by Rediet Abebe (Harvard University) extended this theme to AI for social good, discussing how algorithms can be designed to increase societal welfare.
The symposium concluded with an invited talk by Eric Daimler (Spinglass), who shared his view that AI today is as underrated as the internet was in the late 1990s, along with his vision of how we must fundamentally change the way we teach AI in order to unlock its full potential.
Proceedings of the symposium are available at https://arxiv.org/abs/2001.05375. Florian Buettner, John Piorkowski, Ian McCulloh, and Ulli Waltinger served as co-chairs of the symposium.
Florian Buettner is a senior key expert research scientist at the Siemens AI Lab, Siemens AG.
Ulli Waltinger is a research group leader at the Siemens AI Lab, Siemens AG.