By Bertrand Braunschweig , Sašo Džeroski, Pascal Hitzler, Brian Jalaian, Sebastian Mežnar, Nina Moorman, David Porfirio, Hong Qin, Cogan Shimizu
The Association for the Advancement of Artificial Intelligence’s 2024 Fall Symposium Series was held at the Westin Arlington Gateway, Arlington, Virginia, November 7-9, 2024. There were seven symposia in the fall program: AI Trustworthiness and Risk Assessment for Challenging Contexts (ATRACC); Artificial Intelligence for Aging in Place; Integrated Approaches to Computational Scientific Discovery; Large Language Models for Knowledge Graph and Ontology Engineering (LLMs for KG and OE); Machine Intelligence for Equitable Global Health (MI4EGH); Unifying Representations for Robot Application Development; and Using AI to Build Secure and Resilient Agricultural Systems: Leveraging AI to Mitigate Cyber, Climatic and Economic Threats in Food, Agricultural, and Water (FAW) Systems. This report contains summaries of the symposia, which were submitted by some, but not all, of the symposium chairs.
AI Trustworthiness and Risk Assessment for Challenging Contexts (S1)
The rapid embrace of AI-based critical systems introduces new dimensions of errors that induce increased levels of risk, limiting trustworthiness. Thus, AI-based critical systems must be assessed across many dimensions by different parties (researchers, developers, regulators, customers, insurance companies, end-users, etc.) for different reasons. Assessment of trustworthiness should be made at both the full system level and at the level of individual AI components. The focus of this symposium was on AI trustworthiness broadly and methods that help provide bounds for fairness, reproducibility, reliability, and accountability in the context of quantifying AI-system risk, spanning the entire AI lifecycle from theoretical research formulations all the way to system implementation, deployment, and operation.
This first AAAI symposium on AI Trustworthiness and Risk Assessment for Challenging Contexts was triggered by two initiatives on responsible and trustworthy AI that came together thanks to encouragement from AAAI: an international community (mostly European and Asia-South Pacific) around AI trustworthiness assessment for critical systems, already gathered at the AITA SSS Symposium in 2023; and a US-based community around the University of West Florida, gathered around the question of AI risk assessment in challenging contexts, e.g., for security or defense applications.
The symposium was attended by 40 participants from three continents over its 2.5 days. It consisted of 17 selected scientific presentations, three keynote talks, two invited talks, and two enlightening panels.
The 17 scientific presentations were grouped into focused themes: eXplainable AI; risks and trustworthiness assessment of foundation and large language models; methods and tools for risk mitigation; all aspects regarding data; and ethics and artificial trust. Each paper was given 30 minutes, including questions, which allowed numerous substantial debates that contributed to the goal of creating an informal community of researchers and practitioners on the subject.
Three keynote talks were given. Stefan Wrobel, director of the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, Professor of Computer Science at the University of Bonn, and Bonn Director of the Lamarr Institute for Machine Learning and Artificial Intelligence, gave an overview of the many programs developed by his institute on AI trustworthiness assessment, including brand-new results on trustworthiness improvement for LLMs. Gopal Ramchurn, professor at the University of Southampton, CEO of the Responsible AI UK (RAI UK) program, and Director of the UKRI Trustworthy Autonomous Systems (TAS) Hub, presented the many activities of the RAI UK program and the most relevant lessons learned. TJ Klausutis, program manager of ASIMOV (Autonomy Standards and Ideals with Military Operational Values) in the Strategic Technology Office of DARPA, clearly laid out the multiple views and challenges regarding ethics and their formalization for military applications.
The two invited talks presented material on two other important aspects. Jeffrey Bolkhovsky, a research scientist at the Naval Submarine Medical Research Laboratory, presented OPTIMA (Operational Trust in Mission Autonomy), a sociotechnical project to construct an ontology and metrics that allow mathematical rigor to be applied to the abstract notion of trust. Jansen Seheult, a consultant and assistant professor in the Divisions of Hematopathology and Computational Pathology & AI at Mayo Clinic, gave a rich talk on AI/ML Integration in Clinical Laboratory Diagnostics.
The first panel, titled “Using AI in Mission-Critical Environments,” brought together a distinguished group of experts, including representatives from the Johns Hopkins University Applied Physics Laboratory (JHUAPL), the Department of Veterans Affairs, and Fraunhofer Germany. This panel focused on the diverse challenges and solutions these organizations face when implementing AI technologies in high-stakes scenarios.
The concluding panel of the AAAI ATRACC Symposium was an open forum chaired by Bertrand Braunschweig and Brian Jalaian, encouraging active participation from all attendees. This inclusive discussion provided a platform for participants to share insights and perspectives on AI trustworthiness and risk assessment, fostering a collaborative environment that bridged diverse sectors and disciplines.
Key themes that emerged from the symposium’s presentations and discussions included:
Holistic Assessment: Evaluating AI systems comprehensively at both the system and component levels is essential to ensure fairness, reproducibility, reliability, and accountability.
Interdisciplinary Collaboration: Addressing the multifaceted challenges of AI trustworthiness requires ongoing dialogue among social scientists, legal experts, cognitive scientists, psychologists, and computer scientists.
Adaptive Regulation: Developing forward-thinking regulations that evolve alongside AI technologies is crucial to maintaining responsible development and deployment.
The symposium’s dynamic exchanges underscored the importance of a cohesive community dedicated to advancing AI trustworthiness. Participants found the symposium productive and enlightening and responded with enthusiasm to the proposal to continue interacting as a community of researchers on AI trustworthiness and risk assessment through a series of monthly online scientific seminars, planned to start in January 2025. This collective effort marks a significant step toward ensuring that AI technologies develop in alignment with ethical standards and societal well-being. There will be more food for thought in the coming months!
Bertrand Braunschweig and Brian Jalaian served as cochairs of this symposium and wrote this report.
Artificial Intelligence for Aging in Place (S2)
As the world’s population ages, many older adults prefer to age in place, seeking to maintain their independence, community connections, and quality of life while remaining in their homes. Artificial intelligence (AI)-based technologies hold the promise of facilitating aging in place through various means, such as predicting health events (e.g., falls or disease progression), offering personalized health and activity recommendations, mediating interactions within a person’s care and social network, providing physical and cognitive assistance for activities of daily living, and adapting to changing behaviors and preferences over time. Developing these approaches, however, poses several challenges. These include logistical and privacy concerns related to collecting data and ground-truth information in real-world settings, difficulties in deploying and evaluating systems in unstructured home environments, and issues surrounding technology acceptance and adoption. Additionally, the needs and abilities of older adults can change rapidly due to an increased risk of disabilities, often compounded by limited societal support. To explore these challenges and foster an interdisciplinary community, this symposium brought together a diverse group of researchers and practitioners across AI, robotics, human-machine interaction, psychology, ethics, and gerontology. It served as a platform for sharing experiences, successes, and challenges in research within this domain, as well as for identifying open questions that could guide future advancements in the field.
A recurring theme identified in the symposium is user-centered design, ensuring alignment between the true needs of the user and the needs filled by the AI. User-centered design should take different forms, such as group brainstorming sessions and individual interviews. It should also happen at different stages of development, including ideation, system development, and evaluation. To facilitate these sessions, participants reported on the benefits of incorporating “translators,” such as occupational therapists (OTs), in the research process, as they can serve as a bridge between AI researchers and clinicians. Some labs host regular training sessions for OTs, so that they are well versed in the research methods and can more effectively interface with clinicians and care receivers alongside the researchers.
While AI for aging in place can be developed for inflection points in an older adult’s health, AI should also be involved in the continuum of care. This requires researchers to conduct longitudinal studies and to personalize the data analysis to identify deviations in an individual’s physical and cognitive health. By collecting dense, frequent data, AI models can be trained to identify deviations over different timescales (ranging from changes observed over a day or week to those observed over the span of months). This intra-personal approach would enable AI to estimate risk and predict or detect health events. Participants with experience in longitudinal data collection with older adults using wearables and sensors discussed the critical role of participant engagement and challenges such as habituation and variability in user capabilities and access to technology. These challenges are exacerbated by frequent device discontinuation or abandonment, resulting in a “graveyard of wearables.”
A challenge of leveraging foundation models for aging in place, such as voice assistants, is that the existing training datasets are not obtained from a representative sample of ages, resulting in content that does not align with the lived experiences of older adults. To avoid “othering” or marginalizing the end user, researchers should carefully consider the age distributions of the training data employed.
Finally, the symposium identified several open questions for the development of AI for aging in place. The first concerns user expectations management: how can we manage users’ expectations for AI, and how frequently should developers check in to ensure these expectations remain calibrated during longitudinal deployments? With respect to data privacy considerations, given the particular sensitivity of health data in aging-in-place contexts, how should privacy requirements influence AI system design? Lastly, towards the development of common evaluation metrics for aging in place, what metrics should be used to evaluate AI systems supporting aging in place, given the complexity of measuring success in this domain?
Nina Moorman and Pragathi Praveena served as cochairs of this symposium and authored this report. Agata Rozga, Victor Antony, Nadira Mahamane, Michelle Zhao, Laurel Riek, Reid Simmons, and Matthew Gombolay served on the organizing committee.
Integrated Approaches to Computational Scientific Discovery (S3)
The symposium titled “Integrated Approaches to Computational Scientific Discovery” took place from November 7th to November 9th in Arlington, Virginia, as part of the AAAI Fall Symposium Series 2024. Its aim was to foster discussion on the current and future research directions in computational scientific discovery by bringing together researchers from various fields. The symposium was a resounding success, featuring talks on topics such as equation discovery, closed-loop scientific discovery, and reduced-order modeling. It attracted around 50 participants from North America, Europe, and Asia, achieving the highest attendance among all symposia at the event.
Scientific discovery has fascinated AI researchers since the 1970s, and in recent years, there has been an increased interest in fields like physics and applied mathematics. Despite this, most efforts have focused on individual components of discovery. While this was initially reasonable, it is now time to integrate various forms of discovery with other crucial aspects of science, including the creation of measuring devices, the design of controlled experiments, and the communication of results.
The symposium featured talks on both individual components of scientific discovery and how to integrate these components into larger systems, such as AI scientists or interactive research agents. The event began with an overview of the field by Pat Langley (Institute for the Study of Learning and Expertise), followed by invited talks from Lindley Darden (University of Maryland) on discovering genetic disease mechanisms and Peter Clark (Allen Institute for AI) on integrated scientific discovery systems.
The next session focused on literature analysis and hypothesis generation. It mostly included talks that concentrated on the use of large language models (LLMs). The approaches presented ranged from using LLMs to generate new ideas and assess their novelty to applying LLMs in physics or for equation discovery.
The afternoon session explored equation discovery approaches. While all presentations addressed similar problems, each approach offered a unique solution. These included a re-implementation of a classic method, a neural-guided approach, a Bayesian sampling-based approach, and a symmetry discovery technique.
The first day concluded with a poster session where 12 posters were on exhibit. The session provided the participants the opportunity to discuss the content of the posters with their authors. It was also an occasion where the participants could meet, discuss their work, and engage in conversations about all the topics presented during the symposium.
The second day was dedicated to sessions on experimentation, variable invention, and discussions about the challenges faced by the community. The experimentation sessions featured a variety of interesting applications and solutions to problems within these fields. Highlights included experimentation in materials science, a robot scientist for droplet friction experiments, scientific process discovery, closed-loop discovery in behavioral science, conjecture generation for number theory, automated discovery in plane geometry, and advancements in density functional theory.
During the session on variable invention, three approaches to latent space dynamics were presented, along with a method for discovering quadratic representations of partial differential equations (PDEs). The hope is that such approaches will one day make it possible to discover dynamics that occur in nature directly from video.
One of the most important parts of the symposium was the session dedicated to discussing the challenges faced by the scientific community centered around the topic of computational scientific discovery. Prior to this session, Benjamin Jantzen (Virginia Tech) presented six key challenges for fully autonomous scientific discovery, providing a valuable introduction to the discussion. During the session, participants delved into several critical issues and potential solutions. Topics discussed included the formalization of science and the scientific process, determining the appropriate level of abstraction, effectively integrating scientific literature into AI systems, and accurately evaluating closed-loop scientific discovery methods. The discussion also highlighted the importance of communication channels and events that bring the community together, such as the symposium itself and the upcoming event on Artificial Intelligence for Science (ai4science.ijs.si), which will take place in Ljubljana, Slovenia, September 22-26, 2025. This event will be an extended edition of the Discovery Science conference, which will bring together both AI researchers and domain scientists from different areas (such as materials science and medicine).
On the last day, the symposium featured sessions on variable invention and equation discovery. The variable invention session included talks on discovering dynamics from videos and data for structural mechanics and fluid dynamics. The equation discovery session showcased a variety of approaches, including a method that discovers equations using residuals, a behavior-based distance metric between mathematical expressions, a discussion on the structural identifiability of weak-form ODE systems, and an exploration of the importance of formal languages for scientific discovery and synthesis.
All in all, the symposium presented a great opportunity for members of the computational scientific discovery community to present their work, exchange exciting ideas, discuss scientific challenges, and make new connections.
Youngsoo Choi, Sašo Džeroski, Ross King, and Pat Langley served as cochairs of this symposium. This report was written by Sašo Džeroski and Sebastian Mežnar.
Large Language Models for Knowledge Graph and Ontology Engineering (S4)
Knowledge Graph and Ontology Engineering (KGOE) refers to all tasks regarding knowledge graph and ontology lifecycle management, from creation to maintenance, re-use, and applications. Specific tasks such as ontology modeling, alignment, and population, as well as entity disambiguation, are central to KGOE; they are difficult to perform at reasonable quality levels and expensive to conduct, in particular in terms of the time and effort required of application domain experts and ontology and KG engineers. The Semantic Web research and application community has made steady progress on KGOE in the past 20 years; however, automation, or even semi-automation, at the scale and quality required remains elusive.
The recent rise of Large Language Models (LLMs), however, has the potential to be a game-changer for KGOE. LLMs act essentially as approximate natural-language databases; given their demonstrated ability to cover wide ranges of topics and their added value in, say, software development, they are poised to serve as assistants to humans in KGOE tasks, accelerating the work and lowering its cost.
The 2024 AAAI Fall Symposium on LLMs for KGOE brought together KGOE researchers with a broad range of backgrounds, expertise, and locations to discuss recent developments and current advances in LLM-assisted KGOE. It was a very lively and interactive meeting that included discussions and breakouts.
Three keynotes set the stage. Alessandro Oltramari, Carnegie Bosch Institute at Carnegie Mellon University and Bosch Research Technology Center in Pittsburgh, PA, provided a broad perspective on the symposium theme, discussing neurosymbolic cognitive reasoning from theory to practice. Mohammed J. Zaki, Rensselaer Polytechnic Institute, discussed LLM-assisted KGOE in relation to ongoing work in food health. Valentina Tamma, The University of Liverpool, focused in particular on the importance of competency questions as part of LLM-assisted KGOE. An address by one of the cochairs, Cogan Shimizu, Wright State University, argued for ontology modularity as a core tool to achieve high-quality LLM assistance in KGOE. The three keynote speakers were joined by cochair Pascal Hitzler, Kansas State University, for a panel discussion moderated by Cogan Shimizu. Driven by audience questions, the discussion mostly evolved around broad questions related to neurosymbolic artificial intelligence and cognitive science, the general state of the art in AI, and projections of future developments. Twelve short presentations by symposium attendees on recent technical work furthermore provided a wider perspective on the multitude and variety of ongoing research activities in KGOE.
Breakout groups were formed based on themes suggested by participants. They discussed the relations between LLMs and Cognitive Science; technical approaches to making use of LLMs (e.g., prompt engineering, fine-tuning, RAG) and more generally neurosymbolic methods; and the state of the art, and future projections, of LLM-assisted ontology modeling, ontology alignment, ontology population, and entity disambiguation.
There was broad positive feedback by the participants, and the closing session included a discussion of possible avenues for subsequent engagement.
Hande McGinty, Kansas State University, represented the symposium at the plenary session. The symposium cochairs were Pascal Hitzler, Andrea Nuzzolese, Catia Pesquita and Cogan Shimizu. This report was written by Cogan Shimizu and Pascal Hitzler.
Machine Intelligence for Equitable Global Health (S5)
The AAAI 2024 Fall Symposium on Machine Intelligence for Equitable Global Health (MI4EGH) brought together researchers and practitioners to explore the transformative potential of AI in advancing global health with a focus on equity, ethics, and innovation. The symposium featured 14 accepted presentations, 11 invited talks, and a panel discussion, providing a comprehensive overview of the challenges and opportunities in applying AI to diverse healthcare challenges. Topics included infectious disease prediction, cancer diagnostics, speech disorder screening, antibiotic resistance monitoring, ethics of AI, and more. The discussions emphasized fairness in AI models, the integration of ethical frameworks such as feminist perspectives and bioethics, and the incorporation of public participation to build trustworthy systems. Panels and presentations delved into funding opportunities, technical advancements, and the importance of interdisciplinary collaboration to establish equitable AI infrastructures for the future of global health.
The symposium featured a range of presentations that highlighted the diverse applications of machine intelligence in advancing global health. Dr. Jeffrey Townsend, a professor at the Yale School of Public Health, presented his research on managing COVID-19 as it transitions to an endemic phase. His work investigated the durability of vaccine-induced and natural immunity, as well as seasonal patterns, offering practical tools for shaping global health policies. Dr. Liqing Zhang, a professor at Virginia Tech, shared her research on monitoring antibiotic resistance through wastewater surveillance. Dr. Zhiyong Lu, a senior investigator at the National Library of Medicine, highlighted flaws of current large language models in interpreting medical images. Dr. Lu also introduced TrialGPT, an AI framework leveraging large language models to streamline patient-to-clinical-trial matching. Dr. Amarda Shehu, a professor at George Mason University, presented her comparative study on health equity in AI policies across international, national, and sub-national levels. Dr. Stephen Sodeke, a bioethicist at Tuskegee University, addressed the ethical challenges and opportunities of AI and machine learning in global health. His presentation emphasized justice-informed frameworks, community engagement, and culturally sensitive AI design as critical to addressing structural health inequities. Dr. Sodeke called for collaborative efforts to deploy AI ethically, leveraging its potential to achieve global health equity. Dr. Philippe Giabbanelli, a professor at Old Dominion University, discussed emerging opportunities in combining machine intelligence and simulation models to support the identification and evaluation of public policies on mental health. Dr. Soumya Banerjee, a researcher at the University of Cambridge, discussed the transformative role of patients as active contributors to AI development in healthcare, highlighting the importance of engaging patients to co-design AI models, evaluate outcomes, and shape research questions.
Dr. Nick Fisk from the University of Rhode Island demonstrated that evolutionary methods can improve the prediction of clinical features of cancers across diverse populations and address health disparities. Dr. Jun Bai, an assistant professor at the University of Cincinnati, presented her work, “Exploring Latent Space for Generating Peptide Analogs Using Protein Language Models.” Dr. Gangqing Hu, an assistant professor at West Virginia University, presented an application of GPT-4V, a multimodal AI model, in dermoscopic melanoma diagnosis. Dr. Hu’s model, when equipped with few-shot learning setups, demonstrated significant improvements in diagnostic accuracy and exhibited the potential to assist medical trainees with color vision deficiencies, fostering inclusivity and equity in medical education.
Moderated by Dr. Giabbanelli, a panel discussion featured Dr. Goli Yamini (NSF), Mr. Rory McLean (Aderas), Dr. Ritambhara Singh (Brown University), and Dr. Soumya Banerjee (University of Cambridge). The discussion explored strategies for impactful research, emphasizing the importance of interdisciplinary collaboration, foundational research, and effective proposal writing for programs like NSF’s Smart and Connected Health.
Dr. Hong Qin (Old Dominion University), Dr. Letu Qingge (North Carolina A&T State University), Dr. Jude Kong (York University), and Dr. Frank Liu (Old Dominion University) organized this symposium. This report was written by Hong Qin.
Unifying Representations for Robot Application Development (S6)
The second AAAI Fall Symposium on Unifying Representations for Robot Application Development (UR-RAD) was held on November 7-9, 2024, in Arlington, Virginia. The symposium featured four invited speakers, sixteen paper presentations, one poster session, multiple breakout discussions, an extended discussion about lessons learned, and a feedback session to help plan the future of UR-RAD.
Broadly, the focus of UR-RAD is on how roboticists and artificial intelligence (AI) researchers use formal languages and computational abstractions to represent robot tasks, behaviors, and social interactions. The goals of UR-RAD are, therefore, to categorize current trends in representations for robot application development, identify opportunities for adopting new representations, and identify areas where representational standardization versus representational diversity would be beneficial. In this year’s symposium, UR-RAD 2024 additionally aimed to foster collaboration between a diverse range of institutions and identify how to make an impact beyond the AAAI Fall Symposium Series.
In line with the core objectives of UR-RAD 2024, and in order to include a wider range of stakeholders involved in the development and application of robotic technologies, the organizers sought to engage a diverse range of speakers and presenters from fields complementary, even if not necessarily directly related, to robotics. UR-RAD 2024 additionally included organizers from government, industry, academia, and non-profit institutions.
The first day of UR-RAD 2024 featured invited talks from two roboticists, Dr. Siddharth Srivastava (Arizona State University) and Dr. Cynthia Matuszek (University of Maryland, Baltimore County). Dr. Srivastava discussed his group’s work on learning abstractions for robot planning, while Dr. Matuszek presented her group’s work on grounding natural language representations. The first day also featured eight paper presentations on human-robot interaction and robotic perception and motion and concluded with a poster and demo session.
The second day of UR-RAD featured invited talks from the AI and software engineering communities, given by Dr. Jamie Macbeth (Smith College) and Dr. Brittany Johnson-Matthews (George Mason University). Dr. Macbeth spoke about representing natural language statements using conceptual dependencies, while Dr. Johnson-Matthews spoke about fostering inclusivity via community-driven software engineering. After each talk, the UR-RAD organizers engaged the speaker in an informal discussion session, which was well received by attendees. The second day also featured eight paper presentations on robot learning, planning, and application development interfaces and paradigms. The day concluded with the best paper award, given to Charlie Street et al. for their work titled “Towards a Verifiable Toolchain for Robotics” in recognition of the strength of their research and its relevance to the symposium.
The third day of UR-RAD was dedicated to distilling key takeaways and synthesizing lessons learned from the preceding days. Attendees engaged in in-depth discussions on several critical topics, including the notion of “unity” in robot application development. Rather than referring to a single, unifying representation, ‘unity’ was defined by attendees as a shared understanding among stakeholders, namely robot designers, developers, researchers, and end users. Additionally, attendees highlighted the ambiguity surrounding the concept of “representation.” A further topic of discussion explored the interplay between user-facing versus expert representations and how each must work together to achieve effective robot application development.
The third day of the symposium also focused on gathering feedback to inform future iterations of UR-RAD. Attendees praised the diversity of talks and paper presentations as a major strength of the symposium. In addition to expressing interest in attending future UR-RAD symposia, many attendees expressed a strong desire to grow the UR-RAD community. To help plan future symposia, the UR-RAD organizers engaged the community for feedback. Overwhelmingly, the UR-RAD community desires more direct interaction with invited speakers, such as through panel sessions, involving the speakers in the creation of breakout discussion topics, and asking speakers to prepare sets of questions to ask attendees. The informal discussion with Dr. Macbeth and Dr. Johnson-Matthews on the second day was cited as a particularly successful example of speaker engagement. Attendees also shared ideas about how UR-RAD can engage with other research communities that are stakeholders in the robot application development process.
David Porfirio, Saad Elbeleidy, Ruchen Wen, Laura Stegner, Ross Mead, Laura M. Hiatt, Mark Roberts, and Jason Wilson served as co-organizers for UR-RAD. This report was written by David Porfirio.
Using AI to Build Secure and Resilient Agricultural Systems: Leveraging AI to Mitigate Cyber, Climatic and Economic Threats in Food, Agricultural, and Water Systems (S7)
The increasing frequency of threats to Food, Agriculture, and Water (FAW) systems has heightened the need to develop more resilient and secure underpinnings for the systems that support these critical sectors. AI can make significant contributions to this effort by detecting, predicting, analyzing, and mitigating threats, thus offering novel, robust approaches to a more secure and resilient FAW sector for the world. Threats to these systems can be climatic in nature, such as extreme weather, floods, and droughts, or economic, such as the impacts of trade policies and supply chain issues; more recently, cybersecurity challenges have also become evident. Recent developments in data, sensors, and precision technologies have accelerated their adoption in the FAW sector. However, these newly adopted cyber-physical systems also present additional cybersecurity challenges. No formal report was filed by the organizers for this symposium.
Authors
Bertrand Braunschweig is the scientific coordinator of the Confiance.ai community in France.
Professor Sašo Džeroski is head of the Department of Knowledge Technologies at the Jožef Stefan Institute in Ljubljana, Slovenia.
Pascal Hitzler holds the Lloyd T. Smith Creativity in Engineering Chair and is Director of the Center for Artificial Intelligence and Data Science in the Department of Computer Science, Kansas State University, Manhattan, Kansas.
Brian Jalaian is an associate professor in the Department of Computer Science at the University of West Florida, USA.
Sebastian Mežnar is a PhD student at the Jožef Stefan International Postgraduate School and a researcher at the Department of Knowledge Technologies at the Jožef Stefan Institute.
Nina Moorman is a Ph.D. student in the Interactive Computing Department at the Georgia Institute of Technology. Pragathi Praveena is a Postdoctoral Fellow at the Robotics Institute at Carnegie Mellon University.
David Porfirio is a computer scientist at the Navy Center for Applied Research in Artificial Intelligence, U.S. Naval Research Laboratory.
Dr. Hong Qin is an Associate Professor in the School of Data Science and the Department of Computer Science at Old Dominion University.
Cogan Shimizu is an Assistant Professor at the Department of Computer Science and Engineering at Wright State University, Dayton, Ohio.