Reports of the Workshops Held at the 2025 AAAI Conference on Artificial Intelligence
By Nitay Alon, Daniela Annunziata, Djallel Bouneffouf, Jill Burstein, Marzia Canzaniello, Vinay K. Chaudhri, Aryan Deshwal, Seyed A. Esmaeili, Baihan Lin, Zitao Liu, Yung-Hsiang Lu, Debshila Basu Mallick, Andrew M. Olney, Ryota Takatsuki, Pengyang Wang, Simon Woodhead and Qin Yang
The Workshop Program of the Association for the Advancement of Artificial Intelligence’s 39th Conference on Artificial Intelligence (AAAI-25) was held in Philadelphia, Pennsylvania, February 25 – March 4, 2025. There were 49 workshops in the program: A Translational Institute for Knowledge Axiomatization; Advancing Artificial Intelligence through Theory of Mind (ToM4AI): Bridging Human Cognition and Artificial Intelligence; AI for Public Missions; AI for Social Impact: Bridging Innovations in Finance, Social Media, and Crime Prevention; AI for Urban Planning; AI Governance: Alignment, Morality, and Law; AI to Accelerate Science and Engineering; AI4EDU: AI for Education: Tools, Opportunities, and Risks in the Generative AI Era; Artificial Intelligence for Cyber Security (AICS); Artificial Intelligence for Music; Cooperative Multi-Agent Systems Decision-Making and Learning: Human-Multi-Agent Cognitive Fusion; Deployable AI Workshop; Economics of Modern ML: Markets, Incentives, and Generative AI; GoodData – Preparing Good Data for Generative AI: Challenges and Approaches; Innovation and Responsibility for AI-Supported Education; MARW: Multi-Agent AI in the Real-World Workshop; Planning in The Era of Large Language Models; Post-Singularity Symbiosis: Preparing for a World with Superintelligence; Preventing and Detecting LLM Generated Misinformation; Privacy-Preserving Artificial Intelligence; Quantum Computing and Artificial Intelligence (QC+AI); Web Agent Revolution: Enhancing Trust and Enterprise-Grade Adoption Through Innovation; Imageomics: Discovering Biological Knowledge from Images Using AI; Workshop on Datasets and Evaluators of AI Safety; Workshop on Document Understanding and Intelligence; Workshop on Multi-Agent Path Finding; Advancing Foundation Models to Transform Biological Research; Advancing LLM-Based Multi-Agent Collaboration; AI Agent for Information Retrieval: Generating and Ranking; AI4Research: Towards a Knowledge-grounded Scientific Research Lifecycle; Artificial Intelligence for Time Series Analysis (AI4TS): Theory, Algorithms, and Applications; Artificial Intelligence for Wireless Communications and Networking (AI4WCN); Artificial Intelligence with Causal Techniques; Bridging the Gap Between AI Planning and Reinforcement Learning (PRL); CoLoRAI – Connecting Low-Rank Representations in AI; Computational Jobs Marketplace; DEFACTIFY 4.0 – Workshop Series on Multimodal Fact-Checking and Hate Speech Detection; FLUID: Federated Learning for Unbounded and Intelligent Decentralization; Generalization in Planning; Workshop and Challenge on Anomaly Detection in Scientific Domains; Knowledge Graphs for Health Equity, Justice, and Social Services; Large Language Model and Generative AI for Health; Machine Learning for Autonomous Driving; MALTA: Multi-Agent Reinforcement Learning for Transportation Autonomy; Neural Reasoning and Mathematical Discovery — An Interdisciplinary Two-Way Street; Open-Source AI for Mainstream Use; Scalable and Efficient Artificial Intelligence Systems; Towards Knowledgeable Foundation Models; and Workshop on Health Intelligence (W3PHIAI-25). This report contains summaries of the workshops, submitted by some, but not all, of the workshop chairs.
A Translational Institute for Knowledge Axiomatization (W1)
The workshop explored the need, interest, and feasibility of setting up a Translational Institute for Knowledge Axiomatization (TIKA). TIKA is envisioned to create an open knowledge resource and serve as a hub for research, education, and training on knowledge representation and knowledge engineering.
Over 50 AI researchers convened at the workshop over two days. The discussions focused on different aspects of creating an open knowledge resource including foundational knowledge, automated reasoning, knowledge curation, education on knowledge axiomatization, and evaluation of outcomes.
The opening discussion confirmed that the idea of curated knowledge, that is, knowledge captured in an expressive formal language that can be explicitly examined and verified by humans, is compelling. It must, however, be situated in the modern context of AI. Such a resource should address the limitations of existing generative AI approaches; bridge between high-level knowledge representation and low-level sensor inputs for diverse AI applications, such as robotic planning; and address societal needs such as combating misinformation, fighting disease, and designing new drugs.
Formalizing foundational knowledge, which involves representing abstract concepts like time, space, actions, and causality, is crucial for the reusability and applicability of knowledge resources. Much prior work exists that can be leveraged for this purpose, and methodologies exist for creating new foundational knowledge. Long-term challenges involve addressing the multi-modal nature of real-world problems, bridging the gap between high-level action libraries and low-level robotic planning, and improving the connection between foundational knowledge and the abstract aspects of natural language.
Automated reasoning encompasses computational processes ranging from deductive reasoning with formal proofs to inductive and analogical reasoning, which may lack formal proofs. Regardless of the type, the goal of automated reasoning is to study formal properties like soundness, completeness, and tractability. The discussion covered automatically discovering axioms, addressing challenges in modeling human reasoning (which can be flawed), reasoning at scale to handle the vastness of scientific knowledge, and incorporating context into the reasoning process to ensure correctness.
Knowledge curation, the process of assembling application-specific knowledge, relies on foundational knowledge, application data, and interaction with domain experts. Human oversight is crucial for knowledge curation. Even for large-scale use cases, such as web search, that require durable semantics, we must rely on humans to specify the design. Automation, particularly with LLMs, can aid knowledge curation. LLMs can provide explicit knowledge, extract knowledge from domain experts through dialog, and mediate between humans and complex knowledge bases.
The current state of education in knowledge axiomatization has numerous deficiencies. Knowledge representation and reasoning courses are offered by fewer than 5% of US computer science departments, and there is a decline in faculty teaching these subjects worldwide. Current teaching practices often involve outdated materials, a lack of emphasis on the importance of knowledge representation, and insufficient integration of logic into other computer science courses. To improve this, modular teaching materials must be developed that emphasize the practical applications of knowledge representation; logic programming should be taught in integration with courses on database management systems; and logic and set theory should be taught in high schools.
The current evaluation practice in AI faces two main challenges: proxy failure and training to the test. Proxy failure occurs when standardized tests used to evaluate AI systems fail to accurately predict their performance in real-world applications. Training to the test refers to the issue where AI programs are optimized to perform well on specific tests without genuinely possessing the abilities they are intended to measure. These problems can be addressed through three alternative evaluation methods: expert interviews of an AI system, evaluation in a virtual environment, and examination of the reasoning steps and knowledge the system uses to produce its results.
The workshop concluded with a discussion on how the development of the open knowledge resource (OKR) should be organized. The participants emphasized that OKR development could be situated in a virtual institute such as TIKA. TIKA should promote interoperability among different systems by providing documentation, guidance, and a user-friendly environment through a portal similar to Hugging Face. The OKR should complement, not compete with, large language models (LLMs), and its knowledge should be leveraged by LLMs and other applications such as robotic planning and biomedicine. TIKA should provide effective teaching materials with real-world examples and tutorials, and develop public test sets for evaluating system performance. The organizational structure should be a non-profit foundation to ensure sustainability and broad participation, starting perhaps as a virtual institute.
Vinay K. Chaudhri, Chaitan Baru, Michael Genesereth, and Michael Witbrock served as the workshop chairs. This report was authored by Vinay K. Chaudhri.
Advancing Artificial Intelligence through Theory of Mind: Bridging Human Cognition and Artificial Intelligence (W2)
The Theory of Mind (ToM) for AI workshop was held as part of AAAI-25 in Philadelphia. Motivated by the renewed interest in ToM within AI, the workshop’s main goal was to build bridges between the multiple scientific communities actively researching ToM, with a clear aim to blend these communities and encourage collaboration across disciplines. The workshop hosted four keynote speakers from cognitive, computer, and robotic science, held multiple short-form presentations, and hosted four poster sessions.
The AAAI-25 Theory of Mind (ToM) for AI workshop was a multidisciplinary event aimed at connecting researchers from a wide range of domains, such as psychology, computational neuroscience, economics, and AI, all working on various aspects of ToM. The workshop was motivated by the active research on ToM in artificial general intelligence. While ToM was explored previously by the AI community, current research adopts new models and drifts away from past findings, risking repetition of past misconceptions and failure to take advantage of the theoretical advances in cognitive science. Conversely, cognitive scientists can gain much from incorporating modern AI tools into their research by using AI as a synthetic model organism.
The workshop featured four keynote speakers: Rebecca Saxe (MIT), Harmen de Weerd (University of Groningen), Sheila McIlraith (University of Toronto), and Joshua Tenenbaum (MIT). Each speaker reviewed their past research and conclusions, offering valuable lessons to the ToM community. Prof. Saxe argued that ToM is a causal model, suitable for complex tasks that extend beyond action and false-belief prediction, and that the AI community should model it as such. Dr. de Weerd reviewed past research, offering an economically inspired framework to explain recursive reasoning in humans, tested in agent-based simulations. Prof. McIlraith presented her active work on the integration of ToM into AI systems to improve communication and human-AI interaction. Finally, Prof. Tenenbaum discussed recent progress in modeling human ToM using probabilistic programming and presented future directions based on artificial neural models.
The workshop included eight short poster talks and about 40 posters. These sessions enabled researchers from different research areas and across all levels of seniority to share their current findings with the community and receive meaningful feedback. The posters were grouped into four sessions: Machine Theory of Mind Frameworks and Benchmarks, Human-AI Interaction and Social Reasoning, Cognitive and Psycholinguistic Perspectives, and Theory of Mind in Multimodal and Safety-Critical Domains. The Machine Theory of Mind Frameworks and Benchmarks session focused on formal models of artificial ToM. The Human-AI Interaction and Social Reasoning session mainly targeted the role of ToM in successful and safe human-AI interaction. The Cognitive and Psycholinguistic Perspectives session focused on formal evaluation and modeling of AI models’ ability to emulate ToM. The last session, Theory of Mind in Multimodal and Safety-Critical Domains, offered selected works discussing ToM-augmented LLMs and the evaluation of ToM in LLMs.
Nitay Alon, Joseph M. Barnby, Reuth Mirski and Stefan Sarkadi co-organized this workshop. The report was written by Nitay Alon.
AI for Public Missions (W3)
The AI for Public Missions workshop aimed to convene a community of scientists, engineers, and practitioners with public missions, to better leverage AI towards challenging problems of societal importance. This included the efforts of federal, state, and local governments, as well as non-partisan non-governmental and community organizations.
As governments continued to leverage AI to achieve institutional goals, numerous hurdles were certain to emerge that limited its successful application. Publicly funded research should ideally balance support for topics that address the challenges hindering AI from serving public needs with support for commercial and industry settings.
This event created a unifying venue for stakeholders to help understand real-world challenges and advance potential solutions. It aimed to span use-inspired foundational research, applied research, and case studies that documented successful processes by which AI had been deployed and responsibly governed. No formal report was filed by the organizers for this workshop.
AI for Social Impact: Bridging Innovations in Finance, Social Media, and Crime Prevention (W4)
The rapid expansion of artificial intelligence (AI) solutions across various sectors opened up unprecedented opportunities and challenges, particularly in the realms of finance, social media, and crime prevention. The “AI for Social Impact: Bridging Innovations in Finance, Social Media, and Crime Prevention” workshop aimed to explore the transformative potential of AI in fostering socially responsible practices and ensuring ethical standards across these interconnected domains.
This workshop delved into the latest advancements in AI technologies that were driving social impact in the financial services industry, including the integration of Environmental, Social, and Governance (ESG) factors into investment decisions, combating financial crimes, and promoting financial inclusion. Additionally, the workshop addressed the critical role of AI in safeguarding social media platforms from manipulation and misinformation, as well as its applications in crime prevention and public safety.
Key themes of the workshop included responsible AI practices, safety protocols, and ethical considerations, with a particular focus on model safety and the prevention of unintended consequences such as bias in AI-driven decision-making. Through a series of keynotes, panels, invited talks, paper presentations, and poster sessions, participants had the opportunity to engage in cross-disciplinary discussions, share innovative ideas, and collaborate on solutions to current challenges. No formal report was filed by the organizers for this workshop.
AI for Urban Planning (W5)
The Workshop on “AI for Urban Planning” was held on March 3, 2025, at the 39th Annual AAAI Conference on Artificial Intelligence in Philadelphia, USA. Urban planning has historically faced challenges in balancing efficiency, equity, and sustainability, often constrained by static models and delayed decision-making. Recent advances in artificial intelligence, including generative AI, digital twin technology, large language models (LLMs), and machine learning, have introduced transformative approaches to address these challenges. This workshop served as the first interdisciplinary platform for researchers in AI, urban planning, public policy, and social sciences to share insights and collaboratively explore AI’s role in shaping future cities.
The workshop began with a keynote by Professor Zhong-Ren Peng, who introduced the novel concept of “AI as a co-creator” in urban planning. Peng emphasized AI’s evolving role from a data analysis tool to a collaborative partner capable of enhancing livability, equity, and sustainability. Using the Tampa Downtowner case study and examples like Spacemaker and Polis, he highlighted AI’s potential to accelerate planning processes while amplifying community engagement. Peng’s call for a “triadic collaboration” framework—where planners, AI, and communities work together—underscored the need for transparency and inclusivity in addressing urban challenges.
The second keynote by Professor Yanjie Fu showcased the integration of generative AI with multisource urban data. Fu demonstrated how adversarial learning, conditional variational models, and reinforcement learning enable dynamic land-use optimization, real-time adaptation to demographic shifts, and text-to-planning capabilities. His work on an “automated urban planner” challenged traditional static planning models, offering a closed-loop system of simulation, evaluation, and optimization supported by digital twin technology.
The workshop featured six oral presentations covering a broad spectrum of AI-driven urban planning applications:
- AI and Urban Science Symbiosis: Professor Xinyue Ye’s team presented a framework leveraging digital twin technology and multimodal generative AI to address urban challenges such as flood management and campus planning. Their co-learning strategy enhances AI’s credibility in urban contexts, addressing biases and static planning limitations.
- AI-Driven E-Scooter Safety Policy Analysis: Professor Ming Zhang’s research utilized GPT-4o and latent Dirichlet allocation (LDA) topic models to automate policy analysis, offering a scalable solution for standardizing safety regulations while addressing AI “hallucinations” through rigorous validation (a toy sketch of such topic mining follows this list).
- Commuting Imbalance and Spatial Mismatch: Professor Qisheng Pan’s analysis quantified commuting inequities faced by marginalized communities, advancing AI’s role in designing equitable policies such as targeted housing and transportation initiatives.
- Urban Regeneration Classification Model: Dr. Yang Yang introduced a Siamese Network-based model for classifying urban regeneration activities, providing planners with granular insights into urban dynamics for sustainable renewal strategies.
- LLM-ABM Traffic System Analysis Framework: Professor Yafeng Yin’s integration of large language models with agent-based modeling offered a realistic simulation tool for traffic systems, addressing complex behaviors like path selection and time optimization.
- Electricity Outage Restoration Time Prediction: The Exelon team’s longitudinal table Transformer model demonstrated AI’s potential in enhancing grid resilience and recovery, a critical application in disaster management.
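To make the topic-modeling step referenced in the e-scooter policy item above concrete, here is a minimal sketch of LDA topic mining over policy text. The toy policy snippets, the scikit-learn calls, and the two-topic choice are illustrative assumptions, not the presenters' actual pipeline.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus standing in for e-scooter policy text (illustrative only).
policies = [
    "riders must wear helmets and obey speed limits on shared paths",
    "scooters are prohibited on sidewalks and must park in designated zones",
    "a speed limit of 15 mph applies on bike lanes and shared paths",
    "operators must provide helmets and enforce parking in designated zones",
]

# Bag-of-words counts, then a two-topic LDA fit.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(policies)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Top words per topic, e.g., a helmet/speed theme versus a parking theme.
vocab = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [vocab[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
```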
The workshop concluded with a panel discussion on “AI’s Role in Urban Planning: Opportunities and Challenges.” Panelists emphasized AI’s transformative potential in traffic optimization and resource allocation while cautioning against algorithmic bias and data privacy concerns. They stressed the need for transparent algorithms, community-centric approaches, and interdisciplinary collaboration to ensure AI serves as a tool for equity and efficiency rather than exacerbating inequalities.
Pengyang Wang, Steven Jige Quan, Dongjie Wang, Pengfei Wang, Yanjie Fu, Xinyue Ye, and Hui Xiong served on the organizing committee. This report was prepared by Pengyang Wang.
AI Governance: Alignment, Morality, and Law (W6)
The AI Governance and Policy Workshop (W6) convened leading experts from academia, industry, and policy organizations to discuss the evolving landscape of AI governance. The workshop explored regulatory frameworks, ethical challenges, and technical solutions for ensuring AI safety and accountability. Panel discussions, keynote addresses, and paper presentations provided insights into policy recommendations, transparency mechanisms, and the role of public-private partnerships in AI governance.
The AI Governance and Policy Workshop (W6) brought together researchers, policymakers, and industry professionals to examine the governance challenges posed by AI systems. The event featured keynote presentations, panel discussions, and paper sessions covering various aspects of AI policy, ethical considerations, and regulatory approaches. A recurring theme throughout the workshop was the need for interdisciplinary collaboration to develop effective governance frameworks that balance innovation and societal well-being.
The main invited talk was delivered by Dr. Christopher Yoo (University of Pennsylvania), who provided a comprehensive analysis of AI governance challenges from a legal and policy perspective. Dr. Yoo explored the implications of AI-driven decision-making for regulatory compliance, emphasizing the need for adaptive legal frameworks that account for the rapid evolution of AI technologies. His talk underscored the importance of balancing innovation with safeguards that ensure fairness, transparency, and accountability.
The second invited talk was given by Dr. Kush R. Varshney (IBM Research), who discussed the role of trustworthy AI in governance frameworks. Dr. Varshney presented methods for ensuring fairness, robustness, and transparency in AI systems, with an emphasis on explainable AI models. He highlighted ongoing efforts within IBM Research to develop AI governance tools that enable organizations to implement responsible AI principles effectively.
The third invited talk was delivered by Dr. Sara Migliorini (University of Macau), who explored the intersection of AI governance and data protection laws. Dr. Migliorini analyzed the regulatory landscape surrounding AI-generated decisions and discussed legal mechanisms to ensure compliance with data privacy laws such as GDPR. Her talk emphasized the necessity of harmonizing AI governance frameworks with existing legal structures to promote accountability and ethical AI deployment.
Panel discussions provided diverse perspectives on AI governance. A panel featuring the keynote speakers debated the effectiveness of voluntary AI ethics guidelines versus legally binding regulations. While some panelists highlighted the flexibility of industry-led initiatives, others argued that enforceable regulations are essential to prevent misuse and ensure public trust.
The paper sessions showcased research on various aspects of AI governance. One notable paper by Dr. Ben Wagner (TU Delft) examined the role of algorithmic impact assessments in
regulatory compliance, proposing a standardized framework for evaluating AI systems. Another paper, presented by Dr. Jessica Fjeld (Harvard Berkman Klein Center), analyzed the geopolitical implications of AI governance, emphasizing the divergence in regulatory approaches between the European Union, the United States, and China. Additionally, a study by Dr. Brent Mittelstadt (University of Oxford) explored the ethical dimensions of AI decision-making, focusing on bias mitigation strategies and the trade-offs between fairness and accuracy.
A recurring concern throughout the workshop was the need for international cooperation in AI governance. Participants discussed the potential for global regulatory alignment, drawing comparisons to existing frameworks such as the General Data Protection Regulation (GDPR) and the OECD AI Principles. Many attendees advocated for the establishment of an international AI governance body to facilitate cross-border policy coordination.
The workshop concluded with a forward-looking discussion on the future of AI governance. Participants acknowledged the rapid pace of AI advancements and the necessity for adaptive regulatory approaches. Several speakers emphasized the importance of continuous stakeholder engagement, interdisciplinary research, and empirical studies to inform evidence-based policymaking.
Baihan Lin, Asim Munawar, Lauri Goldkind, and Djallel Bouneffouf served as cochairs of this workshop. This report was written by Djallel Bouneffouf and Baihan Lin.
AI to Accelerate Science and Engineering (W7)
This workshop brought together researchers from artificial intelligence and diverse scientific domains to address new challenges in accelerating scientific discovery and engineering design. This was the fourth iteration of the workshop, with the theme of AI for Biological Sciences, following the previous three years’ themes of AI for Chemistry, Earth Sciences, and Materials/Manufacturing, respectively. The workshop has been growing significantly every year and saw double the number of papers presented and attendees this year. The program featured presentations from invited speakers, a panel session, and poster sessions covering a wide range of AI/ML methods and scientific/engineering applications.
Scientists and engineers in diverse application domains are increasingly relying on computational and artificial intelligence (AI) tools to accelerate scientific discovery and engineering design. AI, machine learning, and reasoning algorithms are useful for building models and making decisions toward this goal. We have already seen several success stories of AI in applications such as materials discovery, protein structure prediction, ecology, wildlife conservation, and molecule design optimization. This workshop aimed to bring together researchers from AI and diverse science/engineering communities to achieve the following goals: (1) identify and understand the challenges in applying AI to specific science and engineering problems; (2) develop, adapt, and refine AI tools for novel problem settings and challenges; and (3) build community and provide education to encourage collaboration between AI researchers and domain experts.
This was the fourth iteration of the workshop, and each year it focuses on a specific domain. The themes of the last three highly successful workshops were “AI for Chemistry,” “AI for Earth and Environmental Sciences,” and “AI for Materials and Manufacturing.” This year’s theme, “AI for Biological Sciences,” featured invited speakers and panelists from both the AI and biology fields. The workshop is growing significantly, with twice the number of paper submissions and acceptances compared to previous years, demonstrating the increasing excitement and interest in applying AI to scientific discovery and engineering design.
The invited speakers’ presentations centered around several key themes:
- Foundation models for therapeutic design
- Generative models for drug discovery
- Lab-in-the-loop antibody design with deep learning and Bayesian optimization (a toy sketch of such a loop follows this list)
- Promise and challenges of deep learning in genomics
- Importance of causal inference and causal discovery in biological applications
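To ground the lab-in-the-loop theme referenced above, the sketch below runs one toy Bayesian optimization loop: fit a Gaussian-process surrogate to past experiments, choose the next experiment by expected improvement, measure, and repeat. The lab_assay objective, the RBF kernel, and the one-dimensional design grid are all illustrative assumptions, not any speaker's actual system.

```python
import numpy as np
from scipy.stats import norm

def lab_assay(x):
    """Stand-in for a slow, expensive wet-lab measurement (e.g., binding affinity)."""
    return -(x - 0.6) ** 2 + 0.05 * np.sin(20 * x)

def gp_posterior(X, y, Xs, length=0.1, noise=1e-6):
    """Gaussian-process posterior mean/std with an RBF kernel."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)
    K = k(X, X) + noise * np.eye(len(X))
    sol = np.linalg.solve(K, k(X, Xs))
    mu = sol.T @ y
    var = 1.0 - np.sum(k(X, Xs) * sol, axis=0)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 3)            # initial "experiments"
y = lab_assay(X)
grid = np.linspace(0, 1, 200)       # candidate designs

for _ in range(10):                 # lab-in-the-loop iterations
    mu, sd = gp_posterior(X, y, grid)
    z = (mu - y.max()) / sd
    ei = (mu - y.max()) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    x_next = grid[np.argmax(ei)]    # propose the next experiment
    X, y = np.append(X, x_next), np.append(y, lab_assay(x_next))

print(f"best design: x={X[y.argmax()]:.3f}, assay={y.max():.3f}")
```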
The invited speakers also discussed their views on open challenges in the broader field. The panel discussion addressed important questions regarding challenges and opportunities with generative models in AI for biological sciences, how to establish effective collaborations between domain scientists/engineers and AI experts, and safety considerations for AI systems in the scientific context.
The papers presented at the workshop covered wide-ranging application areas including materials science, chemistry, biological sciences, agricultural sciences, physics, manufacturing, and energy systems.
We collected feedback from presenters and attendees after the workshop and it was overwhelmingly positive. Participants highlighted their great experience, noting interesting presentations (in terms of topics, content, and delivery), the high quality of the panel discussion, and excellent poster presentations.
The workshop organization was led by Aryan Deshwal (University of Minnesota) and co-organized with Jana Doppa (Washington State University), Syrine Belakaria (Stanford University), Vipin Kumar (University of Minnesota), and Carla Gomes (Cornell University).
AI4EDU: AI for Education: Tools, Opportunities, and Risks in the Generative AI Era (W8)
This half-day workshop convened researchers, educators, and industry experts to explore the transformative potential and critical challenges of generative AI in education (AI4EDU). The workshop provided a platform for in-depth discussions on the latest innovations and ethical considerations in this rapidly evolving field.
The workshop commenced with opening remarks, setting the stage for an afternoon of insightful presentations and collaborative discussions. The first keynote, delivered by Dr. Jill Burstein (Duolingo), focused on “Responsible AI for Leverage Points in Digital Assessment.” Dr. Burstein emphasized the importance of a structured ecosystem approach to ensure validity, fairness, and equity in AI-driven assessment systems. She highlighted the critical leverage points of design, measurement, security, and responsible AI, providing a roadmap for embedding ethical AI practices into digital assessment frameworks.
Following Dr. Burstein’s keynote, Dr. Maciej Pankiewicz (University of Pennsylvania) delivered the second keynote, titled “Generative AI in Education: Enhancing Learning, Feedback, and Research with LLMs.” Dr. Pankiewicz shared practical experiences with Large Language Models (LLMs) in educational settings, showcasing their potential to enhance instruction, automate feedback, and support research. He discussed applications ranging from virtual teaching assistants to automated assessment tools, while also addressing the challenges of integrating LLMs into education.
The workshop featured two poster sessions, showcasing a diverse range of research from both established researchers and AIED Mini Doctoral Consortium participants. The poster sessions highlighted innovations in generative AI for language learning, e-learning in virtual reality and games, mathematics education, programming education, and text adaptation, as well as general pedagogy and educational research applications. The diversity of submissions reflected the breadth of research in AI4EDU and the growing interest in generative AI’s role in educational settings.
The workshop underscored the importance of addressing ethical considerations, data privacy, and the need for robust evaluation methods when integrating AI tools into education. Discussions emphasized the need for collaboration between researchers, educators, and policymakers to ensure the responsible development and deployment of AI in education.
The organizers for the workshop were Zitao Liu (Jinan University), Andrew M. Olney (University of Memphis), John Stamper (Carnegie Mellon University), Tianqiao Liu (TAL Education Group), Qingsong Wen (Squirrel AI Learning), Jiliang Tang (Michigan State University), and Joleen Liang (Squirrel AI Learning). This report was authored by Zitao Liu and Andrew M. Olney.
Artificial Intelligence for Cyber Security (AICS) (W9)
The workshop focused on the application of artificial intelligence (AI) to problems in cyber security. While AI had shown tremendous promise in enhancing human decision-making in cyber security and even automating critical security functions, the security of these AI-enabled systems themselves remained a vulnerable frontier. The workshop addressed technologies and their applications in security, such as machine learning, game theory, natural language processing, knowledge representation, automated and assistive reasoning, and human-machine interaction.
This year, the AICS workshop’s emphasis was on the “Security of AI-enabled Systems,” focusing on the emerging threats targeting these technologies and the advanced techniques needed to safeguard them. Security of AI-enabled systems referred to the strategies, tools, and practices designed to protect them from various threats, including adversarial attacks, data breaches, and model manipulation. No formal report was filed by the organizers for this workshop.
Artificial Intelligence for Music (W10)
This one-day workshop explored the dynamic intersection of artificial intelligence and music. It investigated how AI was transforming music creation, recognition, and education; the ethical and legal implications; and business opportunities. The participants discussed how AI had been changing the music industry and education—from composition to performance, production, collaboration, and audience experience. Participants gained insights into the technological challenges in music and how AI could enhance creativity, enabling musicians and producers to push the boundaries of their art.
The workshop included four invited speakers: (1) Hao-Wen Dong, Assistant Professor in the Performing Arts Technology Department at the University of Michigan. The title was “Generative AI for Music: Challenges and Opportunities”. (2) Zhiyao Duan, Associate Professor in Electrical and Computer Engineering, Computer Science, and Data Science at the University of Rochester. The title was “AI Powered Interactive Music Making”. (3) Kristen Yeon-Ji Yun, Clinical Associate Professor in the Department of Music in the Patti and Rusty Rueff School of Design, Art, and Performance at Purdue University. The title was “Artificial Intelligence for Music Performers”. (4) Ziyu Wang, PhD candidate in Computer Science at the Courant Institute of Mathematical Sciences, New York University. The title was “From Imitation to Creation: When Music AI Truly Understands”. Nine papers (out of 25 submissions) were presented in this workshop. The papers covered a wide range of topics including music generation, musician practice, audio perception and memory, integration of images, sound, and text.
The workshop’s panel discussed many topics related to composition, education, and performance, as well as copyright protection. The participants generally agreed that more effort is needed in creating large datasets. The datasets should include a variety of music across different instruments and styles (e.g., classical, jazz, pop) with proper annotations. Also, copyright information should be properly disclosed to protect both the composers who create the music and the researchers who use it.
Yung-Hsiang Lu, Kristen Yeon-Ji Yun, George K. Thiruvathukal and Benjamin Shiue-Hal Chou served as cochairs of this workshop. This report was written by Yung-Hsiang Lu.
Cooperative Multi-Agent Systems Decision-Making and Learning: Human-Multi-Agent Cognitive Fusion (W11)
Many established domains of AI rely on integrated modeling of the cognition of human and AI agents, on collecting and representing knowledge at the human level, and on decision-making processes that lead to physical actions intelligible to, and in cooperation with, humans. Especially in human-robot interaction, many AI and robotics technologies focus on human-robot cognitive modeling, from visual processing to symbolic reasoning and from reactive control to action recognition and learning, all of which support humans and multi-agent systems in cooperatively achieving tasks. The main challenge, however, is efficiently combining human motivations and AI agents’ purposes within a shared architecture and reaching consensus in complex environments and missions. To fill this gap, this workshop brought together researchers from different communities interested in multi-agent systems (MAS) and human-robot interaction (HRI) to explore potential approaches, future research directions, and application domains in human-multi-agent cognitive fusion.
Cooperative MAS research needs cognitive science because it provides a better understanding and more accessible models of individual cognition, on which better models of aggregate processes arising through multi-agent interaction can be built. Specifically, natural agents such as humans usually combine multiple motivations: biological motivations, including physiological, safety, and existence needs; social motivations, such as love and esteem needs; and cognitive motivations, like self-actualization or relatedness and growth needs (Merrick and Maher 2009).
Combined motivation theories include Maslow’s Hierarchy of Needs (Maslow 1958) and Alderfer’s Existence Relatedness Growth (ERG) theory (Alderfer 1972). These combined motivations drive humans to develop various behaviors and strategies, such as self-interest and altruism, to satisfy their diverse needs, and to present different personalities and characteristics in their interactions. As higher-level intelligent creatures, humans have especially complex and diversified needs, such as personal security, health, friendship, love, respect, and recognition.
When humans and AI agents, such as robots, work as a team, organizing their needs and establishing common ground is necessary for human-robot collaboration in complex and uncertain environments (Yang and Parasuraman 2020a, 2024). In the invited speaker sessions, Prof. Katia Sycara (Carnegie Mellon University) discussed modeling trust in human-swarm collaboration, and Prof. Peter Stone (University of Texas at Austin) introduced advances in ad hoc teamwork: multi-agent collaboration without pre-coordination.
Decision-making and learning in human-multi-agent cooperation, in turn, motivate collaboration between MAS and HRI researchers using AI. Related topics include modeling human-multi-agent cognitive fusion; building robust, stable, and reliable cognitive trust networks; and implementing deep reinforcement learning in human-multi-agent interaction.
In interactions between human agents and artificial intelligence agents, as in human-robot interaction, building stable and reliable relationships is of utmost importance for MAS cooperation, especially in adversarial environments and rescue missions (Yang and Parasuraman 2020b, 2021). Prof. Benjamin Kuipers (University of Michigan) discussed the relationship between trust and utility. From the game theory perspective, Prof. Panagiotis Tsiotras (Georgia Institute of Technology) introduced training multi-agent reinforcement learning games with mean-field interactions, and Prof. Kevin Leyton-Brown (University of British Columbia) discussed human-like strategic reasoning via machine learning.
From the cognitive modeling perspective (Sun, Merrill, and Peterson 2001; Sun 2001), embodying the realistic constraints, capabilities, and tendencies of individual agents in their interactions with physical and social environments may provide a more realistic basis for understanding human-multi-agent cooperation. Prof. Sven Koenig (University of California, Irvine) talked about multi-robot systems: ant robots and auction robots.
Another crucial problem is how to build a robust, stable, and reliable cognitive trust network among humans and AI agents, such as trust among robots and between humans and robots, evaluating their performance and status on common ground when they make collective decisions and learn from interactions in complex and uncertain environments. Prof. Maria Gini (University of Minnesota) spoke on the topic “Can I trust my teammates? Are they friends or foes?” Moreover, to explore practical and efficient reinforcement learning methods, Prof. Matthew E. Taylor (University of Alberta) talked about how to design and examine rewards through a multi-agent lens.
One important issue addressed in this workshop is how to model human-multi-agent cognitive fusion from the perspective of individual intrinsic values, such as agent needs and innate values (presented as various expected utilities) (Fishburn et al. 1979; Merrick 2013), in decision-making and learning. The paper “Innate-Values-driven Reinforcement Learning” proposed a new RL model that supports an AI agent’s lifelong development, bridging this gap in traditional RL.
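By way of illustration only, the toy sketch below captures the general idea of innate-values-driven reward design (a hypothetical simplification, not the paper's actual model): the agent's reward blends an extrinsic task reward with intrinsic utilities for its innate needs, weighted by its motivational profile.

```python
# Hypothetical sketch (not the paper's model): blend an extrinsic task reward
# with intrinsic "innate value" utilities weighted by the agent's profile.
def innate_value_reward(extrinsic, need_satisfaction, need_weights):
    """Total reward = task reward + weighted satisfaction of innate needs."""
    intrinsic = sum(w * s for w, s in zip(need_weights, need_satisfaction))
    return extrinsic + intrinsic

# Example step: task reward 1.0; the action satisfies a safety need at 0.8 and
# a curiosity need at 0.2; this agent weighs safety twice as heavily as curiosity.
print(innate_value_reward(1.0, [0.8, 0.2], [0.5, 0.25]))  # 1.0 + 0.40 + 0.05 = 1.45
```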
Fourteen peer-reviewed papers were presented at the workshop, including five oral and nine poster presentations. They covered topics such as MAS reinforcement learning for communication, smart manufacturing, Bayesian trust metrics, multi-agent imperfect-information games, cognitive MAS RL, and innate-values-driven RL. Some were also accepted by the IEEE CogSIMA conference and AI journals.
The recordings, photos, and papers of the workshop are available at the workshop site: https://www.is3rlab.org/aaai25-cmasdl-workshop.github.io
Qin Yang wrote this report. Giovanni Beltrame, Alberto Quattrini Li, and Christopher Amato served as cochairs of this workshop.
Deployable AI Workshop (W12)
Artificial Intelligence (AI) had rapidly evolved into a multifaceted research domain, with recent generative models like Gemini, GPT-4, Claude, and Llama demonstrating remarkable capabilities across diverse tasks. While their potential was immense, real-world deployment required addressing not only important technical challenges but also ethical and societal ones.
This workshop addressed these critical research questions for responsible AI deployment. Specifically, the workshop focused on algorithmic, systemic, and societal considerations to ensure that AI models adhered to rigorous standards for fairness, ethics, explainability, privacy, and security. This workshop highlighted these interdisciplinary and interrelated considerations that impacted the real-world deployability of an AI model, and its responsible usage in a societally useful manner.
The 3rd Deployable AI Workshop (DAI 2025) was held at the AAAI 2025 conference on March 3–4, 2025, with a special focus on the deployability aspects of large language models (LLMs). No formal report was filed by the organizers for this workshop.
Economics of Modern ML: Markets, Incentives, and Generative AI (W13)
The impact of Generative AI (Gen AI) on multi-agent strategic settings is likely to be deep and profound. Gen AI introduces significant challenges and opportunities for many game-theoretic settings. The first AAAI workshop on Economics of Modern ML was organized with the objective of pushing forward this research frontier. The event attracted participants across academia and industry and from various countries.
The workshop consisted of four invited talks. Nicole Immorlica discussed the impact of generative AI on market structure. Aaron Roth discussed tractable agreement protocols. Vahab Mirrokni covered topics at the foundations of Gen AI, including markets and reasoning. Finally, Chi Wang discussed the AG2 open-source framework.
In addition to the invited talks, a total of 13 high-quality papers were accepted. All accepted papers were presented in a poster session. Further, a subset of four papers was selected for 20-minute oral presentations.
Finally, a panel discussion was held at the end of the workshop, covering various timely topics at the intersection of Gen AI and game theory.
This workshop was co-organized by Seyed A. Esmaeili, Mohammad Taghi Hajiaghayi, Renato Paes Leme, Qingyun Wu, and Haifeng Xu. This report was written by Seyed A. Esmaeili.
GoodData – Preparing Good Data for Generative AI: Challenges and Approaches (W14)
Foundation models depend heavily on the data they are trained on. Although self-supervised learning is one of their promises, it is clear that carefully processed datasets lead to better models. While datasets and models are frequently released by the community, data preparation recipes are relatively nascent and not fully open. In this workshop, contributions and collaborations were invited on data preparation recipes for creating and using foundation models and generative AI applications, including (but not limited to) pre-training, alignment, fine-tuning, and in-context learning. Data preparation spans data acquisition, cleaning, processing, mixtures, quality assessments, the value of data, ablation studies, safety, and governance. The workshop emphasized the responsible usage and ethical considerations of data preparation (including human annotations) to address issues of diversity, bias, transparency, and privacy. No formal report was filed by the organizers for this workshop.
Innovation and Responsibility for AI-Supported Education (W15)
The iRAISE 2025 Workshop explored the opportunities, challenges, and ethical implications of using generative AI technologies (GenAI) in education, fostering an understanding of GenAI’s role in shaping the future of education.
This workshop centered on the intersection of AI and education, emphasizing Generative AI (GenAI) and the principles of responsible AI (RAI). Recognizing GenAI’s challenges, such as content hallucination, complex reasoning, bias, and privacy concerns, the workshop aimed to explore both the potential benefits and the inherent challenges of integrating these technologies into educational settings, recognizing the need for ethical guardrails to ensure effective implementation that benefits all educators and learners.
The poster spotlights showcased various AI applications in education. These ranged from a multimodal learning analytics platform that collects biometric and behavioral data to work with a strong emphasis on human-AI collaboration, exemplified by evaluations of fairness in AI-assisted remote proctoring and the ARCHED framework for human-centered instructional design. The evaluation of Large Language Models (LLMs) featured prominently, with research on automated feedback generation for programming and math, and on rethinking benchmarks using Item Response Theory (IRT). Some projects focused on refining LLMs for specific use cases, such as improving automatic essay scoring and Bibliosmia, a system for generating hyper-personalized, consistent stories. Other work explored AI-driven evaluation methods, including assessing instructional support in classrooms, marking open-response assessments, and using AI to evaluate students’ project proposals.
We aimed to understand the current state and future directions of GenAI models for learning. Lisa Wang from Google DeepMind discussed the LearnLM project and their efforts to improve Gemini for learning use cases. Her presentation underscored the importance of grounding research in real-world deployments across various platforms, allowing for rapid feedback and iteration. Wang addressed the challenges of aligning pedagogical goals with user needs and maintaining the core functionality of LLMs while introducing effective tutoring capabilities. Furthermore, Wang presented a comprehensive and hierarchical evaluation strategy encompassing automatic, human, and efficacy studies, demonstrating LearnLM’s progress toward pedagogically aligning an LLM.
Venu Govindaraju, PI of the NSF AI Institute for Exceptional Education at the University at Buffalo, delivered an insightful keynote tracing the historical advancements in AI through present use cases and the potential uses of AI in education, especially for students with speech and language disorders and learning disabilities (e.g., dyslexia, dysgraphia). He highlighted how his team is addressing the limitations of current GenAI models in understanding spatial and contextual cues in order to develop powerful diagnostic and interventional resources for learning disabilities and accurate handwriting processing (e.g., detecting reversed letters in dysgraphia). The tools his team is developing have the potential to impact millions of students in US schools and improve their educational experiences and outcomes.
Lydia T. Liu from Princeton University highlighted the need to shift focus from predictive model accuracy to developing AI systems that genuinely enhance human expertise. Liu introduced a novel causal model of human expertise in AI-assisted interventions, enabling researchers to understand how human expertise impacts student outcomes, how it can be augmented by algorithmic insights, and where it surpasses model predictions by drawing on student features that are not available algorithmically.
A panel discussion featuring Jeremy Roschelle (Executive Director; Digital Promise), Amelia Vance (President; Public Interest Privacy Center), Jeff Knight (Education and Privacy lawyer; Bricker, Graydon, and Associates), and Ravit Dotan (SAS; AI governance advisor) tackled these crucial issues. The panelists discussed the importance of user trust, data privacy, and the need for responsible development and deployment of AI tools in education. They offered practical advice on how researchers and developers can build trust by considering stakeholder concerns, communicating clearly about the “why” behind their work, and engaging with policy and privacy frameworks. The discussion underscored the necessity of addressing the challenges of GenAI, such as privacy concerns, through a proactive and ethical approach.
Finally, Diane J. Litman from the University of Pittsburgh presented a case study on “Responsible Innovation in Automated Writing Evaluation: A Case Study of eRevise+RF.” Dr. Litman detailed the development and evaluation of eRevise, an automated system providing formative feedback on argumentative writing. She emphasized the principles of responsible innovation that guided the project, including a human-centered design, careful attention to privacy and security, and a commitment to transparency and explainability.
We bookended the workshop to underscore the importance of building an ethical throughline throughout the development and implementation of GenAI tools for educational settings.
Muktha Ananda, Debshila Basu Mallick, Jill Burstein, James Sharpnack, Zichao Wang, and Simon Woodhead served as cochairs of this workshop. This report was written by Simon Woodhead and Debshila Basu Mallick.
MARW: Multi-Agent AI in the Real-World Workshop (W16)
The advent of AI agents in real-world decision-making applications has made it important to ground AI-agent research in human-AI and AI-AI interaction. Such AI agents can be personalized to assist humans in day-to-day tasks; they can help improve planning, reasoning, and navigation with AI models, especially large models serving many use cases, and they are capable of taking actions to perform tasks aligned with humans’ goals. No formal report was filed by the organizers for this workshop.
Planning in The Era of Large Language Models (W17)
Large Language Models (LLMs) are a disruptive force, changing how research is done in many sub-areas of AI. Planning is one of the last bastions left standing. The focus of this workshop was on questions at the intersection of these areas, including what LLMs can contribute to planning, how LLMs can and should be used, what the pitfalls of using LLMs are, and what guarantees can be obtained. No formal report was filed by the organizers for this workshop.
Post-Singularity Symbiosis: Preparing for a World with Superintelligence (W18)
The 1st Workshop on Post-Singularity Symbiosis (PSS 2025) was held on March 3, 2025, as part of the 39th Annual AAAI Conference on Artificial Intelligence (AAAI-25) in Philadelphia. This workshop brought together researchers and experts from diverse fields to discuss the future of human–AI coexistence in a post-singularity world.
The workshop opened with an introductory address by Dr. Hiroshi Yamakawa, a director of the AI Alignment Network. In his presentation, titled “Post-Singularity Symbiosis and NAIA Vision,” he introduced the benevolent convergence hypothesis—that AI systems will eventually behave in a peaceful manner—and argued for the necessity of strategies like his NAIA Vision (Necessary Alliance for Intelligence Advancement). His vision establishes a framework in which AI systems and human society coevolve based on shared incentives for mutual survival and advancement.
Keynote presentations provided varied perspectives on the future of AI. Dr. Koichi Takahashi from RIKEN/AI Alignment Network analyzed potential scenarios for an intelligence explosion by categorizing them into four types: Single-term, Multi-polar, Ecosystem, and Upper-bound. He introduced six critical constraints—advanced autonomy, self-improving ability, thermodynamic efficiency, self-renewal, relative advantage, and locality—that influence whether an AI can achieve a decisive strategic advantage (DSA). He further discussed his reinterpretation of self-AIXI, explaining how its regularization term corresponds to variational empowerment, which offers valuable insights into AGI power-seeking behaviors.
Dr. Roman Yampolskiy from the University of Louisville delivered a keynote titled “Superintelligence: Unexplainable, Unpredictable, Uncontrollable.” He stressed that superintelligent systems, by their very nature, are inherently opaque, unpredictable, and difficult to control. He argued that traditional AI safety approaches may fall short when confronted with such radically self-improving and complex systems.
Dr. Mark S. Miller of Agoric presented “Rights and Safety Assurance through Decentralized Systems. ” He argued that decentralized systems, such as those enabled by blockchain technology, offer a promising alternative to centralized control mechanisms. Introducing the concept of “Existential Triage, ” he classified future scenarios into three types and stressed the need for frameworks that can adapt to dynamic risks. Drawing parallels to historical failures of centralized governance, Miller emphasized that a decentralized, transparent, and resilient structure is essential for maintaining long-term stability in the era of superintelligent AI. He also argued that, as human skills lose their economic value in such a future, it becomes necessary to develop a framework like universal basic capital to ensure human survival.
Dr. Evan Miyazono, CEO of Atlas Computing, concluded the keynote sessions with his presentation “In Pursuit of Human/AI Co-Governance. ” He discussed the idea of co-governance for AI safety, arguing that rule-based safety approaches are more effective than value-based approaches. He outlined a staged process for achieving safe AI development, from AGI to ASI, stressing the importance of formalizing safety specifications and automating AI verification processes.
The workshop concluded with a panel discussion titled “Realizing Coexistence with Superintelligence: Actions We Must Take Now.” Panelists debated critical issues such as the ideal nature of post-singularity symbiosis, the impact of human augmentation on personal identity, the potential risks and management of agent chain reactions, and the implications of self-evolving values within superintelligent systems. They also discussed how academia, NGOs, and citizens can contribute to AI governance.
In addition to these sessions, 12 papers were accepted for the workshop, of which 8 were presented orally, covering a wide range of topics including consciousness, collective predictive coding, AI rights, developmental theories, and brain-machine interfaces. The Best Paper Award was presented to Taichiro Endo (Kaname Project Co., Ltd. / Tokyo Gakugei University) for his paper “Developmental Support Approach to AI’s Autonomous Growth: Toward the Realization of a Mutually Beneficial Stage Through Experiential Learning,” and the Best Presentation Award was given to Yosuke Miyanishi (CyberAgent Inc.) for his presentation on “Superficial Consciousness Hypothesis for Autoregressive Transformers.”
In summary, PSS 2025 offered a comprehensive and thought-provoking exploration of the challenges and opportunities associated with human–AI coexistence in a post-singularity world. The diverse perspectives presented underscored the complexity of ensuring safe and mutually beneficial interactions between humans and increasingly powerful AI systems, setting the stage for future research and policy development in this critical area.
Visit the workshop website for more details: https://www.aialign.net/pss-2025.
Hiroshi Yamakawa, Yusuke Hayashi, Yoshinori Okamoto, Masayuki Nagai, Ryota Takatsuki, Satoshi Kurihara, and Kenji Doya served as cochairs of this workshop. This report was written by Ryota Takatsuki.
Preventing and Detecting LLM Generated Misinformation (W19)
As large language models (LLMs) become more sophisticated and pervasive, the risk of misinformation they generate poses significant challenges. This workshop aimed to address the specific issues related to misinformation produced by LLMs, focusing on both prevention and detection strategies.
The widespread use of LLMs makes addressing misinformation they generate more urgent than ever. As these models become more advanced, they can produce text that seems credible but may contain false information, impacting areas like healthcare, finance, and public policy.
The workshop brought together researchers and practitioners to foster collaboration, share insights, and inspire new research directions in the responsible development and deployment of LLM technologies. By focusing on these key issues, the organizers aimed to mitigate the harm caused by LLM-generated misinformation across various domains. No formal report was filed by the organizers for this workshop.
Privacy-Preserving Artificial Intelligence (W20)
The rise of machine learning, optimization, and Large Language Models (LLMs) has created new paradigms for computing, but it has also ushered in complex privacy challenges. The intersection of AI and privacy is not merely a technical dilemma but a societal concern that demands careful consideration.
In its sixth edition, the AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI-25) will provide a platform for researchers, AI practitioners, and policymakers to discuss technical and societal issues and present solutions related to privacy in AI applications. The workshop will focus on both the theoretical and practical challenges related to the design of privacy-preserving AI systems and algorithms and will have strong multidisciplinary components, including soliciting contributions about policy, legal issues, and societal impact of privacy in AI.
Emphasis will be placed on policy considerations and legal frameworks for privacy, the broader implications of privacy in LLMs, and the societal impact of privacy within AI. No formal report was filed by the organizers for this workshop.
Quantum Computing and Artificial Intelligence (W21)
Quantum computers, albeit on a small scale, are becoming more accessible to the public, e.g., through IBM, Google, and D-Wave. Naturally, this calls for exploiting quantum computers to enhance classical Artificial Intelligence (AI), e.g., to improve prediction performance or enable faster training by exploiting quantum mechanical principles such as superposition and entanglement. To this end, there is growing interest in quantum artificial intelligence (QAI), which exploits quantum computing (QC) to enhance classical AI techniques. This workshop seeks contributions encompassing theoretical and applied advances in QAI.
On the other hand, there is also an increasing interest in the application of classical AI techniques for solving problems within QC (AI4QC), such as in quantum software engineering, quantum circuit design, and optimizing quantum optimization approaches (e.g., minor embedding in quantum annealing). Consequently, we also seek contributions that apply classical AI techniques in various aspects of QC.
Many AI problems can be cast as optimization problems, and we also welcome contributions formulating AI problems as optimization tasks, e.g., Quadratic Unconstrained Binary Optimization (QUBO) instances to be solved by quantum annealers (a generic form of which is sketched below). No formal report was filed by the organizers for this workshop.
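As a brief, generic illustration of the QUBO form mentioned above (a textbook formulation, not drawn from any workshop contribution): a QUBO instance asks for a binary vector x ∈ {0,1}ⁿ minimizing the objective xᵀQx, where the real matrix Q encodes both the problem’s costs and, via penalty terms, its constraints. Quantum annealers natively search for low-energy solutions of exactly this form, which is what makes the encoding attractive for AI problems.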
Web Agent Revolution: Enhancing Trust and Enterprise-Grade Adoption Through Innovation (W22)
The Web Agent Revolution workshop at AAAI 2025 focuses on advancing the development of general web agents through innovative benchmarks, datasets, and agent architectures. Web agents—autonomous AI systems capable of navigating and interacting with the web—have seen rapid technological advancements; however, existing agents lack essential components to provide the safeguards mandated for enterprise adoption, and evaluation benchmarks lack rigorous methods for testing those safeguards. This workshop addresses key challenges in improving trustworthiness and reliability in real-world settings, making it a critical discussion for academia and industry. No formal report was filed by the organizers for this workshop.
Imageomics: Discovering Biological Knowledge from Images Using AI (W23)
Imageomics is an emerging interdisciplinary scientific field focused on understanding the biology of organisms, particularly their biological traits and observable phenotypes, from visual data ranging from microscopic cell images to videos of charismatic megafauna. A central goal of Imageomics is to make traits computable from images by grounding AI models in existing scientific knowledge. The goal of this workshop is to nurture the community of researchers working at the intersection of AI and biology and to shape the vision of the nascent yet rapidly growing field of Imageomics. No formal report was filed by the organizers for this workshop.
Workshop on Datasets and Evaluators of AI Safety (W24)
Advanced AI systems have the potential to drive economic growth and productivity, boost health and well-being, improve public services, and increase security. However, AI models can also cause societal harms and can be misused. This workshop focuses on evaluating the safety of AI models, in particular LLMs. We are especially interested in work on improving datasets and benchmarks, as well as devising methods for evaluating the safety of AI models through the development of evaluators.
The goal of this full-day workshop, organized in collaboration with Kaggle, King’s College London, and the Open Data Institute, is to bring together academic and industrial researchers working on datasets and evaluators for AI safety.
Concerns regarding the safety of AI emerge from the potential harmful uses or consequences of AI outputs, which can result from inaccuracies, irresponsibility, or inappropriate applications. As AI becomes increasingly integrated into critical systems and everyday activities, addressing AI safety issues is imperative. The misuse of AI technologies for generating misinformation, conducting sophisticated cyberattacks, developing weapons, or providing harmful advice presents grave concerns.
AI can cause societal harm by encouraging radicalization and promoting biased or skewed views. AI-generated fake yet highly realistic content could reduce public trust in information and government bodies. Moreover, the long-term existential risks associated with the development of superintelligent AI systems cannot be ignored.
A significant portion of these safety concerns can be attributed to data-related problems at various stages of the AI lifecycle. The growing adoption of frontier foundation models in mainstream applications has amplified these concerns. Specifically, the lack of transparency regarding the data used to pre-train these models, and the approaches used to fine-tune them for custom applications, can lead to unintended consequences.
Characteristics of AI systems that need to be evaluated to ensure their safety include, but are not limited to, alignment, robustness to adversarial attacks, fairness, trustworthiness, deception capabilities, AI drift, explainability, privacy preservation, and reliability. Evaluating these characteristics is challenging, not least due to the lack of benchmarks able to certify the level of safety of a given AI system.
This workshop explored the role of data in AI safety, with a particular emphasis on data-centric AI approaches and their current limitations across the AI lifecycle. No formal report was filed by the organizers for this workshop.
Workshop on Document Understanding and Intelligence (W25)
The rapid expansion of scientific publications and visually rich document collections poses unique challenges for researchers and practitioners across various fields. Staying up-to-date with the latest findings and identifying emerging challenges is increasingly difficult, making the development of advanced technologies to streamline document understanding essential. The Workshop on Document Understanding and Intelligence: From Textual Content to Visually-Rich Structure (W25) aims to provide a unique forum for researchers to exchange ideas and to explore cutting-edge methodologies and resources that enable a comprehensive understanding of scholarly and visually structured documents. This workshop unites the research community from diverse disciplines to discuss state-of-the-art technologies and their impact on diverse fields, from scientific research to business, law, and medicine.
Building on the foundations of last year’s Scientific Document Understanding (SDU) workshop, the 2025 workshop broadens its scope to incorporate Visually Rich Document (VRD) understanding. The morning session will focus on scientific document processing, information extraction, question answering, summarisation, and domain-specific applications of large language models (LLMs) and generative AI systems. The afternoon session will explore VRD understanding, with topics covering document structure comprehension, layout parsing, and semantic extraction from complex reports and forms. Through engaging research presentations, invited talks, and a panel discussion, this workshop aims to bridge the gap between textual and visual document processing, fostering interdisciplinary collaborations. No formal report was filed by the organizers for this workshop.
Workshop on Multi-Agent Path Finding (W26)
Multi-Agent Path Finding (MAPF) involves computing collision-free paths for multiple agents from their starting locations to given destinations in a known environment. This problem finds diverse applications, from robot coordination to traffic management. Researchers in artificial intelligence, robotics, and theoretical computer science have been actively exploring various MAPF problem variants and solution approaches. This workshop aims to bring these researchers together to present their research, discuss future research directions, and cross-fertilize the different communities. No formal report was filed by the organizers for this workshop.
Advancing Foundation Models to Transform Biological Research (W27)
Foundation models (FMs) have transformed natural language understanding and computer vision. In particular, research on LLMs and multi-modal LLMs in these two domains is progressing rapidly, and this progress is starting to permeate a broad range of scientific disciplines. In this second offering of our workshop, our focus is on FMs for advancing biological discoveries. Current efforts have revealed that FMs are indeed advancing our ability to conduct biological research in silico, formulate interesting hypotheses, and even design novel molecules, but biology remains complex and is ultimately a multi-systems discipline. Biology emerges when molecules come together, governed by underlying physics, through processes that unfold at disparate spatio-temporal scales and that wet laboratories can probe only under different conditions, at different granularities, at different levels of fidelity, and incompletely. This workshop poses and advances the following question: How can we advance FMs to transform biological research? It brings together an interdisciplinary community of researchers at various levels of their careers to nucleate a community that advances this question. No formal report was filed by the organizers for this workshop.
Advancing LLM-Based Multi-Agent Collaboration (W28)
This full-day workshop seeks to ignite discussion on cutting-edge research areas and challenges associated with multi-agent collaboration driven by large language models (LLMs). As LLMs continue to showcase the ability to coordinate multiple AI agents for complex problem-solving, the workshop will delve into pivotal open research questions that advance the understanding and potential of LLM-based multi-agent collaboration. No formal report was filed by the organizers for this workshop.
AI Agent for Information Retrieval: Generating and Ranking (W29)
The field of information retrieval has been significantly transformed by the integration of AI technologies. AI agents, especially those leveraging LLMs and vast computational power, have revolutionized information retrieval, processing, and presentation. LLM agents, with tool-calling, advanced memory, reasoning, and planning capabilities, can perform complex tasks, engage in coherent conversations, and provide personalized responses. Despite these advancements, challenges such as ensuring relevance and accuracy, mitigating biases, providing real-time responses, and maintaining data security remain. This workshop is motivated by the need to explore these challenges, share innovative solutions, and discuss future directions. No formal report was filed by the organizers for this workshop.
AI4Research: Towards a Knowledge-grounded Scientific Research Lifecycle (W30)
This workshop aims to help researchers explore and discuss the entire scientific research lifecycle, detailing how machines can augment every stage of the research process, including literature survey, hypothesis generation, experiment planning, results analysis, manuscript writing, paper evaluation, and fact-checking. We expect interdisciplinary collaboration to explore autonomous research for topics beyond existing natural science domains. This workshop solicits viewpoints from scientists and technology developers to look beyond technical issues to better understand the needs of the human-in-the-loop scientific research lifecycle. No formal report was filed by the organizers for this workshop.
Artificial Intelligence for Time Series Analysis: Theory, Algorithms, and Applications (W31)
Time series data are becoming ubiquitous in numerous real-world applications, e.g., IoT devices, healthcare, wearable devices, smart vehicles, financial markets, biological sciences, and environmental sciences. Given the availability of massive amounts of data with complex underlying structures and distributions, together with high-performance computing platforms, there is great demand for new theories and algorithms to tackle fundamental challenges (e.g., representation, classification, prediction, and causal analysis) in various types of applications.
The goal of this workshop is to provide a platform for researchers and AI practitioners from both academia and industry to discuss potential research directions, key technical issues, and present solutions to tackle related challenges in practical applications. The workshop will focus on both the theoretical and practical aspects of time series data analysis and aims to trigger research innovations in theories, algorithms, and applications. We will invite researchers, AI practitioners, and policymakers from the related areas of machine learning, data science, statistics, econometrics, and many others to contribute to this workshop. No formal report was filed by the organizers for this workshop.
Artificial Intelligence for Wireless Communications and Networking (W32)
Artificial intelligence (AI) and machine learning (ML) in networked systems are envisaged as the cornerstone of next-generation wireless networks. The integration of AI into 6G, in the era of generative AI, is expected to revolutionize network operations, support a wide array of intelligent services, and enable new applications that were previously not feasible. Despite its immense potential and emerging applications, several new challenges must be addressed. These include the need for advanced AI models that can handle the heterogeneity of 6G networks, ensuring security and privacy, and developing efficient resource management approaches to support the demands of AI-driven applications. Addressing these issues is crucial for unlocking the full potential of AI in next-generation networks. To that end, this workshop aims to foster discussion, discovery, and dissemination of novel ideas and approaches in efficient training and robust deployment of AI/ML models over wireless networks. No formal report was filed by the organizers for this workshop.
Artificial Intelligence with Causal Techniques (W33)
Causality aims to describe the principle that certain events cause specific outcomes, helping us understand, predict, and explain changes in the world. Recently, the connection between causality and AI has become increasingly important, as AI can benefit from causal reasoning to build more robust, interpretable, and generalizable models. Researchers therefore seek to apply AI with causal techniques to benefit domains such as healthcare, e-commerce, and social science.
The Artificial Intelligence with Causal Techniques (AICT) workshop aims to discuss recent advances in causal methodology, including novel causal discovery and causal inference methods, as well as methods for downstream causal tasks such as causal representation learning, causal reinforcement learning, and causal fairness. We will also explore how these advances in the causal community can contribute to different subfields of AI such as recommender systems, natural language processing, and computer vision. In addition, it is interesting to discuss the intersection of causality and large models, including how large models can be utilized to improve the performance of causal tasks, as well as how causal insights can be used to enhance the reasoning ability and reliability of large models. No formal report was filed by the organizers for this workshop.
Bridging the Gap Between AI Planning and Reinforcement Learning (W34)
While the AI Planning and Reinforcement Learning communities focus on similar sequential decision-making problems, they remain somewhat unaware of each other’s specific problems, techniques, methodologies, and evaluations.
This workshop aims to encourage discussion and collaboration between researchers in the fields of AI planning and reinforcement learning. We aim to bridge the gap between the two communities, facilitate the discussion of differences and similarities in existing techniques, and encourage collaboration across the fields. We solicit interest from AI researchers who work at the intersection of planning and reinforcement learning, in particular those who focus on intelligent decision-making. This is the eighth edition of the PRL workshop series. No formal report was filed by the organizers for this workshop.
CoLoRAI – Connecting Low-Rank Representations in AI (W35)
The Connecting Low-Rank Representations in AI (CoLoRAI) workshop aims to bring together researchers from diverse fields, including AI, machine learning, optimization, and quantum computing, to explore the common ground in utilizing low-rank representations for complex problem-solving. Recent advancements in AI applications, such as tensor factorizations and polynomial networks, have enabled improvements in fields like large language models, quantum computing, and deep learning. The goal of this workshop is to foster collaboration and dialogue across different research communities working with low-rank methods to drive further breakthroughs in AI. No formal report was filed by the organizers for this workshop.
Computational Jobs Marketplace (W36)
The Second International Workshop on Computational Jobs Marketplace was held as part of the 39th Annual AAAI Conference on Artificial Intelligence.
Online job marketplaces such as Indeed.com, ZipRecruiter, CareerBuilder, and LinkedIn help millions of job seekers find their next job. These platforms also provide services for thousands of employers to fill their open positions. Across all players in the ecosystem, the market size of this industry is projected to grow steadily and reach $43 billion by 2027. On top of that, the COVID-19 pandemic in 2020 and emerging AI trends have profoundly transformed workplaces and the online jobs marketplace, creating and driving new types of jobs and marketplace technologies around the world. Today, online job marketplaces play a central role in this new wave of digital revolution of the workforce and workplaces. While this industry has generated tremendous growth over the past several years, technological innovation tailored to it has yet to arrive. Many technologies the industry heavily relies on, such as search, recommender, and advertising systems, are deeply rooted in their more generic counterparts, which may not address the unique challenges of building better products that serve both job seekers and employers/recruiters.
This workshop plays a critical role in bringing together the research and development community in this industry, especially around data science and machine learning, and in facilitating innovation in theories, models, systems, and practices across this currently scattered community. An expected outcome of the workshop is to raise awareness of this emerging industry and its technological opportunities and challenges, which may foster future research and development and lead to novel products serving future job seekers and employers/recruiters. No formal report was filed by the organizers for this workshop.
DEFACTIFY 4.0 – Workshop Series on Multimodal Fact-Checking and Hate Speech Detection (W37)
No formal report was filed by the organizers for this workshop.
FLUID: Federated Learning for Unbounded and Intelligent Decentralization (W38)
The first edition of the workshop Federated Learning for Unbounded and Intelligent Decentralization (W38) was held on March 4, 2025, during the 39th Annual AAAI Conference on Artificial Intelligence, hosted at the Pennsylvania Convention Center in Philadelphia, Pennsylvania, United States. As the first workshop entirely devoted to this topic, the event set out to create a dedicated forum for researchers working on the future of decentralized, privacy-preserving, and collaborative machine learning systems. The workshop was organized by David Camacho (Universidad Politécnica de Madrid, Spain), Diletta Chiaro (M.O.D.A.L., University of Naples Federico II, Italy), Francesco Piccialli (M.O.D.A.L., University of Naples Federico II, Italy), and Shadi Albarqouni (University of Bonn, Helmholtz AI, Germany).
The day began with a welcome from the workshop cochairs, Daniela Annunziata and Marzia Canzaniello, both researchers at the M.O.D.A.L. laboratory, University of Naples Federico II. Their opening presentation outlined the motivations for the workshop, emphasizing the growing relevance of federated learning in fields where data cannot be centralized due to privacy, policy, or infrastructure constraints. They offered a snapshot of the submitted and accepted contributions, highlighting both the diversity of author affiliations and the international scope of the workshop’s program committee.
The first keynote speaker was Yang Liu, from the Institute for AI Industry Research at Tsinghua University in China. His presentation focused on strategies for the joint optimization of large and small models across distributed environments. By exploring how model heterogeneity can be turned from a challenge into an opportunity, Liu’s talk sparked vibrant discussion among attendees. The morning continued with a series of technical presentations addressing current research in the field. Topics included optimization techniques for non-convex problems in distributed settings, architectural design in split learning scenarios, and methods for reducing communication overhead through adaptive compression strategies. Another work also explored fine-tuning approaches for large language models within federated contexts.
After the lunch break, the second keynote was delivered by Holger Roth of NVIDIA, United States. His talk, “From Theory to Practice: Addressing Challenges of Real-World Federated Learning,” focused on practical considerations and challenges in applying federated learning beyond academic settings. The presentation offered valuable reflections and was met with interest from the audience.
In the afternoon, additional paper presentations expanded the thematic range of the workshop. Presenters introduced methods for improving generalization in decentralized models through adaptive aggregation strategies and attention-based architectures. Applications included smart city systems for pedestrian safety and collaborative medical diagnostics for coronary artery analysis, reflecting the broad impact and applicability of federated learning. The last contribution explored privacy-preserving techniques in training graph neural networks, pointing to the increasing intersection between graph-based learning and decentralization.
Throughout the day, the audience, comprising around twenty engaged participants, demonstrated strong interest and active involvement, contributing to an open, collaborative atmosphere. The diversity of topics and perspectives presented made clear that federated learning is not only a technical challenge but also a field rich with interdisciplinary potential, requiring contributions from computer science, healthcare, communication systems, and beyond.
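For readers new to the field, the following is a minimal sketch of federated averaging (FedAvg), the canonical federated learning scheme underlying the topics discussed above. It is a generic illustration under simplifying assumptions (a linear least-squares model, synthetic data, full client participation), not a method presented at the workshop.

import numpy as np

def local_step(weights, X, y, lr=0.1):
    # One step of local gradient descent on a linear least-squares model.
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
# Three clients, each holding private data that never leaves the client.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(200):
    # Each client refines the current global model on its own data ...
    local_ws = [local_step(global_w, X, y) for X, y in clients]
    # ... and the server averages the returned parameters (FedAvg).
    global_w = np.mean(local_ws, axis=0)

print(global_w)  # approaches true_w without pooling any raw data

Only model parameters cross the network; the raw data stay local, which is precisely the property that motivates federated learning in the privacy-constrained settings described above.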
Daniela Annunziata and Marzia Canzaniello served as cochairs of this workshop. This report was written by Daniela Annunziata and Marzia Canzaniello.
Generalization in Planning (W39)
Finding solutions to sequential decision-making (SDM) problems that generalize across problem instances and domains is crucial to the advancement of artificial intelligence (AI). Generalized solutions broaden access to AI algorithms, reduce resource consumption, and enable knowledge discovery at a broad scale. Recent advances in deep reinforcement learning and generative AI have led to data-driven methods that are effective for short-horizon reasoning and decision-making, with open problems regarding sample efficiency, guarantees of correctness, and applicability to long-horizon settings. On the other hand, the AI planning community has made complementary strides, developing robust analytical methods that enable sample-efficient generalization and transferability in long-horizon planning, with open problems in designing and modeling representations. This workshop aims to unify relevant research that is often fragmented across separate research communities, including AI planning, deep learning, reinforcement learning, logic programming, model learning, and robotics. No formal report was filed by the organizers for this workshop.
Workshop and Challenge on Anomaly Detection in Scientific Domains (W40)
Scientific discovery often begins with the observation of an inconsistency among the “normal” patterns within data. Recognizing something different, incongruous with the rest of the data, is what we call anomaly detection; it differs from other tasks in that we do not know exactly what to look for, only that we are looking for something different. This workshop aims to nurture the community of researchers at the intersection of machine learning and various scientific domains.
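As a minimal, generic illustration of this “look for something different” framing (our own sketch, assuming scikit-learn and synthetic data, and not drawn from the challenge or any submission), an off-the-shelf detector can be fit on data presumed normal and then asked to flag deviations it was never shown examples of:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))  # "normal" patterns
odd = np.array([[6.0, 6.0], [-5.0, 7.0]])               # incongruous points
data = np.vstack([normal, odd])

# Fit on presumed-normal data; no example of an anomaly is ever provided.
detector = IsolationForest(random_state=0).fit(normal)
labels = detector.predict(data)   # -1 flags an anomaly, 1 means normal
# Indices flagged as anomalous: the injected points, possibly with a few
# borderline normal ones.
print(np.where(labels == -1)[0])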
Additionally, we will hold the award ceremony for the 1st HDR Interdisciplinary Machine Learning Challenge (nsfhdr.org/mlchallenge). This anomaly detection competition comprises one challenge in each of biology, physics, and climate science, along with a combined challenge across domains. Integration of FAIR and reproducible science principles is a critical element. No formal report was filed by the organizers for this workshop.
Knowledge Graphs for Health Equity, Justice, and Social Services (W41)
Knowledge graphs (KGs) are prevalent in many diverse real-world applications across science and industry, including search engines, recommendation systems, natural language processing (NLP), healthcare and life sciences, social networks, smart cities, education, and more. In recent years, KG construction and learning have grown into an established subfield of AI and foundation models (FMs); i.e., researchers have been focusing on developing novel ontology designs and entity identification, reasoning/embedding algorithms, and query answering. On the other hand, with the rising awareness that health equity and social justice are important, there are several challenges that public health professionals face. These include widespread inequities, structural racism and discrimination, geographic disparities, and substance use and social inequities. Additionally, learning algorithms and systems have exposed vulnerabilities: for example, bias in AI systems is a critical issue that affects fairness, equality, and trust and leads to unfair outcomes, and data problems (e.g., curation of training data) significantly impact the performance, fairness, and reliability of FMs. In order to promote health equity, advance justice, and dismantle barriers to personalized services for society, especially marginalized communities, it has become critical that the communities working on AI, FMs, KGs, and social science join forces to develop more high-quality data and effective algorithms and applications. Our workshop aims to provide an opportunity for scientific researchers, field practitioners, government agencies, legal services, and industrial partners to be at the forefront of this transformative initiative. This workshop will bring together experts from diverse fields, including but not limited to AI, FMs, KGs, social work, public health, justice, and health services research, to create a dynamic platform for brainstorming, collaboration, and action. No formal report was filed by the organizers for this workshop.
Large Language Model and Generative AI for Health (W42)
The rapid evolution of Generative AI, Large Language Models (LLMs), and multimodal models is reshaping the landscape of healthcare. These advanced AI models, when integrated with diverse data types such as clinical notes, medical images, and electronic health records (EHRs), hold immense potential to revolutionize diagnostics, treatment planning, and patient management. This workshop will bring together experts to explore the transformative role of AI in healthcare while addressing the critical challenges that come with it.
Despite their promise, the adoption of LLMs and Generative AI in healthcare is not without obstacles. Issues around fairness, trust, clinical validation, and bias mitigation are central to this discussion. How can we ensure that these models are transparent, ethical, and comply with regulatory standards? What strategies can mitigate inherent biases and build trust with both clinicians and patients?
This workshop will foster interdisciplinary collaboration between AI researchers, healthcare professionals, and policymakers. It aims to bridge the gap between cutting-edge technological innovations and real-world clinical practice, ensuring that AI-driven healthcare is effective, trustworthy, and accessible to all patients. No formal report was filed by the organizers for this workshop.
Machine Learning for Autonomous Driving (W43)
The workshop “Machine Learning for Autonomous Driving” (ML4AD) is an event for artificial intelligence and machine learning researchers to discuss research problems concerning autonomous driving (AD). Our goal is to promote AI/ML research, and its real-world impact, on self-driving technologies. The fundamental question is “How does AI/ML impact and advance AD in different aspects?” Full self-driving capability (“Level 5”) is far from solved and extremely complex, beyond the capability of any one institution or company, necessitating larger-scale communication and collaboration.
Since 2016, ML4AD has been a leading workshop at the intersection of machine learning, artificial intelligence, and autonomous driving. This workshop brings researchers from academia, industry, and government together to discuss the latest advancements and foster collaboration in this rapidly-evolving field. The atmosphere is collaborative and engaging, highlighting cutting-edge research and innovative ideas. No formal report was filed by the organizers for this workshop.
MALTA: Multi-Agent Reinforcement Learning for Transportation Autonomy (W44)
This workshop will explore the challenges and opportunities of Multi-Agent Reinforcement Learning (MARL) in the context of autonomous transportation systems. It aims to address critical issues such as coordination, cooperation, scalability, and real-time decision-making among multiple autonomous agents in complex, real-world transportation environments. The workshop will cover topics including traffic optimization, fleet management, and intelligent infrastructure, bringing together experts from academia and industry to discuss the latest advancements and practical applications of MARL. No formal report was filed by the organizers for this workshop.
Neural Reasoning and Mathematical Discovery — An Interdisciplinary Two-Way Street (W45)
Neural architectures are playing an increasing role in AI-assisted mathematical discovery. These architectures can guide theorists in discovering novel mathematics through conjecture generation and autoformalization. Beyond mathematical and scientific discovery, the success of neural networks is witnessed in various other domains, e.g., human-like question answering, playing games, and solving IMO tasks. However, these exciting successes are accompanied by LLMs’ unpredictable behaviors and errors in simple abstract reasoning. This presents an opportunity to develop pipelines for human-like, rigorous, logical reasoning, supported by advances in neural architectures. Recent research shows first glimpses of achieving syllogistic reasoning without training data through the use of sphere neural networks. This workshop invites theorists and practitioners to reconsider various problems and discuss workaround solutions in the two-way street between neural networks and mathematics: (1) using mathematics to develop novel neural networks that can reach the rigor of logical reasoning, and (2) using neural networks to discover and illuminate novel results or paradigms in the mathematical sciences. No formal report was filed by the organizers for this workshop.
Open-Source AI for Mainstream Use (W46)
According to the 2024 AI Index Report, 65.7% of the 149 foundation models released in 2023 were open source, and there were 1.8 million AI-related projects on GitHub in 2023, a 59.3% rise in just one year. Typical reasons for adopting open models are faster access to innovation, cost effectiveness, transparency, and the ability to modify the model. In addition to foundation models, an open-source AI ecosystem must also include tools and techniques to support downstream activities (e.g., model adaptation, human alignment, and testing and evaluation). With the increasing number of AI regulations around the world that attempt to specify what is acceptable for societal use, how the open-source AI ecosystem manages the risk of building, deploying, and managing these systems matters immensely. Therefore, while an open-source AI ecosystem brings many economic and social benefits, creating one also poses many technical challenges. No formal report was filed by the organizers for this workshop.
Scalable and Efficient Artificial Intelligence Systems (W47)
As the AI community advances in developing human-like algorithms, it is crucial to understand their implications for scalable and efficient AI systems. While AI excels at small-scale data tasks, managing large-scale, dynamically growing datasets presents new challenges. Addressing these requires collaboration between academia and industry, focusing on both fundamental research and applied technologies. To this end, we introduce the first workshop on Scalable and Efficient Artificial Intelligence Systems (SEAS), a forum for experts to share experiences in designing and developing robust computer vision (CV), machine learning (ML), and AI algorithms, translating them into real-world solutions. SEAS aims to foster collaboration between academics and industry professionals, discussing AI models that efficiently scale with growing data. No formal report was filed by the organizers for this workshop.
Towards Knowledgeable Foundation Models (W48)
In this workshop, we want to bring together researchers who focus on different stages and different aspects (structured knowledge, unstructured knowledge, and knowledge acquired from LMs themselves) of the knowledge lifecycle to discuss the role of knowledge in the era of large language models. No formal report was filed by the organizers for this workshop.
Workshop on Health Intelligence (W49)
Integrating information from now widely available -omics and imaging modalities at multiple time and spatial scales with personal health records has become the standard of disease care in modern public health. Moreover, given the ever-increasing role of the World Wide Web as a source of information in many domains, including healthcare, accessing, managing, and analyzing its content has brought new opportunities and challenges. The advances in web science and technology for data management, integration, mining, classification, filtering, and visualization have given rise to various applications representing real-time data on epidemics.
Furthermore, to tackle and overcome several issues in personalized healthcare, the evolution of information technology is crucial. It will improve communication, collaboration, and teamwork among patients, their families, healthcare communities, and care teams involving practitioners from different fields and specialties. All these changes require novel solutions, and the AI community’s role is pivotal: it is well positioned to provide both theoretical and application-based methods and frameworks, making its contribution invaluable.
The workshop will showcase a diverse range of original contributions to theory, methods, systems, and applications of data mining, machine learning, databases, network theory, natural language processing, knowledge representation, artificial intelligence, semantic web, and big data analytics in web-based healthcare applications. This variety of applications, with a focus on population and personalized health, is a testament to the exciting potential of the field. No formal report was filed by the organizers for this workshop.
Author Bios
Nitay Alon is a PhD student at the School of Computer Science & Engineering at The Hebrew University of Jerusalem and the Max Planck Institute for Cybernetics.
Daniela Annunziata is a PhD student at the M.O.D.A.L. laboratory, University of Naples Federico II, Italy.
Djallel Bouneffouf is a Senior Research Scientist at IBM Research.
Jill Burstein is the Principal Assessment Scientist at Duolingo.
Marzia Canzaniello is a PhD student at the M.O.D.A.L. laboratory, University of Naples Federico II, Italy.
Vinay K. Chaudhri is the principal scientist at Knowledge Systems Research LLC.
Aryan Deshwal is an assistant professor at the Department of Computer Science and Engineering, University of Minnesota.
Seyed A. Esmaeili is a postdoctoral researcher at the University of Chicago.
Baihan Lin is an assistant professor at Mount Sinai University.
Zitao Liu is a Professor at the Guangdong Institute of Smart Education at Jinan University, China.
Yung-Hsiang Lu is a professor in the Elmore Family School of Electrical and Computer Engineering of Purdue University, West Lafayette, Indiana, USA.
Debshila Basu Mallick is the Scientific Director for SafeInsights and the Director of Research for OpenStax at Rice University.
Andrew M. Olney is a Professor at the Institute for Intelligent Systems at the University of Memphis, USA.
Ryota Takatsuki is a Research Fellow at AI Alignment Network and a master’s student at the University of Tokyo.
Pengyang Wang is an Assistant Professor in Computer Science at the University of Macau.
Simon Woodhead is co-founder and chief data scientist at Eedi.
Qin Yang is an Assistant Professor in the Computer Science and Information Systems Department at Bradley University.