The online interactive magazine of the Association for the Advancement of Artificial Intelligence

By Nathan Dennler, Binazir Karimzadeh, Takashi Kido, Adithya Kulkarni, Xiaomin Lin, Andreas Martin, Zhonghao Shi, Maja Matarić, Tinoosh Mohsenin, Amy O’Connell, Hasib-Al Rashid, Keiki Takadama

The Association for the Advancement of Artificial Intelligence’s 2025 Spring Symposium Series was held in Burlingame, California, March 31–April 2, 2025. There were eight symposia in the spring program: AI for Engineering and Scientific Discoveries, AI for Health Symposium: Leveraging Artificial Intelligence to Revolutionize Healthcare, Current and Future Varieties of Human-AI Collaboration, GenAI@Edge: Empowering Generative AI at the Edge, Human-Compatible AI for Well-being: Harnessing Potential of GenAI for AI-Powered Science, Machine Learning and Knowledge Engineering for Trustworthy Multimodal and Generative AI, Symposium on Child-AI Interaction in the Era of Foundation Models, and Towards Agentic AI for Science: Hypothesis Generation, Comprehension, Quantification, and Validation. This report contains summaries of the symposia, submitted by some, but not all, of the symposium chairs.

AI for Engineering and Scientific Discoveries (S1)

This symposium aimed to advance and diversify the application of AI in emerging engineering and scientific discovery domains. Inspired by progress in large language models, generative AI, and AI-assisted scientific computing, we sought to foster new collaborations between industry and academia to tackle challenging problems in materials, manufacturing, and the life sciences. We also planned to explore new directions in human–machine interaction for accelerating knowledge discovery and to address related ethical considerations. Through invited speakers, panel discussions, and contributions from researchers with cross-disciplinary expertise, we hoped to cultivate partnerships that drive transformative advances in both AI and scientific research. No formal report was filed by the organizers for this symposium.

AI for Health Symposium: Leveraging Artificial Intelligence to Revolutionize Healthcare (S2)

Artificial Intelligence (AI) is poised to transform healthcare, offering groundbreaking capabilities in disease diagnosis, treatment, drug discovery, and patient care. By improving access to health services, reducing costs, and addressing workforce shortages, AI can play a pivotal role in tackling global health challenges. However, successfully integrating AI into healthcare requires careful consideration of regulatory frameworks, governance structures, data equity, and privacy protections. This symposium brought together AI researchers, clinicians, and industry experts to foster meaningful dialogue and insights that contribute to responsible AI development.

Traditional AI models in healthcare often rely on limited, isolated datasets, facing challenges like missing values, data imbalance, and insufficient representation of diverse patient populations. These issues can lead to algorithmic biases, diminished generalizability, and reduced accuracy of AI-driven predictions, especially in clinical settings. Ensuring access to large, high-quality datasets is key to addressing these limitations, yet privacy and security constraints often restrict data sharing and collaboration across institutions.

This symposium explored current challenges and forward-looking solutions to enhance AI’s reliability, inclusivity, and ethical impact in healthcare. By bringing together diverse stakeholders, it sought to foster cross-disciplinary collaborations and promote the development of people-centered, AI-enabled healthcare systems. No formal report was filed by the organizers for this symposium.

Current and Future Varieties of Human-AI Collaboration (S3)

There is general agreement that AI has the potential to revolutionize society, but existing systems lack flexibility and robustness. Before they can assist humans effectively, they must be able to interact with people as genuine collaborators. We should explore more fully the ways that humans and machines can work together, each drawing on strengths that offset the other’s weaknesses. We should also address concerns about risk, trust, and safety that arise in collaborative settings.

This symposium focused on varieties of human-AI collaboration that occur in task-oriented interactions. This includes team settings in which: a human oversees AI agents; an AI agent advises a person on complex tasks; humans and AI agents cooperate as equals; an AI system coordinates humans and agents; and human and AI agents handle subtasks usually done by one person. These differ in roles, responsibilities, and interactions in ways that cut across different AI paradigms. No formal report was filed by the organizers for this symposium.

GenAI@Edge: Empowering Generative AI at the Edge (S4)

The AAAI Spring Symposium on Empowering Generative AI at the Edge (“GenAI@Edge”) brought together leading experts across academia, industry, and government to explore the intersection of foundation models, generative AI, and embedded edge computing. As generative models grow in capability and scale, so too does the need to deploy them efficiently in resource-constrained environments such as mobile devices, IoT systems, wearables, and robotics platforms. The GenAI@Edge symposium focused on enabling this paradigm shift by convening researchers and practitioners from diverse domains to investigate innovations in efficient model design, training, deployment, and application at the edge.

The GenAI@Edge symposium commenced on Monday, March 31, with opening remarks by Tinoosh Mohsenin, Associate Professor of Electrical and Computer Engineering at Johns Hopkins University. The day featured a technical session focusing on the application and optimization of edge large language models (LLMs), highlighting advancements in hyperparameter-architecture search, neural network subspace analysis, and real-time contextual understanding with ground robots. The afternoon included academic talks from Ang Li, Assistant Professor at the University of Maryland, College Park, and Shaoyi Huang, Assistant Professor in the Department of Computer Science at Stevens Institute of Technology. Their presentations addressed innovative strategies for deploying generative AI on resource-constrained devices and approaches to enhancing the performance of language models from algorithmic and hardware perspectives.

On Day 2 of the GenAI@Edge symposium, the program commenced with opening remarks by Evgeni Gousev, Senior Director of Engineering at Qualcomm Research and Chair of the EdgeAI Foundation. He emphasized the critical role of energy-efficient AI in advancing edge computing. The morning keynote was delivered by Helen Li, Professor and Chair of Electrical and Computer Engineering at Duke University. She discussed strategies to enhance the efficiency, privacy, and safety of large language models when deployed at the edge. The academic sessions featured talks from Yezhou “YZ” Yang, Director of the Active Perception Group at Arizona State University, and Huanrui Yang, Assistant Professor at the University of Arizona. They explored advancements in generative AI, focusing on visual reasoning, efficiency, and safety in resource-constrained environments. In the afternoon, industry leaders Pete Warden, CEO of Useful Sensors, and Max Petrenko, Principal Scientist at Amazon, shared insights into the practical applications and future outlook of generative AI at the edge. Their discussions highlighted the balance between theoretical advancements and real-world deployments. The day concluded with a keynote by Philip Wong, Professor at Stanford University, who addressed the growing computational demands of AI and the necessity for diverse memory solutions to support scalable edge deployments.

On the final day of the GenAI@Edge symposium, the program continued to emphasize the integration of generative AI into resource-constrained environments. The morning commenced with a keynote by Lei Yang, Assistant Professor at George Mason University, who explored the convergence of physical modeling and diffusion-based generative AI techniques tailored for edge applications. Subsequently, Avik Santra, Principal Machine Learning Engineer at Infineon Technologies, discussed strategies for optimizing small-scale language models to enhance their capabilities and efficiency in environments with limited computational resources.

In addition to invited sessions, the symposium showcased eleven peer-reviewed paper presentations highlighting advances across (i) system-level and architectural innovations for efficient AI processing, (ii) edge-native model design and optimization, and (iii) domain-driven applications of generative AI in robotics, medical AI, IoT security, and environmental modeling. These contributions reflected the growing momentum in democratizing access to generative models and foundation AI by enabling their deployment beyond cloud infrastructures.

Best Paper and Best Poster Awards were presented by Tinoosh Mohsenin, Binazir Karimzadeh, and Xiaomin Lin, recognizing outstanding contributions that exemplified innovation, rigor, and relevance to the symposium’s themes. The awardees were selected through a careful review process conducted by a dedicated committee of program members with no conflicts of interest, ensuring fairness, transparency, and merit-based recognition.

GenAI@Edge attracted 30–40 attendees from a wide range of academic institutions and industry leaders, including Johns Hopkins University, Harvard, Duke, UC San Diego, University of Maryland-College Park, University of Maryland-Baltimore County, NC State, Texas Tech University, University of Arizona, and Stevens Institute of Technology, as well as professionals from companies such as Qualcomm, Meta, Amazon, Apple, and Infineon. The program maintained a strong balance between cutting-edge theoretical advances and practical deployment strategies, with an emphasis on inclusion, efficiency, and the long-term scalability of generative AI on edge devices.

The symposium was co-organized by Tinoosh Mohsenin, Evgeni Gousev, Eiman Kanjo, Hasib-Al Rashid, Binazir Karimzadeh, Dongkuan Xu, Pretom Roy Ovi, and Xiaomin Lin. This report was written by Xiaomin Lin and Tinoosh Mohsenin and edited by Hasib-Al Rashid and Binazir Karimzadeh.

Human-Compatible AI for Well-being: Harnessing Potential of GenAI for AI-Powered Science (S5)

The AAAI 2025 Spring Symposium on “Human-Compatible AI for Well-Being: Harnessing the Potential of Generative AI for AI-Powered Science” explored the potential of human-compatible AI and the future of science powered by generative AI. The discussions focused on frameworks, challenges, and emerging opportunities for aligning advanced AI systems with human values, well-being, and scientific discovery.

The AAAI 2025 Spring Symposium on “Human-Compatible AI for Well-Being: Harnessing the Potential of Generative AI for AI-Powered Science” was held at the San Francisco Airport Marriott Waterfront in Burlingame, California, from March 31st to April 2nd, 2025. The symposium brought together researchers from diverse disciplines to examine how next-generation AI systems are compatible with human values and how generative AI (GenAI) can accelerate scientific discovery without compromising ethical principles.

The symposium focused on two major themes that addressed the challenges of designing AI systems aligned with human well-being in both individual and societal contexts.

First, the theme of “Human-Compatible AI” explored the technical and philosophical foundations for aligning AI behavior with human values, goals, and ethical norms. This included issues such as value-sensitive design, human oversight, interpretability, and the integration of physiological and cognitive perspectives into AI systems. Discussions emphasized how AI can enhance individual autonomy, mental health, and personal growth, particularly in domains such as healthcare, education, and creativity, while avoiding unintended manipulation or erosion of agency.

Second, the theme of “AI-Powered Science” examined how generative AI technologies, including large language models (LLMs) and diffusion models, can be responsibly leveraged to advance scientific discovery. Topics included interactive hypothesis generation, bias detection, misinformation mitigation, and the role of AI in transforming knowledge workflows across disciplines. While showcasing the innovative potential of GenAI, this theme also highlighted the need for epistemic responsibility, fairness, and institutional safeguards to ensure scientific trustworthiness.

Related to the above two themes, Keynote Speaker Alex Pentland (Stanford University and MIT Media Lab, USA) set the stage with a talk titled Human-Compatible AI for Science and Well-Being, emphasizing the role of collective intelligence, incentive design, and social computation in aligning AI development with both individual and societal needs. His vision underscored the dual imperative of fostering scientific innovation, while reinforcing social responsibility and long-term well-being.

In the symposium, both technical and philosophical discussions on “Human-Compatible AI for Well-Being” were welcomed, especially as they relate to harnessing the potential of generative AI for scientific advancement. The symposium explored how AI systems can be designed not only to perform effectively but also to act in ways that are compatible with human values, societal needs, and ethical norms. Topics such as value-sensitive AI design, alignment with human intent, LLM-assisted discovery in medicine and genomics, and epistemic risks of AI-generated knowledge were central to our conversations. By bridging discussions on well-being and scientific progress, the symposium highlighted the dual imperative of building AI systems that support both human flourishing and trustworthy knowledge production. We encouraged contributions that questioned foundational assumptions, offered new frameworks for human-AI collaboration, and envisioned responsible futures for AI in science and society.

Our symposium included 20 presentations over two-and-a-half days. Presentation topics were organized into the following categories: (1) Human-Compatible AI for Science and Well-Being (4 presentations); (2) Longevity, Aging, and Elderly Adult Support (3 presentations); (3) Ethics and Morality in Generative AI (2 presentations); (4) Education—Use of Generative AI, Plagiarism, and Hypothesis Formulation (3 presentations); (5) Support for Persons with Disabilities (1 presentation); (6) Education—Curriculum Design and Assessment (2 presentations); and (7) Poster, Demo, and Short Presentations (5 entries).

For example, Takashi Kido (Teikyo University, Japan) presented Human-Compatible AI and AI-Powered Science: Insights from AAAI Spring Symposium and Beyond, highlighting the intersection of generative AI, scientific discovery, and human well-being.

Keiki Takadama (University of Tokyo, Japan) stressed the importance of integrating physiological and machine learning perspectives towards Human-Compatible AI for well-being. Melanie Swan (University College London, UK), along with Kido and Renato dos Santos, introduced a categorical framework for longevity and well-being. Han Kyul Kim and Andy Skumanich (Innov8AI Inc., USA) surveyed approaches to counter LLM-generated misinformation. Sahan Hatemo, Christof Weickhardt, Luca Gisler, and Oliver Bendel (Switzerland) examined bias in LLMs through the trolley problem. Dragutin Petkovic and Anoshua Chaudhuri (San Francisco State University, USA) discussed responsible uses of GenAI in education.

The discussions revealed shared awareness of the opportunities and risks posed by generative AI. Across both tracks, participants emphasized that building human-compatible and epistemically trustworthy AI systems will require new interdisciplinary collaborations, technical innovations, and institutional safeguards.

Takashi Kido and Keiki Takadama served as co-chairs of this symposium. Takashi Kido is a professor at Teikyo University in Japan. Keiki Takadama is a professor at the University of Tokyo in Japan.

Machine Learning and Knowledge Engineering for Trustworthy Multimodal and Generative AI (AAAI-MAKE) (S6)

The seventh AAAI Spring Symposium on Machine Learning and Knowledge Engineering for Trustworthy Multimodal and Generative AI gathered researchers and practitioners to explore hybrid AI approaches that integrate symbolic reasoning with machine learning. The focus was on developing trustworthy, explainable systems across multiple modalities, including text, speech, image, and video.

The AAAI Spring Symposium on Machine Learning and Knowledge Engineering for Trustworthy Multimodal and Generative AI (AAAI-MAKE) was held in San Francisco, California, from March 31 to April 2, 2025. It brought together a diverse group of participants to discuss the integration of knowledge engineering and machine learning for building robust and explainable AI systems capable of operating across multiple modalities.

The symposium opened with keynote talks that addressed foundational aspects of neuro-symbolic and generative AI. Alessandro Oltramari, President of the Carnegie Bosch Institute (College of Engineering, Carnegie Mellon University) and Senior Research Scientist at the Bosch Research and Technology Center in Pittsburgh, delivered a talk on “Neuro-Symbolic Cognitive Reasoning.” He outlined how neuro-symbolic methods can enhance human–machine collaboration and improve decision intelligence. In the afternoon, Pradeep Ravikumar, Professor in the Machine Learning Department, School of Computer Science at Carnegie Mellon University, presented “Latent Concepts in LLMs.” His presentation focused on the identification and role of latent semantic representations in large language models, with implications for interpretability.

On the second day, the symposium shifted focus to robustness and accountability. Leilani H. Gilpin, Assistant Professor in Computer Science and Engineering at the University of California, Santa Cruz, and Affiliate of the Science & Justice Research Center, presented “Robustifying Trustworthy AI – Building, Guiding, and Unifying Complex Systems in Critical Scenarios.” She discussed strategies for creating accountable, self-explanatory AI systems suited for deployment in critical domains. Andrei Barbu, Research Scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM), concluded the keynote series with “The Measurement Problem in AI/ML.” His talk examined the conceptual foundations of AI evaluation and proposed alternative approaches to assessing the performance of intelligent systems.

As part of the AAAI Spring Symposium Series plenary session, Thomas Schmid (Martin Luther University Halle-Wittenberg) represented the AAAI-MAKE symposium with a brief and engaging presentation, introducing its themes and activities to participants across all symposia.

The third and final day featured a concluding discussion on potential future topics for the symposium series. Among the key themes that emerged were semantic agents, embodied and non-embodied, and their practical applications. This dialogue suggested a possible direction for future symposia in the AAAI-MAKE series.

Throughout the event, participants exchanged ideas across disciplinary boundaries, addressing shared concerns about trustworthiness, accountability, and system evaluation in the context of generative and multimodal AI. The symposium continued to build on its role as a platform for fostering collaboration between researchers from machine learning, knowledge engineering, and related fields.

The symposium’s co-organizers, Hans-Georg Fill, Jane Yung-jen Hsu, Yen-Ling Kuo, Thomas Schmid, Paulo Shakarian, and Reinhard Stolle, served as session chairs. This report was written by symposium chair and co-organizer Andreas Martin.

Symposium on Child-AI Interaction in the Era of Foundation Models (S7)

Foundation models, such as large language models (LLMs), vision language models (VLMs), and speech foundation models, can enable more effective, natural, and engaging human-AI interactions. Both academic researchers and industry practitioners are increasingly interested in leveraging these models to provide accessible, personalized support for children in areas such as education, entertainment, health, and well-being. However, the opportunities presented by foundation models are accompanied by significant risks and ethical concerns, especially in the context of child-AI interactions.

To understand the benefits and risks of AI systems, the Child-AI Interaction in the Era of Foundation Models Spring Symposium brought together researchers with backgrounds in robotics, speech-language pathology, pediatrics, signal processing, psychology, design, and industry, to discuss child-specific considerations for foundation models.

During the 2.5-day symposium, we invited researchers and faculty from diverse and interdisciplinary backgrounds to share their work on child-centered AI and their perspectives on child-AI interaction in the era of foundation models.

Dr. Vicky Charissi, a research fellow at Harvard University, presented her work on evaluating the use of foundation models to support learning among children in several use cases. She discussed how students learn from and engage with LLM-based conversational agents, and how practices and policies around AI can encourage safer child-AI interactions.

Dr. Patrícia Alves-Oliveira, assistant professor of robotics at the University of Michigan, discussed her work on designing robots that help youth across several important tasks, including helping incarcerated youth transition back into society, helping children receive eye exams, child-centered robot design methodologies, and robot-supported dialectical behavioral therapy exercises.

Dr. Dennis Wall, professor of Pediatrics at the Stanford University School of Medicine, shared his work on developing and applying novel machine learning methods to biomedical informatics to untangle complex conditions that originate in childhood and persist throughout life, including autism and related developmental delays. His research aims to enable downstream applications including diagnosis and personalized therapy.

Dr. Yao Du, clinical assistant professor of Speech-Language Pathology at the Keck School of Medicine, University of Southern California, discussed her work on bridging the gap between speech-language pathology and human-AI interaction. She focused on the design and evaluation of web, mobile, and voice technologies for both children and adults with communication and cognitive disorders.

Dr. Ge Wang, an incoming assistant professor in the Department of Computer Science at the University of Illinois Urbana-Champaign, presented her research on designing AI systems that are autonomy-supportive for child users. Her work explored ways to empower children to assert agency in response to algorithmic decisions made about them.

Attendees of the symposium were able to build their algorithmic and design skills through two hands-on tutorials. The first tutorial covered state-of-the-art techniques for speech-language processing. We discussed how to apply techniques to adapt systems to perform well for children’s unique speech patterns, and how artificial voices can be adapted to work more effectively with children. The second tutorial introduced participants to design canvases, a technique to design new robots or AI systems that interact with children, while considering ethics, privacy, and other key concerns.

Throughout the symposium, we discussed several important topics that covered human-centered design of AI systems, algorithmic foundations of such systems, and how those two components mutually shape each other. These discussions sparked conversations about how physical interaction, speech interfaces, and ethical design can be leveraged to support child development, while also requiring safeguards against overreliance or emotional attachment to artificial agents. We discussed questions such as: How do we ensure that systems designed for children respect their autonomy and privacy? How can we meaningfully balance general-purpose AI capabilities with personalized experiences that adapt to individual children’s needs? What role should parents, teachers, and clinicians play in shaping the behavior of these systems?

Several symposium attendees also presented their work, which featured in-the-wild robot deployments and real-world interactions with children. The described robots were deployed as tutors and peer mediators. In addition to interactions, the described work proposed technical approaches to using foundation models with children, such as grounding ethics for LLM generation in rubric-based evaluations.

Although few children attended, their visions of AI were central to the symposium. Organizers invited local students to draw or write about their ideal intelligent agents and concerns about AI’s future, collecting 24 anonymized responses. These drawings and written samples were on display at the symposium. The children imagined helpful, caring, and fun robots, but also expressed concerns about unchecked AI—sparking thoughtful discussion about the role and impact of foundation models on their lives and futures.

Overall, the symposium brought together presenters and participants from diverse fields relevant to child-AI interaction, providing a unique opportunity to foster mutual understanding and facilitate interdisciplinary collaboration, helping to pave the way for future advancements in this emerging area.

Zhonghao Shi, Nathan Dennler, Leigh Levinson, Amy O’Connell, Xuan Shi, and Nicholas Georgiou served as co-chairs of the symposium. This report was co-authored by Zhonghao Shi, Nathan Dennler, Amy O’Connell, and Maja Matarić.

Towards Agentic AI for Science: Hypothesis Generation, Comprehension, Quantification, and Validation (S8)

The “Towards Agentic AI for Science: Hypothesis Generation, Comprehension, Quantification, and Validation” (S8) symposium at the 2025 AAAI Spring Symposium Series focused on the intersection of artificial intelligence and scientific discovery, with an emphasis on developing autonomous AI systems capable of hypothesis generation, comprehension, quantification, and validation. The symposium featured leading voices from academia and industry, presenting cutting-edge work in foundation models, graph neural networks, biomedical applications, and human-AI collaboration, all under the unifying theme of cognitively inspired agentic AI.

The three-day symposium examined how agentic AI can reshape the process of scientific discovery. Opening remarks by Dr. Yujun Yan of Dartmouth College established the theme: how intelligent agents can autonomously generate novel hypotheses, comprehend their applications, quantify testing resources, and validate feasibility through well-designed experiments. The symposium emphasized interdisciplinary collaboration, welcoming participants from computer science, biology, materials science, and cognitive science.

Day one featured a keynote by Dr. Markus J. Buehler of MIT, titled “Physics-Aware AI: Bridging Science Through Multi-Agent Reasoning Systems.” Dr. Buehler showcased how AI, when informed by physical laws, can design resilient materials and simulate multi-agent systems. This was followed by Dr. Michael Mahoney of UC Berkeley, who presented “Foundational Methods for Foundation Models for Scientific Machine Learning,” diving into algorithmic principles necessary for adapting foundation models to scientific domains. Dr. Hanghang Tong of UIUC delivered a virtual keynote on “Graph Neural Networks Beyond Homophily,” emphasizing new paradigms for learning on complex, heterogeneous graphs.

Day one also included interactive sessions like a “speed dating workshop” to foster networking and collaboration. A panel featuring Dr. Markus J. Buehler and Dr. Michael Mahoney addressed challenges in integrating domain knowledge into AI models and the future of multi-agent systems. Later, selected presentations highlighted innovative agentic AI systems, including A Nature-Inspired Colony of Artificial Intelligence Systems, which introduced a biologically inspired framework where AI agents function as fast, detailed, or organized learners; UNIMATE, a unified model for metamaterial design; and MetamatBench, an interface for integrating heterogeneous data for material discovery.

On the second day, Dr. Jure Leskovec from Stanford University opened with a keynote on “Building an AI biologist,” which explored using generative agents for hypothesis generation in the life sciences. Dr. James Zou, also from Stanford, introduced the “Virtual Lab,” a generative team of AI agents capable of conducting biomedical research and development. Dr. Lifu Huang from UC Davis presented “METASCIENTIST,” a collaborative human-AI framework for automating mechanical metamaterial design.

Industry presence was marked by Dr. Mingyu Derek Ma from Genentech, who presented a robust agentic ecosystem for drug discovery. From academia, Dr. Yan Liu of the University of Southern California delivered a compelling talk on the frontiers of foundation models for time series, addressing challenges in applying deep learning to complex scientific data. Dr. Yujun Yan of Dartmouth College then showcased explainable methods for graph neural networks applied to brain data, offering insights into cognitive processes. The day concluded with a panel featuring Dr. Erica Briscoe (DARPA) and Dr. Yujun Yan, who discussed future directions and the responsible development of agentic systems.

The final day opened with a keynote by Dr. Alvaro Velasquez from DARPA, “Neurosymbolic AI in Autonomy, Biology, and Creativity,” emphasizing the fusion of symbolic reasoning and neural models. Dr. Hanchen Wang from Stanford introduced “SpatialAgent,” an autonomous system for spatial biology research. Dr. Erica Briscoe’s keynote on “Automating Scientific Assessment” proposed AI methods for evaluating the feasibility of scientific claims through simulation and data. The event concluded with an industry talk by Dr. Siddharth Narayanan of FutureHouse, who offered a practitioner’s perspective on the deployment of agentic AI in applied research environments. The insights of his talk were especially relevant for bridging the gap between academic innovation and industrial implementation. The symposium wrapped up with closing reflections from Dr. Adithya Kulkarni of Virginia Tech, who emphasized the ongoing need for synergy between cognitive science and autonomous AI systems in shaping the future of scientific discovery.

The symposium was widely regarded as both productive and intellectually stimulating, fostering enthusiastic engagement among participants across disciplines. Building on the momentum of this event, the organizing team also hosted two workshops under the same title, “Towards Agentic AI for Science: Hypothesis Generation, Comprehension, Quantification, and Validation,” at ICLR 2025 in Singapore and WWW 2025 in Sydney, Australia. These events further broadened the conversation, reaching new communities in AI and web research.

Lifu Huang, Danai Koutra, Adithya Kulkarni, Temiloluwa Prioleau, Qingyun Wu, Yujun Yan, Yaoqing Yang, James Zou, Mingyu Derek Ma, Hanchen Wang, Kexin Huang, Andrew White, Jure Leskovec, Wei Wang, and Dawei Zhou served as co-chairs of this symposium. This report was written by Adithya Kulkarni.

Author Bios

Dr. Nathan Dennler is a recent PhD graduate from the Thomas Lord Department of Computer Science at the University of Southern California.

Binazir Karimzadeh is a postdoctoral fellow in the Department of Electrical and Computer Engineering at the Georgia Institute of Technology.

Takashi Kido is a professor at the Common Education Center, Teikyo University, in Japan.

Adithya Kulkarni is a postdoctoral fellow in the Department of Computer Science at Virginia Tech.

Xiaomin Lin is a postdoctoral researcher in the Department of Electrical and Computer Engineering at Johns Hopkins University.

Andreas Martin, PhD, is a professor of applied artificial intelligence at the FHNW University of Applied Sciences and Arts Northwestern Switzerland.

Zhonghao Shi is a PhD candidate in the Thomas Lord Department of Computer Science at the University of Southern California.

Maja Matarić is a Distinguished Professor of Computer Science, Neuroscience, and Pediatrics at the University of Southern California.

Tinoosh Mohsenin is an associate professor in the Department of Electrical and Computer Engineering at Johns Hopkins University.

Amy O’Connell is a PhD candidate in the Thomas Lord Department of Computer Science at the University of Southern California.

Hasib-Al Rashid is a machine learning engineer at Zywie Healthcare.

Keiki Takadama is a professor at the University of Tokyo in Japan.