By Aastha Acharya, Nisar Ahmed, Shelly Bagchi, Nicholas Conlon, Priya Donti, Yolanda Gil, Nick Gisolfi, Zhao Han, Melissa Hatton, Emmanuel Johnson, Andrea Loreggia, Francesca Rossi, Emmanuel Senft, Biplav Srivastava, Frank Stein
The Association for the Advancement of Artificial Intelligence’s 2022 Fall Symposium Series was held at the Westin Arlington Gateway in Arlington, Virginia, November 17-19, 2022. There were seven symposia in the fall program: Artificial Intelligence for Human-Robot Interaction, Artificial Intelligence for Predictive Maintenance, Distributed Teaching Collaboratives for AI and Robotics, Knowledge-Guided Machine Learning, Lessons Learned for Autonomous Assessment of Machine Abilities, The Role of AI in Responding to Climate Challenges, and Thinking Fast and Slow and Other Cognitive Theories in AI.
ARTIFICIAL INTELLIGENCE FOR HUMAN-ROBOT INTERACTION (S1)
On November 17-19, 2022, the ninth AAAI symposium on “Artificial Intelligence for Human-Robot Interaction (AI-HRI)” was held in person for the first time since 2019, with a remote participation option. It brought together researchers whose work spans areas contributing to the development of human-interactive autonomous robots. Last year, we reviewed the achievements of the AI-HRI community over the last decade. Our discussion led to a consensus toward establishing a conference on AI-HRI to further grow our community, and our organizers are actively working toward this goal. In the meantime, we are grateful for the platform and incubator for emerging research communities that the AAAI symposia have provided. As such, this year we adopted a visionary theme: the future of AI-HRI. Accordingly, we added a Blue Sky Ideas track to foster forward-thinking discussion on the future at the intersection of AI and HRI. As always, we appreciate all contributions related to any topic in AI/HRI and welcome new researchers who wish to take part in this growing community.
The Artificial Intelligence (AI) for Human-Robot Interaction (HRI) Symposium has been a successful venue for discussion and collaboration since 2014. Building on the success of past symposia, AI-HRI has influenced a variety of communities and problems, and has pioneered discussions of recent trends and interests in both the AI and HRI fields. This year’s AI-HRI Fall Symposium aimed to bring together researchers and practitioners from around the globe, representing a number of university, government, and industry laboratories. In doing so, we hope to accelerate research in the field, support technology transition and user adoption, and determine future directions for our group and our research.
Seventeen papers were presented at this year’s AI-HRI symposium by participants from universities, industry, and national research laboratories. Topics included the future of AI-HRI and “Blue Sky” ideas, ubiquitous HRI, ethics in HRI, and trust and explainability in HRI, to name a few. In addition to the paper presentations, invited talks were given by Cynthia Matuszek (University of Maryland, Baltimore County, USA), Matthew Gombolay (Georgia Institute of Technology, USA), and Hatice Gunes (University of Cambridge, UK).
For the first time since 2019, the symposium was held in person, with some attendees also participating remotely via Zoom. In an effort to support diversity and inclusion, five complimentary registrations were provided to people from under-represented groups. Three of these attended in person and provided new viewpoints to our discussions.
The symposium was held over two and a half days and had four paper sessions, three invited talks, two poster sessions, and one panel discussion. At the conclusion of each invited talk, a breakout session allowed participants to have a more in-depth conversation with the invited speaker and attendees. At the end of each day, authors had the opportunity to present their papers as posters to trigger further discussion.
The first invited talk was given by Cynthia Matuszek (University of Maryland), titled Robots, Language, and the World: Language Grounding and Human-Robot Interaction in VR. The presentation triggered a rich discussion on robots in virtual environments like VR and simulators, with questions including:
- “How do/would you use VR in your research? Why or why not?”
- “If the “Sim2Real for HRI” system of your dreams did exist, (a) what would it be capable of, and (b) how would you use it in your research?”
- “VR is another tool in the toolbox of HRI researchers; what other tools have you been incorporating into your work?”
- “What new language recognition capabilities would be most useful for your work?”
The second invited talk was given by Matthew Gombolay (Georgia Tech), titled Are Humans Amazing or…Not? Insights for Robot Learning from Human Demonstration, with questions such as:
- “What application domains are most benefiting from RL/LfD?”
- “What application domains are not benefiting from RL/LfD, but should?”
- “What application domains are unnecessary or inappropriate for RL/LfD?”
- “How are you using RL/LfD in your research?”
- “What challenges are you facing in using RL/LfD in your research?”
- “Why did the neural network cross the road?”
The final talk was given by Hatice Gunes (University of Cambridge) on Affective Intelligence for Human-Robot Interaction Research: Lessons Learned along the Journey, with questions such as:
- “What representations of emotion are you using (or interested in using) in your research?”
- “How feasible and valuable would it be to get the research community to use a standard representation of emotion?”
- “What social signals are you detecting from people? Which ones are being used (or could be used) for human emotion recognition?”
- “What social signals are your robots producing when interacting with people?”
- “Which ones are being used (or could be used) for robot emotion expression?”
- “How can/does the emotional state of a human or robot impact robot decision-making in your research?”
- “What challenges/concerns do you have about integrating emotion into your research and in HRI in general?”
We additionally held a panel on the topic The Future of AI for HRI. The panel was moderated by Ross Mead (Semio AI, Inc.) and included Megan Zimmerman (National Institute of Standards and Technology), Dan Grollman (Plus One Robotics), Cynthia Matuszek (UMBC), and Tom Williams (Colorado School of Mines). The panelists commented on the community’s needs that first inspired the AI-HRI Symposium – a need for a venue to feature more technical work to advance HRI research. There was also a discussion of what is needed to improve the submission and review process of technical work at HRI conferences and to encourage submission of technical work to these venues. They also gave their thoughts on the changes in the HRI community over the past nine years of AAAI fall symposia. A healthy debate was had on the question of whether to make AI-HRI its own independent conference, or whether to co-locate it with an existing conference in the future, as we move toward a larger presence in the community. Although our panel provided only a small set of opinions, our audience came away with many ideas for how we can move AI-HRI forward in the future.
Overall, AI-HRI 2022 was a very productive and stimulating symposium, and the attendees are excited to continue working towards research that will prepare both robots and people to more intuitively interact with each other in the future!
Zhao Han, Emmanuel Senft, Muneeb I. Ahmad, Shelly Bagchi, Amir Yazdani, Jason R. Wilson, Boyoung Kim, Ruchen Wen, Justin W. Hart, Daniel Hernandez García, Matteo Leonetti, Ross Mead, Reuth Mirsky, Ahalya Prabhakar, and Megan L. Zimmerman served as the organizing committee of this symposium. The peer-reviewed and accepted papers of the symposium were published on arXiv at https://arxiv.org/abs/2209.14292.
ARTIFICIAL INTELLIGENCE FOR PREDICTIVE MAINTENANCE (S2)
Complex physical systems degrade over time, and continued maintenance is required to ensure their peak performance. Upkeep of well-engineered systems is often straightforward, yet significant challenges lie in knowing when and what to maintain. Traditional approaches to managing maintenance activities rely on scheduled or condition-based approaches. The Predictive Maintenance (PMx) paradigm complements them with the ability to forecast needs into the future, reducing the monetary and logistical burden of ownership, boosting operational safety, and reducing system downtime due to unexpected failures. Past successes show that PMx is capable of achieving these goals; however, there are remaining challenges and untapped opportunities in applying this technology at scale in real-world settings. Many of these outstanding issues can be resolved with artificial intelligence.
The symposium brought together researchers and practitioners across academia, industry, and government. Keynote speakers focused on ways in which these sectors can collaborate to develop robust research, development and practice communities focused on AI-driven PMx.
There were three keynotes, 23 talks, two panel discussions, and a poster session. Some major themes that emerged included data quality and AI readiness, modeling paradigms that support maintenance-specific inquiries, and success stories of AI-driven PMx.
There was consensus that AI models are only as good as the data used to train them, and there are unique challenges in obtaining high-quality data to enable AI-driven PMx. When maintenance data exists, it tends to be privately held, and the processes by which the data is collected may be noisy, limiting confidence in the quality of the data. It is important that publicly available benchmark and proxy datasets exist to enable rigorous evaluation of AI-PMx models. Several talks focused on publishing new datasets, collating existing public datasets into an AI-PMx data repository, and algorithmic approaches to bridge the gap between available data and the target, real-world application context.
There was robust discussion on the types of models that support maintenance activities. Some examples include estimating remaining useful life from sensor data obtained for critical components, performing automated optical inspection of wiring systems, robustifying supply chain networks under emerging logistic constraints, fusing multiple decision-making perspectives to optimize maintenance actions, and learning from maintenance logbook writeups by leveraging large language models. Trustworthiness and explainability of these and other AI models were a recurring theme at the symposium, as maintenance recommendations can have sizable real-world impact while carrying significant risks. Some speakers shared models that have safety performance considerations built into them, and others talked about frameworks that can formally verify that trained AI models adhere to their design specifications.
Some presenters shared success stories of commercial and government adoption of PMx technology. Part of this discussion involved assessment of AI readiness levels and levels of AI autonomy. Comparing and contrasting the AI-PMx approach with other maintenance paradigms made it clear how much a blend of new and established techniques is needed. Additional topics included how to educate the workforce of the future that will be both building and using AI-driven PMx technology, how to quantitatively assess the costs and benefits of PMx, and extraterrestrial maintenance challenges such as automated fault recovery for autonomous systems like planetary rovers or space habitats.
The symposium brought together many researchers and practitioners who look at the challenge of applying artificial intelligence to maintenance of equipment from diverse perspectives. The unifying vision that emerged is that using AI to maintain real-world systems represents an exciting capability that stands to benefit people who use, maintain, or interact with complex systems. We are looking forward to organizing another opportunity in the future to reconvene, share successes, new challenges and opportunities, and further grow the AI-PMx research, development and practice communities.
The co-chairs of the event were Nick Gisolfi and Artur Dubrawski, and key co-organizers included Abdel-Moez Bayoumi, David Alvord, Dragos Margineantu, and Steven Robinson. The author of this report is Nick Gisolfi.
DISTRIBUTED TEACHING COLLABORATIVES FOR AI AND ROBOTICS (S3)
The symposium on Distributed Teaching Collaboratives for Artificial Intelligence and Robotics was held as part of the AAAI Fall Symposium Series in Arlington, VA, on November 17–19, 2022. The primary goal of this symposium was to understand best practices when developing a distributed classroom and to facilitate the creation of new teaching collaboratives. A Distributed Teaching Collaborative (DTC) is a network of institutions that agree to collaborate in offering courses through a distributed classroom model. To this end, the symposium brought together participants from R1 institutions, minority-serving institutions (MSIs), and primarily undergraduate institutions (PUIs) to discuss challenges and opportunities for developing a distributed teaching collaborative with these institutions in mind. A total of twenty-one participants joined both in person and virtually from across institutions. The symposium consisted of three quick-take sessions, four keynotes, and three breakout sessions.
The goal of the quick-take sessions was to provide insight into the challenges and opportunities for a DTC at various institutions. There were three quick-take sessions, each consisting of five 5-minute presentations followed by a panel discussion. The first focused on HBCUs, MSIs, and PUIs. One of the key takeaways from this session is that faculty at these institutions often have a heavy teaching load, which reduces their bandwidth to explore new fields or develop new courses. They saw the DTC model as a way of leveraging what others have done to introduce new topics to their students. Although the distributed teaching model may not reduce their course load, they recognized its ability to help them expand their course offerings and support developing new courses. The second quick take focused on R1 institutions and the challenges and opportunities for a distributed teaching collaborative. Speakers shared that faculty at R1s had a much lighter teaching load and access to more resources than their counterparts. Their challenges were in the number of students in classes and being able to offer new courses based on enrollment demand. Faculty highlighted the growing need for computer science faculty and the opportunity to provide various specializations through a DTC. The final quick take brought in both funders’ and industry perspectives to better understand what funding is available for such collaboration and what the industry’s needs are. We learned about various programs at the National Science Foundation for funding such collaboratives and how this could fit into multiple directorates’ funding objectives. We also learned about the industry’s challenges with students who need to grow their interpersonal skills and how a DTC could provide students the opportunity to learn and grow with others worldwide.
The symposium consisted of four keynote speakers with varying engagement with the idea of a distributed teaching collaborative. The symposium began with a keynote by Chad Jenkins, a professor of robotics at the University of Michigan. In his talk, he laid out the symposium’s goal and the intent of the distributed teaching collaborative. He highlighted efforts to offer distributed classes between the University of Michigan and Berea College, Howard University, and Morehouse College. The second keynote was given by Dr. Talitha Washington, the current director of the AUC Data Science Initiative. She highlighted the need for more diversity in data science and the efforts being championed at AUC. In doing so, Dr. Washington also discussed their approach to addressing this problem by focusing on the faculty rather than the students. Next, we heard from Dr. Stephen Lu, director of the iPodia program at the University of Southern California. In his talk, Dr. Lu discussed his work over the past ten years in building the iPodia program, a distributed classroom platform used by various universities worldwide. He argued that students of the future need a truly global perspective and that the best way to provide one is by allowing them to interact with diverse students from around the world. This is the goal of iPodia and was the focus of his talk. Dr. Lu provided examples from iPodia of how students have benefited from a truly global education. Next, we heard from Dr. Carlotta Berry, a professor at the Rose-Hulman Institute of Technology. Dr. Berry highlighted her work at Rose-Hulman in teaching robotics and striving to bring more diverse voices into the field of robotics. In her talk, she also discussed the need to engage with diverse learners and how a DTC could help. Dr. Dave Touretzky gave the last keynote of the symposium. In his talk, Dr. Touretzky focused on the challenges of teaching robotics distributively and his previous work with ARTSI. Dr. Touretzky highlighted the need for robotics platforms to be open source and the lessons we can learn from the ARTSI project.
At the end of the symposium, we held three breakout sessions. The breakout sessions were intended to recap the symposium and document lessons learned, best practices, and next steps. During the breakout sessions, we began with a recap of the quick-take sessions and the keynote presentations. From there, we tasked participants with devising a list of best practices for teaching courses collaboratively. Additionally, participants were asked to devise a list of potential courses they would like to teach. We found that participants wanted to teach one of two types of courses distributively: either purely technical courses, such as computer vision and hardware for AI, or socially conscious engineering courses, such as the social impact of AI, the ethics of AI, and so on. We concluded the breakout session with a list of best practices and potential next steps to begin the development of this collaborative for AI and robotics.
Emmanuel Johnson, Jana Pavlasek, Chad Jenkins, and Yolanda Gil served as co-chairs of this symposium. This report was written by Emmanuel Johnson and Yolanda Gil.
KNOWLEDGE-GUIDED MACHINE LEARNING (S4)
Knowledge-guided Machine Learning (KGML) is an emerging paradigm of research that aims to integrate scientific knowledge into the design and learning of machine learning (ML) methods to produce ML solutions that are generalizable and scientifically consistent with established theories. KGML is ripe with research opportunities to influence fundamental advances in ML for accelerating scientific discovery and has already begun to gain attention in several branches of science including physics, chemistry, biology, fluid dynamics, and geoscience. The goal of this symposium was to nurture the community of researchers working at the intersection of ML and scientific fields, by providing a common platform to cross-fertilize ideas from diverse fields and shape the vision of the rapidly growing field of KGML. No formal report was filed by the organizers for this symposium.
LESSONS LEARNED FOR AUTONOMOUS ASSESSMENT OF MACHINE ABILITIES (S5)
With the rise of increasingly sophisticated autonomous machines, questions arise about when and how communication of operational intent, and assessments of actual versus supposed capabilities of autonomous agents, impact overall performance. Lessons Learned for Autonomous Assessment of Machine Abilities (LLAAMA) examined possibilities for enabling intelligent autonomous systems to self-assess and communicate their ability to effectively execute assigned tasks, as well as reason about the overall limits of their competencies and maintain operability within those limits. The symposium brought together experts and researchers working in this burgeoning field to share lessons learned, identify major theoretical and practical challenges encountered so far, and discuss potential avenues for future research and real-world applications.
Invited speakers and panelists shared their diverse experiences from working in industry, academia, and government agencies. Mr. George Hellstern from Lockheed Martin Aeronautics started off the symposium with a viewpoint on real-world applications of advanced autonomous systems and how competency assessment can help to enhance their performance and usage. Professor Jacob Crandall from Brigham Young University presented on measures and metrics for competency self-assessment and on designing autonomous robots that know their limits. Professor Shlomo Zilberstein from the University of Massachusetts, Amherst focused on introspective competency and proficiency self-assessment by autonomous systems. Then, Professor Amy Pritchett from Penn State presented on the role of machine competency self-awareness for human-robot teaming in time-critical and safety-critical environments. The panel that closed out the symposium consisted of Professor Amy Pritchett, Dr. Jiangying Zhou of Raytheon BBN, and Dr. Marc Steinberg of the Office of Naval Research. Several important topics were discussed, including internal versus external competency assessment, meta-analysis of self-assessment and competency assessment systems, and the future of competency and proficiency awareness for autonomous systems.
The symposium also featured ten papers, ranging in topics from reinforcement learning and machine learning to formal methods and temporal logic. Some papers focused on components and subcomponents of autonomous systems and how they can be designed to address various facets of competency self-assessment. Others analyzed metrics that could be used to evaluate self-assessments formed by the autonomous systems. Furthermore, there were also summary and position papers that presented a holistic view of the challenges, risks, and opportunities for competency-aware autonomous systems. As a breakout activity, the attendees were broken up into groups and given one of four different autonomous systems along with a set of discussion questions focusing on competency and its assessment for those systems. This activity garnered many thought-provoking questions, including the necessity of competency awareness and how much time should be devoted to perfecting a system versus analyzing its competency at the current stage.
In summary, the LLAAMA symposium was successful in bringing together a community that is actively working toward and thinking about machine self-assessment to create increasingly useful autonomous systems. This topic has implications for human-machine teaming, real-world deployment of AI and machine learning, and usage of autonomous systems in safety-critical and high-risk environments. This is a very new and exciting area of research, and there is interest from the attendees in keeping the discussion going and forming a larger community around this important topic.
The organizing team consisted of Aastha Acharya, Nicholas Conlon, Nisar Ahmed (University of Colorado, Boulder); Rebecca Russell, Michael Crystal (Draper); Brett Israelsen (Raytheon); Ufuk Topcu (University of Texas, Austin), Zhe Xu (Arizona State University), and Daniel Szafir (University of North Carolina). This report was written by Aastha Acharya, Nicholas Conlon, and Nisar Ahmed.
THE ROLE OF AI IN RESPONDING TO CLIMATE CHALLENGES (S7)
Climate change is one of the most pressing challenges of our time, requiring rapid action across society. AI can support applications in climate change mitigation (reducing or preventing greenhouse gas emissions), adaptation (preparing for the effects of climate change), and climate science. At the same time, AI is used in many ways that hinder climate action and has a carbon footprint itself. This symposium brought together participants from academia, industry, government, and civil society to explore these intersections of AI with climate change, as well as how these sectors, individually and together, can contribute to solutions.
The 2 1/2-day symposium included keynotes, paper presentations, panel discussions, and a roundtable discussion. The first keynote speaker, Burcu Akinci, CMU Professor of Civil and Environmental Engineering and department chair, talked about the challenges around reducing energy usage in the built environment and how digital twin technologies could help. Troy Harvey, CEO of PassiveLogic, discussed creating a generalized autonomy solution to serve as the control system for buildings. Pamela Isom, Chief Innovation Officer of FWG Solutions and former Executive Director of the DOE Artificial Intelligence & Technology Office, provided examples of how AI can be used to adapt to and mitigate climate change across different sectors. Finally, Ranveer Chandra, CTO for Agri-Food at Microsoft Research, described approaches to develop a data-driven agri-food system to improve sustainability for farms.
Twenty-seven papers covered a wide range of topics and used a wide variety of AI tools in areas including computer vision, optimization, NLP, knowledge graphs, super-resolution, prediction, deep generative models, agent-based modeling, and synthetic data. The problem space spanned the areas of mitigation (emissions monitoring, carbon reduction, and carbon capture & sequestration), adaptation (hazard forecasting, vulnerability management), climate science, and enabling frameworks for this work (climate finance, climate policy, and climate-related datasets and benchmarks).
The first panel discussion was on “Where is the funding and is it sufficient to meet the 2030 and 2050 climate targets?” and featured panelists from the NSF, DOE, ITA International, and Banco de España. The second panel topic was “What governance and actions are necessary to align AI with climate change goals, UN SDGs, and associated frameworks?” and featured panelists from international organizations, standards organizations, industry, nonprofits, and startups. Both topics evoked lively discussion among panel members and in-person and virtual participants.
This was our first AAAI Fall Symposium on AI and climate change, and we discovered that there was a lot of interest and energy around this topic. We accomplished the program committee’s goals of spotlighting innovative work applying AI to climate change and developing a community focused on this topic. We received many more papers than we could accept and had a significant number of international participants, many of whom attended in person. The symposium highlighted that there is a very large gap between what is needed to “bend the curve” and what we covered in the Fall Symposium. The Saturday roundtable was a lively discussion on how we can move the AI community forward on the climate change challenge. The participants concluded that we should continue to build the community, expand into areas such as adaptation and resilience, and reconvene next year to continue our efforts.
The program committee for the symposium included: Feras Batarseh (Virginia Tech), Jan Drgona (PNNL), Kristen Fletcher (Naval Postgraduate School), Pierre-Adrien Hanania (Capgemini), Srinivasan Keshav (U of Cambridge), Bran Knowles (Lancaster University), Raphaela Kotsch (U of Zurich), Peetak Mitra (Excarta), Alex Philp (Mitre), Jim Spohrer (ISSIP), Meghna Tare (UT Arlington), Gege Wen (Stanford). The co-chairs and authors of this report are Frank Stein (Intelligent Systems Research, Virginia Tech), Priya Donti (Executive Director, Climate Change AI & Runway Startup Postdoc at Cornell Tech), and Melissa Hatton (Manager, AI Strategy, Capgemini Government Solutions).
THINKING FAST AND SLOW AND OTHER COGNITIVE THEORIES IN AI (S8)
State-of-the-art AI demonstrates several limitations, including the lack of deep understanding of information coming from data, the absence of common-sense reasoning, the difficulty in dealing with causality, and the inability to learn general concepts from limited data. AI systems usually employ either a machine learning or a logical reasoning approach. Each of these approaches has its strengths and weaknesses, but it is hoped that their combination will bring about a new generation of advanced AI. Many researchers also agree that taking inspiration from how humans make decisions, with the ability to adapt and generalize, is a promising avenue of research. The AAAI 2022 fall symposium on Thinking Fast and Slow and Other Cognitive Theories in AI follows this line of thought. To this aim, it brought together leading researchers from multiple disciplines to discuss the combination of machine learning and symbolic reasoning techniques for human-like decisions that are generalizable, adaptive, robust, and ethical, taking inspiration from cognitive theories of human decision making, such as the dual-system theory of Daniel Kahneman.
The symposium was held in Arlington, Virginia, on November 17-19, 2022. The technical program included invited talks, presentations by authors of peer-reviewed papers, and open mic sessions.
The first day began with an introduction by Francesca Rossi (IBM Research), which defined the main goals and topics for the symposium, and then had a long talk by Susan Epstein (CUNY), who reviewed the progress of AI using examples of algorithmic advances demonstrated in the context of games, covering alpha-beta pruning, GOFAI, reinforcement learning, and Monte-Carlo tree search. She also pointed out efforts to make AI not only skilled but also capable of explaining its decisions, generalizing, and being reliable. In another long talk, Jason D’Cruz (University at Albany, NY) argued that, although AI can make accurate predictions from data, using such predictions to make decisions about human trustworthiness may not be appropriate, since they may not take into account the unique context of the humans involved. There were also talks that showed how AI cognitive architectures inspired by dual-process theory can effectively solve epistemic planning problems compared to state-of-the-art planners, and how such architectures can achieve better performance in pathfinding on a grid and in manufacturing. The day also had engaging open-mic sessions on fairness, on speculative and critical thinking, and on the role of cognitive architectures.
The second day started with a long talk by Daniel Cunnington (IBM Research) on learning logical rules from data using neural networks. Later, a long talk by Breno William Carvalho showed how graph-based neural modules can be useful to inspect attention-based architectures.
The third and last day saw three invited talks on the symposium topics from prominent scholars in the field. John E. Laird (University of Michigan) talked about AI architectures inspired by neuroscience knowledge of brain components and their interaction. Yoshua Bengio (Université de Montréal) discussed how to embed higher-level cognition in deep learning models, arguing that causal models involve a family of distributions indexed by interventions, environments, and initial states. Finally, Gary Marcus (NYU) presented some shortcomings in the behavior of current transformer-based dialog systems and discussed their ability to lead to AGI.
The symposium ended with a session reflecting on the whole program and discussing how to move forward with this research direction. Plans for future events, as well as for collections of contributions, were discussed among the organizers and all the participants. Many attendees expressed thanks for the event and interest in follow-up activities.
Besides the content of the technical program, there were several other features of the symposium that are worth pointing out:
The participants formed a highly multidisciplinary audience, with researchers not only from AI but also from neuroscience, cognitive science, philosophy, and psychology. We believe that this is a very important ingredient to advance AI.
The diversity of the audience was very high, not just in terms of background and expertise, but also in terms of gender and age.
The open mic sessions were very engaging, with many discussion threads that started in these sessions and continued during the technical agenda or the social parts of the program.
Marianna Ganapini, Lior Horesh, Andrea Loreggia, Nicholas Mattei, Francesca Rossi, Biplav Srivastava, and Brent Venable served as co-chairs of the symposium. The papers, presentations, and photos of the symposium are available at the symposium site (https://sites.google.com/view/aaai-fss22).
Authors: Andrea Loreggia (Department of Information Engineering, University of Brescia, Italy), Francesca Rossi (IBM Research, USA), and Biplav Srivastava (University of South Carolina, USA).
Aastha Acharya is a Ph.D. candidate in Aerospace Engineering Sciences at University of Colorado Boulder.
Nisar Ahmed is an Associate Professor of Aerospace Engineering Sciences at the University of Colorado Boulder.
Shelly Bagchi is an Electrical Engineer at the National Institute of Standards and Technology in Gaithersburg, MD.
Nicholas Conlon is a Ph.D. candidate in the Department of Computer Science at the University of Colorado Boulder.
Priya Donti is Executive Director at Climate Change AI & Runway Startup Postdoc at Cornell Tech.
Yolanda Gil is the director of New Initiatives in AI and Data Science and a Research Professor in Computer Science at the University of Southern California.
Nick Gisolfi is a Project Scientist at the Robotics Institute, Carnegie Mellon University.
Zhao Han is a Post-Doctoral Fellow and Adjunct Faculty of Computer Science at Colorado School of Mines.
Melissa Hatton is Manager of AI Strategy at Capgemini Government Solutions.
Emmanuel Johnson is a Postdoctoral Research Associate at the Information Sciences Institute, University of Southern California.
Andrea Loreggia is affiliated with the Department of Information Engineering at the University of Brescia.
Francesca Rossi is a Research Scientist at IBM Research.
Emmanuel Senft is a Research Scientist at the Idiap Research Institute in Martigny, Switzerland.
Biplav Srivastava is a Professor of Computer Science at the University of South Carolina.
Frank Stein is involved in Intelligent Systems Research at Virginia Tech.