The online interactive magazine of the Association for the Advancement of Artificial Intelligence

By Tabitha Colter, John Laird, Christian Lebiere, Richard G. Freedman, Ifrah Idrees, Brian Hu, Suresh Kumaar Jayaraman, Ariel Kapusta, David Porfirio, David Reitter, Paul Rosenbloom, Frank Stein, Andrea Stocco, Kevin S. Xu. 

The Association for the Advancement of Artificial Intelligence’s 2023 Fall Symposium Series was held at the Westin Arlington Gateway in Arlington, Virginia, October 25-27, 2023. There were seven symposia in the fall program: Agent Teaming in Mixed-Motive Situations; Artificial Intelligence and Climate: The Role of AI in a Climate-Smart Sustainable Future; Artificial Intelligence for Human-Robot Interaction; Assured and Trustworthy Human-Centered AI; Integrating Cognitive Architectures and Generative Models; the Second Symposium on Survival Prediction: Algorithms, Challenges, and Applications; and Unifying Representations for Robot Application Development. This report contains summaries of the symposia, submitted by the symposium chairs. 

Agent Teaming in Mixed-Motive Situations 

The AAAI symposium on Agent Teaming in Mixed-Motive Situations, held on October 25-27, 2023, provided an insightful exploration into the challenges and innovations surrounding multi-agent interactions with different goals, incentives, and decision-making processes. This symposium gathered experts and researchers from diverse backgrounds, including multi-agent/multi-robot systems, human-agent/robot interaction, artificial intelligence, and organizational behavior. The key themes discussed in the talks, presentations, panel discussions, and breakout sessions offer a comprehensive overview of the state of the field.  

Prof. Subbarao Kambhampati’s (ASU) keynote highlighted the dual nature of mental modeling in cooperation and competition. The significance of obfuscatory behavior, controlled observability planning, and the use of explanations for model reconciliation was also discussed, emphasizing their impact on trust-building in human-robot interactions. Prof. Gita Sukthankar’s (UCF) talk addressed challenges in teamwork, presenting a case study on software engineering teams and exploring innovative techniques for distinguishing effective teams from ineffective ones. These keynotes set the stage for discussions on the complexities of mixed-motive scenarios.  

Dr. Jean Oh’s (CMU) presentation on “Making Artificial Intelligence Measurable” focused on evolving challenges in evaluating and measuring AI progress beyond traditional benchmarks like the Turing test. On the final day, Dr. Marc Steinberg (ONR) moderated an interactive discussion on “Research Challenges in Mixed-Motive Teams,” exploring intricacies such as the challenges of modeling humans in mixed-motive settings, experimental setups for studying mixed-motive teams, and ways of measuring and assessing mixed-motive situations. These discussions provided diverse perspectives on the evolving landscape of agent teaming.  

Throughout the symposium, presentations of accepted papers covered a wide range of topics, including Maximum Entropy Reinforcement Learning, robustness in Multi-Agent Path Finding, Bayesian inverse planning for communication scenarios, hybrid navigation acceptability, and safety. Talks also explored challenges in human-robot teams, the impact of aligning robot values with human preferences, and the importance of constructive elaboration in autonomous agents’ communication. These presentations collectively showcased the breadth and depth of research in multi-agent systems.  

The panel session on computational agent teams explored themes such as team structure, effective collaboration within teams with diverse preferences, and the role of game theory in aligning individual and team objectives. Meta-level parameters for multi-agent collaboration were discussed, providing insights into broader considerations essential for effective cooperation in computational agent teams. The day two panel session on explicit and implicit communication within teams explored themes such as communicative nudges, trust, and transparency of both human and artificial agents. Panelists additionally weighed in on both the challenges and importance of context in human-agent communication in mixed-motive settings.  

Breakout group discussions focused on consensus and negotiation in mixed-motive groups, considering intragroup and intergroup dynamics and the impact of consensus on trust. They also explored future work in mixed-motive teaming, discussing interdisciplinary collaborations, identifying lacking resources, and questioning the relationship between mixed-motive interaction and social interaction.  

The symposium successfully brought together a community actively addressing challenges in agent teaming within mixed-motive situations. The discussions highlighted the complexities of collaboration, trust-building, and decision-making in diverse multi-agent scenarios. Possible future work includes interdisciplinary collaborations, resource identification, and further exploration of the intricate dynamics of mixed-motive interactions. The symposium underscored the significance of ongoing research in this field and the need for continued collaboration to advance our understanding of agent teaming in mixed-motive situations. 

Artificial Intelligence and Climate: The Role of AI in a Climate-Smart Sustainable Future  

Climate change is one of the most pressing challenges of our time, posing an existential threat to civilization and the planet. Artificial Intelligence (AI) can and, when appropriate, must play a key role in accelerating the transition to a low-carbon economy to stave off the risk of catastrophic warming. Recent advances in AI should be harnessed to increase the scale and speed at which low-carbon technologies are developed and deployed. AI can also help civilization to adapt to a warming planet and provide a greater understanding of climate science and climate impacts. At the same time, AI is a multi-purpose tool, which means it has the potential to accelerate many applications that increase greenhouse gas emissions, as well as having a carbon footprint itself. This symposium brought together participants from academia, industry, government, and civil society to explore these intersections of AI with climate change, as well as how different sectors, individually and together, can contribute to solutions. 

The 2 1/2-day symposium included keynotes, paper presentations, panel discussions, and roundtable discussions.   

The first keynote speaker, Anima Anandkumar, Professor of Computing at the California Institute of Technology, talked about Physics-Informed Machine Learning, including a project called FourCastNet for fast weather prediction. On the second day, Vipin Kumar, Professor of Computer Science and Engineering at the University of Minnesota and AAAI Fellow, provided a keynote on the “Role of Big Data and ML for Addressing Global Environmental Challenges.” On the third day, Erwin Rose, Chair of the UN Climate Technology Center and Foreign Affairs Officer at the U.S. Department of State, presented on “International Cooperation on AI and Climate at the 2023 UN Climate Conference and Beyond,” which explained how the UNFCCC is enabling climate solutions powered by AI. 

Twenty-two contributed papers covered a wide range of topics and used a wide variety of AI tools, including deep learning, generative AI, probabilistic ML, reinforcement learning, hybrid physical models, large language models, knowledge graphs, and quantum ML. These papers spanned many applications in the areas of mitigation, adaptation, and climate science, as well as cross-cutting issues in using AI for climate change.  

The first panel discussion was on “AI, Climate and Policy,” which was led by Jennifer Sklarew, GMU, and featured panelists Stefanie Falconi, USAID; Dann Sklarew, GMU; Johannes Kirnberger, OECD; Sorbel Feliz, AAAS STPF; and Carlos Martinez, AAAS STPF. The second panel topic was “Foundation Models,” led by Thomas Brunschwiler, IBM, and featured panelists Rahul Ramachandran, NASA; Bianca Zadrozny, IBM Research; Aditya Grover, UCLA; Matthew Chantry, ECMWF; Sasha Luccioni, HuggingFace; and Marquita Ellis, IBM Research. The third panel discussion, on “Financing AI for Climate,” was led by Jim Spohrer, ISSIP, and featured James Mister, Bavarian US Offices for Economic Development; Honour Masters, Energize Capital; Matt Blain, Voyager VC; and Sylvia Spengler, NSF. All topics evoked lively discussion among panel members and the audience and could have lasted much longer than the allocated time.  

Our Friday roundtable workshop on “Identifying Critical Data Gaps for AI & Climate” was organized to contribute to solving one of the big problems facing AI researchers in this field. It was led by Xiaojuan Liu and Olivia Mendivil from Climate Change AI.  

This was our second AAAI Fall Symposium on AI and Climate Change, and it demonstrated substantial interest and energy around this topic. We accomplished the program committee’s goals of spotlighting innovative work applying AI to climate change and of developing a community focused on this topic. We received many more papers than we could accept and had participants from around the globe, many of whom attended in person. The symposium highlighted that there is still a very large gap between what is needed to “Bend the Curve” and what we covered in the Fall Symposium. The participants concluded that we should continue to build the community, expand into areas such as adaptation and resilience, and reconvene next year to continue our efforts.   

The program committee for the symposium included: Utkarsha Agwan (UC Berkeley), Feras Batarseh (Virginia Tech), Thomas Brunschwiler (IBM Research Europe), Priya Donti (MIT) – Co-Chair, Christoph Funk (Centre for International Development and Environmental Research), Melissa Hatton (Capgemini Government Solutions) – Co-Chair,  Srinivasan Keshav (University of Cambridge), Alice Lepissier (Brown University), Marina Lesse (Energy Academic Group, Naval Postgraduate School), Peetak Mitra (Excarta), Jorge Montalvo (Centrica), Sebastian Ruf (InterContinental Exchange), Jim Spohrer (ISSIP), Frank Stein (Virginia Tech) – Co-Chair, Gege Wen (Stanford), Andrew Williams (Mila), Ziyi Yin (Georgia Tech). This report was written by Frank Stein. 

Artificial Intelligence for Human-Robot Interaction 

The AAAI Fall Symposium titled “Artificial Intelligence for Human-Robot Interaction (AI-HRI)” was held October 25-27, 2023, at the AAAI Fall Symposium Series in Arlington, Virginia. This year marked the tenth anniversary of AI-HRI, and events reflected on the achievements of the community and discussed plans for the community’s upcoming transition to a new venue and spiritual successor, Technological Advances in Human-Robot Interaction (TAHRI). Thirty attendees participated in the symposium. AI-HRI continued its signature community-building efforts, including paper presentations, poster sessions, and breakout group discussions. These activities work well within the symposium community because it is small enough that everyone can learn each other’s names and faces, yet large enough to draw an audience of people who have a real impact on the field. Breakout group discussions gave new researchers in the field opportunities to meet senior members in a more informal setting and to become more involved in future interactions and collaborations within the community; there were many first-time attendees in addition to returning members. 

The AI-HRI symposium also included invited talks from Patrícia Alves-Oliveira (Amazon Lab126), Michael Littman (Brown University), and Katherine Tsui (Toyota Research Institute, Robotics), who each presented methods and applications for state-of-the-art challenges across the spectrum of artificial intelligence (AI) and human-robot interaction (HRI) research. Their presentations sparked many of the conversation topics during the breakout discussions, ranging from how attendees might adopt related practices in their own research to new challenges these applications pose at the intersection of AI and HRI. Seventeen paper presentations, including three best paper nominees (the award went to Yuqi Yang (Franklin & Marshall College) et al. for their paper titled “Towards An Ontology for Generating Behaviors for Socially Assistive Robots Helping Young Children”), introduced further innovations over the past year across additional techniques and applications. The presentations delved into topics including the use of generative language models in dialogues and safety; AI acceptance in tech education, elderly care, and social navigation; and long-term autonomy implications in socially assistive robots. To support inclusion of community members regardless of their ability to travel to the Fall Symposium Series, hybrid support allowed virtual attendees to share their work with the broader community while viewing others’ presentations on-site. 

During the final half-day of the AI-HRI symposium, the community was joined by the Unifying Representations for Robot Application Development (UR-RAD) symposium attendees to discuss needs, expectations, and preferences for future gatherings like TAHRI to maintain the welcoming, inclusive, and close community that has persisted throughout the ten years of AI-HRI. The AAAI Fall Symposium Series provided an opportunity ten years ago for researchers at the intersection of the fields of AI and HRI to come together and identify where they belong when neither field’s venues seemed to provide the correct fit for their research contributions, serving as a bridge between the two areas of study. As the community met and learned from each other over the past ten iterations of the symposium, it became clearer that AI-HRI is a unique field of study that is equally important to both fields and their venues. More importantly, these gatherings fostered a strong, supportive community that recognizes the challenges its colleagues face trying to understand AI problems in HRI domains, as well as HRI problems surfacing as AI becomes more ubiquitous in society. Everyone wanted more time in breakout discussions due to the wonderful, productive conversations and networking, which is a testament to the community’s positive and comfortable engagement between both familiar and new faces. Although the tenth anniversary marked the last AI-HRI at the AAAI Fall Symposium Series, the opportunities that the venue offered have made this the beginning of a new community. On behalf of the past and present attendees, the organizers would like to thank AAAI for allowing everyone to participate at the Fall Symposium Series each year to establish and facilitate the growth of this group of researchers. As the next step, the organizers and attendees all look forward to meeting again at TAHRI alongside additional colleagues, new and old, as this community continues to grow and develop. 

Richard G. Freedman, Emmanuel Senft, Muneeb I. Ahmad, Daniel Hernández García, Zhao Han, Justin W. Hart, Ifrah Idrees, Ross Mead, Reuth Mirsky, and Jason R. Wilson served as the organizers of this symposium. This report was written by Richard G. Freedman and Ifrah Idrees, and this report was edited by Daniel Hernández García, Ross Mead, Reuth Mirsky, and Emmanuel Senft. 

Assured and Trustworthy Human-Centered AI  

The Assured and Trustworthy Human-centered AI (ATHAI) symposium was held as part of the AAAI Fall Symposium Series in Arlington, VA from October 25-27, 2023. The symposium brought together three groups of stakeholders from industry, academia, and government to discuss issues related to AI assurance in different domains ranging from healthcare to defense. The symposium drew over 50 participants and consisted of a combination of invited keynote speakers, spotlight talks, and interactive panel discussions.  

On Day 1, the symposium kicked off with a keynote by Professor Missy Cummings (George Mason University) titled “Developing Trustworthy AI: Lessons Learned from Self-driving Cars.” Missy shared important lessons learned from her time at the National Highway Traffic Safety Administration (NHTSA) and from interacting with the autonomous vehicle industry, including that maintaining AI is just as important as creating it, and that human errors in operation do not simply disappear with automation but can instead be replaced by human errors in coding. The first panel covered definitions related to AI assurance and provided several grounding definitions while establishing the lack of consistency across the field. The second panel covered challenges and opportunities for AI test and evaluation, highlighting gaps in current evaluation strategies but offering optimism that existing strategies can be sufficient if followed. The final panel covered industry and academic perspectives on AI assurance, suggesting ideas that could be shared across industries and highlighting a potential need for regulation.  

Day 2 began with a panel of experts from domains like defense and healthcare discussing government and policy perspectives on AI assurance. This panel identified several barriers to achieving assured AI, including the lack of required standards and accepted benchmarks for assurance requirements. Fundamental questions like “what is safe and effective enough?” and issues relating to policy and regulation gaps were raised, and some possible solutions were presented. Professor Fuxin Li (Oregon State University) gave a keynote titled “From Heatmaps to Structural and Counterfactual Explanations,” which highlighted his research group’s work to explain and debug deep image models, toward the goal of improving the explainability of AI systems. Matt Turek (DARPA) also gave a keynote talk titled “Towards AI Measurement Science,” which took a historical view of how humans have measured things over time and outlined the need for, and possible avenues toward, an AI measurement science that could help the field move beyond standard benchmarks and advance the state of the art.  

Other highlights of the symposium included a series of fifteen two-minute lightning talks on accepted papers, followed by a poster session on these papers. The poster session enabled lively discussion among participants, with research covering a wide range of topics such as tools for rapid image labeling, tools to improve AI test and evaluation, metrics and methods for evaluating AI, and an assured mobile manipulation robot that can perform clinical tasks like vital sign measurement. On the final half day, participants split into two breakout groups for more in-depth discussions and exchange of ideas. One group focused on practical next steps toward AI assurance in the medical domain, particularly what can be done in the absence of regulatory change. The other group discussed assurance of foundation models and generative AI technologies such as large language models.  

Overall, ATHAI brought together experts from diverse fields to begin building a shared understanding of the challenges and opportunities for AI assurance across different domains. These discussions were especially timely given the President’s recent executive order on safe, secure, and trustworthy AI. Researchers from different backgrounds also offered distinct insights into the community’s hopes for a future of AI that is safe, secure, assured, and explainable.  

Brian Hu, Heather Frase, Brian Jalaian, Ariel Kapusta, Patrick Minot, Farshid Alambeigi, S. Farokh Atashzar, and Jie Ying Wu served as co-organizers of this symposium. This report was written by Brian Hu and Ariel Kapusta, with helpful inputs from Tabitha Colter.  

Integrating Cognitive Architectures and Generative Models  

The symposium took place on October 25-27, 2023, with the aim of exploring existing gaps and integration possibilities between two different traditions in AI: the established field of cognitive architectures (CAs) and the booming field of generative deep-learning models.  

The symposium alternated between presentation-based sessions and breakout discussions. About 60 attendees were present either in person or virtually; they represented different geographical regions (North and South America, Europe, Asia, Australia and New Zealand) and different institutions (private companies, research institutes, and universities). Most attendees were from the CA community, with additional attendees from the deep learning/generative models community.  

The presentation sessions offered four strategies to integrate generative models and cognitive architectures. The first is offline integration, whereby large language models (LLMs) are used as a knowledge base that can be queried through architecture-generated prompts or where the CA produces behavior on which a transformer could be trained.  

The second is modular integration, whereby an LLM is incorporated within an existing CA, usually as a read-only long-term memory. The CA retrieves data from the LLM via specialized prompts and uses the results in its reasoning.  

The third is system-level integration, where multiple generative models are integrated within a CA, providing different functionalities beyond static knowledge bases. Presentations illustrated potential roadmaps to achieve integration in both existing and proposed cognitive architectures or the Common Model of Cognition.  

The fourth is unification, whereby multiple principles that underlie LLMs are transferred to CAs. In practice, this approach leads to a neural implementation of an architecture. Different memories and functions in a CA would be implemented by different types of networks (e.g., Hopfield networks for memories and convolutional networks for vision) while vector symbolic algebras can implement structured and composable representations.  
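
To make the second strategy more concrete, here is a minimal sketch of an LLM wrapped as a read-only long-term memory inside a cognitive architecture’s decision cycle. Every name in it (query_llm, LLMLongTermMemory, CognitiveAgent) is hypothetical and invented for illustration; it does not correspond to any specific architecture or model API presented at the symposium.

```python
# Hypothetical sketch: an LLM as a read-only declarative long-term memory
# inside a CA's elaborate-then-decide cycle (the "modular integration" strategy).

def query_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (a local model or a remote API)."""
    return f"<retrieved knowledge for: {prompt}>"

class LLMLongTermMemory:
    """Read-only long-term memory backed by a generative model."""

    def retrieve(self, cue: dict) -> str:
        # The architecture turns a symbolic retrieval cue into a prompt.
        prompt = "Recall facts matching: " + ", ".join(
            f"{slot}={value}" for slot, value in cue.items()
        )
        return query_llm(prompt)

class CognitiveAgent:
    def __init__(self) -> None:
        self.working_memory: list[str] = []
        self.ltm = LLMLongTermMemory()

    def decision_cycle(self, goal: str) -> None:
        # 1. Elaborate: retrieve relevant knowledge from the LLM memory.
        self.working_memory.append(self.ltm.retrieve({"goal": goal}))
        # 2. Decide/act: reason over working memory with the CA's own rules.
        print(f"Reasoning over: {self.working_memory}")

CognitiveAgent().decision_cycle("make coffee")
```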

The breakout sessions highlighted that generative models, and LLMs in particular, provide exciting opportunities for CAs, especially through their capacity to absorb and retrieve large quantities of data from vast textual corpora. At the same time, LLMs suffer from limitations that have become more apparent since the introduction of ChatGPT. These limitations include the tendency to hallucinate incorrect information that was not in their training data; the lack of persistent memory and context awareness beyond their predefined input window; the inability to learn incrementally; the inability to reason through and revise their output; their lack of metacognitive ability and introspection; and the massive amount of resources needed to train them. Despite their apparent ability to create coherent semantics, they ultimately also exhibit a grounding problem (like symbolic systems), only transferred into a vector space.  

CAs offer complementary advantages. They are naturally designed to integrate multiple representations and their associated memory stores, to learn incrementally and online, and to implement robust forms of inference and reasoning. They naturally work in open-ended cognitive cycles instead of being one-pass systems and may include metacognitive abilities. Finally, and unlike LLMs, they can be provided knowledge directly and do not require extensive offline training.  

When asked to design integrated systems for case studies during the breakout sessions, a common solution was a vertical integration strategy, whereby an LLM’s output is inspected by a cognitive architecture that provides a higher-level, system-2 type control over the LLM’s lower-level, system-1 output. The architecture can be used to detect errors, reason about requests, make informed inferences, provide context, and maintain memories.  
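
As a minimal sketch of this vertical-integration pattern, the loop below treats the LLM’s fast output as a proposal and applies a slower architectural check before accepting it. The functions generate, claim_conflicts, and consistent_with_memory are placeholders assumed for illustration, not components of any system presented at the symposium.

```python
# Hypothetical sketch: system-2 style verification of system-1 LLM output.

def generate(prompt: str) -> str:
    """Stand-in for a fast, system-1 LLM proposal."""
    return "Paris is the capital of France."

def claim_conflicts(claim: str, belief: str) -> bool:
    """Placeholder conflict test; a real CA would reason symbolically here."""
    return False

def consistent_with_memory(claim: str, memory: set[str]) -> bool:
    """System-2 check of a proposal against the architecture's beliefs."""
    return not any(claim_conflicts(claim, belief) for belief in memory)

def answer(prompt: str, memory: set[str], max_attempts: int = 3) -> str:
    for _ in range(max_attempts):
        proposal = generate(prompt)                   # system 1: fast draft
        if consistent_with_memory(proposal, memory):  # system 2: verify
            return proposal
        prompt += f"\nPrevious answer was inconsistent: {proposal}"
    return "I don't know."  # fall back rather than emit an unverified claim

beliefs = {"France's capital is Paris."}
print(answer("What is the capital of France?", beliefs))
```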

The opening and closing presentations highlighted a mirrored convergence between the two fields. Generative neural models have been progressively infused with more features (e.g., attention) that mimic concepts from CAs. Conversely, CAs have coalesced around common tenets (the Common Model of Cognition) that could provide some principled organization. While generative models are in search of top-down structure, architectures are in search of bottom-up, flexible representations, and their integration is both needed and auspicious.  

The symposium was organized by John Laird, Christian Lebiere, David Reitter, Paul Rosenbloom, and Andrea Stocco. The report was written by the organizers.  

Second Symposium on Survival Prediction: Algorithms, Challenges and Applications  

Survival analysis attempts to estimate the time until a specified event (e.g., death of a patient) occurs, or some related survival measures, and is widely applicable for survival prediction and risk factor analysis. A key challenge in learning effective survival models is that this time-to-event data is subject to “censoring,” so that the time to event is only known up to a bound for such instances.  
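
As a toy illustration of right-censoring (with fabricated data), each instance below carries an event indicator, where 0 means follow-up ended before the event, so only a lower bound on the event time is known. The function implements the textbook Kaplan-Meier product-limit estimator, which keeps censored subjects in the risk set up to their censoring time; it is included purely for illustration and is not a contribution from the symposium.

```python
# Right-censored toy data: event = 1 means the event was observed at that
# time; event = 0 means follow-up ended first (time is only a lower bound).
times  = [2, 3, 3, 5, 8, 8, 9, 12]
events = [1, 1, 0, 1, 0, 1, 1, 0]

def kaplan_meier(times, events):
    """Return (t, S(t)) pairs at each distinct observed event time."""
    survival, curve = 1.0, []
    for t in sorted({ti for ti, e in zip(times, events) if e == 1}):
        at_risk = sum(ti >= t for ti in times)  # still under observation at t
        deaths = sum(ti == t and e == 1 for ti, e in zip(times, events))
        survival *= 1 - deaths / at_risk        # product-limit update
        curve.append((t, round(survival, 4)))   # rounded for display
    return curve

print(kaplan_meier(times, events))
# [(2, 0.875), (3, 0.75), (5, 0.6), (8, 0.45), (9, 0.225)]
```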

The Second Symposium on Survival Prediction: Algorithms, Challenges and Applications (SPACA) featured contributions to survival prediction from researchers in diverse fields including machine learning, healthcare, medicine, statistics, and engineering. The symposium program included four invited talks, three oral presentation sessions, a poster session, and two discussion groups. In total, seventeen contributed papers were presented at the symposium.  

The invited talks were given by Donglin Zeng (University of Michigan, USA), Paidamoyo Chapfuwa (Microsoft Research, USA), Jane-Ling Wang (University of California, Davis, USA), and Sanjay Purushotham (University of Maryland, Baltimore County, USA). Eleven contributed papers were selected for oral presentations, grouped into three thematic sessions: biomedical applications (four papers), deep learning (three papers), and non-deep learning methods (four papers). Six other papers were presented in a short oral spotlight session, where each presenter was given three minutes to summarize their paper. All seventeen papers were also presented during the poster session.  

The best paper award was presented to authors Andre Vauvelle, Benjamin Wild, Roland Eils, and Spiros Denaxas for their paper “Differentiable Sorting for Censored Time-To-Event Data”. Their proposed Diffsurv algorithm took a unique approach to the survival prediction problem by extending differentiable sorting networks to the censored data setting, allowing such networks to be used for survival prediction.  

The audience for this symposium was very well-versed in survival prediction, which led to many discussions between participants and presenters, including the invited speakers, and provided a very interactive atmosphere overall. Perhaps the most interesting discussions took place during the two discussion groups.  

The first discussion group was led by Russ Greiner (University of Alberta, Canada) and focused on the challenge of finding useful evaluation metrics for survival prediction problems. The presence of censored data makes it difficult not only to incorporate such instances into prediction algorithms but also to evaluate the accuracy of those algorithms. A takeaway from this discussion was for the group to put together a technical report on best practices and considerations for evaluating survival prediction models that could become a standard for the community.  
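
As one concrete example of the difficulty discussed, the sketch below computes Harrell’s concordance index, a standard survival metric: censoring limits which pairs of instances are even comparable, since a pair is usable only when the earlier of the two times is an observed event. This is a plain textbook implementation on fabricated data, offered for illustration rather than as the group’s recommended practice.

```python
def concordance_index(times, events, risk_scores):
    """Fraction of comparable pairs ordered correctly by predicted risk."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # Pair (i, j) is comparable only if i's event was observed
            # strictly before j's time; censored i's are skipped entirely.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:     # higher risk fails sooner
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:  # ties count half
                    concordant += 0.5
    return concordant / comparable if comparable else float("nan")

# Fabricated toy data; a perfect risk ranking yields C = 1.0.
print(concordance_index([2, 4, 6], [1, 1, 0], [0.9, 0.5, 0.1]))
```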

The second discussion group was led by Chirag Nagpal (Google Research, USA) and aimed to develop a challenge problem for the survival prediction community. Many papers on survival prediction use the same small biomedical data sets for evaluation; a new challenge problem on a new data set could invigorate the community to move beyond them.  

The symposium concluded with a discussion on the future of the SPACA symposium. Participants were enthusiastic about the symposium as a way to gather an interdisciplinary community of survival prediction researchers. We look forward to organizing another opportunity to bring this community together for further discussions and to grow it to include more researchers working on problems involving survival analysis and prediction.  

Kevin Xu, Russ Greiner, George Chen, Weijing Tang, and Chirag Nagpal served as co-chairs of this symposium. This report was written by Kevin Xu.  

Unifying Representations for Robot Application Development 

The first Unifying Representations for Robot Application Development (UR-RAD) AAAI Fall Symposium was held from October 25-27, 2023, in hybrid format. UR-RAD targeted researchers interested in formal representations for robotics, including logics, languages, software frameworks, and other computational abstractions. Specifically, the main focus of the symposium was on how robot application developers leverage these representations in their work, whether it be pre-encoding domain knowledge onto robots, creating robot platforms and libraries, or engineering, programming, or pre-training various robot skills.  

The goal of UR-RAD was to seek cohesion in how robot application developers use existing representations and design new representations. UR-RAD therefore invited experts, researchers, and industrial practitioners in both artificial intelligence (AI) and robotics to the symposium in order to understand each other’s best practices, draw on each other’s expertise, and identify areas of potential standardization in how these communities apply and create representations.  

UR-RAD was implemented as a mixture of invited keynote talks, paper presentations, and breakout discussions. Papers were solicited from the AI and robotics communities prior to the symposium. UR-RAD invited submissions on a variety of topics: existing or new representations for robotics, robot programming paradigms and interfaces, robot runtime environments, robot software frameworks and libraries, formal methods for robotics, and AI planning for robotics, to name a few examples. Papers were peer reviewed, and ultimately twelve were accepted, including seven research papers, four position papers, and one artifact paper.  

On the first day of the symposium, Dr. Chien-Ming Huang (Johns Hopkins University) and Dr. Stefanie Tellex (Brown University) gave hour-long keynote talks. Dr. Huang opened the symposium by speaking about robot application development challenges and interfaces. Dr. Tellex followed with a presentation about natural language interfaces for robotics and semantic representations of robot environments. In addition, six papers were presented on the first day. Four papers proposed novel representations and interfaces for robotics, while two focused on representations for human-robot interaction.  

The second day of UR-RAD included an additional two hour-long keynote talks from Dr. Dana Nau (University of Maryland, College Park) and Dr. Matthias Scheutz (Tufts University). Dr. Nau opened with a talk on AI planning techniques and advancements. Dr. Scheutz gave a presentation on robot middleware developed in his laboratory at Tufts University. An additional six papers were presented on the second day of the symposium, covering another diverse set of topics: advancements in AI planning and robotics, robot software stacks, and how large language models can be used for robot application development.  

The UR-RAD co-organizers nominated three papers for a best paper award. After all nominees had given their presentations, the co-organizers decided on the final awardee. The nominees and final best paper awardee were given certificates signed by all UR-RAD co-organizers. The best paper nominees and final award can be found at https://sites.google.com/view/aaai-ur-rad-symposium/home. 

The symposium concluded with a discussion about the larger themes of the symposium. When asked what aspects of robots need to be represented formally, participants responded with a variety of answers such as time, causality, actions, expectations, error, uncertainty, values, norms, relationships, and the beliefs of others. There was doubt that a single, unifying representation can capture all of these aspects, though some participants posited that natural language is already the ultimate unifying representation. Questions arose about how successfully new representations could be adopted within the community, and participants shared causes for why they had adopted new representations in the past: dead ends in research, academic peer pressure, and the pursuit of better long-term software support.  

At the symposium’s conclusion, various members of the AAAI Fall Symposia community expressed interest in participating in future iterations of UR-RAD. In the future, UR-RAD should continue to emphasize its role of facilitating awareness of best practices and standardization where beneficial. The organizing committee would additionally like to engage with participants a few weeks before the symposium begins in order to determine which topics would be most interesting to discuss.  

Overall, the first UR-RAD symposium was received positively, and members of the artificial intelligence and robot application development communities are excited to continue sharing best practices in the future!  

The peer-reviewed papers presented at the symposium can be found at https://sites.google.com/view/aaai-ur-rad-symposium/schedule. 

David Porfirio, Ross Mead, Laura M. Hiatt, Mark Roberts, Laura Stegner, Amin Atrash, Nick DePalma, and Ruchen Wen served as co-organizers for UR-RAD. This report was written by David Porfirio.  

Authors 

Tabitha Colter works in AI Assurance & Operations within the AI & Autonomy Innovation Center at the MITRE Corporation. 

John Laird is Co-Director of the Center for Integrated Cognition. 

Christian Lebiere is a research faculty member in the Department of Psychology at Carnegie Mellon University. 

Richard G. Freedman is a Researcher at Smart Information Flow Technologies (SIFT). 

Ifrah Idrees is a Ph.D. Candidate in Computer Science at Brown University. 

Brian Hu is a staff R&D engineer and computer vision researcher at Kitware, Inc.  

Dr. Suresh Kumaar Jayaraman is a postdoctoral researcher at the Robotics Institute at Carnegie Mellon University. 

Ariel Kapusta is an autonomous systems engineer at the MITRE Corporation. 

David Porfirio is an NRC Postdoctoral Research Associate at the U.S. Naval Research Laboratory in Washington D.C. 

David Reitter is a Staff Research Scientist at Google DeepMind. 

Paul Rosenbloom is a Professor Emeritus at the Thomas Lord Department of Computer Science and the Institute for Creative Technologies, University of Southern California. 

Frank Stein is a Researcher in the Intelligent Systems Division and the Center for Environmental Security at Virginia Tech, and former Director of the A3 Center at IBM. 

Andrea Stocco is an Associate Professor at the Department of Psychology, University of Washington. 

Kevin S. Xu is an assistant professor in the Department of Computer and Data Sciences at Case Western Reserve University.