Adam Amos-Binks, Shelly Bagchi, Erik Blasch, Mihai Boicu, Dustin Dannenhauer, Thomas E. Doyle, Leilani Gilpin, Anuj Karpatne, Aisling Kelliher, Reuth Mirsky, Nikhil Muralidhar, Audrey Reinert, Reza Samavi, Elizabeth Sklar, Gita Sukthankar, Megan Zimmerman
The Association for the Advancement of Artificial Intelligence’s 2021 Fall Symposium Series was held virtually from November 4-6, 2021. There were seven symposia in the fall program: Artificial Intelligence for Human-Robot Interaction, Artificial Intelligence in the Government and Public Sector, Cognitive Systems for Anticipatory Thinking, Computational Theory of Mind for Human-Machine Teams, Human Partnership with Medical AI: Design, Operationalization, and Ethics, Science-Guided AI, and Where AI Meets Food Security: Intelligent Approaches for Climate-Aware Agriculture.
Artificial Intelligence for Human-Robot Interaction (S1)
The AAAI symposium on “The Past and the Future of Artificial Intelligence for Human-Robot Interaction (AI-HRI)” was held virtually on November 4-6, 2021. This was the eighth annual AI-HRI Fall Symposium, bringing together researchers whose work spans areas contributing to the development of human-interactive autonomous robots. The aim of this year’s symposium was to review the achievements of the AI-HRI community in the last decade, identify the challenges ahead, and welcome new researchers who wish to take part in this growing community. Through this symposium, we provided a venue to explore these topics and hoped to encourage further work in the area.
The Artificial Intelligence (AI) for Human-Robot Interaction (HRI) Symposium has been a successful venue for discussion and collaboration since 2014. During that time, these symposia provided fertile ground for numerous collaborations and pioneered many discussions revolving around HRI, XAI for HRI, service robots, interactive learning, and more. This unique intersection of expertise, along with the rising interest in robots in mixed human-robot environments, calls for an informed discussion about the future of AI-HRI as a united research community. As such, this year’s symposium had no single theme, and AI-HRI submissions were encouraged from across disciplines and research interests. Moreover, given the rising interest in AR and VR as part of an AI-HRI system, along with the difficulties in running physical experiments during the pandemic, this year we specifically encouraged researchers to submit works that do not include a physical robot in their evaluation but that promote HRI research in general. Over the course of the three-day meeting, the symposium facilitated a collaborative forum for discussion of current efforts in AI-HRI, with additional talks focused on the topics of ethics in HRI and ubiquitous HRI.
Twenty-eight papers were presented at this year’s AI-HRI symposium by participants from universities, industry, and national research laboratories. Topics included explanations, ethical HRI, interactive learning, metrics, and tools for development and experimentation in HRI. In addition to the paper presentations, invited talks were given by Sonia Chernova (Georgia Institute of Technology), Justin Hart (The University of Texas at Austin), and Tom Williams (Colorado School of Mines). Additionally, a panel of past AI-HRI organizers was moderated by
Ross Mead (Semio, Inc.), and included Matthew Gombolay (Georgia Institute of Technology), Tom Williams (Colorado School of Mines), Patrícia Alves-Oliveira (University of Washington), Dan Grollman (Plus One Robotics), Kalesha Bullard (DeepMind), and Richard G. Freedman (SIFT).
The symposium was held over three days and had six sessions with four to five presentations in each session. At the conclusion of each session, a Zoom breakout room was available for each of the presentations in the session. This time provided an opportunity for participants to have a more in-depth conversation with the presenters and authors. Additionally, four breakout sessions engaged participants in topical discussion questions that informed the direction of topics at AI-HRI. One more breakout session was introduced on the first day of the conference as an ice-breaker: all participants were assigned to a random breakout room with one or two other participants and had a 10-minute introduction. This process was repeated three times to help participants get to know one another. Participants specifically mentioned appreciating these small-group discussions, as they gave a feel similar to an in-person conference.
During the multiple discussion opportunities (e.g., questions to authors, moderated discussions, and the panel), several themes emerged concerning current research in robotic and AI systems, and especially conducting interdisciplinary research in this community. One of the main questions recurring throughout the symposium was: “What are the most relevant evaluation metrics to look at?” Tom Williams’ talk addressed this point in particular by showing that much of the state-of-the-art AI is evaluated mostly on shallow understanding of words and statements (locutionary and illocutionary acts) rather than on a deeper understanding of the intents and intentions of the actors (perlocutionary acts). Additionally, participants proposed a variety of metrics and evaluation methods, exemplifying the difficulty of finding common ground in terms of contexts, use-cases, and embodiment. These discussions highlighted an additional key topic in our symposium: the diversity of the AI-HRI community in terms of varying objectives, expertise, and setup (industry vs. academia). This topic was a main discussion point during the panel, which included researchers from both industry and academia. The panel concluded with a clear call to facilitate more collaborations between academia and industry. As a challenge that accompanies the diversity of the community, we discussed the evaluation of research papers on AI-HRI and the venues in which to publish this work. Finding the right venue for publishing an AI-HRI paper can be challenging, which highlighted the necessity of this forum and its community’s support throughout the year, beyond the annual symposium meeting. Participants were highly encouraged to pursue connections throughout the year using the community’s Slack channel: https://tinyurl.com/aihri-slack.
Finally, another topic that emerged during the moderated discussions was the barriers to incorporating robots into our daily lives, and how this community can be mobilized toward achieving specific milestones in support of that vision. The answers to this discussion led to interesting insights on the opportunities specific to AI-HRI, particularly for future symposia.
Overall, AI-HRI 2021 was a very productive, inclusive, and stimulating symposium, and the attendees are excited to continue working towards research that will prepare both robots and people to more intuitively interact with each other in the future!
Reuth Mirsky (University of Texas, Austin and Bar Ilan University), Megan L. Zimmerman (National Institute of Standards and Technology), Shelly Bagchi (National Institute of Standards and Technology), Jason R. Wilson (Franklin & Marshall College), Muneeb I. Ahmad (Swansea University), Felix Gervits (US Army Research Lab), Zhao Han (Colorado School of Mines), Justin W. Hart (University of Texas, Austin), Daniel Hernandez Garcia (Heriot-Watt University), Matteo Leonetti (University of Leeds), Ross Mead (Semio), Emmanuel Senft (University of Wisconsin, Madison), and Jivko Sinapov (Tufts University) served as the organizing committee of this symposium. This report was written by Reuth Mirsky, Megan Zimmerman, and Shelly Bagchi.
The peer-reviewed papers of the symposium were published on arXiv at https://arxiv.org/abs/2109.10836.
Artificial Intelligence in the Government and Public Sector (S2)
In the 7th year of AI-GPS, the conference was positioned to address the contemporary needs within the AI community, bridging policies of interest to governments that affect the public and private sector. Key policy issues focused on developing explainable, interpretable, trusted, and certifiable AI methods across a range of applications from agriculture to safety.
The highlight of the conference was a keynote presentation from Cynthia Rudin on interpretable machine learning scoring methods, which invigorated the audience on applications of human-machine teaming that could be utilized in various government agencies. While a predominant set of talks focused on North American ideas, there were participants from the UK and Africa. For example, Sekou Remy (IBM Africa) presented on the opportunity for AI to transform healthcare in Africa by leveraging theory design with governmental data. Building on the healthcare opportunities, presentations were given in support of AI methods to enhance medical facilities at the Veterans Administration. Other policies presented included securing the power industry, enhancing human-machine teaming, and providing insights for cyber security and safety. Given the tremendous focus on AI from governments (including machine and deep learning), the discussion and post analysis allowed the participants to address areas of interest for the community: comparing and contrasting emerging AI methods in public competitions, accreditation of results, metrics of interest, digital gravity, and catalyzing AI for effective decision making. With the broad spectrum of participants, the underlying themes of explainable, interpretable, and accountable AI methods to support human reporting over data towards policies remained this year’s AI-GPS theme. Hence, the navigation of where and when to apply AI focused on improving the safety, security, sustainability, and reliability of information for developing public policies.
Presentations were given by speakers ranging from emerging PhD students to senior staff and policy makers, each of whom added diversity to the conference in how presentations and papers reflected on the opportunities for AI to change the landscape of the digital age. Key questions and discussions focused on implementation strategies, data collection, interpretation of results, and the future needs of government agencies. A highlight, from agriculture to healthcare, was that key social good can result from AI implementations that determine whether current policies, substantiated by data and sound mathematical principles in data analytics, are moving society in the direction of benefits. Having different perspectives allowed each author and participant, each representing a different sector of society, the opportunity to see how AI can support their agency’s mission needs. The workshop brought together some of the leading researchers to highlight the issues, and the answers to the questions raised offered participants methods to enhance their own implementations and understanding of utilizing AI within their organizations. As this was the 7th year of the workshop, and given the trends in AI workforce development, interest in AI, and use case studies, next year’s symposium should facilitate further peer-reviewed analysis of lessons learned in applying AI to government and public areas of interest.
AI has remarkable opportunities to support government and public policy: when applied with rigorous approaches, it can aid decision makers, from real-time analysts to strategic leaders, in crafting effective policies.
The organizing committee consisted of Erik Blasch (USAF), Mihai Boicu (GMU), Nathaniel D. Bastian (USMA), Lashon Booker (MITRE), Michael Garris (MITRE), Mark Greaves (PNNL), Michael Majurski (NIST), Kathy McNeill (DoL), Tien Pham (ARL), Alun Preece (Cardiff University), Ali Raz (GMU), Peter Santhanam (IBM), Jim Spohrer, Frank Stein, and Utpal Mangla (IBM). Erik Blasch and Mihai Boicu wrote this report.
Cognitive Systems for Anticipatory Thinking (S3)
This symposium investigated the link between anticipatory thinking – the cognitive process that drives our ability to manage risk – and how it applies to AI. We defined perception and cognition challenges in autonomous vehicles and in a procedurally generated game called Dungeon Crawl Stone Soup. Authors of accepted abstracts gave lightning talks about methods that could address one of the challenge problems. Our guest speakers drew on their experience from space, design, machine learning, computer vision, and human-robot interaction to identify the impact of robust anticipatory thinking in their domains.
Anticipatory thinking drives our ability to manage risk – identification and mitigation – in everyday life, from bringing an umbrella when it might rain to buying car insurance. As AI systems become part of everyday life, they too have begun to manage risk.
Autonomous vehicles constantly perceive their surrounding environments with perception systems, e.g., vision, LiDAR, and radar, to resolve the difference between reality and their representation of it. The vision system, which is opaque to humans, is not prepared for rare but highly impactful perception errors; this necessitates risk management and is the basis for our perception challenge problem.
Our perception challenges for anticipatory thinking in open worlds involve autonomous vehicles perceiving and processing their world view. The first challenge, object bias, concerns vision systems that are easily fooled by out-of-distribution inputs that exploit a model’s observational bias. The second challenge, rare scenes, highlights that the vision system can still be “fooled” by rare scenes such as traffic lights on a truck, even if the object detection system is mostly accurate. Autonomous driving is a suitable domain for testing and improving a perception system’s ability to manage the risk presented by new circumstances. Tackling these challenges successfully would make autonomous systems safer and cover a larger number of error and failure cases. This type of anticipatory, introspective solution impacts public safety, manufacturers, and regulators.
Our cognition challenges for anticipatory thinking in open worlds involve AI agents acting in unexplored, hostile, and dynamic environments. The first challenge, long-term strategy, concerns issues of risk as agents move between situations and micro environments where resource availability changes. The second challenge, short-term tactics, concerns more immediate risks that require agents to make use of the right resources at the right time and to choose enemy engagements wisely. We use the rogue-like video game Dungeon Crawl Stone Soup (DCSS) to operationalize this challenge problem. In DCSS, a player moves through a procedurally generated, partially observable, and stochastic environment to retrieve the ‘Orb of Zot’ while managing the risks (permanent death) of encountering thousands of monsters.
DCSS is one of two rogue-like games that have attracted increased interest in recent years, and it remains an unsolved domain for AI.
The authors of each accepted abstract gave a 10-minute lightning talk highlighting a method relevant to a specific AT challenge. The symposium also featured five invited talks across a range of disciplines related to AT.
Kevin O’Connell (Space Economy Rising, Inc.) discussed the intelligence community’s history of anticipatory thinking and how crucial it is for the burgeoning commercial space economy. Space is a complex domain where decisions made now will have a large impact for years to come and AT is important to identify these second and third order downstream consequences.
Tom Dietterich (Oregon State University) identified AT as a mechanism for realizing a vision of eliminating all preventable AI mistakes. He discussed how a tandem of humans and AI could act like highly-reliable organizations that actively seek out ways to improve an organization’s processes.
Laura Hiatt (Naval Research Laboratory) focused on how priming affects human AT abilities. Priming, in which exposure to a stimulus influences responses to a later stimulus, both enables AT, by mentally simulating how possible futures could come to pass, and can inhibit it, by converging too quickly on an outcome before alternatives have been deliberately considered.
Matt Klenk (Toyota Research Institute) highlighted how AT applies to design methodologies. Typical design involves first considering a diverse range of alternatives, then pursuing a selected few for finer-grained and future analysis. AT can help this process by working backwards from failures to identify key assumptions.
James Stewart (TrojAI) described how adversarial interference in training data affects computer vision models. By identifying a model’s architectural weaknesses, we can begin to characterize an autonomous vehicle’s ability to perform AT and manage risk.
The symposium was a huge success, as evidenced by the interest in the challenge problems, the discussions, and planned collaborations. For more information about AT, including the accepted abstracts, please visit www.anticipatorythinking.ai. The chairs were Adam Amos-Binks, Leilani Gilpin, and Dustin Dannenhauer. Adam, Leilani, and Dustin wrote this report.
Computational Theory of Mind for Human-Machine Teams (S4)
The Computational Theory of Mind for Human-Machine Teams symposium was held virtually on Zoom from November 4-5, 2021. The purpose of this symposium was to bring together researchers from computer science, cognitive science, and social science to discuss the creation of artificial intelligence systems that can generate theory of mind, exhibit social intelligence, and assist human teams.
Humans intuitively combine pre-existing knowledge with observations and contextual clues to construct rich mental models of the world around them and use these models to evaluate goals, perform thought experiments, make predictions, and update their situational understanding. When the environment contains other people, humans use a skill called theory of mind (ToM) to infer their mental states from observed actions and context, and predict future actions from those inferred states. When humans form teams, these models can become extremely complex. High-performing teams naturally align key aspects of their models to create shared mental models of their environment, equipment, team, and strategies. ToM and the ability to create shared mental models are key elements of human social intelligence. Together, these two skills form the basis for human collaboration at all scales, whether the setting is a playing field or a military mission. Artificial intelligence (AI) technologies have made little progress in understanding the most important component of the environments in which they operate: humans. This lack of understanding stymies efforts to create safe, efficient, and productive human-machine teams.
A major theme of the symposium was how to construct agents that exhibit theory of mind when confronted with a false belief task in which different team members have conflicting information. Paulo Soares (University of Arizona) presented the Theory of Mind-based Cognitive Architecture for Teams (ToMCAT), which uses a dynamic Bayes net to infer team members’ mental models in real-time. Many of the papers proposed new neural architectures that can be used to infer intention and facilitate team coordination. Dung Nguyen (Deakin University, Australia) presented an architecture for learning a latent trait vector of an actor from past trajectories. A fast weight concept is employed to represent the character traits; character trait weights modulate ToM predictions and are generated using a hypernetwork.
In addition to the empirical studies, some of the talks presented literature surveys on the diversity of approaches used when operationalizing theory of mind and creating artificial social intelligence. A few papers focused on extracting information from team communication. The invited speaker, Diane Litman (University of Pittsburgh), spoke about entrainment, the convergence of linguistic properties of spoken conversation, and its use in predicting team outcomes. This talk launched a discussion on the challenges of analyzing team communication data.
The final session was devoted to the question of evaluating agents with theory of mind. Jared Freeman (Aptima) described how these evaluations were conducted for DARPA’s Artificial Social Intelligence for Successful Teams program. Experiments were performed in a search-and-rescue task environment simulated in Minecraft. Three human team members were tasked with finding and rescuing victims; supporting ASI agents were scored on their ability to predict the effects of future interventions, infer player knowledge, and predict team members’ actions in a false belief situation. The symposium wrapped up on a high note, with participants feeling excited both about the progress that has been made and about opportunities for future work on creating agents that can intervene as well as infer. There was an open discussion session for brainstorming about future collaboration and publication opportunities.
The organizing team was composed of Joshua Elliott (DARPA), Nik Gurney (University of Southern California), Guy Hoffman (Cornell), Lixiao Huang (Arizona State University), Ellyn
Maese (Gallup), Ngoc Nguyen (Carnegie Mellon University), Gita Sukthankar (University of Central Florida), and Katia Sycara (Carnegie Mellon University). Papers will appear in post-event proceedings with Springer. Gita Sukthankar wrote this report.
Human Partnership with Medical AI: Design, Operationalization, and Ethics (S5)
The Human Partnership with Medical Artificial Intelligence: Design, Operationalization, and Ethics AAAI symposium was held virtually November 4-6, 2021. The goal of the symposium was to investigate our human relationship and partnership with medical artificial intelligence, especially focusing on challenges in design, operationalization, and ethics.
Human interaction with artificial intelligence takes many forms; however, the risk tolerance in a medical context is very low. As academics and practitioners at this intersection, in fields such as medicine, engineering, computer science, psychology, and human factors, we each seek to contribute to improved clinical outcomes through intelligent decision support and prediction.
The symposium brought together researchers and clinicians from a variety of AI backgrounds and perspectives. Topics discussed included privacy preservation concerns when using the natural language processing model Bidirectional Encoder Representations from Transformers (BERT) with clinical data, multimodal explanations for decision support, interpretable models for survival analysis, experts’ privileged information under uncertainty, challenges to AI in clinical practice, automated medical text translation for different user types, and intelligent tutoring for anatomical education. Keynotes by Dr. Jenna Wiens (University of Michigan) and Dr. Jason Corso (Stevens Institute of Technology) presented From Diagnosis to Treatment – Augmenting Clinical Decision Making with Artificial Intelligence, and Video Understanding in the Clinic: Progress and Challenges, respectively. Guest speakers shared their clinical AI experiences in chronic pain, clinician involvement for enhancing trust, and the patient perspective on AI in their healthcare. In addition, round table discussions covered the future of medical AI partnership, enhancing trust in AI, and improving clinical adoption.
In addition to the talks, the symposium also ran a rapid-modified Delphi to better understand the challenges of medical AI partnership. Two questions were initially asked: 1) What aspects (or characteristics) of AI implementation drive, or help gain, merited trust in clinical adoption? and 2) How can the identified aspects (characteristics) be operationalized in clinical AI implementation? The responses were discussed with the symposium participants for consensus and then ranked based on complexity and importance. Rankings were presented for further discussion and synthesis of concepts. The outcome is expected to provide the community with insights and research directions with the greatest impact in the pursuit of improving human partnership with medical AI for improved clinical outcomes.
Thomas E. Doyle and Aisling Kelliher served as co-chairs of this symposium. The papers of the symposium were published as a CEUR-WS.org proceedings available through the symposium web site aaai-human.ai. The organizing committee consisted of Reza Samavi, Barbara Barry, Steven Yule, Sarah Parker, Michael Noseworthy, and Qian Yang. Thomas E. Doyle, Reza Samavi, and Aisling Kelliher wrote this report.
Science-Guided AI (S6)
The second symposium on Science-Guided Artificial Intelligence (SGAI) was held virtually from November 4-6, 2021 as part of the AAAI Fall Symposium Series. The goal of this symposium was to nurture the community of researchers working at the intersection of AI and scientific areas and to shape the vision of the rapidly growing field of SGAI. The symposium events included 6 keynote talks, 7 invited talks, and 18 contributed paper presentations from researchers working in SGAI from diverse disciplines.
Science-guided AI (SGAI) is a growing field of research that aims to integrate scientific knowledge into AI models and algorithms in a principled manner, so as to learn patterns and relationships from data that are not only accurate on validation data but also consistent with known scientific theories. SGAI is ripe with research opportunities to influence fundamental advances in AI for accelerating scientific discovery and has already begun to gain attention in several scientific communities, including fluid dynamics, quantum chemistry, biology, hydrology, and climate science. Our SGAI symposium was designed to bring together researchers from AI and various scientific disciplines to serve as a starting point for future research synergies in this field.
To this end, over the course of our three-day SGAI symposium, we had a total of 6 keynote talks, 7 invited talks, and 18 contributed talks by students, faculty, and researchers from academia and industry working on different facets of SGAI research in diverse application domains. We also had panel discussions comprising leading researchers to discuss important issues in SGAI, concluding the proceedings on each of the first two days.
There were two overarching themes of discussions that commonly appeared across all SGAI symposium activities:
1) AI for Science – How can AI methods help address grand scientific challenges?
2) Science for AI – How can scientific knowledge be leveraged to improve the performance of AI models (e.g., their ability to generalize to unseen data scenarios)?
Below we summarize the topics discussed in each of these themes throughout the symposium. AI for Science: Many interesting scientific applications of AI were highlighted in the keynote/invited talks and paper presentations. For example, the keynote talk by Dr. Forrest Hoffman, Distinguished Computational Earth System Scientist at ORNL, discussed various applications of AI for spatio-temporal modeling in the Earth sciences and climate modeling. Several contributed and invited talks also discussed the potential of AI for applications in mechanical design. In his invited talk, Dr. Kieron Burke, Distinguished Professor in Physics and Chemistry at UC Irvine, motivated the current need (and potential) of employing AI methods for material discovery by discovering better density functionals. Other areas of focus included AI for disease and epidemic modeling, fluid dynamics, and hydrological modeling. Another major theme of research was the potential of AI methods for system identification, which entails discovering either the systemic interactions or the governing equations driving a process or a system, e.g., in power systems, cyber-physical systems, fluid dynamics, and cosmology.
The promise of using AI techniques to accelerate scientific discoveries has also led to the emergence of new scientific fields, such as the recently funded NSF HDR Institute on establishing the new field of Imageomics, introduced at the symposium by Dr. Tanya Berger-Wolf, Director of the Translational Data Analytics Institute at Ohio State University, in her keynote talk. The goal of Imageomics is to utilize knowledge-guided machine learning methodologies to extract biological information from images, including biological traits such as the behavior and physical appearance of an organism, or even the distinguishing skeletal structure of a species.
Science for AI: Several speakers highlighted the potential of science-guided inductive biases for developing effective AI models. Of special mention is the keynote talk by Dr. Peter Battaglia, Research Scientist at DeepMind, who discussed his research explorations in modeling complex fluid flows using graph neural networks and highlighted the potential of using other forms of inductive biases in neural network models. Dr. Christopher Rackauckas, Director of Modeling and Simulation at Julia Computing and Applied Mathematics Instructor at MIT, in his keynote talk highlighted the potential of incorporating differentiable solvers in various applications, such as modeling the evolution of differential processes, including predator-prey models and disease evolution. Dr. Rahul Rai, Dean’s Distinguished Professor in Automotive Engineering at Clemson University, highlighted the effectiveness of using scientific knowledge for guided feature generation as inputs to AI models and also detailed the effectiveness of SGAI models in several cyber-physical system applications in his invited talk. Other major research themes included employing scientific knowledge to verify the decision consistency of AI models, employing explicit mathematical relationships and initial and boundary conditions as direct (hard or soft) constraints in the AI/ML pipeline, and incorporating ontologies and taxonomies to constrain the learning of AI models.
Opportunities for Future Research in SGAI: The numerous discussions that ensued in the expert panels, breakout sessions, and the Q&A sessions following each invited/keynote talk and paper presentation surfaced many interesting directions for future research in SGAI. One future research direction brought up by multiple speakers and participants was the need to develop AI methods with the goal of extrapolating to unseen data scenarios (with out-of-sample distributions relative to the training data), so as to be useful in practical scientific problems. There were multiple talks, including a keynote talk by Dr. Michael Mahoney, Professor in Statistics at UC Berkeley, that highlighted the need to better understand the complex learning dynamics of SGAI models. Dr. Mahoney further stressed the need to develop a theoretical framework for SGAI to establish a common vocabulary of problems and methods. Other researchers reiterated the need to avoid reinventing the wheel and to incorporate existing powerful numerical solution techniques in conjunction with AI/ML pipelines when addressing scientific problems of interest. Finally, several AI/ML researchers, echoed by scientific domain experts, voiced a common opinion: avoid bias towards state-of-the-art modeling approaches developed on benchmark problems, and instead allow the problem to inform the best AI/ML solution structure.
We would like to thank AAAI for giving us the opportunity to organize this symposium and all the attendees for making the symposium a success. We would especially like to thank Dr. Pat Langley, Director for the Institute for the Study of Learning and Expertise at Stanford
University, who extensively engaged in lively and informative discussions with all the speakers and panelists throughout the symposium. We would also like to express our gratitude to all the keynote/invited speakers and presenting authors for sharing their work at the SGAI symposium.
This symposium was organized by Anuj Karpatne (Virginia Tech), Nikhil Muralidhar (Virginia Tech), Ramakrishnan Kannan (ORNL), Jonghyun “Harry” Lee (University of Hawaii at Manoa), Naren Ramakrishnan (Virginia Tech), and Vipin Kumar (University of Minnesota). This report was written by Anuj and Nikhil.
Where AI Meets Food Security: Intelligent Approaches for Climate-Aware Agriculture (S7)
Where AI meets Food security (WAIF): Intelligent Approaches to Climate-Aware Agriculture was held as a part of the Virtual AAAI 2021 Fall Symposium series. The symposium focused on the agriculture domain, with emphasis on climate-aware methods, including smart approaches to water and soil monitoring and management, as well as decision making in agricultural production, using intelligent, data-driven, adaptive, trustworthy, science-based models.
As global climate change poses an ever more tangible threat to global agricultural production and sustainability, agricultural stakeholders will need to address a complex, interconnected series of challenges to ensure global food security. To do so, they must collect and synthesize information from diverse sources, from in-field robots and soil sensors to remotely sensed satellite data. The aim of this symposium was to bring together (1) researchers in Artificial Intelligence (AI), Machine Learning (ML), and/or Robotics; (2) agriculture and/or climate scientists; and (3) industry, government, and policy stakeholders to engage in broad discussion and multidisciplinary exploration of the many ways in which AI-grounded tools and techniques can be applied to reshape global agriculture.
The symposium brought together researchers from a variety of AI- and robotics-related subfields, including physics-informed machine learning, optimization, climate modeling, computer vision, and route planning. One major theme of the symposium was the variety of data sources available to computer scientists and roboticists. This variety and wealth of data poses a unique series of data science and human-computer interaction questions: without proper computational support, an agricultural stakeholder will quickly become overwhelmed. The symposium included four invited talks on this theme. A keynote talk by Prof. Peter McBurney (King’s College London) focused on the use of blockchain and distributed ledgers in agri-food. A keynote talk by Dr. Nicola Cannon (Royal Agricultural University) outlined the need for intelligent approaches to climate-aware agriculture. A third invited talk was given by Dr. Marin Lujak (Universidad Rey Juan Carlos) on the use of multi-agent teams in agricultural settings. The final invited talk was given by Dr. David Ebert on the AI challenges of achieving deployable solutions to agricultural producers’ problems.
The symposium also included invited talks from industry partners. The first speakers, Prof. Girish Chowdhary and Thomas Aref of EarthSense, discussed the use of computer vision and robotics to help farmers gain a real-time understanding of plant health from the ground up. The second speaker, Dr. Henry Sztul of Bowery Farming, discussed the use of computer vision in vertical farming to optimize plant health and provide sustainable food in urban environments.
The symposium closed with a Saturday discussion session centered on potential research directions that would synthesize work in climate science, soil science, robotics, and machine learning to address questions of global sustainability. The participants noted that symposia such as this one help address these questions, because the relevant knowledge and expertise are fragmented across multiple disciplines.
Dr. Audrey Reinert and Prof. Elizabeth Sklar served as co-chairs of this symposium. The papers associated with the WAIF symposium were published as AAAI Press Technical Reports. This report was written by Audrey Reinert and Elizabeth Sklar.
Adam Amos-Binks is at Applied Research Associates, Inc.
Shelly Bagchi is an Electrical Engineer at the National Institute of Standards and Technology in Gaithersburg, MD
Erik Blasch works at the Air Force Office of Scientific Research
Mihai Boicu works at George Mason University
Dustin Dannenhauer works for Parallax Advanced Research
Thomas E. Doyle is an Associate Professor of Electrical and Computer Engineering, Member of the School of Biomedical Engineering at McMaster University, and Faculty Affiliate of the Vector Institute of Artificial Intelligence
Leilani Gilpin is at the University of California Santa Cruz
Anuj Karpatne is an Assistant Professor at Virginia Tech
Aisling Kelliher is an Associate Professor of Computer Science at Virginia Tech
Reuth Mirsky is a Postdoctoral Researcher at the University of Texas at Austin, and an incoming Assistant Professor at Bar Ilan University in Israel
Nikhil Muralidhar is a Ph.D. student at Virginia Tech
Audrey Reinert is a Postdoctoral researcher at the University of Oklahoma
Reza Samavi is an Assistant Professor of Electrical and Computer Engineering at Ryerson University and Faculty Affiliate of the Vector Institute of Artificial Intelligence
Elizabeth Sklar is at the Lincoln Institute for Agri-Food Technology at the University of Lincoln, UK
Gita Sukthankar is a Professor in the Department of Computer Science at the University of Central Florida
Megan L. Zimmerman is a Computer Scientist at the National Institute of Standards and Technology in Gaithersburg, MD