
Reports of the Association for the Advancement of Artificial Intelligence’s 2020 Fall Symposium Series

 

Muhammad Aurangzeb Ahmad, Shelly Bagchi, Erik Blasch, Danish Contractor, Arjuna Flenner, Julia K. Haines, Bonnie Johnson, Tony Kendall, Doug Lange, W.F. Lawless, Tom McDermott, Daniel McDuff, Melanie Mitchell, Ranjeev Mittu, Bruce Nagy, Hemant Purohit, Oshani Seneviratne, Emmanuel Senft, Thomas Shortell, Don Sofge, Frank Stein, Jason R. Wilson, Ying Zhao

 

The Association for the Advancement of Artificial Intelligence's 2020 Fall Symposium Series was held virtually from November 11-14, 2020, and was collocated with three symposia postponed from March 2020 due to the COVID-19 pandemic. There were five symposia in the fall program: AI for Social Good, Artificial Intelligence in Government and Public Sector, Conceptual Abstraction and Analogy in Natural and Artificial Intelligence, Physics-Guided AI to Accelerate Scientific Discovery, and Trust and Explainability in Artificial Intelligence for Human-Robot Interaction. Additionally, there were three symposia delayed from spring: AI Welcomes Systems Engineering: Towards the Science of Interdependence for Autonomous Human-Machine Teams, Deep Models and Artificial Intelligence for Defense Applications: Potentials, Theories, Practices, Tools, and Risks, and Towards Responsible AI in Surveillance, Media, and Security through Licensing.

 

Artificial Intelligence for Social Good (S1)

 

Recent developments in big data and computational power are revolutionizing several domains, opening up new opportunities and challenges. In this symposium, we highlighted two specific themes, humanitarian relief and healthcare, where AI could be used for social good to achieve the United Nations (UN) Sustainable Development Goals (SDGs), which touch every aspect of human, social, and economic development. The talks at the symposium focused on identifying the critical needs and pathways for responsible AI solutions to achieve the SDGs, which demand holistic thinking on optimizing the trade-off between automation benefits and their potential side effects, especially in a year in which the COVID-19 pandemic upended societies globally.

Building on the success of the AI for Social Good symposium held in Washington, DC, in November 2019, we organized the 2020 edition of the symposium. While keeping the focus on the two UN SDG themes of healthcare and disaster relief, we strove to highlight the trust deficit created by the cost of AI errors and how to pursue responsible AI system design. We identified several directions for AI for social good, including reliability and robustness guarantees, human-centered approaches to testing, ethical design, explainability, fairness, and the elimination of AI bias.

Given the circumstances of 2020, the symposium's two themes were especially fitting. The worldwide healthcare crisis of the COVID-19 pandemic was akin to a Grey Rhino event (i.e., a highly probable but neglected threat with an enormous impact). Simultaneously, disaster scenarios requiring humanitarian relief, such as the wildfires early in the year in Australia and later in the year in the Western United States, characterized Black Swan events (i.e., unpredictable events beyond what is typically expected of a situation, with potentially severe consequences).

The key objectives of the symposium along the two themes were as follows. AI for Healthcare: Healthcare is one of the foremost challenges of today's world, brought to the forefront of global discourse by the recent COVID-19 pandemic. In general, healthcare data is characterized by missingness, poor standardization, incompleteness, and other quality issues that have downstream consequences. These factors hinder the deployment of solutions relevant to real-world use cases. Moreover, the design of AI, and particularly machine learning (ML), systems in healthcare is characterized by the last-mile problem: delivering a practical solution that is reliable and robust to errors (especially in "break glass in case of emergency" situations) has proven hard. These challenges have broader implications for fairness, explainability, and transparency in ML. Therefore, the implementation and deployment of AI/ML systems in healthcare bring up challenges that go far beyond model building and scoring. This symposium also focused on a broad range of AI healthcare applications and the challenges they encounter, including but not limited to automation bias, prescriptive AI models, explainability, privacy and security, transparency, and decision rights, especially in the context of deploying AI in real-world healthcare scenarios.

AI for Humanitarian Technologies and Disaster Management: Technology can have an incredible impact on how we address humanitarian issues and achieve the SDGs worldwide. Detecting and predicting how a crisis or conflict could develop, analyzing the impact of catastrophes on a cyber-physical society, and assisting in disaster response and resource allocation are of utmost importance, and advances in AI can be utilized for all of them. AI techniques can allow better preparation for disasters, help save lives, limit economic losses, provide adequate disaster relief, and make communities more robust and resilient. The symposium focused on all aspects of humanitarian relief operations supported by novel uses of AI technologies, from locating missing persons and leveraging crowdsourced data for early warning and rapid response to emergencies, to increasing situational awareness and managing logistics and supply chains.

Several esteemed researchers and thought leaders delivered keynotes at the symposium. Our opening keynote speaker was Professor Malik Magdon-Ismail from Rensselaer Polytechnic Institute, who gave an insightful keynote on simple local models with robust change-point analysis and model identification for COVID-19 prediction that can be applied at the county or organization level. Highlighting the importance of AI explainability, Dr. Rich Caruana from Microsoft Research talked about glass-box models in an aptly titled keynote, "Friends Don't Let Friends Deploy Black-Box Models: The Importance of Intelligibility in Machine Learning." We were fortunate to have Hon. Maleeh Jamal, the Minister for Communication, Science, and Technology of the Maldives, deliver the opening keynote of the second day on "AI for Social Good: Small Nations' Perspectives." Dr. Walter Dorn from the Royal Military College of Canada and Canadian Forces College presented a vision on "Intelligence for Peace: AI in UN Field Operations and Cyber-Peacekeeping," providing perspectives from various exciting use cases. Our closing keynote was by Dr. Suranga Nanayakkara from the University of Auckland, New Zealand, who talked about inspiring tools and techniques to augment human capabilities, focusing on assistive technologies.

The symposium had two panels, one focused on AI for healthcare and the other on AI for Humanitarian Technologies and Disaster Management.

The panel on AI for Healthcare, moderated by Dr. Muhammad Aurangzeb Ahmad, consisted of Dr. Carly Eckert, MD (Department of Epidemiology, University of Washington & KenSci), Dr. Vikas Kumar (KenSci), Dr. Nicholas Mark, MD (Swedish Hospital), and Dr. Oshani Seneviratne (Rensselaer Polytechnic Institute). The panelists discussed the last-mile problem in healthcare AI, that is, the challenge of adopting and implementing ML models in the clinical workflow, highlighting the main hurdles that, in their opinion, need to be overcome. The conversation also focused on what roles, if any, AI can play in reducing delivery bias: while data bias and algorithmic bias are relatively straightforward to quantify, delivery bias in healthcare is trickier to measure. The discussion also covered what regulatory bodies should focus on given the rapid pace of technological progress in AI/ML and, more importantly, whether such technologies can be regulated meaningfully. This is especially important because underserved and underprivileged communities often do not have access to the tools even to know whether they are being discriminated against. The panel discussed what the AI community, in cooperation with healthcare practitioners, can do to remedy these problems, and the panelists shared many insights from their work applying AI in real-world healthcare settings.

The panel on AI for Humanitarian Technologies and Disaster Management, moderated by Dr. Hemant Purohit, consisted of Dr. Jennifer Chan, MD, MPH (Professor, Feinberg School of Medicine, Northwestern University), Mr. Steve Peterson, CEM (Montgomery County CERT, and National Institutes of Health), Dr. Walter Dorn (Professor, Royal Military College of Canada and United Nations Peacekeeping Operations), and Dr. Oshani Seneviratne (Rensselaer Polytechnic Institute). The panelists shared success stories of using AI technology in humanitarian assistance and disaster management. The discussion then shifted to potential barriers to adopting AI in this space, both operational and data- or technology-centric. The panelists also identified concerns about limited capabilities and the need for AI tools to reach and respond to the last mile during disaster relief, such as serving linguistically diverse populations and remote, vulnerable, or conflict-affected areas.

We received 28 papers from 68 authors in response to the call for papers. After a rigorous peer-review process with the help of our program committee of 27 researchers from a variety of research areas, we selected 22 papers as regular papers and three as short papers. Each paper received at least two reviews. In terms of topics, we had a variety of novel research spanning healthcare and humanitarian technologies. Unsurprisingly, we had many papers on COVID-19-related topics, ranging from policy guidance and mitigation based on epidemiological data to using computer vision to ascertain that individuals follow social-distancing guidelines. The first day of the symposium was dedicated to healthcare technologies, and the second day to humanitarian technologies.

Unlike the previous year, due to the COVID-19 pandemic, we held the symposium virtually. We used this as an opportunity to increase participation, as those who would not usually be able to travel to Washington, DC, could now attend. We saw participants from all over the USA as well as from around the world.

The symposium details, including the program, the keynote speakers and panelists, and recorded videos of all the paper presentations, are available on our website at https://ai-for-socialgood.github.io/2020/index.html.

The AI for Social Good Fall 2020 symposium built upon our continued efforts to bring AI community members together around the healthcare and humanitarian technology themes, and it reinforced the success of last year's AAAI Fall Symposium on AI for Social Good. The symposium brought together AI researchers, domain scientists, practitioners, and policymakers to exchange problems and solutions, identify synergies across different application domains, and seed future collaborative efforts. We will continue to organize similar events to further the discourse on the important topic of AI for Social Good.

This symposium was co-organized by Dr. Muhammad Aurangzeb Ahmad, Dr. Hemant Purohit, and Dr. Oshani Seneviratne. Dr. Muhammad Aurangzeb Ahmad is an Affiliate Assistant Professor at the University of Washington Tacoma and Principal Research Scientist at KenSci Inc. Dr. Hemant Purohit is an Assistant Professor of Information Sciences and Technology at George Mason University. Dr. Oshani Seneviratne is the Director of Health Data Research at Rensselaer Polytechnic Institute. This report was written by Oshani Seneviratne, Hemant Purohit, and Muhammad Aurangzeb Ahmad.

 

AI Welcomes Systems Engineering: Towards the Science of Interdependence for Autonomous Human-Machine Teams (S6)

 

Our Spring Symposium was unique: it had been planned for Stanford University, our favorite conference location, but was cancelled due to COVID-19 and converted into a virtual Zoom symposium. About half of our speakers, however, were unfamiliar with Zoom and balked. AAAI stepped in to offer us not one but two symposia, the second held last November. For our fall replacement symposium, which included speakers in Australia and France, there were no cancellations. Surprisingly, both symposia were well attended, with over 30 participants at each event.

Challenged by the prospect of designing autonomous systems, we were pleased by the large number of participants at what initially struck some of our colleagues as a stretch: including Systems Engineers in an AI symposium. However, history links Systems Engineering and the science of teams with, surprisingly, quantum theory and social psychology, making the science of autonomy truly interdisciplinary.

First, in 1935 (p. 555), Schrödinger wrote about quantum theory by describing entanglement, “… the best possible knowledge of a whole does not necessarily include the best possible knowledge of all its parts, even though they may be entirely separate and therefore virtually capable of being ‘best possibly known’ … The lack of knowledge is by no means due to the interaction being insufficiently known … it is due to the interaction itself.”

Lewin (1951, p. 146), the founder of Social Psychology, wrote that the “whole is greater than the sum of its parts.” From the Systems Engineering Handbook (Walden et al., 2015), “A System is a set of elements in interaction” (Bertalanffy, 1968) where systems “… often exhibit emergence, behavior which is meaningful only when attributed to the whole, not to its parts” (Checkland, 1999).

There is more. Returning to Schrödinger (p. 555), “Attention has recently been called to the obvious but very disconcerting fact that even though we restrict the disentangling measurements to one system, the representative obtained for the other system is by no means independent of the particular choice of observations which we select for that purpose and which by the way are entirely arbitrary.”

But if parts of a whole team are not independent, does a state of interdependence among complementary parts confer a thermodynamic advantage to the whole (Lawless et al., 2019)?

An answer comes from the science of teams, “Compared to a collection of the same but independent individuals, the members of a team when interdependent are significantly more productive” (Cooke & Hilton, 2015).

Based on our two symposia, the history linking four seemingly disparate disciplines may help us not only to explore interdependence in human-machine teams and systems, but also to advance the science of autonomy. For now, interdependence is essential to "teamwork" and "emergence" and possibly to "autonomy," and in turn the complementarity afforded by interdependence may simulate quantum concepts.

One highlight should be mentioned in closing. Daniel Serfaty (Founder & CEO, Aptima, Inc.), invited by our two Systems Engineering co-organizers, introduced us to “Charlie,” an artificial entity, and promised that Charlie would author a chapter for our forthcoming book by Springer: Systems Engineering and Artificial Intelligence.

Our five symposium organizers in 2020 were W.F. Lawless, Ranjeev Mittu, Don Sofge, Thomas Shortell, and Tom McDermott, who also wrote this report.

 

Deep Models and Artificial Intelligence for Defense Applications: Potentials, Theories, Practices, Tools, and Risks (S7)

 

This symposium addressed two key issues: AI challenges and the uniqueness of defense applications.

Challenges: Advancements in hardware, algorithms, and data collection are enabling unexplored defense applications of AI. Developing these applications requires overcoming several challenges. The first challenge is noisy and unstructured data; moreover, adversaries can deceive, corrupt, and camouflage true data, so defense applications need to evaluate bad data, find fake data, and perform with limited data. The second challenge is mapping AI algorithms at the strategic, operational, and tactical levels to defense applications [1]. During this mapping, AI applications need to comply with four factors: data, trust, security, and human-machine teaming. In conjunction with AI, data analytics must address the issues of agility, interoperability, and maintainability. Agility of product development includes five topics: open architectures, signal processing, systems software, autonomy via context awareness, and health monitoring. Interoperability is essential for multi-domain coordinated sensing, modeling, and instrumentation. Maintainability enables disaster operations, cyber sensemaking, and predictive maintenance. These topics were discussed through data strategy, algorithms, trust, and standards.

Data strategy: To foster better data collection, the 2020 U.S. DoD data strategy [2] calls for data with seven desirable qualities: visible, accessible, understandable, linked, trustworthy, interoperable, and secure. Collected training data must be secured to prevent hostile takeover and made robust against external attacks. Moreover, because data collection for tasks such as battle damage assessment is expensive, the DoD needs high-fidelity 3D modeling to generate synthetic training data. The presence of adversaries and unique data requirements necessitates careful consideration of both collected and synthetic data.

Algorithms and Technologies: A wide variety of algorithms and their related technologies were discussed. Presenters discussed (co)evolutionary algorithms, game theory, and optimization techniques. Evolutionary algorithms, which do not require gradient computation, can quickly search and evolve to find new battlespace measure/countermeasure configurations and emergent properties. Evolutionary algorithms were also inventively applied to look for tax loopholes and fixes [7]. Counterfactual regret minimization (CFR) and AlphaZero algorithms were highlighted in four applications: AFSIM-enabled competitive wargaming simulations, Gomoku, Othello, and DARPA SAIL-ON. Lexical link analysis, an unsupervised learning algorithm, was used to improve prediction and readiness for the Navy logistics and supply enterprise. Deep learning was applied to synthetic aperture radar (SAR) images. Interactive machine learning (IML), in a shared human-machine environment, learns human tasks. Lastly, a problem still in need of an algorithmic solution was presented: with implicitly self-similar structures such as fractals, order may emerge from a randomly generated but constrained topology [4].
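As a rough illustration of the gradient-free search style mentioned above, the minimal sketch below shows a generic evolutionary loop over binary configurations. It is not any presenter's actual system; the fitness function, population size, and variation operators are illustrative assumptions only.

```python
import random

def evolve(fitness, genome_length=16, pop_size=50, generations=100, mutation_rate=0.05):
    """Generic evolutionary search over binary configurations (illustrative sketch).

    `fitness` maps a list of 0/1 genes to a score; no gradients are required,
    which is why this family of methods suits black-box configuration search.
    """
    population = [[random.randint(0, 1) for _ in range(genome_length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 5]          # keep the fittest 20%
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_length)
            child = a[:cut] + b[cut:]              # one-point crossover
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]               # bit-flip mutation
            children.append(child)
        population = children
    return max(population, key=fitness)

# Toy stand-in objective: prefer configurations with more enabled components.
best = evolve(fitness=sum)
print(best, sum(best))
```

In practice, the fitness function would be replaced by a domain simulation (for example, a wargaming scenario score), which is what makes the approach attractive when no differentiable model of the objective exists.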

Many technologies were not tied to a particular algorithm, such as cyber malware detection, the attack-and-defense arms race, multi-segment asymmetrical wargames, strike mission planning, the battlespace readiness engagement matrix, and SoarTech's technology for DARPA's AlphaDogfight Trials. Two important technologies with needed applications were highlighted: trusted AI and complex-system theory. The first was used to build warfighter assistants in which trusted AI serves as an automation tool. The second was used to control a swarm under battlefield conditions and was shown to produce millisecond-scale topological pictures of IoT/edge devices over distributed, resilient C2 communications within denied environments.

Trust: Mission execution requires trust between AI-enabled systems and human team members. Given this importance, many of the talks addressed trust. The DARPA XAI program discovered that users understand and trust models that match their expectations; they even prefer satisfying models over high-performing ones. Lipton et al. also discussed ten dimensions of model interpretability related to trust [3]. The diversity of discussions on trust demonstrated that the defense community needs teams that include experts on algorithms, design guidance, and best practices to assess measures of trust in AI.

Standards: As proposed by the Joint AI Center (JAIC), AI systems need standards for responsible, equitable, traceable, reliable, and governable AI [5]. The Multisource AI Scorecard Table (MAST) supports test and evaluation and may be viewed as an initial version of an AI application standard. MAST connects governance, explainability, and compliance for AI enterprises [6]; it recognizes that AI defense applications need to be resilient to deception and misclassification, to noisy data, and to exploitation of classifiers through known weaknesses and unanticipated attacks.

In conclusion, defense applications tend to be human-in-the-loop, where defense AI and deep models are a "force multiplier" supporting moral, ethical, and legal human decision-making.

Ying Zhao, Erik Blasch, Doug Lange, Tony Kendall, Arjuna Flenner, Bonnie Johnson, and Bruce Nagy were the chairs of this symposium, and also authored this report.

 

Towards Responsible AI in Surveillance, Media, and Security through Licensing (S8)

 

This symposium was held virtually over two days, November 11-12, 2020. The goal of the symposium was to engage a diverse, interdisciplinary group to help formulate the challenges, risks, and specific conditions that Responsible AI licenses should seek to address.

In sharing ML and AI algorithms, researchers and developers can collaborate to solve big, global problems and make progress towards the common good. But there is growing concern about how to enable this while also preventing misuse, particularly in the areas of surveillance, media, and security. Ultimately, the context in which an algorithm is applied can be far removed from the one the developers intended. Recent initiatives such as the Responsible AI Licenses (RAIL) initiative and the Montreal Data Licenses (MDL) attempt to find a middle ground by providing legally enforceable license clauses to prevent certain uses of AI technology and data. This symposium brought together an interdisciplinary group of 29 experts and practitioners in the fields of AI and law to discuss these challenges and possible ways forward for designing end-user and source-code licenses that developers could include with AI software to restrict its use.

The two-day symposium was hosted online and structured as a series of talks along with three dedicated brainstorming sessions with all attendees. Speakers on Day 1 included Francesca Rossi (IBM), who gave the participants an overview of current work in AI & Ethics; Sameer Singh (University of California, Irvine), who presented recent work on testing AI systems; and Aviv Ovadya (The Thoughtful Technology Project), who shared his work on the role AI is playing in misinformation. Christopher Hines (K&L Gates) and Jim Spohrer (IBM) described some of the challenges associated with licensing, including adoption, interaction with legislation, and interaction with other license types, such as open-source licenses. Wenjing Chu (Futurewei Technologies) presented a framework for a decentralized approach to licensing and regulation. The brainstorming session on Day 1 was used to identify potentially harmful use cases that may benefit from restrictions via licensing. This exercise brought up interesting aspects of dual-use situations, for instance, assistive technology based on lip-reading being repurposed for workplace surveillance.

The second day included talks by Danish Contractor (IBM) on an ongoing effort to define an IEEE Standard for Responsible AI Licensing, and Alka Roy (The Responsible Innovation Project) and Brent Hecht (Microsoft & Northwestern), who spoke respectively about the challenges of responsible innovation and the role academia can play in addressing some of these issues. Bogdana Rakova (Accenture) and Laura Kahn (Accenture) presented their paper on Dynamic Algorithmic Service Agreements. Other talks included those by Daniel McDuff (Microsoft) on privacy issues arising from health-sensing technologies, Joseph Lindley (Lancaster University) on definitional dualism in AI, and an overview by Casey Fiesler (University of Colorado, Boulder) on ethics and licensing in which she shared lessons and parallels from her work on Creative Commons Licenses. The brainstorming sessions on this day were dedicated to identifying possible next steps in adoption and standardization. They opened up interesting directions for future work – from defining what a Responsible AI License should be, as is currently being done by the IEEE P2840 Standard Working Group on Responsible AI Licensing, to identifying how licenses could be made more usable and how clause enforcement could happen. For instance, should there be a body that is tasked with standardizing a “use-case library” with associated clauses that people can use to insert into a RAIL template license? What are the changes in the AI ecosystem that could help make licensing a more viable approach? Could there be community driven mechanisms for enforcement in addition to relying on copyright and contractual law? The symposium wrapped up with a discussion of next steps and opportunities for ongoing contribution in this space.

Danish Contractor, Julia K. Haines, Daniel McDuff, Brent Hecht, Christopher Hines, and Jenny Lee served as co-chairs of the symposium. This report was written by Danish Contractor, Julia K. Haines, and Daniel McDuff.

Biographies

Muhammad Aurangzeb Ahmad is an Affiliate Assistant Professor at the University of Washington Tacoma and a Principal Research Scientist at KenSci Inc., USA.

 

Shelly Bagchi is an Electrical Engineer at the National Institute of Standards and Technology in Gaithersburg, MD.

 

Erik Blasch works at the Air Force Office of Scientific Research.

 

Danish Contractor is a co-founder of RAIL and a Senior Researcher at IBM Research in New Delhi.

 

Arjuna Flenner is at GE Aviation Systems.

 

Julia K. Haines is a co-founder of RAIL and a Senior User Experience Researcher at Google, San Francisco.

 

Bonnie Johnson is at the Naval Postgraduate School working in Systems Engineering.

 

Tony Kendall is at the Naval Postgraduate School.

 

Doug Lange is at the Naval Information Warfare Center, Pacific.

 

William F. Lawless is a Professor of Mathematics, Sciences and Technology / Professor of Social Sciences at Paine College, USA.

 

Daniel McDuff is a co-founder of RAIL and a Principal Researcher at Microsoft Research, Redmond.

 

Melanie Mitchell is the Davis Professor at the Santa Fe Institute and Professor in the Department of Computer Science at Portland State University.

 

Bruce Nagy is at NAVAIR China Lake.

 

Hemant Purohit is an Assistant Professor of Information Sciences and Technology at George Mason University, USA.

 

Oshani Seneviratne is the Director of Health Data Research at Rensselaer Polytechnic Institute, USA.

 

Emmanuel Senft is a Research Associate at the University of Wisconsin-Madison in the People and Robots Lab.

 

Frank Stein is the Director of the A3 Center at IBM.

 

Jason R. Wilson is an Assistant Professor of Computer Science at Franklin and Marshall College.

 

Ying Zhao is a Research Professor in the Information Sciences Department at the Naval Postgraduate School.