The online interactive magazine of the Association for the Advancement of Artificial Intelligence

Sathyanarayanan N. Aakur, David W. Aha, Dean Alderucci, Adrian El Baz, Simone Bianco, Tanmoy Chakraborty, Xin Cynthia Chen, Lukas Chrpa, Huáscar Espinoza, Lixin Fan, Ferdinando Fioretto, Chulaka Gunasekara, Isabelle Guyon, José Hernández-Orallo, Xiaowei Huang, Filip Ilievski, Hoon Pyo Jeon, Jr., Tarun Kumar, Rahul Ladhania, Jim Larimore, Zhengying Liu, Prashan Madumal, Deepak Maurya, Theodore Metzler, Martin Michalowski, Reuth Mirsky, Armineh Nourbakhsh, Charles L. Ortiz, Balaraman Ravindran, Jan N. van Rijn, Arash Shaban-Nejad, Amirreza Shirani, Kai Shu, Richa Singh, Biplav Srivastava, Sebastien Treguer, Silvia Tulli, Lyle Ungar, Mauro Vallati, Joaquin Vanschoren, Mayank Vatsa, Amir Pouran Ben Veyseh, Rosina Weber, Pengtao Xie, Eric Xing, Han Yu

The Workshop Program of the Association for the Advancement of Artificial Intelligence’s Thirty-Fifth Conference on Artificial Intelligence was held virtually from February 8-9, 2021. There were twenty-six workshops in the program: Affective Content Analysis, AI for Behavior Change, AI for Urban Mobility, Artificial Intelligence Safety, Combating Online Hostile Posts in Regional Languages during Emergency Situations, Commonsense Knowledge Graphs, Content Authoring and Design, Deep Learning on Graphs: Methods and Applications, Designing AI for Telehealth, 9th Dialog System Technology Challenge, Explainable Agency in Artificial Intelligence, Graphs and More Complex Structures for Learning and Reasoning, 5th International Workshop on Health Intelligence, Hybrid Artificial Intelligence, Imagining Post-COVID Education with AI, Knowledge Discovery from Unstructured Data in Financial Services, Learning Network Architecture During Training, Meta-Learning and Co-Hosted Competition, Meta-Learning for Computer Vision, Plan, Activity, and Intent Recognition, Privacy-Preserving Artificial Intelligence, Reasoning and Learning for Human-Machine Dialogs, Reinforcement Learning in Games, Scientific Document Understanding, Towards Robust, Secure and Efficient Machine Learning, and Trustworthy AI for Healthcare. This report contains summaries of the workshops, which were submitted by most, but not all, of the workshop chairs.

Affective Content Analysis (W1)

AffCon-2021 was the fourth Affective Content Analysis workshop at AAAI. The workshop series built upon the state of the art in neural and AI methods by (i) modeling affect in interpersonal interactions and behaviors and (ii) bringing together a confluence of research viewpoints from several disciplines. The 2021 edition of the workshop took interaction in yet another new direction. A large share of created content is the outcome of collaboration. A basic question worth examining was whether and how collaboration among creatives impacts the affective characteristics of the content. A follow-up question was how to model and computationally measure affect in collaborative creation.

This year, collaboration took on an extra meaning in a physically distanced world, making the dynamics of affect in collaborative content more topical than ever. The theme for AffCon@AAAI-2021 was ‘Affect in Collaborative Creation’, which is relevant for increasingly decentralized workplaces, asynchronous collaborations, and computer-mediated communication. Studying and codifying user reactions in this setting can help us understand society and aid the development of better tools for content analysis.

No formal report was filed by the organizers for this workshop.

AI for Behavior Change (W2)

The first installment of the AI for Behavior Change workshop was held on February 8, 2021, at the Thirty-Fifth AAAI Conference on Artificial Intelligence. The workshop was designed to bring together scholars from the causal inference, artificial intelligence, and behavior science communities, gathering insights from each of these fields to facilitate collaboration and adaptation of theoretical and domain-specific knowledge amongst them.

In decision-making domains as wide-ranging as medication adherence, vaccination, college enrollment, retirement savings, and energy consumption, behavioral interventions have been shown to encourage people towards making better choices. It is important to learn how to use AI effectively in these areas to motivate and help people take actions that maximize their welfare. The AI for Behavior Change workshop at AAAI 2021 was a first-of-its-kind event that brought together researchers from a wide array of disciplines including, but not limited to, statistics, computer science, economics, public policy, psychology, management, and decision science, who work at the intersection of causal inference, machine learning, and behavior science. It was organized in partnership with three centers: the Behavior Change for Good Initiative at Penn, the Center for Applied AI at Chicago Booth, and the Psychology of Technology Institute.

For its contributed talk and poster presentations, the workshop invited submissions on a range of topics across multiple disciplines, including the following areas: Intervention design, Adaptive treatment assignment, Heterogeneity estimation, Optimal assignment rules, Targeted nudges, Observational-experimental data, Mental health/wellness, Habit formation, Social media interventions, Psychological science, and Precision health.

The workshop boasted an impressive line-up of keynote and invited speakers, and a diverse set of contributed talks and poster presentations. It set off on a terrific note with MacArthur ‘Genius Award’ recipient Sendhil Mullainathan delivering the keynote lecture, “Can Algorithms Help Us Better Understand Ourselves?”. It was followed by four contributed paper talks by Hannah Mieczkowski, Emaad Manzoor, Jackson Killian, and Sudeep Bhatia. The contributed talks covered an array of topics at the frontier of AI and behavior change: psychological implications of applying the principal-agent framework to AI-mediated communication, the role of reputation in affecting online deliberation, multi-action restless bandits in resource-constrained domains such as community health care where there are multiple intervention options, and determining knowledge representations in health judgements.

To ensure that the workshop went beyond being just a series of insightful talks, we also hosted a poster session and a robustly attended “lunchroom” session on Gather, with separate “rooms” themed on multiple topics: AI-Informed Intervention Design, Leveraging Observational Data for Behavior Change, AI in the Field: Translating Theory to Action, Psychology to Generate New Features, and New Directions for Research on the Psychology of Technology. The workshop received very positive feedback on how this facilitated meaningful and organic conversations between the attendees. Accepted posters are available on the workshop website: https://ai4bc.github.io/ai4bc21/posters.html.

The post-lunch session of the workshop began with spotlight talks by the nine poster presenters. The second keynote speaker was Susan Athey, one of the leading researchers working at the intersection of econometrics and machine learning, who spoke on “Designing and Analyzing Large Scale Experiments for Behavior Change”. Her talk was followed by Eric Tchetgen Tchetgen, who spoke on the challenges of using proxies for confounding variables in observational data for causal learning in his talk, “Proximal Causal Learning”. Next was Munmun De Choudhury, whose talk was on bridging machine learning and collaborative action research in the context of digital mental health. The workshop ended with an invited talk by MacArthur ‘Genius Award’ recipient Jon Kleinberg, who spoke on simplicity and bias in decision making.

It was organized by Lyle Ungar (University of Pennsylvania), Sendhil Mullainathan (University of Chicago), Eric Tchetgen Tchetgen (University of Pennsylvania), Rahul Ladhania (University of Michigan), and Tony Liu (University of Pennsylvania). This report was written by Rahul Ladhania and Lyle Ungar.

 

AI for Urban Mobility (W3)

The expected increase in urbanization in the 21st century, coupled with the socio-economic motivation for increasing mobility, is going to push transportation infrastructure well beyond its current capacity. In response, more stringent and more intelligent control mechanisms are required to better monitor, exploit, and react to unforeseen conditions. The AI4UM workshop brought together researchers who leverage various AI techniques for innovation in areas of urban mobility and transportation.

The AI for Urban Mobility workshop was a successful and well-attended event. It focused on AI-based approaches for innovation in any area of Urban Mobility and Transportation, such as (but not limited to) Traffic Signal Control, Vehicle Routing, and Highway and Intersection Control.

The workshop brought researchers from the fields of AI, control theory, and transportation science together, along with practitioners focused on various aspects of urban mobility and transportation. Our authors presented 15 papers: 7 full papers and 8 spotlight papers. The former were allocated 15 minutes for the talk and 5 minutes for questions; the latter, 7 minutes for the talk and 3 minutes for questions. The workshop talks spanned a multitude of topics in AI for urban mobility: navigation planning; adaptive traffic signal control; and machine learning for predicting autonomous vehicle behavior, public transportation loads, environmental impact, transport anomalies, foot traffic, vehicle travel time, congestion, and traffic flow. The presentations gave an extensive overview of how AI approaches can effectively tackle complex and challenging real-world transportation and mobility problems, and how autonomous and connected vehicles can usefully be leveraged to support urban traffic controllers.

The workshop included four invited speakers, with the aim of providing a variegated and encompassing overview of the state of the art in AI and urban mobility. Professor Jorge Laval (Georgia Tech) presented his interesting and thought-provoking perspective on the potentially limited impact that traffic control may have on very congested urban networks, and on the implications this may have for machine learning approaches. Professor Bart De Schutter (Delft University of Technology) described multi-agent model-based control methods for urban traffic networks and proposed approaches for dealing with the high complexity of large traffic networks. Professor Alexandre Bayen (UC Berkeley) presented Lagrangian control at large and local scales in mixed-autonomy traffic flow, with a particular focus on how autonomous vehicles will change traffic flow patterns. Finally, Professor Pradeep Varakantham (Singapore Management University) described a range of deep reinforcement learning methods for improving efficiency in urban environments.

The key strength of this workshop was, as intended, the integration of multiple communities interested in urban mobility and transportation. The questions posed after each talk were stimulating and highlighted how different communities can perceive similar problems in very different ways. The AI4UM workshop was co-chaired by Lukas Chrpa, Mauro Vallati, Scott Sanner, Stephen F. Smith, and Baher Abdulhai. This report was written by Lukas Chrpa and Mauro Vallati.

 

Artificial Intelligence Safety (W4)

The AAAI-21 Workshop on Artificial Intelligence Safety (SafeAI 2021, http://www.safeaiw.org) was co-located with the Thirty-Fifth AAAI Conference on Artificial Intelligence and held virtually on February 8th. SafeAI aimed to explore new ideas at the intersection of AI and safety, as well as broader strategic, ethical, and policy-oriented aspects of AI safety.

AI safety should be considered a design principle. There are varying levels of safety, diverse sets of ethical standards and values, and varying degrees of liability, for which we need to deal with trade-offs or alternative solutions. These choices can only be analyzed holistically if we integrate technological and ethical perspectives into the engineering problem and consider both the theoretical and practical challenges for AI safety. The aim of the series of SafeAI workshops (since SafeAI 2019) is to achieve this holistic view of AI and safety engineering, to build trustworthy intelligent autonomous machines.

The workshop received 44 submissions and accepted 13 full papers, 1 talk, and 11 posters, resulting in a full-paper acceptance rate of 29.5% and an overall acceptance rate of 56.8%. The SafeAI 2021 program was organized in four thematic sessions. The thematic sessions followed a highly interactive format, structured into short pitches followed by a joint panel in each session to discuss questions and common issues.

Session 1 discussed dynamic safety and anomaly assessment, covering out-of-distribution detection, several self-driving car simulator scenarios, and confidence calibration.

Session 2 explored safety considerations for the assurance of AI-based systems, such as the utility of neural network test coverage measures, the safety properties of inductive logic programming and competence assessment of automated driving functions.

Session 3 focused on adversarial machine learning and trustworthiness, including adversarial robustness for face recognition, the analysis of multi-modal generative adversarial networks in ill-posed problems, and deepfake detection based on heart rate estimation.

Session 4 covered several aspects of safe autonomous agents. The session discussed SafeRL techniques to generate law-abiding behavior, bounded-rational agents with self-modification, human-like risk sensitivity in artificial agents and impact regularizers to avoid negative side effects.

A keynote opened the morning sessions: Christophe Gabreau (Airbus), Beatrice Pesquet-Popescu (Thales) and Fateh Kaakai (Thales) talked about EUROCAE WG114 – SAE G34, a joint standardization initiative to support the AI revolution in aeronautics. The first invited talk was presented by Juliette Mattioli (Thales) and Rodolphe Gelin (Renault) on methods and tools for trusted AI as an urgent challenge for industry. In the afternoon, Sandhya Saisubramanian (University of Massachusetts Amherst) discussed the main directions in avoiding negative side effects.

Eight co-chairs served SafeAI 2021: Huáscar Espinoza, José Hernández-Orallo, Xin Cynthia Chen, Seán Ó hÉigeartaigh, Xiaowei Huang, Mauricio Castillo-Effen, Richard Mallah, and John McDermid. The papers were published at CEUR-WS, Vol. 2808: http://ceur-ws.org/Vol-2808/. Huáscar Espinoza, José Hernández-Orallo, Xin Cynthia Chen, and Xiaowei Huang authored this report.

 

Combating Online Hostile Posts in Regional Languages during Emergency Situations (W5)

The first workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situations was co-located with AAAI-2021 and held virtually on February 8th, 2021. The goal of this workshop was to foster interdisciplinary research on social media in low-resource languages and to think beyond conventional ways of combating online hostile posts.

The increasing accessibility of the Internet has dramatically changed the way we consume information. The ease of social media usage not only encourages individuals to freely express their opinions, but also provides content polluters with ecosystems to spread hostile posts (hate speech, fake news, cyberbullying, etc.). Such hostile activities typically increase sharply during emergencies such as earthquakes, floods, and the current COVID-19 pandemic. Most such hostile posts are written in regional languages and can therefore easily evade online surveillance engines, the majority of which are trained on posts written in resource-rich languages such as English and Chinese.

The workshop brought together researchers from a variety of subfields of AI such as social impact of AI, knowledge representation and reasoning, social networks in human computation, computational models of social good, and natural language text generation, in addition to researchers in fields providing empirical or theoretical foundations (including health communication, social psychology, cognitive psychology, psycholinguistics, and human factors). The workshop program included an invited keynote talk by Amit Sheth, director of the AI Institute and professor of Computer Science and Engineering at the University of South Carolina, on “Cyber-Social Threats: Is AI ready to counter them?”. In addition to selected paper presentations, a panel discussion was held, and two shared tasks were organized on “COVID-19 Fake News Detection (English)” and “Hostility Detection (Hindi)”.

The panel discussion was valuable for identifying the current challenges and future research directions for combating online hostile posts. It was led by Ebrahim Bagheri (Ryerson University), Meeyoung Cha (KAIST), Debdoot Mukherjee (ShareChat), and Preslav Nakov (Qatar Computing Research Institute), representing both academia and industry, with contributions from the attendees. Among the issues that arose from the discussion, we mention the following: (1) dealing with disinformation in low-resource languages faces unique challenges of data annotation and model design; in particular, how to leverage and transfer knowledge from resource-rich languages to low-resource domains remains understudied; (2) bias exists in human perception of misinformation, and it is important to understand the human factors (e.g., emotion, credibility, stance) that lead to its wide propagation; and (3) there is a pressing need for interdisciplinary research and collaborative efforts from both academia and industry to build actionable solutions to combat misinformation and hostile posts online.

There was consensus that a longer workshop should be held in the near future, allowing more time for discussion and synthesis of the many different approaches and applications. The workshop web page, http://lcs2.iiitd.edu.in/CONSTRAINT-2021/, includes pointers to the workshop papers, related relevant papers, and data sets for the shared tasks.

The workshop’s steering committee consisted of Tanmoy Chakraborty (IIIT, Delhi), Kai Shu (Illinois Institute of Technology), Huan Liu (Arizona State University), and H. Russell Bernard (Arizona State University). The workshop papers can be found in the AAAI Technical Report WS-21-05. Tanmoy Chakraborty and Kai Shu authored this report.

 

Commonsense Knowledge Graphs (W6)

The workshop on Commonsense Knowledge Graphs was held virtually on February 8, 2021. The goal of this workshop was to discuss the creation of commonsense knowledge graphs and their utilization to support tasks that require common sense, with a particular focus on natural language tasks.

Machine common sense has been one of the core research topics of AI since its beginnings, and it has recently gained new traction, mainly because of two factors: the recent surge of commonsense benchmarks, and the large success of neural language models on various AI tasks including commonsense question answering.

Given the gaps in both language models and evaluation, knowledge graphs are often leveraged to enhance language models and further improve performance on the benchmarks, as well as to create better knowledge sources and evaluation challenges. The goal of this workshop was to discuss the creation and representation of commonsense knowledge graphs, and their role in supporting a range of downstream tasks.

The workshop brought together researchers from a variety of AI disciplines, primarily natural language processing, knowledge graphs, and cognitive science, but also computer vision, computational creativity, and robotics. It featured two keynotes: by Joshua Tenenbaum from MIT, and by Yejin Choi from the University of Washington. There was a panel discussion on the provocative topic “Are language models enough?”, with four excellent panelists: David Ferrucci (Elemental Cognition), Tony Veale (University College Dublin), Lukasz Kaiser (Google Brain & CNRS), and Shih-Fu Chang (Columbia University). In addition, nine papers were presented in three sessions.

A major theme at the workshop was the kind of representation that is most suitable for commonsense knowledge graphs. The proposed representations generally included a probabilistic component to account for assumed uncertainty, but the format ranged from symbolic representations (e.g., the probabilistic programs devised by Joshua Tenenbaum and colleagues) to natural-language representations of concepts (e.g., as proposed by Yejin Choi). Another common theme in the papers and the keynotes was techniques for collecting and integrating various kinds of commonsense knowledge. Joshua Tenenbaum focused on two key dimensions of common sense in humans: intuitive psychology and intuitive physics. Other researchers proposed to collect cultural knowledge (Anurag Acharya, Florida International University), moral knowledge (Yejin Choi), knowledge about affordances of objects (Alessio Sarullo, University of Manchester), and knowledge about situational awareness (Boulos El Asmar, BMW, and Kwabena Nuamah, University of Edinburgh) through a range of methods, including crowdsourcing, extraction from definitions (Zhicheng Liang from RPI), and elicitation from experts. As enumerating all commonsense knowledge is impractical and perhaps impossible, it is desirable to couple knowledge sources with probabilistic generalization mechanisms, as pointed out in the two keynotes. A third key theme was the usage of commonsense knowledge graphs for downstream tasks, including using knowledge graphs for coherent and constrained text generation (Pulkit Goel from CMU, and Litton Jose Kurisinkel from A*STAR), for the generation of evaluation probes (Yasaman Razeghi from UCI), and to represent knowledge sources, including the benchmarks themselves (Henrique Santos from RPI).

The panelists agreed that current language models are insufficient for building open-world AI agents, but had different ideas on the way forward: next-generation language models (Lukasz Kaiser), adapted neural architectures (Tony Veale), grounding of language to other modalities (Shih-Fu Chang), or higher-level conceptual abstractions (David Ferrucci, also argued for by Joshua Tenenbaum during his keynote). The ‘sufficiency’ of current models is inherently tied to their intended applicability. The panelists agreed that, going forward, evaluating intelligence would ideally include an interactive setup, like dialogues or simulated environments, in which the AI agent must behave consistently over time and occasionally explain its behavior or reasoning. A key requirement for intelligence is generalization. While language models can react to any input and seem to contain a statistical mechanism for abstraction, it remains unclear to what extent their generalization corresponds to the cognitive abstractions manifested by humans. There was also discussion about the current adequacy and coverage of evaluation strategies, including the current benchmarks.

The workshop featured a rich palette of research on commonsense knowledge, although this research is often fragmented across different disciplines, including natural language processing, knowledge graphs, cognitive science, computer vision, and robotics. The participants and organizers shared the goal of organizing related workshops and symposia in the future, to bring the different disciplines and perspectives closer to each other and facilitate collaborations.

The workshop was organized by Filip Ilievski (USC Information Sciences Institute), Alessandro Oltramari (Bosch Research), Deborah McGuinness (Rensselaer Polytechnic Institute), and Pedro Szekely (USC Information Sciences Institute). This report was written by Filip Ilievski.

 

Content Authoring and Design (W7)

Content Authoring and Design (CAD21) was one of the first workshops on AI-assisted design and content authoring. More specifically, the workshop studied AI-based solutions that can assist users during the authoring process. Given the wide range of applications and their unique challenges, this interdisciplinary field has received little attention and has seen little cross-disciplinary collaboration. CAD21 brought together researchers interested in this field to share their findings.

The goal of the Content Authoring and Design workshop was to engage the AI and NLP communities around the open problems in authoring, reformatting, optimization, enhancement, and beautification of different forms of content, from articles, news, presentation slides, flyers, and posters to any material one can find online, such as social media posts and advertisements. Content Authoring and Design refers to the interdisciplinary research space spanning artificial intelligence, computational linguistics, and graphic design. The area addresses open problems in leveraging AI-empowered models to assist users during creation by estimating the author's and audience's needs, so that the outcome is aesthetically appealing and effectively communicates its intent.

We had keynote talks by Aaron Hertzmann from Adobe Research on “Can computers create art?”, Elizabeth Churchill from Google on “Amplifying Design Creativity with AI,” and Gerard de Melo from the University of Potsdam on “Enhancing the presentation of text using cross-modal representation learning and emotion analysis.”

In addition, in our research track, authors submitted their work on various topics, including display text layout, typography, financial reports, and table formatting. More specifically, Ravi et al. (2020) introduced an AI-powered virtual assistant for creating and modifying content in digital documents by modeling natural language interactions as skills and using them to transform underlying data. Kraus (2021) proposed a binary tree data model that can be used to facilitate the automatic generation of aesthetically pleasing text layouts in graphic designs. Dong et al. (2021) introduced CellGAN, a neural formatting model for learning and recommending formats of spreadsheet tables. Based on a novel conditional generative adversarial network (cGAN) architecture, CellGAN learns table formatting from real-world spreadsheet tables in a self-supervised fashion without requiring human labeling.
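
To make the conditional-GAN idea concrete, the sketch below shows a generic cGAN in PyTorch in which a generator produces a format vector conditioned on a content embedding, and a discriminator judges (content, format) pairs. This is a minimal illustration with assumed toy dimensions and stand-in data, not the CellGAN architecture.

```python
# Minimal conditional-GAN sketch (illustrative only; not the CellGAN model).
# The generator maps (content embedding, noise) -> format vector; the
# discriminator scores (content embedding, format vector) pairs.
import torch
import torch.nn as nn

CONTENT_DIM, NOISE_DIM, FORMAT_DIM = 32, 16, 8  # hypothetical sizes

G = nn.Sequential(nn.Linear(CONTENT_DIM + NOISE_DIM, 64), nn.ReLU(),
                  nn.Linear(64, FORMAT_DIM))
D = nn.Sequential(nn.Linear(CONTENT_DIM + FORMAT_DIM, 64), nn.ReLU(),
                  nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):                             # toy training loop
    content = torch.randn(64, CONTENT_DIM)          # stand-in table embeddings
    real_fmt = torch.tanh(content[:, :FORMAT_DIM])  # stand-in "real" formats

    # Discriminator update: real pairs vs. generated pairs.
    noise = torch.randn(64, NOISE_DIM)
    fake_fmt = G(torch.cat([content, noise], dim=1)).detach()
    d_loss = bce(D(torch.cat([content, real_fmt], dim=1)), torch.ones(64, 1)) + \
             bce(D(torch.cat([content, fake_fmt], dim=1)), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: produce formats the discriminator accepts as real.
    fake_fmt = G(torch.cat([content, torch.randn(64, NOISE_DIM)], dim=1))
    g_loss = bce(D(torch.cat([content, fake_fmt], dim=1)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```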

As part of this workshop, we organized a shared task on “Predicting Emphasis in Presentation Slides” to stimulate the development of new approaches and methods for slide design (https://competitions.codalab.org/competitions/27419).

The widespread use of presentation slides has prompted researchers to create resources to assist presenters in creating successful slides (Alley and Robertshaw 2004; Alley and Neeley 2005; Jennings 2009). However, these guidelines cover only general design tips, such as color and font size recommendations to ensure text is readable at a distance, as well as suggestions for graphical representations of one’s content. We organized a new shared task in which participants were asked to create automated methods for predicting the emphasis on presentation slides to improve their readability and aesthetic appeal. By emphasis, we mean the use of special formatting (e.g., boldface, italics) to draw attention to a word or group of words.

Shirani et al. (2019) first introduced the Emphasis Selection (ES) task with a focus on short written text in social media, and it later became a SemEval 2020 task (Shirani et al. 2020). In the CAD21 shared task, we focused on presentation slides, introducing a new corpus as well as automated emphasis prediction approaches. This line of research is among the first to use the content of the slides to provide automated design assistance.

In total, five teams made submissions during the evaluation phase (Ghosh et al. 2021; Hu et al. 2021). We observed a diverse range of novel and intriguing solutions for this task, ranging from non-transformer-based models such as BiLSTM-ELMo to more advanced pre-trained models such as XLNet, RoBERTa, ERNIE 2.0, and SciBERT. Ensembles of transformer-based models were the most frequently used solution. Numerous hand-crafted features, such as part-of-speech (POS) tags, keywords, and lexical features (such as capitalized words and punctuation), were explored to improve the models' performance. The full description of the shared task can be found in Shirani et al. (2021).
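
To illustrate the task setup (and not any participant's system), the following minimal sketch scores each token of a slide line with an emphasis probability using a generic pre-trained encoder; the encoder choice and the untrained regression head are assumptions made purely for illustration.

```python
# Token-level emphasis scoring sketch (illustrative; not a shared-task entry).
# Each subword token gets an emphasis probability from a linear head over a
# pre-trained encoder; training would fit the head against emphasis labels.
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-uncased"                  # assumed encoder choice
tok = AutoTokenizer.from_pretrained(name)
enc = AutoModel.from_pretrained(name)
head = torch.nn.Linear(enc.config.hidden_size, 1)  # untrained, shapes only

text = "Deep learning boosts slide design"
inputs = tok(text, return_tensors="pt")
with torch.no_grad():
    hidden = enc(**inputs).last_hidden_state       # (1, seq_len, hidden)
scores = torch.sigmoid(head(hidden)).squeeze(-1)   # per-token emphasis in [0,1]
tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for t, s in zip(tokens, scores[0]):
    print(f"{t:>12s}  {s.item():.2f}")
```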

The organizing committee consisted of: Thamar Solorio, University of Houston, Franck Dernoncourt, Adobe Research, Nedim Lipka, Adobe Research, Paul Asente, Adobe Research, Jose Echevarria, Adobe Research, and Amirreza Shirani, University of Houston. This report was written by Amirreza Shirani.

 

Deep Learning on Graphs: Methods and Applications (W8)

Deep learning models are at the core of artificial intelligence research today. It is well known that deep learning techniques that were disruptive for Euclidean data such as images, or for sequence data such as text, are not immediately applicable to graph-structured data. This gap has driven a tide of research on deep learning for graphs, on tasks such as graph representation learning, graph generation, and graph classification. New neural network architectures for graph-structured data have achieved remarkable performance on these tasks when applied to domains such as social networks, bioinformatics, and medical informatics.

This one-day workshop brought together academic researchers and industrial practitioners from different backgrounds and perspectives to address the above challenges. The workshop consisted of contributed talks, contributed posters, and invited talks on a wide variety of methods and applications. The workshop succeeded in sharing visions for investigating new approaches and methods at the intersection of graph neural networks and real-world applications.

No formal report was submitted by the organizers for this workshop.

 

Explainable Agency in Artificial Intelligence (W11)

The AAAI-21 Workshop on Explainable Agency in Artificial Intelligence was held virtually during February 8-9, 2021. This workshop aimed to discuss the topic of explainable agency and bring together researchers and practitioners from diverse backgrounds to share challenges, discuss new directions, and present recent research in the field.

Explainable agency has received substantial but disjoint attention in different subareas of AI, including machine learning, planning, intelligent agents, and several others. There has been limited interaction among these subareas on explainable agency, and even less work has focused on promoting and sharing sound designs, methods, and measures for evaluating the effectiveness of explanations (generated by AI systems) in human subject studies. This has led to the uneven development of explainable agency, and its evaluation, in multiple AI subareas. Our aim was to address this by encouraging a shared definition of explainable agency and by increasing awareness of work on explainable agency throughout the AI research community and in related disciplines (e.g., human factors, human-computer interaction, and cognitive science).

To ensure the quality of our contributed talks and presentations, every submitted paper received three reviews and one meta-review. We had a diverse program committee of 46 members from all over the world, and from different disciplines not limited to computer science. We received 46 submissions and accepted 26 of them. The accepted papers were divided into oral presentations and contributed talks, allocated 15 and 20 minutes respectively. The presenters focused on different aspects of explainable agency: explainable machine learning models, counterfactuals, feature attribution, user-centered aspects, transparency, fairness, and evaluation methods. Several methods and algorithms to explain the reasoning of AI agents were proposed, including visualization methods (e.g., saliency maps) for convolutional neural networks (CNNs) and deep reinforcement learning agents, a querying algorithm that generates interrogation policies for Mixtures of Hidden Markov Models (MHMMs), summarization techniques for sampling-based search algorithms (e.g., Monte-Carlo Tree Search), and explanations for competing answers in Visual Question Answering (VQA) systems. Furthermore, design and evaluation techniques for explainable AI systems were presented, such as measurement domains (e.g., for qualitative investigations), explanation architecture features, and formats of explanation.
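
As a concrete example of the simplest visualization method mentioned above, the sketch below computes a vanilla gradient saliency map: the magnitude of the gradient of a class score with respect to the input highlights influential pixels. The tiny untrained CNN is a stand-in, not a model from any presented paper.

```python
# Vanilla gradient saliency sketch: |d(class score)/d(input)| per pixel.
import torch
import torch.nn as nn

model = nn.Sequential(                          # stand-in CNN, untrained
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))

image = torch.randn(1, 3, 32, 32, requires_grad=True)
score = model(image)[0].max()                   # score of the top class
score.backward()                                # gradients flow back to pixels
saliency = image.grad.abs().max(dim=1).values   # (1, 32, 32) heatmap
print(saliency.shape)
```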

In addition to the contributed talks and presentations, this workshop included four invited speakers who are experts in their fields. Ofra Amir, Professor of Industrial Engineering and Management at Technion IE&M, introduced the topic of agent policy summarization to describe an agent's behavior to people. Timothy Miller, Professor of Computing and Information Systems at the University of Melbourne, discussed the scope of explainable AI, its relation to the social sciences, and explainable agency in model-free reinforcement learning. Margaret Burnett, Professor of Computer Science at Oregon State University, proposed personas for identifying the goals (e.g., diversity of thoughts, appropriate trust, informed decisions) of explainable AI. Pat Langley, Director of the Institute for the Study of Learning and Expertise (ISLE), presented the concepts of explainable, normative, and justified agency and discussed the definition and representation of explanation as well as the advantages of designing and constructing justifiable agents.

This workshop included two panel discussions. The first panel (with panelists Been Kim of Google Brain, Freddy Lecue of CortAIx and Inria, and Vera Liao of IBM) focused on lessons learned and insights gained from deploying XAI techniques, while the second panel (with panelists Denise Agosto of Drexel University, Bertram Malle of Brown University, and Eric Vorm of the US Naval Research Laboratory) focused on XAI from a cognitive science perspective. There was some consensus on the continuous, co-adaptive nature of the explanatory process and that explanation can be modeled as a form of exploration. The industry panel tackled the problem of explainability as a property of the system and described tools that can accurately and rigorously explain a system's model (i.e., interpretable models). In contrast, research on human-computer interaction, human factors, and cognitive science focuses on human perception of the information provided by a system and stresses the importance of shaping an explanation for its target audience. In this context, an explanation can allow a receiver to understand, criticize, correct, and improve a system.

The proceedings and recordings of this workshop were published on the workshop’s website. Prashan Madumal, Silvia Tulli, Rosina Weber, and David W. Aha served as co-chairs of this workshop and authored this report.

 

Graphs and More Complex Structures for Learning and Reasoning (W12)

The first workshop on Graphs and More Complex Structures for Learning and Reasoning (GCLR) was conducted to stimulate interdisciplinary discussions among researchers from varied disciplines such as computer science, mathematics, statistics, and physics. The workshop received overwhelming participation from several parts of the world.

In the opening keynote, Professor Manlio De Domenico, Head of the CoMuNe Lab at the Bruno Kessler Foundation, Italy, delivered a talk on characterizing a range of complex systems and modeling them as multilayer networks. The opening talk built momentum for discussions between the speakers and participants that continued until the end of the workshop. Professor Ginestra Bianconi from the Queen Mary University of London presented recent developments in the information theory of networks. Professor Bianconi explained the information-theoretical framework and its implications for several generalized complex structures such as multiplex networks and simplicial complexes. On the application side, Professor Gesine Reinert from the University of Oxford presented her work on fraud detection in infrastructure transaction networks and discussed how to detect unknown anomalies in such networks.

Recently, learning with hypergraphs has been drawing a lot of attention. Dr. Phil Chodrow, Hedrick Visiting Assistant Adjunct Professor in the Department of Mathematics at the University of California, Los Angeles, presented his seminal work on hypergraph generative models and their implications for hypergraph clustering.

The workshop held four exciting talks on the deep learning-based approaches used with different complex graphical structures. Professor Anima Anandkumar, the Bren Professor of Computing at California Institute of Technology and a director of Machine Learning Research at NVIDIA, presented her work on infusing structure and domain knowledge into deep learning-based methods. The talk was also centered around neuro-symbolic reasoning, a focus area of AAAI 2021. Professor Stephen Bach from Brown University presented the work on incorporating knowledge graphs in zero-shot learning problems. The problem finds immediate applications in several domains, such as computer vision and natural language processing.

Dr. William L. Hamilton, Assistant Professor of Computer Science at McGill University and a Canada CIFAR AI Chair at the Mila AI Institute of Quebec, delivered a talk on graph representation learning. Dr. Hamilton spoke about different domains, such as chemical synthesis, 3D vision, recommender systems, question answering, and social network analysis, where graph representation can be used, and pointed out the current challenges and open problems in these domains. Ines Chami, a Ph.D. candidate in ICME at Stanford University, talked about learning representations of nodes that preserve graph properties by leveraging non-Euclidean geometries, such as hyperbolic or spherical geometries. She also demonstrated the use of this approach for link prediction tasks on knowledge graphs.

In addition to the talks, there were many high-quality submissions to the workshop. Our program committee consisted of more than 60 researchers with diverse areas of expertise. All paper submissions received at least three, and many of them five, constructive reviews. Based on the reviews, 14 high-quality papers were accepted. Authors were encouraged to give flash presentations of their work, and they were allotted individual breakout rooms in Zoom to interact with the attendees.

The workshop concluded with a panel discussion among the keynote speakers on “Learning and Reasoning with Complex Graphs – a Multi-Disciplinary Challenge.” The discussion brought up major challenges in the area, such as learning-based versus model-driven approaches and their applications to complex networks. The panelists shared their perspectives on these topics, which sparked interesting debates, and offered suggestions for researchers working on interdisciplinary problems. All the keynote talks, the panel discussion, and the paper presentations are uploaded on our YouTube channel.

The audience was very attentive and asked some interesting questions during the keynote talks and panel discussion which made the virtual event very interactive. We believe some of the attendees made new friends at the GCLR workshop, which may lead to future collaborations.

The GCLR workshop was co-organized by Balaraman Ravindran (IIT Madras), Kristian Kersting (TU Darmstadt), Sarika Jalan (IIT Indore), Partha P. Talukdar (IISc Bangalore), Sriraam Natarajan (Univ. of Texas Dallas), Tarun Kumar (IIT Madras), Deepak Maurya (IIT Madras), Nikita Moghe, (University of Edinburgh), Naganand Yadati (IISc Bangalore), Jeshuren Chelladurai (IIT Madras) and Aparna Rai (IIT Guwahati). This report was written by Deepak Maurya, Tarun Kumar, and Balaraman Ravindran.

5th International Workshop on Health Intelligence (W13)

The 5th International Workshop on Health Intelligence was held virtually on February 8 and 9, 2021. This workshop brought together a wide range of computer scientists, clinical and health informaticians, researchers, students, industry professionals, national and international health and public health agencies, and NGOs interested in the theory and practice of computational models of population health intelligence and personalized healthcare to highlight the latest achievements in the field.

Population health intelligence includes a set of activities to extract, capture, and analyze multi-dimensional socio-economic, behavioral, environmental, and health data to support decision-making to improve the health of different populations. Advances in artificial intelligence tools and techniques and internet technologies are dramatically changing the ways that scientists collect data and how people interact with each other and with their environment. The Internet is also increasingly used to collect, analyze, and monitor health-related reports and activities and to facilitate health-promotion programs and preventive interventions. In addition, to tackle and overcome several issues in personalized healthcare, information technology will need to evolve to improve communication, collaboration, and teamwork between patients, their families, healthcare communities, and care teams involving practitioners from different fields and specialties.

This workshop follows the success of previous health-related AAAI workshops, including those focused on personalized (HIAI 2013-16) and population (W3PHI 2014-16) healthcare, and the four subsequent joint workshops held at AAAI-17 through AAAI-20 (W3PHIAI-17 through W3PHIAI-20). This year's workshop brought together a wide range of participants from the multidisciplinary field of medical and health informatics. Participants were interested in the theory and practice of computational models of web-based public health intelligence as well as personalized healthcare delivery. The papers (full and short) and the posters presented at the workshop covered a broad range of disciplines within artificial intelligence, including knowledge representation, machine learning, natural language processing, prediction, mobile technology, inference, and dialogue systems. From an application perspective, presentations addressed topics in epidemiology, environmental and public health informatics, COVID-19, disease surveillance and diagnosis, medication dosing, health behavior monitoring, and human-computer interaction.

The workshop included three invited talks: (1) Dr. Andreas Holzinger (Medical University Graz) gave a presentation on explainability and robustness in health intelligence, (2) Dr. Akane Sano (Rice University) discussed digital health and wellbeing using a personalized and adaptive assistant, and (3) Dr. Tanveer Syeda-Mahmood (IBM Research) described the Medical Sieve radiology grand challenge on chest X-rays. With a total of 19 papers (8 long and 11 short) and 4 poster presentations, the workshop participants engaged in discussions around many cutting-edge topics affecting the way evidence is produced for and delivered in healthcare to improve patient outcomes.

Martin Michalowski, Arash Shaban-Nejad, and Simone Bianco served as co-chairs of this workshop and wrote this report. All the workshop papers are published by Springer in their “Studies in Computational Intelligence” series.

 

Knowledge Discovery from Unstructured Data in Financial Services (W16)

Over the past decades, knowledge discovery has rapidly expanded beyond structured sources (such as database transactions) to encompass unstructured data such as text, images, and videos. Despite a growing body of research focusing on discovery from news, web, and social media data, its application to enterprise datasets such as legal documents, financial filings, and government reports still presents major challenges. This is partly due to strict precision and recall requirements, coupled with the sparsity of available signals.

In the financial services industry, data professionals devote a large amount of manpower to knowledge discovery and extraction from different data sources, such as financial filings, legal contracts, industry reports, and enterprise documents, before any analysis can be conducted. This manual extraction process is usually inefficient, error-prone, and inconsistent, compounding risk and resulting in bottlenecks in the operational productivity of financial institutions. These challenges and issues call for robust artificial intelligence algorithms and systems to help.

The second workshop on Knowledge Discovery from Unstructured Data in Financial Services brought together academic researchers and industry practitioners in a joint effort to address the challenges in the design and implementation of these AI techniques, including linguistic processing, semantic analysis, and knowledge representation and learning.

The workshop was announced on September 9, 2020. Following the tradition of the first KDF workshop at AAAI-20, the organizers focused on original contributions and applications in artificial intelligence, machine learning, natural language processing, big data analytics, and deep learning, with a focus on knowledge discovery in the financial services domain.

Based on feedback from the 2020 workshop, the organizers targeted studies in financial-domain-specific representation learning, open financial datasets and benchmarking, and transfer learning with applications to financial data. Although textual data is prevalent in challenges related to the finance business, the organizers also encouraged submissions of studies or applications pertinent to finance using other types of unstructured data such as financial transactions, sensors, mobile devices, satellites, social media, etc.

The resulting pool of submissions covered topics focused on a wide range of applications, including transaction modeling, fraud detection, predictive profiling based on SEC filings or earnings calls, and credit risk modeling. A program committee composed of 26 experts reviewed and accepted eleven long and short papers.

The workshop was held virtually on February 9, 2021, bringing together academic and industry researchers and practitioners in a full-day program including 11 presentations and 4 keynote talks. The opening remarks were delivered by Sameena Shah, Managing Director at J.P. Morgan AI Research, who welcomed the participants and encouraged the audience to follow the organizers’ continued efforts in future conferences.

With an audience of 40-50 researchers and practitioners calling in from more than 10 countries and 7 different time zones, the workshop explored themes at the cutting edge of research in the financial domain, including data augmentation, multi-modal learning, and knowledge representation. Johannes Hoffart, Senior Research Scientist at Goldman Sachs' R&D AI Group, demonstrated the importance of holistic document understanding. Heng Ji, Professor in the Computer Science Department of the University of Illinois at Urbana-Champaign, illustrated how complex events can be detected and represented using contextualized, temporal schemata. Gideon Mann, Head of Data Science at Bloomberg L.P., reviewed the vast array of research under the umbrella of dialogue profiling. Hannaneh Hajishirzi, Assistant Professor at the University of Washington, described the latest research on the challenging task of fact extraction and verification from scientific corpora, and its applications in the enterprise domain.

All papers, as well as selected presentation videos, are available on the workshop's website.

The workshop was co-chaired by Xiaomo Liu, Director of Data Science, S&P Global Ratings, Manuela M. Veloso, Managing Director, Head of J.P. Morgan AI Research, Sameena Shah, Managing Director, J.P. Morgan AI Research, Armineh Nourbakhsh, Executive Director, J.P. Morgan AI Research, Gerard de Melo, Professor, University of Potsdam (Chair for AI and Intelligent Systems), Le Song, Associate Professor, Georgia Tech, and Sr. Director of AI, Ant Financial, Quanzhi Li, Senior Manager, Alibaba Group. This report was written by Armineh Nourbakhsh.

 

Learning Network Architecture During Training (W17)

The 2021 Learning Network Architecture During Training workshop focused on a diverse range of approaches to avoiding the manual design of neural network architecture. A prominent benefit of learning network architecture during training is eliminating the need to guess the right network topology in advance, leading to savings in time and computational resources. The workshop highlighted some approaches that are qualitatively different from popular neural architecture search (NAS) methods.

The workshop featured two invited talks, several talks during the plenary session, and brief spotlight presentations by authors of submitted papers. Scott Fahlman (Carnegie Mellon University) opened the workshop with a keynote address that outlined the state of the field and described his Cascade Correlation algorithm. That algorithm was a very early deep learning technique that incrementally builds the network during training, one layer at a time, in a manner that adapts it to the problem at hand. Dean Alderucci (Carnegie Mellon University) discussed several intuitions underlying Cascade Correlation and extensions to that algorithm.
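
In loose, simplified form (a single output unit and none of the candidate pools of the full algorithm), Cascade-Correlation-style growth alternates between retraining the output layer and adding a frozen hidden unit trained to correlate with the current residual error. The sketch below is a paraphrase for intuition, not Fahlman's original algorithm.

```python
# Simplified Cascade-Correlation-style growth (loose paraphrase, one output).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2            # toy regression target

features = np.hstack([X, np.ones((200, 1))])  # inputs + bias feed the output
for unit in range(5):                         # grow five hidden units
    w, *_ = np.linalg.lstsq(features, y, rcond=None)  # (re)train output layer
    residual = y - features @ w
    # Train one candidate unit so its activation covaries with the residual.
    v = rng.normal(size=features.shape[1]) * 0.1
    for _ in range(200):                      # gradient ascent on covariance
        a = np.tanh(features @ v)
        cov = (a - a.mean()) @ (residual - residual.mean())
        grad = ((1 - a ** 2) * (residual - residual.mean())) @ features
        v += 0.01 * np.sign(cov) * grad / len(y)
    # Freeze the new unit; expose it as an extra feature to the output layer.
    features = np.hstack([features, np.tanh(features @ v)[:, None]])

w, *_ = np.linalg.lstsq(features, y, rcond=None)
print("final MSE:", np.mean((y - features @ w) ** 2))
```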

In her invited talk, Sindy Löwe (University of Amsterdam) presented Greedy InfoMax, a self-supervised representation learning approach that can overcome the need to train every candidate network in neural architecture search from scratch. In this method, a neural network is trained without labels and without end-to-end backpropagation, while achieving highly competitive results on downstream classification tasks.

In the second invited talk, Maithra Raghu (Google Brain) presented techniques that go beyond standard evaluation measures of networks, enabling quantitative analysis of the complex hidden representations of machine learning systems. This analysis provides insights into the underlying deep neural network models and a principled way to inform many aspects of their design, from characteristics when varying architecture width and depth, to signs of overfitting and catastrophic forgetting.

In the plenary session Nicholas Roberts (Carnegie Mellon University) presented Neural Architecture Search research motivated by the following question: can we enable users to build their own search spaces and discover the right neural operations given data from their specific domain? The research sets forth a construction that allows users to design their own search spaces adapted to the nature and shape of their data, to warmstart search methods using convolutions when they are known to perform well, and to discover new operations from scratch when they do not.

Liam Li (Carnegie Mellon University) argued for the study of single-level empirical risk minimization to understand NAS with weight-sharing. This reduces the design of NAS methods to devising optimizers and regularizers that can quickly obtain high-quality solutions to this problem. The theory and experiments demonstrated a principled way to codesign optimizers and continuous relaxations of discrete NAS search spaces.
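
A central ingredient in this line of weight-sharing NAS work is the continuous relaxation of a discrete operation choice: each edge computes a softmax-weighted mixture of candidate operations, so architecture parameters can be optimized by gradient descent, in the single-level case jointly with the network weights. The sketch below is a generic DARTS-style illustration, not the speaker's exact formulation.

```python
# Continuous relaxation of a discrete operation choice (DARTS-style sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Softmax-weighted mixture over candidate operations on one edge."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),                    # skip connection
        ])
        # Architecture parameters: one logit per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

op = MixedOp(channels=8)
x = torch.randn(2, 8, 16, 16)
# Single-level ERM: weights and alphas share one objective and one backward.
loss = op(x).pow(2).mean()
loss.backward()
print(op.alpha.grad)   # gradients reach the architecture parameters too
```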

Several brief spotlight presentations introduced the audience to a heterogeneous set of approaches to neural network design. Techniques reviewed included a joint optimization method for data augmentation policies and network architectures; a genetic algorithm that produces CNN topologies with low computational cost by using partial training to rank candidate architectures and regularization on training time; training the class of integer-valued neural networks with a new mixed integer programming (MIP) model; and using curriculum learning together with cascade learning to trade off generalization and training consumption in training the early layers of cascade learning networks.

The organizers were especially delighted at the vibrant turnout and the diverse selection of work. Papers and videos of the talks are posted on the conference web page. The co-chairs of the workshop were Scott Fahlman, Kate Farrahi, George Magoulas, Edouard Oyallon, Bhiksha Raj Ramakrishnan, and Dean Alderucci. Dean Alderucci ([email protected]) of Carnegie Mellon University authored this report.

 

Meta-Learning and Co-Hosted Competition (W18)

The performance of many machine learning algorithms depends highly upon the quality and quantity of available data and upon (hyper-)parameter settings. Deep learning methods, including convolutional neural networks, are known to be ‘data-hungry’ and require properly tuned hyperparameters. Meta-learning is a way to address both issues. Simple but effective approaches reported recently include pre-training models on similar datasets, as in Prototypical Networks, Matching Networks, or MAML. This way, a good model or good hyperparameters can be pre-determined, or learned model parameters can be transferred to the new dataset. As such, higher performance can be achieved with the same amount of data, or similar performance with less data (few-shot learning).
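
As a concrete instance of such approaches, the following minimal sketch shows the Prototypical Networks classification rule: embed support and query examples, average each class's support embeddings into a prototype, and classify queries by distance to the prototypes. The embedding network and episode here are stand-ins for illustration, not a full training pipeline.

```python
# Prototypical Networks sketch: classify queries by distance to class
# prototypes (the mean embedding of each class's support examples).
import torch

def proto_classify(support, support_labels, query, n_classes):
    # support: (n_support, d) embeddings; query: (n_query, d) embeddings.
    prototypes = torch.stack([
        support[support_labels == c].mean(dim=0) for c in range(n_classes)])
    dists = torch.cdist(query, prototypes)    # (n_query, n_classes)
    return (-dists).softmax(dim=1)            # nearer prototype = higher prob

embed = torch.nn.Linear(20, 5)                # stand-in embedding network
xs, ys = torch.randn(15, 20), torch.arange(3).repeat(5)  # 3-way 5-shot support
xq = torch.randn(4, 20)                       # query examples
probs = proto_classify(embed(xs), ys, embed(xq), n_classes=3)
print(probs.argmax(dim=1))                    # predicted classes
```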

Despite its enormous potential, the field of few-shot learning lacks good benchmark datasets and standardized tools. As with all thriving fields with many active researchers, it is hard to establish consensus about which techniques have potential and to make meaningful statements about what is state of the art. There are only a few tools that can be used with ease.

Additionally, there are only a few standardized benchmarks and uniform protocols. By co-hosting a competition, we aimed to lay the foundations for establishing consensual evaluation protocols and a wider range of benchmark sets.

The workshop featured four keynote speakers: Chelsea Finn (Stanford University, USA), Oriol Vinyals (Google Deepmind, UK), Lilian Weng (OpenAI, USA), and Richard Zemel (University of Toronto, CA).

The competition was won by the team “Meta_Learners” from Tsinghua University, and team “ctom” from the Czech Technical University was the runner-up. Both teams were invited to submit a paper to the conference workshop and got a plenary slot to describe their winning methods. The first team used an ensemble of meta-learners and cleverly integrated a time management tool in its model. The runner-up solution leveraged a recent few-shot learning technique based on the combination of power transform and optimal transport. Besides the invited contributions, we had an open call for papers. In total, we received 18 submissions to the workshop. Apart from the competition's winning team and runner-up, we accepted 14 more papers from a diverse set of authors across all continents.

The workshop ended with a panel discussion. We interviewed the panelists Chelsea Finn (Stanford University, USA), Frank Hutter (University of Freiburg, DE), Lars Kotthoff (University of Wyoming, USA), and Richard Zemel (University of Toronto, CA) to hear their working definitions of meta-learning, how they see the field evolving, and what they consider the most interesting and impactful challenges to overcome.

Several of the presentations, including all keynote talks, were recorded and can be viewed through our website: https://metalearning.chalearn.org/. This workshop and its co-hosted competition are the first in a series of competitions that we aim to organize. We hope that after this initial success many more people will get involved.

This workshop was organized by: Adrian El Baz (INRIA and Université Paris Saclay, France), Isabelle Guyon (INRIA and Université Paris Saclay, France, ChaLearn, USA), Zhengying Liu (INRIA and Université Paris Saclay, France), Jan N. van Rijn (LIACS, Leiden University, the Netherlands), Sebastien Treguer (INRIA and Université Paris Saclay, France, ChaLearn, USA), and Joaquin Vanschoren (Eindhoven University of Technology, the Netherlands). This report was written by Adrian El Baz, Isabelle Guyon, Zhengying Liu, Jan N. van Rijn, Sebastien Treguer, and Joaquin Vanschoren.

 

Meta-Learning for Computer Vision (W19)

The first workshop on Meta-Learning for Computer Vision (ML4CV) was organized at AAAI-21 on February 8, 2021. The objective of the workshop was to discuss recent advancements in the meta-learning domain with applications in computer vision. The workshop hosted eight keynote speakers and five full research papers, presenting the breadth of research progress in meta-learning.

Progress and advancements in deep learning in the last few years have significantly boosted the performance of several computer vision tasks, including object recognition, face recognition, semantic segmentation, and visual question answering. While computer vision algorithms have become exceptionally powerful, these modern systems are still surprisingly narrow compared to the way humans learn. For instance, in contrast to most current systems, which learn just a single model from a single data set, humans acquire knowledge from diverse experiences and tasks over several years. As an attractive alternative, meta-learning and lifelong learning (a.k.a. never-ending learning) have been emerging as new paradigms in the machine learning literature.

The paradigms of meta-learning and lifelong learning relate to the human ability to continuously learn new tasks from very limited labeled training data. In current computer vision practice, we train one architecture for every individual problem; as soon as the data distribution or the problem statement changes, the machine learning algorithm must be retrained or redesigned. Further, once the model is updated to incorporate a newer data distribution or task, the knowledge learned from the previous task is “forgotten.” Meta-learning instead focuses on designing models that utilize prior knowledge learned from other tasks to perform a new task. In a broad sense, meta-learning attempts to build models for “general artificial intelligence.”
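
To make the idea of reusing prior knowledge across tasks concrete, here is a minimal first-order sketch of optimization-based meta-learning (in the spirit of first-order MAML) on a toy family of linear-regression tasks. All names and hyperparameters are illustrative; this is not a system presented at the workshop.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_task():
        # A "task" is a random linear function y = a*x + b; the meta-learner
        # must adapt to a new (a, b) from a handful of examples.
        a, b = rng.uniform(-2, 2, size=2)
        def data(n):
            x = rng.uniform(-1, 1, size=(n, 1))
            return np.hstack([x, np.ones_like(x)]), a * x + b
        return data

    def mse_grad(w, X, y):
        # Gradient of mean squared error for the linear model X @ w.
        return 2 * X.T @ (X @ w - y) / len(y)

    w = np.zeros((2, 1))              # meta-parameters shared across tasks
    inner_lr, outer_lr, n_tasks = 0.1, 0.01, 4
    for step in range(2000):
        meta_grad = np.zeros_like(w)
        for _ in range(n_tasks):
            task = sample_task()
            X_s, y_s = task(5)        # support set: adapt
            X_q, y_q = task(10)       # query set: evaluate the adaptation
            # Inner loop: one gradient step specializes w to this task.
            w_task = w - inner_lr * mse_grad(w, X_s, y_s)
            # Outer loop: first-order approximation of the meta-gradient
            # (second derivatives are ignored, as in FOMAML).
            meta_grad += mse_grad(w_task, X_q, y_q)
        w -= outer_lr * meta_grad / n_tasks

After meta-training, w is an initialization from which a single inner-loop step fits a brand-new task well, which is exactly the "prior knowledge" the paragraph above describes.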

The workshop was organized as a combination of invited keynote presentations and paper presentations. The keynote talks focused on introducing efficient models of meta-learning and lifelong learning for computer vision. The speakers discussed algorithms for never-ending multimodal networks and robust approaches to address catastrophic forgetting. The keynote presentations also covered neural architecture search, AutoML, imitation learning, active domain generalization, meta domain generalization, and domain shift. The speakers also discussed meta-learning applications in visual domains including biometrics, language acquisition, medical imaging, and crowdsourced epidemiology for COVID-19.

The authors of research papers presented several cutting-edge topics, such as “Learning Invariant Representation for Continual Learning,” “FSIL: Few-shot and Incremental Learning for Image Classification,” and “Task Conflict in Meta Learning for Few-Shot Segmentation.” The algorithms and their applications in computer vision tasks show the advancements in this domain. The paper on large-scale neural architecture search with polyharmonic splines showed that the proposed approach can perform search directly on large-scale target datasets. Finally, an interesting application of meta-learning in processing spectral EEG images of patients with schizophrenia was presented.

Mayank Vatsa, Richa Singh, Nalini Ratha, and Vishal Patel served as the Co-Chairs of the ML4CV workshop and Surbhi Mittal served as the Web-Chair. Mayank Vatsa and Richa Singh authored this report.

 

Privacy-Preserving Artificial Intelligence (W21)

The availability of massive amounts of data, coupled with high-performance cloud computing platforms, has driven significant progress in artificial intelligence (AI) systems. It has profoundly impacted several areas, including computer vision, natural language processing, and transportation. However, the use of rich data sets also raises significant privacy concerns: they often reveal personal, sensitive information that can be exploited, without the knowledge and/or consent of the involved individuals, for purposes including monitoring, discrimination, and illegal activities.

The goal of PPAI-21 was to provide a platform for researchers, AI practitioners, and policymakers to discuss technical and societal issues and present solutions related to privacy in AI applications. The workshop focused on both the theoretical and practical challenges related to the design of privacy-preserving AI systems.

PPAI-21 was a two-day event that included a rich collection of contributed and invited talks, tutorials, poster presentations, and a panel discussion. The workshop brought together researchers from a variety of subfields of AI and security and privacy, including optimization, machine learning (ML), differential privacy, and multiparty computation.

The predominant theme of the contributed and invited talks was the development of privacy-preserving algorithms, often based on the framework of differential privacy, for private data release or private machine learning. The workshop accepted ten spotlight talks and fifteen poster presentations.
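
As a minimal flavor of the differential-privacy framework underpinning much of this work, here is the textbook Laplace mechanism for releasing a numeric query. It is a standard construction, not any particular contribution from the workshop; the function name and parameters are illustrative.

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
        # Adding Laplace(sensitivity / epsilon) noise to a query whose output
        # changes by at most `sensitivity` when one record changes yields
        # epsilon-differential privacy.
        if rng is None:
            rng = np.random.default_rng()
        return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Example: privately release a count query (sensitivity 1).
    private_count = laplace_mechanism(true_value=1234, sensitivity=1.0,
                                      epsilon=0.5)

Smaller values of epsilon give stronger privacy at the cost of noisier answers, which is the central trade-off the talks below grappled with in deployed systems.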

PPAI-21 included five invited talks on this research theme. John M. Abowd (U.S. Census Bureau) discussed the implementation of differential privacy used to protect the data products in the 2020 Census of Population and Housing. The talk focused on the high-level policy and technical challenges that the U.S. Census Bureau faced during the implementation. Ashwin Machanavajjhala (Duke University) talked about recent experiences and lessons learned while deploying differential privacy at scale. The talk highlighted how the process of deploying DP often differs from the idealized problem studied in the research literature, and illustrated a few key technical challenges encountered in these deployments. Steven Wu (Carnegie Mellon University) discussed the important task of generating differentially private synthetic data and how to leverage practical optimization heuristics to circumvent computational bottlenecks. Reza Shokri (National University of Singapore) introduced a new analysis to bound the privacy loss in differentially private machine learning models. Finally, Nicolas Papernot (University of Toronto) discussed the importance of designing new ML models for privacy-preserving learning and explored the synergies between privacy and generalization in machine learning.

The workshop also featured two tutorials: “A tutorial on privacy amplification by subsampling, diffusion, and shuffling” by Audra McMillan (Apple), which discussed the toolbox of “privacy amplification” techniques developed to simplify the privacy analysis of complicated differentially private mechanisms; and “Privacy and Federated Learning: Principles, Techniques, and Emerging Frontiers” by Brendan McMahan (Google), Kallista Bonawitz (Google), and Peter Kairouz (Google), which discussed theory, practice, and deployments of privacy-preserving federated learning and federated analytics.
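
To give a taste of the first tutorial's subject, a standard amplification-by-subsampling bound states that an (ε, δ)-DP mechanism run on a Poisson subsample that includes each record with probability q satisfies roughly (log(1 + q(e^ε − 1)), qδ)-DP. A one-line helper makes the effect tangible; this is the generic textbook bound, not the tutorial's own material.

    import math

    def amplified_epsilon(epsilon, q):
        # An (epsilon, delta)-DP mechanism applied to a Poisson subsample
        # that includes each record with probability q satisfies
        # (eps', q * delta)-DP with eps' = log(1 + q * (exp(epsilon) - 1)).
        return math.log(1 + q * (math.exp(epsilon) - 1))

    print(amplified_epsilon(1.0, 0.01))  # ~0.017: subsampling shrinks epsilon

Because each record only participates in the computation with small probability, the effective privacy loss per step drops sharply, which is what makes subsampled mechanisms such as private SGD practical.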

The workshop panel, composed of Rachel Cummings (Columbia University), Ander Steele (Tonic.ai), Aleksandra Korolova (University of Southern California), and Christine Task (Knexus Research), focused on the theme “Differential Privacy: Implementation, deployment, and receptivity. Where are we and what are we missing?” The panelists discussed the importance of raising awareness of the privacy risks associated with various computational models, the pressure that companies face to adopt privacy-preserving technologies, and the need for tools that simplify the adoption of differential-privacy algorithms.

PPAI-21 was extremely engaging and featured an outstanding program. The recordings of all contributed talks, invited talks, tutorials, and the panel discussion are available online at https://ppai21.github.io/. PPAI-21 was organized by Ferdinando Fioretto, Pascal Van Hentenryck, and Richard W. Evans. This report was written by Ferdinando Fioretto.

 

Reasoning and Learning for Human-Machine Dialogs (W22)

This brief report presents highlights from the day-long workshop at AAAI-2021 on the state of the art in methods and practice for human-machine conversation using reasoning and learning techniques.

The workshop on reasoning and learning for collaborative, dialog-based systems looks at methods at the rich intersection of these two areas for creating productive partnerships between humans and automated systems that make good decisions under uncertainty and constraints. Further, such systems need to be designed to work with people in a manner that lets them explain their reasoning, convince humans about choices among alternatives, and stand up to the ethical standards demanded in real-life settings. To discuss these topics, the fourth DEEP-DIAL 2021 workshop built on the successes of the first event at AAAI 2018 in New Orleans, the second at AAAI 2019 in Honolulu, and the third, DEEP-DIAL 2020, at AAAI-2020 in New York. Each of those editions had over 60 registrants from around the world, with programs consisting of invited talks, contributed talks (both oral and lightning) from peer-reviewed papers, and a panel discussion on a topical subject. DEEP-DIAL 2021 followed in the same vein with 5 keynotes, 6 talks from peer-reviewed contributed papers (2 full-paper presentations and 4 lightning presentations), and 2 panels on topics relevant to the design and implementation of dialog systems in the real world, and had over 500 registrations.

The day started with an invited talk titled “Towards Open World Video Event Understanding – Flexible Representations and Commonsense Priors” by Dr. Sudeep Sarkar of the University of South Florida, which introduced the idea of neuro-symbolic reasoning frameworks for integrating commonsense knowledge into generating flexible, compositional representations of multi-modal data such as images, text, and videos. Sudeep showed examples of such flexible representations, which could recognize events from unlabeled videos and images for human interaction tasks such as event recognition for open-world video captioning. Building on this encouraging avenue of research, the second talk was by Dr. Tathagata Chakraborti from IBM Research, whose presentation “How Symbolic AI and ML can combine for the Design of Conversational Agents at Scale” demonstrated how neuro-symbolic approaches can help design and scale conversational interfaces for enterprise applications. The third talk was from Dr. Kalai Ramea of Xerox PARC, who presented the practical problems in scaling and user modeling for building public-domain chatbots in her talk “BEBO: A benefits chatbot to help unemployed people during COVID-19 pandemic.” The talk outlined the background research necessary to understand user demographics and usage patterns when building conversational agents used by large, diverse user groups. The fourth talk, “Conversations with Data: Toward more contextual and interactive natural language interfaces” by Dr. Ahmed Awadallah of Microsoft Research, focused on the challenges and advances in building conversational agents that assist with complex tasks such as building SQL queries using neural networks trained end-to-end. Finally, the fifth talk was delivered by Prof. John Licato of the University of South Florida on “Do We Really Want Human-Machine Argumentation Dialogues?”, which analyzed the nuances of argumentation and reasoning and their associated biases in human dialog and provided a way forward for evaluating the reasoning and argumentation capabilities of conversational agents. Combined, these talks analyzed various aspects of conversational agents, such as multi-modal understanding, reasoning, user-centric design, and scalability, and presented the latest advances in building conversational agents to help the attendees gain a wider perspective.

The program also had authors of peer-reviewed papers discussing ideas for efficient learning of dialog policies in various data settings and learning architectures, as well as aspects like identifying humor and deployment considerations. They generated many questions and discussions. In addition to the technical discussions on the development and deployment of chatbots, the workshop had two panel discussions dedicated to two major questions. The first focused on the “Role of Chatbots during the COVID-19 Pandemic” and was moderated by Prof. Biplav Srivastava. The panelists, Dr. Kalai Ramea (Xerox PARC) and Venkataraman Sundareswaran (MCHC Fellow, AI & Machine Learning, World Economic Forum), discussed the advantages of having an automated dialog agent provide key information to the public on important topics such as claiming unemployment benefits. The second panel, titled “Chatbots and the Society,” was moderated by Dr. Imed Zitouni and had the keynote speakers as panelists. The discussion centered on the desiderata and ethical considerations when designing chatbots for general use in society, the need for argumentation, the potential of chatbots during COVID-19 and the lack of real success stories, and the sources of knowledge and the implicit biases they may introduce into the reasoning process when interacting with a user.

The event thus continued the momentum from the three earlier editions and built on it further with a mix of theoretical and practical discussions. Many attendees expressed thanks for the event and interest in future editions.

Sathyanarayanan Aakur, Ullas Nambiar, Imed Zitouni, and Biplav Srivastava served as cochairs of the workshop. The papers, presentations, and photos of the workshop are available at the workshop site (https://sites.google.com/view/deep-dial2021/).

Sathyanarayanan Aakur is an Assistant Professor in the Department of Computer Science at Oklahoma State University, where he works on multi-modal understanding with commonsense reasoning and planning for improved human-machine interaction through dialog. Biplav Srivastava is a Professor in the AI Institute at the University of South Carolina, where he works on goal-oriented human-machine collaboration via natural interfaces using domain and user models, learning, and planning.

This report was written by Sathyanarayanan N. Aakur and Biplav Srivastava.

 

Reinforcement Learning in Games (W23)

Games provide an abstract and formal model of environments in which multiple agents interact: each player has a well-defined goal, and rules describe the effects of interactions among the players. The first achievements in super-human play, in games such as chess and checkers, were attained with methods that relied on manually designed domain expertise. In recent years, we have seen general approaches that learn to play these games via self-play reinforcement learning (RL), as first demonstrated in backgammon. While progress has been impressive, we believe we have just scratched the surface of what is possible, and much work remains to be done to truly understand the algorithms and learning processes within these environments.
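
As a toy illustration of the self-play idea (unrelated to any specific system discussed at the workshop), the sketch below trains a single tabular Q-function on tic-tac-toe by having it play both sides; every design choice here is for exposition only.

    import random
    from collections import defaultdict

    # Single shared Q-table over (board, move) pairs; each state is scored
    # from the perspective of the player to move, so both sides share it.
    WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
            (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

    def winner(board):
        for a, b, c in WINS:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]
        return None

    Q = defaultdict(float)
    alpha, explore = 0.3, 0.1

    def choose(board, moves):
        if random.random() < explore:           # epsilon-greedy exploration
            return random.choice(moves)
        return max(moves, key=lambda m: Q[(board, m)])

    for episode in range(50_000):
        board, player, history = "." * 9, "X", []
        while True:
            moves = [i for i, c in enumerate(board) if c == "."]
            m = choose(board, moves)
            history.append((board, m))
            board = board[:m] + player + board[m + 1:]
            w = winner(board)
            if w or "." not in board:
                # Credit the final outcome backwards through the alternating
                # players: +1 for the winner's moves, -1 for the loser's.
                r = 0.0 if w is None else 1.0
                for s, a in reversed(history):
                    Q[(s, a)] += alpha * (r - Q[(s, a)])
                    r = -r
                break
            player = "O" if player == "X" else "X"

Because the learner is its own opponent, improvements on one side immediately create a harder curriculum for the other, which is the core dynamic behind self-play systems from backgammon onward.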

The main objective of the workshop was to bring researchers together to discuss ideas, preliminary results, and ongoing research in the field of reinforcement learning in games.

No formal report was filed by the organizers for this workshop.

Trustworthy AI for Healthcare (W26)

In this workshop, we brought together researchers in AI, healthcare, medicine, NLP, and social science, and facilitated discussions and collaborations on developing trustworthy AI methods that are reliable and more acceptable to physicians. The workshop featured 13 invited talks and 23 accepted papers.

AI for healthcare has emerged as a very active research area in the past few years and has made significant progress. AI methods have achieved human-level performance in skin cancer classification, diabetic eye disease detection, chest radiograph diagnosis, sepsis treatment, and other tasks. While existing results are encouraging, few clinical AI solutions are deployed in hospitals or actively used by physicians. A major problem is that existing clinical AI methods are not sufficiently trustworthy. For example, existing approaches make clinical decisions in a black-box way, which renders the decisions difficult to understand and less transparent. Existing solutions are also not robust to small perturbations or potentially adversarial attacks, which raises security and privacy concerns. As a result, physicians are reluctant to use these solutions, since clinical decisions are mission-critical and must be made with high trust and reliability.

In this workshop, we aimed to address the trustworthiness issues of clinical AI solutions. The one-day workshop covered topics including 1) interpretable AI methods for healthcare; 2) robustness of clinical AI methods; 3) medical-knowledge-grounded AI; 4) physician-in-the-loop AI; 5) security and privacy in clinical AI; 6) fairness in AI for healthcare; 7) ethics in AI for healthcare; 8) robust and interpretable natural language processing for healthcare; and 9) methods for robust weak supervision.

Professor Emily Fox from the University of Washington and Apple gave a talk titled “Combining mechanistic + ML models in health: When, how, why, and why not?”. Professor Russ Greiner from the University of Alberta gave a talk titled “Learning Models that Predict Objective (Actionable) Labels.” Professor Ricardo Henao from Duke University gave a talk titled “Interpretable Predictions for Vision Models with Proactive Pseudo-Interventions”. Professor Joyce Ho from Emory University gave a talk titled “Can you trust your phenotyping algorithm?”. Professor Heng Ji from the University of Illinois at Urbana-Champaign (UIUC) gave a talk titled “Biomedical Information Extraction with More Structures and External Knowledge.” Professor Sanmi Koyejo from UIUC gave a talk titled “Towards algorithms for measuring and mitigating ML unfairness.” Professor Yan Liu from the University of Southern California gave a talk titled “Deciphering Neural Networks through the Lenses of Feature Interactions.” Professor Susan Murphy from Harvard University gave a talk titled “Assessing Personalization in Digital Health.” Professor Sendhil Mullainathan from the University of Chicago gave a talk titled “Healthcare’s Buggy Data Problem.” Dr. Tristan Naumann from Microsoft Research gave a talk titled “Trustworthy NLP for Health.” Professor Lucila Ohno-Machado from UC San Diego gave a talk titled “Privacy considerations when sharing clinical data to build and evaluate AI models.” Professor Rajesh Ranganath from New York University gave a talk titled “A Deployed Model for COVID-19.” Professor Jimeng Sun from UIUC gave a talk titled “Spatio-temporal models for pandemic prediction.” Professor Eric Xing from MBZUAI/CMU gave a talk titled “On Trustworthiness of ML Algorithms and implications in AI-driven healthcare.”

The workshop concluded with a panel discussion. Professors Joyce Ho, Heng Ji, Yan Liu, Rajesh Ranganath, and Eric Xing were the panelists. The panelists shared their insights on the following questions: 1) What factors affect the trustworthiness of AI solutions in healthcare? 2) Can we measure trustworthiness objectively and quantitatively? 3) Can we better educate physicians to encourage them to trust AI solutions? 4) What are the open problems in building trustworthy AI solutions for healthcare?

The workshop was organized by Pengtao Xie (UC San Diego), Marinka Zitnik (Harvard), Byron Wallace (Northeastern University), Jennifer G. Dy (Northeastern University), and Eric Xing (MBZUAI/CMU). This report was written by Pengtao Xie, Assistant Professor at the University of California San Diego, and Eric Xing, Professor at MBZUAI and Carnegie Mellon University.

Biographies

 

Sathyanarayanan N. Aakur is an Assistant Professor in the Department of Computer Science at Oklahoma State University.

 

David W. Aha is at the Navy Center for Applied Research in AI at the Naval Research Laboratory.

 

Dean Alderucci is at Carnegie Mellon University.

 

Adrian El Baz is at INRIA and Université Paris Saclay, France.

 

Simone Bianco works in the Department of Functional Genomics and Cellular Engineering at IBM Almaden Research Center.

 

Tanmoy Chakraborty is an assistant professor and a Ramanujan Fellow at the Department of Computer Science and Engineering, IIIT Delhi, India.

 

Xin Cynthia Chen is a researcher on AI Safety at the University of Hong Kong, China.

 

Lukas Chrpa is at the Czech Technical University in Prague.

 

Huáscar Espinoza is a principal researcher of AI runtime safety and monitoring and enforcement at CEA, France.

 

Lixin Fan is at WeBank in China.

 

Ferdinando Fioretto is an Assistant Professor at Syracuse University.

 

Chulaka Gunasekara is a Research Staff Member at IBM Research AI, USA.

 

Isabelle Guyon is at INRIA and Université Paris Saclay, France.

 

José Hernández-Orallo is a Professor at the Universitat Politècnica de València, Spain, and a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence, UK.

 

Xiaowei Huang is a lecturer at the Department of Computer Science, University of Liverpool, UK.

 

Filip Ilievski is a Computer Scientist at the Information Sciences Institute within the University of Southern California.

 

Rahul Ladhania is an Assistant Professor of Health Informatics in the School of Public Health at the University of Michigan.

 

Jim Larimore works in Riiid Labs.

 

Zhengying Liu is at INRIA and Université Paris Saclay, France.

 

Hoon Pyo Jeon works for Stanford University.

 

Charles L. Ortiz Jr. works for the Palo Alto Research Center.

 

Tarun Kumar is at IIT Madras.

 

Prashan Madumal is at the University of Melbourne in the School of Computer Science and Information Systems.

 

Deepak Maurya is at IIT Madras.

 

Theodore Metzler is an adjunct professor at the Wimberly School of Religion at Oklahoma City University.

 

Martin Michalowski works in the School of Nursing at the University of Minnesota.

 

Reuth Mirsky is a Postdoctoral Researcher at The University of Texas at Austin.

 

Armineh Nourbakhsh is the Executive Director of AI Research at J.P. Morgan.

 

Balaraman Ravindran is at IIT Madras.

 

Jan N. van Rijn is at LIACS, Leiden University in the Netherlands.

 

Arash Shaban-Nejad is at the University of Tennessee Health Science Center-Oak Ridge National Laboratory Center for Biomedical Informatics.

 

Amirreza Shirani is at the University of Houston.

 

Kai Shu is a Gladwin Development Chair Assistant Professor in the Department of Computer Science at Illinois Institute of Technology, USA.

 

Richa Singh is the Head and Professor at the Department of CSE, IIT Jodhpur, India.

 

Biplav Srivastava is a Professor in the AI Institute at the University of South Carolina.

 

Sebastien Treguer is at INRIA and Université Paris Saclay, France, and ChaLearn, USA.

 

Silvia Tulli is at Marie Curie ITN, INESC-ID, IST, in the Department of Computer Science and Engineering.

 

Joaquin Vanschoren is at Eindhoven University of Technology, the Netherlands.

 

Mayank Vatsa is a Professor at the Department of CSE, IIT Jodhpur, India.

 

Amir Pouran Ben Veyseh is at the University of Oregon.

 

Lyle Ungar is a Professor in the Computer and Information Science Department at the University of Pennsylvania.

 

Mauro Vallati is at the University of Huddersfield.

 

Rosina Weber is at Drexel University in the College of Computing and Informatics.

 

Pengtao Xie is an Assistant Professor at the University of California San Diego.

 

Eric Xing is a Professor at MBZUAI and Carnegie Mellon University.

 

Han Yu is at the Nanyang Technological University in Singapore.