The online interactive magazine of the Association for the Advancement of Artificial Intelligence

Updated 10/25/2023: added the report on Reinforcement Learning in Games, which was mistakenly excluded from the initial publication.

David W. Aha, Dean Alderucci, Öznur Alkan, Guy Barash, Ahmad Beirami, Simone Bianco, Olivia Brown, Mauricio Castillo-Effen, Chi-Hua Chen, Xin Cynthia Chen, Elizabeth Daly, Julie Delon, Huáscar Espinoza, Ali Etemad, Scott E. Fahlman, Lixin Fan, Eitan Farchi, Ferdinando Fioretto, Behnam Hedayatnia, Chinmay Hegde, José Hernández-Orallo, Seán S. Ó hÉigeartaigh, James Holt, Xiaowei Huang, Mark Keane, Parisa Kordjamshidi, Tarun Kumar, Viet Dac Lai, Hung-yi Lee, Chia-Yu Lin, Viliam Lisy, Xiaomo Liu, Zhiqiang Ma, Prashan Madumal, Richard Mallah, Deepak Maurya, John McDermid, Martin Michalowski, Facundo Mémoli, Shane Moon, Tom Needham, Gabriel Pedroza, Kuan-Chuan Peng, Edward Raff, Balaraman Ravindran, Ahmad Ridley, Dennis Ross, Pritam Sarkar, Arash Shaban-Nejad, Onn Shehory, Adish Singla, Arunesh Sinha, Diane Staheli, Wolfgang Stammer, Sin G. Teo, Stefano Teso, Silvia Tulli, Amir Pouran Ben Veyseh, Yevgeniy Vorobeychik, Segev Wasserkrug, Allan Wollaber, Ling Wu, Ziyan Wu, Hongteng Xu, Han Yu
 

The Workshop Program of the Association for the Advancement of Artificial Intelligence’s Thirty-Sixth Conference on Artificial Intelligence was held virtually from February 22 – March 1, 2022. There were thirty-nine workshops in the program: Adversarial Machine Learning and Beyond, AI for Agriculture and Food Systems, AI for Behavior Change, AI for Decision Optimization, AI for Transportation, AI in Financial Services: Adaptiveness, Resilience & Governance, AI to Accelerate Science and Engineering, AI-Based Design and Manufacturing, Artificial Intelligence for Cyber Security, Artificial Intelligence for Education, Artificial Intelligence Safety, Artificial Intelligence with Biased or Scarce Data, Combining Learning and Reasoning: Programming Languages, Formalisms, and Representations, Deep Learning on Graphs: Methods and Applications, DE-FACTIFY: Multi-Modal Fake News and Hate-Speech Detection, Dialog System Technology Challenge, Engineering Dependable and Secure Machine Learning Systems, Explainable Agency in Artificial Intelligence, Graphs and More Complex Structures for Learning and Reasoning, Health Intelligence, Human-Centric Self-Supervised Learning, Information-Theoretic Methods for Causal Inference and Discovery, Information Theory for Deep Learning, Interactive Machine Learning, Knowledge Discovery from Unstructured Data in Financial Services, Learning Network Architecture during Training, Machine Learning for Operations Research, Optimal Transports and Structured Data Modeling, Practical Deep Learning in the Wild, Privacy-Preserving Artificial Intelligence, Reinforcement Learning for Education: Opportunities and Challenges, Reinforcement Learning in Games, Robust Artificial Intelligence System Assurance, Scientific Document Understanding, Self-Supervised Learning for Audio and Speech Processing, Trustable, Verifiable and Auditable Federated Learning, Trustworthy AI for Healthcare, Trustworthy Autonomous Systems Engineering, and Video Transcript Understanding. This report contains summaries of the workshops, which were submitted by most, but not all, of the workshop chairs. 

 

Adversarial Machine Learning and Beyond (W1) 

Although machine learning (ML) approaches have demonstrated impressive performance on various applications and made significant progress for AI, the potential vulnerabilities of ML models to malicious attacks (e.g., adversarial/poisoning attacks) have raised severe concerns in safety-critical applications. Adversarial ML can also raise data privacy and ethical issues when ML techniques are deployed in real-world applications. Counter-intuitive behaviors of ML models can substantially erode public trust in AI techniques, and a rethinking of machine learning/deep learning methods may be urgently needed. This workshop aimed to discuss important topics in adversarial ML to deepen our understanding of ML models in adversarial environments and to build reliable ML systems in the real world. 

No formal report was filed by the organizers for this workshop. 

 

AI for Agriculture and Food Systems (W2)  

An increasing world population, coupled with finite arable land, changing diets, and the growing expense of agricultural inputs, is poised to stretch our agricultural systems to their limits. By the end of this century, the earth’s population is projected to increase by 45% while available arable land decreases by 20%, along with changes in which crops those lands can best support; this creates an urgent need to enhance agricultural productivity by 70% before 2050. Current rates of progress are insufficient, making it impossible to meet this goal without a technological paradigm shift. There is increasing evidence that enabling AI technology has the potential to aid in this paradigm shift. This AAAI workshop aimed to bring together researchers from the core AI/ML, robotics, sensing, cyber-physical systems, agricultural engineering, plant sciences, genetics, and bioinformatics communities to facilitate the increasingly synergistic intersection of AI/ML with agriculture and food systems. Outcomes included outlining the main research challenges in this area, potential future directions, and cross-pollination between AI researchers and domain experts in agriculture and food systems. 

No formal report was filed by the organizers for this workshop. 

 

AI for Behavior Change (W3) 

This workshop built upon the successes of and lessons learned from the previous year’s AI for Behavior Change workshop, and focused on advances in AI and ML that aimed to (1) design and target optimal interventions; (2) explore bias and equity in the context of decision-making; and (3) exploit datasets in domains spanning mobile health, social media use, electronic health records, college attendance records, fitness apps, and more for causal estimation in behavioral science. 

No formal report was filed by the organizers for this workshop. 

 

AI for Decision Optimization (W4)  

The AAAI-22 Workshop on AI for Decision Optimization explored how AI could be used to enable much more widespread application of mathematical optimization to solve real-world decision-making problems. It consisted of a keynote, a special talk, three demonstrations of working systems, and a panel-led brainstorming session. The talks and demos focused on novel approaches for enabling such widespread use, including: the use of reinforcement learning and the automated generation of reinforcement learning pipelines; simplifying, through knowledge-driven approaches, the modeling of optimization problems; and using machine learning to learn the constraints and/or parameters of optimization models. The brainstorming session outlined some of the primary gaps in the wider use of optimization: the complexity of real-world decision-making scenarios; the quality of optimization models learnt from data; the need for explainability; and structured methods for testing the recommendations of optimization models. A primary outcome of the workshop will be a document summarizing the brainstorming session and outlining the opportunities and gaps in this area, which we expect will be useful in driving future research and collaboration on this important topic.  

The AAAI-22 Workshop on AI for Decision Optimization (see https://research.ibm.com/haifa/Workshops/AAAI-22-AI4DO/), which was held on February 28, 2022, as part of the AAAI-22 conference, explored how AI could be used to enable much more widespread application of mathematical optimization to solve real-world decision-making problems. 

The workshop had two submitted talk sessions in which nine peer-reviewed papers were presented, a keynote given by Pascal Van Hentenryck from the Georgia Institute of Technology, a special talk providing a survey of constraint learning approaches, a demo session in which three running demo systems were presented, and a brainstorming session, led by a panel. 

Topics of accepted papers included applying reinforcement learning to decision-making problems, automating the creation of reinforcement learning pipelines, explainability for decision optimization, using machine learning to derive constraints and objectives from data, end-to-end data to decision pipelines, and Bayesian optimization.
The demo session featured several implementations that enable the creation of optimization models from a combination of machine learning and knowledge specification, as the sketch below illustrates.  
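
To make the idea concrete, here is a minimal, hypothetical sketch of constraint learning in the spirit of those demos: fitting a surrogate linear constraint from labeled feasible/infeasible decisions and embedding it in a small optimization model. The dataset, the linear form of the constraint, and the objective are illustrative assumptions, not any demoed system's actual pipeline.

```python
# Hypothetical learn-then-optimize sketch: learn a constraint from labeled
# historical decisions, then embed it in a linear program.
import numpy as np
from sklearn.linear_model import LogisticRegression
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Step 1: labeled historical decisions (1 = feasible, 0 = infeasible).
X = rng.uniform(0, 10, size=(200, 2))
y = (X[:, 0] + 2 * X[:, 1] <= 12).astype(int)        # hidden true constraint

# Step 2: fit a linear separator as a surrogate constraint w.x + b >= 0.
clf = LogisticRegression().fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

# Step 3: optimize a known linear objective subject to the learned
# constraint (linprog minimizes and expects A_ub @ x <= b_ub, so the
# feasibility condition w.x + b >= 0 becomes -w.x <= b).
res = linprog(c=[-3, -1], A_ub=[-w], b_ub=[b], bounds=[(0, 10), (0, 10)])
print("recommended decision:", res.x)
```

In a knowledge-driven system, such learned constraints would be combined with declaratively specified domain knowledge; this sketch shows only the learn-then-optimize step.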

The brainstorming session, titled “The Future of Decision Optimization”, was led by a panel consisting of David Bergman (University of Connecticut), Bistra Dilkina (USC), Pascal Van Hentenryck (Georgia Institute of Technology), Frank Hutter (University of Freiburg), Michele Lombardi (University of Bologna), Segev Wasserkrug (IBM Research) and Holly Wiberg (MIT). The brainstorming discussion focused on the new opportunities arising from the better fusion of data, machine learning, reinforcement learning, and optimization, and on the major gaps that must be addressed for optimization to be more widely used in real-world decision making. The primary gaps discussed in the panel included:  

  • The complexity of real-world decision-making scenarios makes modeling and solving real-world problems difficult.  
  • Addressing the uncertainty associated with learning optimization models from data. Specifically, ensuring that the solutions obtained from such optimization models provide valuable recommendations, despite the uncertainty resulting from learning parts of the model.  
  • The need for explainability, so that users can understand the rationale behind the recommendations made by mathematical optimization models.  
  • The need for structured methodologies for testing the recommendations of the optimization models and understanding the implications of these recommendations.  

The workshop was co-chaired by Bistra Dilkina from USC and Segev Wasserkrug from IBM Research. Other members of the organizing committee were Andrea Lodi from Cornell Tech and Dharmashankar Subramanian from IBM Research. Attendance was high, at over 70 people at peak, and the talks were of high quality. In addition, the discussion in the brainstorming session was lively and constructive, providing an opportunity for interaction and the sharing of different viewpoints despite the workshop being held in a virtual setting. Moreover, it was agreed that the directions and gaps from the brainstorming session will be summarized in depth as a basis for future discussion and potential collaborations.  

The workshop was extremely successful, and we expect it to lead to new work and collaborations in this promising and exciting area.  

Segev Wasserkrug of IBM Research wrote this report.  

 

AI for Transportation (W5) 

The AAAI 2022 Workshop on AI for Transportation was held virtually on February 28, 2022, in conjunction with the 36th AAAI Conference on Artificial Intelligence. The workshop solicited both original research and review articles on various disciplines of intelligent transportation systems (ITS) applications. Twenty-three papers were accepted and presented at the workshop, addressing AI techniques for ITS spatio-temporal data analyses, advanced traffic management systems, advanced traveler information systems, advanced public transportation services, advanced information management services, and more. 

The workshop received 71 submissions, of which 23 papers were accepted after review, for an acceptance rate of 32.39%. The authors’ affiliations spanned 11 regions/countries: China, Hong Kong (China), France, India, Iran, Japan, Singapore, South Korea, Sweden, Turkey, and the USA. All accepted papers were presented in two sessions: twelve papers in Session (I) and eleven papers in Session (II). Furthermore, three high-quality papers were selected as the best papers; their titles and authors are listed below. 

  1. “Regional Complementary Aggregation Network for Object Counting,” by Shubo Wang, Guoli Song, Jifan Zhang, and Jie Chen.  
  2. “Deep Semi-supervised Learning with Double-Contrast of Features and Semantics,” by Quan Feng, Jiayu Yao, Zhisong Pan, and Guojun Zhou.  
  3. “DADFNet: Dual Attention and Dual Frequency-Guided Dehazing Network for Video-Empowered Intelligent Transportation,” by Yu Guo, Wen Liu, Jiangtian Nie, Lingjuan Lyu, Zehui Xiong, Jiawen Kang, Han Yu, and Dusit Niyato.  

Session (I) included 12 papers and was held from 09:00 to 12:00, chaired by Ling Wu. The titles of the papers in Session (I) are as follows. 

  1. A Review on Finger Vein Recognition 
  2. Spatial-Temporal Enhancement Network for Traffic Forecasting 
  3. GPR Based Traffic Rate Prediction and Coordination for High Efficient Congestion Control 
  4. Regional Complementary Aggregation Network for Object Counting 
  5. Solving Transportation Ontology Meta-Matching Problem Through Compact Fireworks Algorithm 
  6. D-BML: Dynamic Local Stochastic Gradient Descent for Decentralized Distributed Deep Learning 
  7. AI for Safe and Efficient Airport Surface Operations 
  8. Integrated In-vehicle Monitoring System Using 3D Human Pose Estimation and Seat Belt Segmentation 
  9. Deep Semi-supervised Learning with Double-Contrast of Features and Semantics 
  10. HC-TUS: Human Cognition-based Trust Updating Scheme for AI-enabled VANET 
  11. Unsupervised Driving Behavior Analysis using Representation Learning and Exploiting Group-based Training 
  12. DADFNet: Dual Attention and Dual Frequency-Guided Dehazing Network for Video-Empowered Intelligent Transportation 

Session (II) included 11 papers and was held from 13:00 to 15:40, chaired by Chia-Yu Lin. The titles of the papers in Session (II) are as follows. 

  1. Dynamic Ambulance Redeployment via Multi-armed Bandits 
  2. Natural Language Generation for Transportation Domain 
  3. Generalized Nested Rollout Policy Adaptation with Dynamic Bias for Vehicle Routing 
  4. A Comparative Study on Basic Elements of Deep Learning Models for Spatial-Temporal Traffic Forecasting 
  5. mTransDial: Multilingual Dataset for Transport Domain Dialog Systems 
  6. Hemangioma segmentation based on small sample ultrasound images 
  7. A cellular signals based traffic density estimation method 
  8. PSANet: Pixel-Specific Attention Network for Image Semantic Segmentation 
  9. Water Level Prediction Model Based on STGCN 
  10. Car Following Model and Its Stability Analysis Based on Braking Safety Distance 
  11. Claim Prediction Using Two Dimensional Optimization Approach (TDOA) 

We thank the authors who submitted their valuable papers to the AAAI 2022 workshop on AI for Transportation. Furthermore, we thank all chairs and reviewers of AAAI 2022 for their efforts and support. This work was partly supported by the Natural Science Foundation of China under Grant No. 61906043. 

The workshop was organized by Wenzhong Guo (Fuzhou University), Chin-Chen Chang (Feng Chia University), Chi-Hua Chen (Fuzhou University), Haishuai Wang (Fairfield University & Harvard University), Feng-Jang Hwang (University of Technology Sydney), Cheng Shi (Xi’an University of Technology), and Ching-Chun Chang (National Institute of Informatics, Tokyo). The workshop was co-chaired by Ling Wu (Fuzhou University), Chia-Yu Lin (Yuan Ze University), and Kit Qichun Zhang (University of Bradford). This report was written by Chi-Hua Chen, Ling Wu, and Chia-Yu Lin. 

 

AI in Financial Services: Adaptiveness, Resilience & Governance (W6) 

The financial services industry relies heavily on AI and machine learning solutions across all business functions and services. However, most models and AI systems are built with conservative operating-environment assumptions due to regulatory compliance concerns. In recent years, major shifts have occurred across the globe, triggered by the COVID-19 pandemic. These abrupt changes impacted the environmental assumptions used by AI/ML systems and their corresponding input data patterns. As a result, many AI/ML systems faced serious performance challenges and failures. Industry-wide reports highlight large-scale remediation efforts to fix the failures and performance issues. Yet most of these efforts highlighted the challenges of model governance and compliance processes. 

No formal report was filed by the organizers for this workshop. 

 

AI to Accelerate Science and Engineering (W7) 

Scientists and engineers in diverse domains are increasingly relying on AI tools to accelerate scientific discovery and engineering design. This workshop aimed to bring together researchers from AI and diverse science/engineering communities to achieve the following goals: 

1) Identify and understand the challenges in applying AI to specific science and engineering problems
2) Develop, adapt, and refine AI tools for novel problem settings and challenges
3) Build community and provide education to encourage collaboration between AI researchers and domain area experts 

No formal report was filed by the organizers for this workshop. 

 

AI for Design and Manufacturing (W8) 

The inaugural AI for Design and Manufacturing (ADAM) Workshop at AAAI-22 aimed to bring together researchers from core AI/machine learning, scientific computing, and design; to cross-pollinate collaborations between AI researchers and domain experts; and to identify open problems of common interest. The workshop was an unqualified success, with over 50 attendees participating in plenary/keynote sessions, lightning presentations, virtual poster sessions, and panel discussions. 

Cutting-edge advances in engineering applications such as manufacturing and materials science increasingly seek artificial intelligence (AI)-based solutions to enhance design, development, and production processes. However, AI-based techniques have yet to fulfill their promise in achieving these advances. Key obstacles include a lack of high-quality data, the algorithmic challenges of embedding domain knowledge into AI, and the difficulty of high-dimensional design-space exploration. 

The first AI for Design and Manufacturing (ADAM) Workshop, conducted virtually as part of AAAI-22, was organized in order to bring together world experts in core AI, scientific computing, geometric modeling, design, and manufacturing. The primary objectives were: to outline the major research challenges in this rapidly growing sub-field of AI, cross-pollinate collaborations between AI researchers and domain experts in engineering design and manufacturing, and sketch open problems of common interest.  

This one-day workshop consisted of two plenary talks, four keynote talks, and twenty-four lightning talks by authors of accepted papers. All papers accepted to the workshop were peer-reviewed by a technical program committee, and paper authors were invited to a two-hour (virtual) poster session for in-depth discussions. The workshop concluded with a stimulating panel discussion involving experts from academia, industry, and government. 

The morning plenary talk was delivered by Professor Nathan Kutz (University of Washington), titled “The Future of Governing Equations.” Professor Kutz provided an enlightening overview of the emerging field of scientific machine learning and how data-driven strategies are increasingly being used to uncover the dynamics of complex multiscale systems. He also provided several recipes for systematic design and analysis of data-driven models for physical processes.  

The afternoon plenary talk was delivered by Professor Elizabeth Holm (Carnegie Mellon University), titled “Computer Vision in Material Science.” Professor Holm showcased a wide variety of applications in materials characterization (particularly at the microstructure scale) that can effectively leverage modern computer vision techniques. She made the case that AI advances such as transfer learning, data re-use, and physics-based modeling could likely be key in furthering progress in this domain. 

Beyond the plenary talks, there were four keynotes by experts spanning various backgrounds: Peter Woolridge (Monolith AI), Adarsh Krishnamurthy (Iowa State), Benji Maruyama (AFRL), and Brian Giera (Lawrence Livermore). The talks covered topics ranging from enhanced learning procedures based on differentiable geometric modeling to AI-based monitoring of advanced manufacturing (AM) processes. 

The workshop also featured lightning (five-minute) talks from the authors of all contributed papers. Video recordings of the workshop were made available online. A lunchtime poster session was organized in GatherTown to facilitate lively conversations between participants, and a plethora of ideas were exchanged during the presentations themselves (via the chat feature in Zoom). 

The workshop concluded with an exciting panel discussion with speakers from academia, government, and industry, during which several questions were debated: the role of high-quality datasets in design applications, how best to incorporate physical constraints and resource budgets within AI models, the challenges in bridging gaps between different fields, and grand challenges over the next decade. 

Overall, the workshop was an unqualified success. Over 50 attendees participated in the various sessions, and interest from participants was high throughout the duration of the day. If this type of momentum can be sustained, workshops such as ADAM will become a regular fixture in AAAI in the years to come. 

This workshop was organized by Aarti Singh, Aditya Balu, Baskar Ganapathysubramanian, Chinmay Hegde, Mark Fuge, Xian Yeow Lee, Olga Wodo, Payel Das, Soumalya Sarkar, and Zhanhong Jiang. This report was written by Chinmay Hegde.  

 

Artificial Intelligence for Cyber Security (W9)  

The AICS 2022 workshop focused on research and applications of AI to problems in cyber security, including machine learning, game theory, threat modeling, and anomaly detection. It was held during the thirty-sixth AAAI Conference on Artificial Intelligence. Talks emphasized the application of AI to operational problems and surveyed systems and basic research on techniques that enable resilience of cyber-enabled systems. A session was also devoted to a challenge problem. 

The workshop began with a keynote by Dr. Misty Blowers, entitled “Challenges facing Adoption of AI for the Department of Defense.” Dr. Blowers stressed the need for a practical look at problems in AI for cyber-systems to help address, among other things, data quality issues. She advocated for an all-encompassing, end-to-end AI-cyber architecture that tackles all aspects of the AI pipeline, thus enabling mission fight-through and eliminating weak links. 

Within the anomaly detection session, the first paper focused on detecting anomalous behavior of wireless base stations in wireless transmissions. This behavior can be represented as a multivariate time series, and the authors presented a novel attention mechanism to detect anomalies both between and within the time series. A second paper discussed work on detecting anomalies in fiber-optic networks, which can be caused by rare attacks. The approach used autoencoder-based anomaly detection; this work is in the early stages of deployment. 
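
For readers unfamiliar with the technique, below is a minimal sketch of autoencoder-based anomaly detection in the spirit of that second paper; the architecture, dimensions, and threshold are illustrative assumptions, not the presented system.

```python
# Train an autoencoder on normal data only; flag inputs it reconstructs poorly.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_scores(model, data):
    # High reconstruction error suggests a deviation from normal behavior.
    with torch.no_grad():
        return ((model(data) - data) ** 2).mean(dim=1)

torch.manual_seed(0)
normal = torch.randn(512, 16)          # stand-in for normal telemetry windows
model = AutoEncoder(n_features=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                   # train to reconstruct normal data well
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(normal), normal)
    loss.backward()
    opt.step()

# Calibrate a threshold on normal data, then score new samples against it.
threshold = anomaly_scores(model, normal).quantile(0.99)
print(anomaly_scores(model, torch.randn(8, 16)) > threshold)
```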

The next session was on theoretical aspects. The first paper showed a novel potential application of topological representations of data (topological data analysis, or TDA) for representing data in cyber-security problems, such as event data from a cyber-system. The next paper analyzed the game in which malware is run in a sandbox to safely determine the malicious nature of an executable; interesting characteristics of the equilibrium were analyzed, and efficient computation of the equilibrium was proposed. Finally, a paper on community discovery using tail probabilities of a binomial model of cross-community edge generation was presented; the discovered communities could be used for network segmentation to make networks more resilient to attacks. 
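
As a rough illustration of that last idea, the hypothetical snippet below scores how surprising the number of edges between two node groups is under a simple binomial null model; the null model, the grouping, and the decision rule are assumptions made for illustration, not the presented paper's algorithm.

```python
# Tail probability of observed cross-group edges under a binomial null model.
import networkx as nx
from scipy.stats import binom

def cross_edge_tail_probability(G: nx.Graph, group_a, group_b) -> float:
    group_a, group_b = set(group_a), set(group_b)
    possible = len(group_a) * len(group_b)            # possible cross edges
    observed = sum(1 for u, v in G.edges()
                   if (u in group_a and v in group_b)
                   or (v in group_a and u in group_b))
    # Null model: each possible edge appears independently with the
    # graph's overall edge density.
    n = G.number_of_nodes()
    density = G.number_of_edges() / (n * (n - 1) / 2)
    # P(X >= observed) for X ~ Binomial(possible, density): a large tail
    # probability means the groups are no more connected than chance,
    # supporting a segmentation cut between them.
    return binom.sf(observed - 1, possible, density)
```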

Next was a keynote, “Machine Learning (for) Security: Lessons Learned and Future Challenges,” by Dr. Battista Biggio (University of Cagliari), a renowned expert on adversarial learning and its application to malware detection. Dr. Biggio’s talk surveyed the history of adversarial attacks and the progress made since their inception, weaving in the constraints and unique discontinuities that arise when working with executable binaries. The talk covered the computational difficulty of finding adversarial examples as an optimization problem, nuances in how performing the optimization can impact the success of adversarial attacks, and the careful balance between building robust defenses and building robust attacks.  

The next session was on practical applications of AI for cybersecurity. The session opened with the presentation of a malware dataset notable for its quantity of samples (3x larger than any prior dataset) and the quality of its labels, which elucidate both multiple aliases of malware families and links to threat reports. This was followed by the presentation of a learning paradigm in a multi-step security game in which an attacker learns strategies that effectively manipulate a defender’s learning and anticipation of the attacker’s behaviors throughout the course of the game, with experimental results indicating significant benefits for the attacker. Next, a continuous word vectorization model was presented that is much less sensitive to adversarial text perturbations, as applied to “engagement bait” classifiers on social media data. Finally, a relatively simple but reliable technique was proposed to assist in the detection of out-of-distribution data for dropout Bayesian neural networks and was effectively demonstrated on multiple datasets, including malware detection.    

The 2022 AICS workshop ended with a Cybersecurity Challenge Problem session, during which the results of the First TTCP Cyber Autonomy GYM for Experimentation (CAGE) Cybersecurity Challenge were summarized. The Technical Cooperation Program (TTCP) is an international organization for defense scientific and technical information exchange and shared research activities among Australia, Canada, New Zealand, the UK, and the USA. The CAGE challenge problem was introduced at the 2021 IJCAI Adaptive Cyber Defense workshop to encourage the AI research community to develop solutions to cybersecurity problems. During the AAAI AICS workshop session, AICS organizer Ahmad Ridley and session speaker Damian Marriott, a researcher in the Australian Government Defence Science and Technology Group, introduced the CAGE problem. Mr. Marriott provided the public GitHub website (https://github.com/cage-challenge/cage-challenge-1) containing the CAGE prototype. He also stated the CAGE challenge problem’s goal of developing RL solutions for distributed, adaptive, autonomous cyber defense, outlined the different aspects of the CAGE simulation environment, and summarized the different RL algorithm approaches submitted by each participant. Finally, he provided the final challenge rankings, based on the total reward achieved by each RL approach; an academic team from the UK, named Mindrake, won the challenge. At the end, Mr. Marriott advertised the Second TTCP CAGE Challenge problem (https://github.com/cage-challenge/cage-challenge-2), to be officially announced at the 2022 ICML Machine Learning for Cybersecurity workshop in July 2022.   

AICS 2022 was the sixth AI for Cyber Security workshop. It has run annually in recent years, except for 2021, when the workshop could not be organized due to COVID-related difficulties. The workshop had a healthy attendance of 70-80 people on average. This year, AICS was co-chaired by James Holt, Edward Raff, Ahmad Ridley, Dennis Ross, Arunesh Sinha, Diane Staheli, William W. Streilein, Milind Tambe, Yevgeniy Vorobeychik, and Allan Wollaber. Workshop proceedings are available at https://arxiv.org/html/2202.14010. This report was written by James Holt, Edward Raff, Ahmad Ridley, Dennis Ross, Arunesh Sinha, Diane Staheli, and Allan Wollaber.  

 

Artificial Intelligence for Education (W10)  

Technology has transformed over the last few years, turning futuristic ideas into today’s reality. AI is one of these transformative technologies: it is achieving great successes in various real-world applications and making our lives more convenient and safer. AI is shaping the way businesses, governments, and educational institutions do things, and it is making its way into classrooms, schools, and districts across many countries. 

In fact, increasingly digitized education tools and the popularity of online learning have produced an unprecedented amount of data that provides invaluable opportunities for applying AI in education. Recent years have witnessed growing efforts from the AI research community devoted to advancing education, and promising results have been obtained in solving various critical problems in the field.  

Despite gratifying achievements that have demonstrated the great potential and bright prospects of introducing AI into education, developing and applying AI technologies in educational practice is fraught with unique challenges, including, but not limited to, extreme data sparsity, a lack of labeled data, and privacy issues. Hence, this workshop focused on research progress in applying AI to education and discussed recent advances in handling the challenges encountered in AI educational practice. 

No formal report was filed by the organizers for this workshop. 

 

Artificial Intelligence Safety (W11)  

The Fourth AAAI-22 Workshop on Artificial Intelligence Safety (SafeAI 2022, http://www.safeaiw.org) was co-located with the Thirty-Sixth AAAI Conference on Artificial Intelligence, held virtually on February 28th (half a day) and March 1st (full day). SafeAI focuses on the intersection of AI and safety, as well as broader strategic, ethical, and policy-oriented aspects of AI safety as a whole. 

The workshop is open to the exploration of safety in a wide range of AI paradigms, considering systems that are specific for a particular application, and those that are more general, which may lead to unanticipated risks. The workshop welcomes ideas that bridge the short-term with the long-term perspectives, idealistic goals with pragmatic solutions, operational with policy issues, and industry with academia, in order to build, evaluate, deploy, operate and maintain AI-based systems that are truly safe. 

In this fourth edition we received 53 submissions and accepted 18 full papers, 3 talks and 12 posters, resulting in a full-paper acceptance rate of 34.0% and an overall acceptance rate of 62.3%. 

The SafeAI 2022 program was organized into five thematic sessions, each following a highly interactive format structured into short pitches followed by a joint panel to discuss questions and common issues. The five full-paper sessions were: 

Session 1 discussed bias, fairness and value alignment, covering behavior and preference manipulation, bias detection and multiclass fairness. 

Session 2 explored interpretability and accountability issues, such as characterizing driver behavior, and identifying the legal culpability of side effects. 

Session 3 focused on robustness and uncertainty, including adversarial sequence generation, mitigating hard boundaries in decision-tree-based uncertainty estimates, quantifying the importance of latent factors and analyzing the robustness to outliers in uncertainty estimation. 

Session 4 covered several aspects of safe reinforcement learning. The session discussed imperfect safety-aware RL, as well as safety constraints, hierarchical frameworks and game-theoretic perspectives in RL. 

Session 5 dealt with AI testing and assessment, exploring the effects of model compression on CNNs, the differential assessment of AI agents, and the identification of ethical dilemmas in autonomous systems through adaptive stress testing.  

The two keynote speakers, Matthew Dwyer (University of Virginia) and Ganesh Pai (KBR/NASA), covered distribution-aware test adequacy for neural networks and the certification of machine learning in aeronautical applications, respectively. There were also three invited talks, and poster pitches with open poster sessions on the virtual platform. Finally, this year the workshop had two special sessions:  

  • EnnCore: a UK EPSRC-funded research network that addresses the fundamental problem of guaranteeing safety, transparency, and robustness in neural-based architectures  
  • Confiance.AI: the largest initiative in Europe for developing a software platform for trustworthy AI engineering. 

Nine co-chairs served SafeAI 2022: Gabriel Pedroza, Huáscar Espinoza, José Hernández-Orallo, Xin Cynthia Chen, Xiaowei Huang, Mauricio Castillo-Effen, John McDermid, Richard Mallah, and Seán S. Ó hÉigeartaigh. The papers were published at CEUR-WS, Vol. 3087: http://ceur-ws.org/Vol-3087/. This report was written by Gabriel Pedroza, Huáscar Espinoza, José Hernández-Orallo, Xin Cynthia Chen, Xiaowei Huang, Mauricio Castillo-Effen, John McDermid, Richard Mallah, and Seán S. Ó hÉigeartaigh. 

 

Artificial Intelligence with Biased or Scarce Data (W12) 

The workshop on Artificial Intelligence with Biased or Scarce Data was held online in conjunction with AAAI 2022 on February 28, 2022. The goal of this workshop was to provide a focused venue for academic and industry researchers and practitioners to discuss research challenges and solutions associated with learning AI models under the overarching requirements of fairness, data efficiency, and trustworthiness. 

As the AI community makes rapid progress in producing algorithms with human-level performance, it is critical that we take a step back and assess what the objective performance reported in the academic literature means in the context of real-world systems and applications, and what we consequently promise to the consumer world. As a concrete example, it is one thing for a social media organization to use an algorithm to automatically identify a person of interest in pictures uploaded to its platform. The use of algorithms in making life-changing decisions in areas such as healthcare (e.g., should a certain treatment be administered?) or jurisprudence (e.g., should this person be released from prison?) is a totally different ballgame. The goal of this workshop was to provide a unique platform for researchers to better understand and tackle biased and scarce data problems in AI. 

The attendees of the workshop included academic and industry researchers and practitioners from diverse subfields of AI such as computer vision, machine learning, natural language processing, medical imaging, and speech processing. One key topic of the papers presented at the workshop was quantifying human-like or gender bias in face recognition and verification systems. The workshop also included three invited keynote talks. A talk by Rama Chellappa (Johns Hopkins University) elaborated on recent developments in mitigating bias in AI systems, a step beyond merely measuring it. David Crandall (Indiana University) gave a talk examining training data bias through the eyes of a child. A third talk, by Bernt Schiele (Max Planck Institute for Informatics), addressed imbalance problems, robustness, and interpretability of deep learning in computer vision. 

Another key topic was contrastive learning and pretraining with small amounts of data and/or labels; the best paper winners, Daniel Y. Fu (Stanford University) and Mayee F. Chen (Stanford University), worked along these lines and presented a novel method to prevent class collapse in supervised contrastive learning. Beyond contrastive learning with small amounts of data, methods for handling the small-data regime were also discussed in a variety of applications, including semantic segmentation, super-resolution for magnetic resonance imaging, extracting salient facts from company reviews, and text-to-speech synthesis. Data-efficient learning was also a key topic, discussed in the context of RGB and infrared imagery classification. 
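
For context, below is a minimal sketch of the standard supervised contrastive (SupCon) loss, the baseline on which the class-collapse work builds; it is not the authors' proposed remedy, and the shapes and values are illustrative.

```python
# Supervised contrastive loss: pull same-label embeddings together,
# push different-label embeddings apart.
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.1):
    """features: (N, D) embeddings; labels: (N,) integer class ids."""
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature          # pairwise similarities
    not_self = ~torch.eye(len(features), dtype=torch.bool)
    pos_mask = (labels[:, None] == labels[None, :]) & not_self
    # Log-softmax over all non-self pairs for each anchor.
    sim = sim.masked_fill(~not_self, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(~not_self, 0.0)    # avoid -inf * 0 = nan
    # Negative mean log-likelihood of each anchor's positives.
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    return -(log_prob * pos_mask).sum(dim=1).div(pos_counts).mean()

feats = torch.randn(8, 32)                             # illustrative embeddings
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(supcon_loss(feats, labels).item())
```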

The workshop attendees gave positive feedback and expressed their appreciation both during and after the event, along with strong interest in attending future events on similar topics; this shows the AI community’s need and appetite for further discussion of data scarcity and bias issues in AI systems. The workshop organizers committed themselves to organizing future events that serve the AI community even better. 

Kuan-Chuan Peng and Ziyan Wu served as the co-chairs of this workshop and wrote this report. The papers of the workshop are planned for publication in the MDPI Computer Sciences and Mathematics Forum.  

 

Combining Learning and Reasoning: Programming Languages, Formalisms, and Representations (W13) 

This was the first workshop held under the current title, abbreviated CLeaR. However, CLeaR continues the previous efforts of some of the current organizers and advisory committee members in holding the Declarative Learning Based Programming (DeLBP) workshop in past years, co-located with the IJCAI and AAAI conferences four times. Following the ideas of DeLBP, the focus of CLeaR is on AI’s integrative paradigms, highlighting new and innovative abstractions, languages, and formalisms that facilitate combining learning and reasoning. 

The workshop aimed to bridge formalisms for learning and reasoning such as neural and symbolic approaches, probabilistic programming, differentiable programming, statistical relational learning, and the use of non-differentiable optimization in deep models. It highlighted the importance of declarative languages that enable such integration by covering multiple formalisms at a high level, and pointed to the need for a new generation of ML tools that help domain experts design complex models in which they can declare their knowledge about the domain and use data-driven learning models based on various underlying formalisms. The workshop looked at the importance of integrative paradigms through the lens of real-world problems, with the goal of solving a new wave of AI applications and making AI accessible to domain experts by providing the means for 1) high-level and declarative expression of the problem (i.e., the user specifies what she wants to achieve rather than how to achieve it); 2) incorporating prior knowledge (e.g., laws of physics or certain biological properties); 3) reasoning over uncertain data/predictions; 4) dealing with complex structures such as graphs and relations (e.g., to represent a social network or molecule); and 5) modularity, allowing the components of a program to be easily switched or reused. 

The CLeaR workshop successfully brought together researchers from diverse backgrounds and perspectives, including statistical relational learning, symbolic AI, deep learning, cognitive science and human reasoning, optimization, formal methods and program synthesis, and applications such as natural language understanding and materials science. The goal was to discuss languages, formalisms, representations, and techniques appropriate for combining learning and reasoning, highlight the challenges of real-world applications, and point to new research directions. Seven speakers were invited to give talks, and fifteen of the submitted papers were accepted for presentation. 

Sriraam Natarajan from The University of Texas at Dallas, coming from a statistical relational learning background, talked about the importance of combining symbolic reasoning with learning from complex and noisy data for interactive and human-allied AI. 

Kevin Ellis from Cornell University talked about DreamCoder, which combines symbolic and neural learning to solve a range of programming problems and to mitigate the combinatorial search difficulties of program synthesis. 

Gary Marcus from Robust.AI, well-known for his debates on deep learning limitations, raised issues about relying on transformer-based language models for deep natural language understanding and questioned the capability of gigantic deep architectures as the foundation for general intelligence. He also promoted the direction of integrative paradigms for AI. 

Monireh Ebrahimi from IBM Research talked about techniques and architectures that help in injecting symbolic knowledge into neural models to enable them to perform deductive reasoning. 

Carla Gomes from Cornell University introduced the deep reasoning networks framework for the integration of deep learning and reasoning. She discussed techniques that can incorporate prior knowledge into deep models and form interpretable latent spaces. She pointed to their recent research on the application of deep reasoning networks to material discovery. 

Bryan Wilder from Carnegie Mellon University presented his research and ideas from an optimization perspective and discussed techniques that can bridge various classes of discrete optimization problems to the continuous optimization that happens in deep learning. He discussed a framework for combining learning and symbolic reasoning while retaining end-to-end differentiable training. 

Vered Shwartz from the University of British Columbia discussed neural NLP models that are enhanced with symbolic knowledge to exploit both the generalizability of neural representations and the structure and precision of symbolic knowledge. She presented results on various NLP problems based on neuro-symbolic architectures. 

The accepted papers were presented in a spotlight session and in two separate GatherTown poster sessions. We held a discussion panel at the end of the workshop that included our invited speakers and Guy Van den Broeck, a member of the CLeaR advisory committee. The discussions centered on the functionalities of each paradigm (symbolic vs. subsymbolic), why a combined paradigm is needed, the importance of knowledge representation, the kinds of benchmark problems that can better express these needs, and many other questions. 

Parisa Kordjamshidi, Behrouz Babaki, Sebastijan Dumančić, Hossein Rajaby Faghihi, Hamid Karimian, and Alex Ratner formed the organizing committee and served as workshop co-chairs. The advisory committee included Guy Van den Broeck and Dan Roth. CLeaR was supported by Snorkel AI, which sponsored registration for some participants. Parisa Kordjamshidi wrote this report.  

 

Deep Learning on Graphs: Methods and Applications (W14) 

Our program consisted of two sessions: an academic session and an industry session. The academic session focused on the most recent research developments in graph neural networks (GNNs) across various application domains, while the industry session emphasized practical industrial product developments using GNNs. We also held a panel discussion on the current state and future of GNNs in both research and industry. In addition, several distinguished invited speakers gave talks on frontier topics in GNNs. 

No formal report was filed by the organizers for this workshop. 

 

DE-FACTIFY: Multi-Modal Fake News and Hate-Speech Detection (W15) 

Combating fake news is one of today’s burning societal challenges, as it is difficult to expose false claims before they do significant damage. Automatic fact/claim verification has recently become a topic of interest among diverse research communities. Research efforts and datasets exist for text-based fact verification, but multi-modal or cross-modal fact verification has received much less attention. This workshop encouraged researchers from interdisciplinary domains working on multi-modality and/or fact-checking to come together and work on multimodal (images, memes, videos, etc.) fact-checking. The workshop also addressed multimodal hate-speech detection, an important problem that has not received much attention. More broadly, learning joint modalities is of interest to both the Natural Language Processing (NLP) and Computer Vision (CV) communities. 

No formal report was filed by the organizers for this workshop. 

 

Dialog System Technology Challenge (W16) 

This report details the tenth Dialog System Technology Challenge (DSTC10) workshop, held at AAAI-2022. The Dialog System Technology Challenge (DSTC) has been a premier research competition for dialog systems since its inception in 2013, and this workshop marks the tenth time the challenge has been held. Like its predecessors, it focused on end-to-end dialog tasks, exploring how end-to-end technologies can be applied to dialog systems in a pragmatic way. This year the challenge focused on multiple important aspects of dialog and conversational AI systems, including the utilization of diverse knowledge resources, the incorporation of varied multimodal signals (memes, vision, and audio), and automatic evaluation, as targeted by five parallel tracks.
For this workshop challenge, we received five track proposals and put them through a formal peer review process focusing on each task’s potential for broad interest from the research community, the practical impact of the task outcomes, and continuity with previous challenges. In the end, we accepted all five tracks, which included two newly introduced tasks and three follow-up tasks from previous challenges. 

The five tracks are as listed: Internet Meme Incorporated Open-domain Dialog, Knowledge-grounded Task-oriented Dialogue Modeling on Spoken Conversations, Situated Interactive Multimodal Conversational AI, Reasoning for Audio Visual Scene-Aware Dialog, and Automatic Evaluation and Moderation of Open-domain Dialogue Systems. 

The Internet Meme Incorporated Open-domain Dialog challenge explored dialog systems having conversations via text and internet memes. The Knowledge-grounded Task-oriented Dialogue Modeling on Spoken Conversations challenge benchmarked the robustness of knowledge-grounded conversational models against the gaps between written and spoken conversations. The Situated Interactive Multimodal Conversational AI challenge introduced the task of co-reference resolution and dialog state tracking for multi-modal task-oriented systems within a photorealistic virtual environment. The Reasoning for Audio Visual Scene-Aware Dialog challenge explored having multi-modal dialog systems generate responses given a video without the use of human annotated captions. Finally, the Automatic Evaluation and Moderation of Open-domain Dialogue Systems task aimed to develop robust automatic open-ended dialog evaluation metrics along with developing open-domain dialog systems that generate appropriate responses while avoiding offensive responses.
A total of 284 participants registered for DSTC10, and 48 participated in the final challenge. We held a two-day wrap-up workshop to review the state-of-the-art systems, share novel approaches to the challenge tasks, and discuss future directions for dialog technology. We accepted 36 system papers reporting the systems submitted to DSTC10, as well as five papers describing the different tracks. We had two invited speakers: Professor Verena Rieser from Heriot-Watt University and Jianfeng Gao from Microsoft Research. Additionally, we held a poster session and a panel discussion on the future of dialog research. Two sponsors supported the costs of the workshop: Amazon as a Gold sponsor and Meta as a Bronze sponsor. To initiate DSTC11, we held a session introducing the eight new track proposals submitted.
Our session chairs included Paul Crook (Meta), Zekang Li (University of Chinese Academy of Sciences), Seokhwan Kim (Amazon), Yang Liu (Amazon), Satwik Kottur (Meta), Yun-Nung (Vivian) Chen (National Taiwan University), Chiori Hori (Mitsubishi Electric Research Laboratories), Chen Zhang (National University of Singapore), Rafael Banchs (Intapp Inc), and Koichiro Yoshino (Nara Institute of Science and Technology). The DSTC organizing committee included Koichiro Yoshino (general chair); Yun-Nung (Vivian) Chen and Paul Crook (workshop chairs); Satwik Kottur and Jinchao Li (publication chairs); and Behnam Hedayatnia and Seungwhan (Shane) Moon (publicity chairs). This report was written by Behnam Hedayatnia from Amazon and Shane Moon from Meta. 

 

Engineering Dependable and Secure Machine Learning Systems (W17) 

The AAAI-2022 Workshop on Engineering Dependable and Secure Machine Learning Systems (EDSMLS-2022) was the fourth edition of the EDSMLS series. It was held virtually on March 1, 2022, as part of AAAI 2022. Although virtual, the event was well attended by researchers from academia and industry and inspired lively discussions addressing adversarial, reliable, and secure machine learning (ML). The program included six presentations of peer-reviewed, accepted papers. 

Machine learning components have become prevalent across software systems. The quality of such components affects the quality of the software system. Hence, ML-based systems must meet dependability, security and quality requirements.  

Standard software quality assurance notions and practices (e.g., functional correctness, code coverage) may be applicable to some parts of the system; however, they can become irrelevant for ML components for several reasons. First, ML systems are non-deterministic by nature. Second, they largely re-use high-quality implementations of ML algorithms. Third, the semantics of the learned models are typically incomprehensible, especially when deep learning methods are applied. Thus, novel methods, methodologies, and tools are needed to address the quality and reliability challenges of ML systems.  

The ubiquity of highly connected ML software inevitably exposes that software to attacks. Classical security vulnerabilities are relevant; however, ML techniques introduce additional weaknesses, e.g., sensitivity to data manipulation. Hence, there is a need for research as well as practical solutions to ML adversarial attacks. 

The EDSMLS-2022 workshop focused on such topics. It comprised original contributions presenting problems and suggesting solutions related to dependability, quality assurance of, and adversarial attacks on, ML systems. Among others, topics such as data and concept drift, adversarial attacks, data perturbations, causality analysis, and domain adaptation were presented. The presentations were followed by discussions that produced new insights in a quest for novel, practical solutions. 

EDSMLS-2022 was organized by Eitan Farchi (IBM Research), Onn Shehory (Bar Ilan University), and Guy Barash (Western Digital). Links to workshop papers are available on the workshop’s website. Revised versions are intended for publication in a journal special issue. Eitan Farchi, Onn Shehory, and Guy Barash wrote this report.  

 

Explainable Agency in Artificial Intelligence (W18) 

The AAAI-22 Workshop on Explainable Agency in Artificial Intelligence was held virtually on February 28 and March 1, 2022. This workshop’s objective was to discuss the topic of explainable agency and to bring together researchers and practitioners from diverse backgrounds to share challenges, discuss new directions, and present recent research in the field. 

As Artificial Intelligence (AI) begins to impact our everyday lives, industry, government, and society with tangible consequences, it becomes increasingly important to support a two-way dialogue between humans and automated systems. Explainable Agency captures the idea that AI systems will need to be trusted by human agents and, as autonomous agents themselves, “must be able to explain their decisions and the reasoning that produced their choices” (Langley et al., 2017). This workshop was aimed at bringing together researchers and practitioners working on different facets of these problems, and from diverse backgrounds, to share challenges, new directions, and recent research in the field. We especially welcomed research from disciplines including but not limited to artificial intelligence, human-computer interaction, human-robot interaction, cognitive science, human factors, and philosophy. 

To guarantee high-quality presentations, every submitted paper received at least two reviews and one meta-review. We had a diverse program committee of 40 members from all over the world and from different disciplines (not only computer science). We received 34 submissions and accepted 15 of them; the authors of the accepted papers had 13 minutes each to present their work. The presentations focused on different aspects of explainable agency, including counterfactuals, fairness, human evaluations, and iterative and active communication among agents. Several methods were proposed for generating the explanations of AI agents, including an interactive Constraint-Based Reasoning (C-BR) system for generating counterfactual explanations, an Inverse Reinforcement Learning (IRL) approach for modelling human knowledge and providing relevant counterfactual examples, algorithms for learning interpretable decision tree policies for multi-agent systems, and the estimation and communication of uncertainty by Deep Learning (DL) agents. Research results were also presented that described human subject studies, investigated human annotations for interpretable Natural Language Processing (NLP), or measured human task-prediction performance when receiving a description of the capability of a black-box AI agent. In addition, a live demonstration was given of an explainable AI system that performs human-machine teaming in a real-time strategy game. 
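
As background on the counterfactual idea that recurs in these papers, below is a minimal, hypothetical sketch of counterfactual explanation via nearest-example search; the model, data, and distance metric are illustrative assumptions, and this is not one of the presented systems.

```python
# Find the closest known example that the model classifies differently:
# "had the input looked like this, the decision would have changed."
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)            # toy decision boundary
model = DecisionTreeClassifier(max_depth=3).fit(X, y)

def counterfactual(x, model, candidates):
    """Return the closest candidate the model classifies differently from x."""
    original = model.predict([x])[0]
    flipped = candidates[model.predict(candidates) != original]
    return flipped[np.argmin(np.linalg.norm(flipped - x, axis=1))]

x = np.array([0.5, 0.4])
cf = counterfactual(x, model, X)
print(f"prediction {model.predict([x])[0]} -> counterfactual example {cf} "
      f"with prediction {model.predict([cf])[0]}")
```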

Paper presentations were interspersed with invited talks and panel discussions. The workshop included three invited speakers who are experts in their fields. Cynthia Rudin, Professor of Computer Science and Engineering at Duke University, leader of Duke’s Interpretable Machine Learning Lab, and recipient of the AAAI Squirrel AI Award, described her group’s work on inherently interpretable models, more specifically decision-tree splitting criteria and global tree optimization methods such as the hierarchical objective lower bound, leaf bound, and equivalent-points bound. Chenhao Tan, Assistant Professor in the Department of Computer Science at the University of Chicago and leader of the Chicago Human+AI lab (CHAI), discussed his work on human interactions with explanations of AI classifiers and how these systems should be evaluated in user studies. Eric Ragan, Assistant Professor and leader of the Interactive Data and Immersive Environments (Indie) Lab, introduced work on how humans provide explanations for sequential decision-making tasks and on the human perception of intelligent systems that require human explanations to improve their performance. 

This workshop also included two panel discussions. The first panel, on Interactive Explainability, included panelists Cristina Conati (University of British Columbia), Mark Neerincx (TU Delft), and Nava Tintarev (Maastricht University); it focused on human-agent interaction in explanation scenarios and addressed the best ways to approach this problem to build interactive explainable agents. The second panel, on Explainability and Causality, included panelists Subbarao Kambhampati (Arizona State University), Prashan Madumal (University of Melbourne), and Laurie Paul (Yale University); it discussed associational and causal modelling methods for explainable agency as well as the benefits and limitations of research employing causal methods. Both panels highlighted that the recent literature on explainability has forged ahead without perhaps sufficiently drawing on the insights gained from other areas of AI with a longer track record on these problems (e.g., early expert systems and planning research on explanations, case-based explanation, intelligent tutoring systems, and recommender systems). The panelists also argued that current algorithmic development on explainability does not include informative user studies, a factor that impedes downstream deployment in real-world applications. The second panel further discussed the definition of causal explanations in sequential decision-making agents and the difference between having causal knowledge and being able to transfer this knowledge through explanations. Approaches that provide explanations from observational data and from data collected through direct interaction with the environment were also discussed through the lens of the three levels of the causal hierarchy (Pearl, 2019). 

The workshop ended with an analysis of the presented papers and a discussion on the lessons learned from the invited talks and panels. The discussion also focused on the need for a unified evaluation framework for explainable agency. Silvia Tulli, Prashan Madumal, Mark Keane, and David W. Aha served as co-chairs of this workshop and wrote this report. The proceedings of this workshop can be found on the workshop’s website.  

 

Graphs and More Complex Structures for Learning and Reasoning (W19) 

The second workshop on Graphs and More Complex Structures for Learning and Reasoning (GCLR) was held to stimulate interdisciplinary discussion among researchers from varied disciplines such as computer science, mathematics, statistics, and physics. The workshop drew strong participation from several parts of the world. 

In the opening keynote, Bruno Ribeiro, assistant professor at Purdue University, delivered a talk on the relationships between higher-order structures such as hypergraphs and graph representation learning. The opening talk set the tone for discussions between speakers and participants that continued until the end of the workshop. Discussions on hypergraphs continued in the talk delivered by Jamie Haddock, CAM Assistant Professor at UCLA. Dr. Haddock gave insights into the problem of community detection in hypergraphs using a spectral method that exploits information from the eigenvectors of the non-backtracking (Hashimoto) matrix. 
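
For intuition, the Hashimoto matrix of a graph is indexed by directed edges, with a nonzero entry wherever one directed edge can follow another without immediately reversing; its spectrum is known to be informative for community detection. Below is a minimal, illustrative sketch for ordinary graphs (a toy example of ours, not Dr. Haddock’s hypergraph method):

```python
import numpy as np

def hashimoto_matrix(edges):
    """Non-backtracking (Hashimoto) matrix of an undirected graph.

    Rows/columns are indexed by directed edges (u, v); entry
    B[(u, v), (v, w)] = 1 iff w != u (no immediate backtracking).
    """
    directed = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    index = {e: i for i, e in enumerate(directed)}
    B = np.zeros((len(directed), len(directed)))
    for (u, v) in directed:
        for (x, w) in directed:
            if x == v and w != u:
                B[index[(u, v)], index[(x, w)]] = 1
    return B

# Two triangles joined by a bridge: a toy graph with two communities.
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
B = hashimoto_matrix(edges)
eigvals = np.linalg.eigvals(B)
print(sorted(eigvals.real, reverse=True)[:3])  # leading spectrum carries community structure
```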

The workshop held four exciting talks on deep learning-based approaches for different complex graphical structures. Niloy Ganguly, professor at IIT Kharagpur, delivered a talk on modeling molecules and crystals as graphs and, in combination with deep learning-based models, generating new molecules and predicting crystal properties. Professor Ganguly also highlighted key challenges, such as DFT error bias, lack of interpretability, and algorithmic transparency, which are not specific to the application at hand but arise in general when working with graph neural networks (GNNs). Continuing the discussion of GNNs, Stefanie Jegelka, associate professor at MIT, presented her work on improving the expressive power of GNNs by using the power of recursion and looking at graphs from a spectral perspective. In practice, models that can be scaled to learn embeddings for very large graphs are needed. Srinivasan Parthasarathy, professor at The Ohio State University, presented his work on scaling graph representation learning algorithms in an implementation-agnostic fashion. Another variant of neural network architecture, based on algebraic topology, was presented by our keynote speaker, Santiago Segarra, assistant professor at Rice University. Professor Segarra demonstrated the effectiveness of this architecture in extrapolating trajectories on synthetic and real datasets, with particular emphasis on the gains in generalizability to unseen trajectories. 

In addition to the talks, there were many high-quality submissions to the workshop. Our program committee consisted of more than 60 researchers with diverse areas of expertise. Every paper submission received at least three, and many received five, constructive reviews. Based on the reviews, 14 high-quality papers were accepted. Authors of full papers presented their work in talks, and authors of short papers and extended abstracts presented their work in the poster session. 

The workshop concluded with a panel discussion among the keynote speakers on “Learning and Reasoning with Complex Graphs – a Multi-Disciplinary Challenge.” The discussion brought up major challenges in the area, such as learning-based vs. model-driven approaches and their applications in complex networks. The panelists shared their perspectives on these topics, which sparked interesting debates, and offered suggestions to inspire researchers working on interdisciplinary problems. All the keynote talks and the panel discussion are available on our YouTube channel. 

The audience was very attentive and asked interesting questions during the keynote talks and panel discussion, which made the virtual event very interactive. We believe some of the attendees made new friends at the GCLR workshop, which may lead to future collaborations.  

The GCLR workshop was co-organized by Balaraman Ravindran (IIT Madras), Kristian Kersting (TU Darmstadt), Sriraam Natarajan (Univ. of Texas Dallas), Ginestra Bianconi (Queen Mary University of London), Philip S. Chodrow (UCLA), Tarun Kumar (IIT Madras), Deepak Maurya (Purdue University), and Shreya Goyal (IIT Madras). Deepak Maurya, Tarun Kumar, and Balaraman Ravindran wrote this report. 

 

Health Intelligence (W20) 

The 6th International Workshop on Health Intelligence was held virtually on February 28th and March 1st, 2022. This workshop brought together a wide range of computer scientists, clinical and health informaticians, researchers, students, industry professionals, national and international health and public health agencies, and NGOs interested in the theory and practice of computational models of population health intelligence and personalized healthcare to highlight the latest achievements in the field. 

Population health intelligence includes a set of activities to extract, capture, and analyze multi-dimensional socio-economic, behavioral, environmental, and health data to support decision-making to improve the health of different populations. Advances in artificial intelligence tools and techniques and in internet technologies are dramatically changing the ways that scientists collect data and how people interact with each other and with their environment. The Internet is also increasingly used to collect, analyze, and monitor health-related reports and activities and to facilitate health-promotion programs and preventive interventions. In addition, to tackle several issues in personalized healthcare, information technology will need to evolve to improve communication, collaboration, and teamwork between patients, their families, healthcare communities, and care teams involving practitioners from different fields and specialties. 

This workshop followed the success of previous health-related AAAI workshops, including those focused on personalized (HIAI 2013-16) and population (W3PHI 2014-16) healthcare and the five subsequent joint workshops held at AAAI-17 through AAAI-21 (W3PHIAI-17 – W3PHIAI-21). This year’s workshop brought together a wide range of participants from the multidisciplinary field of medical and health informatics. Participants were interested in the theory and practice of computational models of web-based public health intelligence as well as personalized healthcare delivery. The papers (full and short) and the posters presented at the workshop covered a broad range of topics within Artificial Intelligence, including Natural Language Processing, Prediction, Deep Learning, Computer Vision, Knowledge Discovery, and applications to COVID-19. 

The workshop included three invited talks: (1) Eran Halperin (Optum Labs, UCLA) spoke on using whole-genome methylation patterns as a biomarker for EHR imputation, (2) Irene Chen (MIT) described how to leverage machine learning towards equitable healthcare, and (3) Michal Rosen-Zvi (IBM Research) presented promising results from the acceleration of biomarker discovery in multimodal data of cancer patients. The workshop program also included a hackallenge (hackathon + challenge) focused on developing automated methods to diagnose dementia based on language samples. In this challenge, teams developed standardized analysis pipelines for two publicly available datasets (the hackathon). The Pittsburgh and Wisconsin Longitudinal Study corpora were provided for this hackallenge by the DementiaBank consortium. Additionally, each team provided a sound basis for meaningful comparative evaluation in the context of a shared task (the challenge). The workshop participants engaged in discussions around many cutting-edge topics affecting the way evidence is produced for and delivered in healthcare to improve patient outcomes. 

Martin Michalowski, Arash Shaban-Nejad, and Simone Bianco served as co-chairs of this workshop, and all the workshop papers were published by Springer in its “Studies in Computational Intelligence” series. The organizing committee consisted of Martin Michalowski, Arash Shaban-Nejad, Simone Bianco, Szymon Wilk, David L. Buckeridge, and John S. Brownstein. Martin Michalowski, Arash Shaban-Nejad, and Simone Bianco wrote this report.  

 

Human-Centric Self-Supervised Learning (W21) 

The main goal of the HC-SSL workshop was to promote the exchange of ideas, recent methods, and findings in the area of self-supervised learning for human-centric data. While the area of self-supervised representation learning has witnessed tremendous advances in recent years in computer vision and natural language processing, the workshop aimed to shed more light on specific areas such as (1) pretext tasks and loss functions for self-supervised learning that pertain to human-centric computing (such as face and gesture analysis, wearable time-series, speech analysis, and health analytics); and (2) the broader umbrella of responsible AI for human-centric self-supervised representation learning, namely implications for ethics, explainability, fairness, robustness, and security. 

The workshop successfully brought together researchers from academia and industry in the areas of self-supervised learning and human-centric computing. It was a very well-attended and successful event. A total of 10 submitted papers were accepted after peer-review and presented as posters and short, 5-minute, pre-recorded videos. A total of 8 invited talks were presented by renowned researchers in the area, in the format of 30-minute talks followed by 10 minutes of Q/A and discussions. The workshop took place in two sessions, morning and afternoon.
The morning session kicked off with opening remarks by Ali Etemad (workshop general co-chair and morning session chair), followed by 4 exciting invited talks. In the first invited talk, Christoph Lütge (Technical University of Munich) discussed the “Ethical Aspects of Human-Centric AI.” This was followed by Hatice Gunes’s (University of Cambridge) insights into “Fairness, Explainability, and Facial Affect.” Next, Björn W. Schuller (Imperial College London) presented his latest work on state-of-the-art “Self-Supervision for Audio.” The final invited talk of the morning session was presented by Abhinav Gupta (Carnegie Mellon University & Facebook AI), who described his work on “Self-Supervised Learning: Towards Rich Representations in the Wild?” To wrap up the morning session, short videos of 5 of the accepted papers were played for the audience.
The afternoon session was chaired by Ahmad Beirami (workshop general co-chair and afternoon session chair). In this session, 4 invited talks were presented. First, Kristen Grauman (University of Texas at Austin) presented her work on “Audio-Visual Self-Supervision.” This was followed by a talk by Shalini De Mello (Nvidia) on “Self-Supervision for Face and Gaze Understanding.” Next, Natasha Jaques (Google Brain) described her work on “Reinforcement Learning from Human Affective Cues.” The final invited talk was presented by Graham Taylor (University of Guelph & Vector Institute), who discussed his recent work on “Understanding the Impacts of Diverse Training Sets in Self-Supervised Learning and Predicting Parameters for Unseen Deep Architectures.” Following these 4 talks, the short videos of the remaining 5 accepted papers were played for the audience. Following a short break, the poster session was hosted in Virtual Chair, where authors got a chance to further discuss their work with interested attendees. Finally, the closing remarks were given by Pritam Sarkar (workshop workflow chair) who gave a brief summary of the workshop’s highlights. 

The organizing team consisted of Aaqib Saeed (publicity chair), Ahmad Beirami (general co-chair), Akane Sano, Ali Etemad (general co-chair), Alireza Sepas-Moghaddam (program co-chair), Huiyuan Yang (program co-chair), Mathilde Caron, and Pritam Sarkar (workflow chair). The proceedings of the workshop are available on the workshop website (https://hcssl.github.io/AAAI-22/), and the recordings of the invited talks will be made public to the community. Ali Etemad, Ahmad Beirami, and Pritam Sarkar wrote this report.  

 

Information-Theoretic Methods for Causal Inference and Discovery (W22) 

Causal inference is one of the main areas of focus in the artificial intelligence (AI) and machine learning (ML) communities. Causality has received significant interest in ML in recent years, in part due to its utility for generalization and robustness. It is also central to tackling decision-making problems such as reinforcement learning, policy design, and experimental design. Information-theoretic approaches provide a novel set of tools that can expand the scope of classical approaches to causal inference and discovery problems in a variety of applications. Examples of the success of information theory in causal inference include the use of directed information, minimum entropy couplings, and common entropy for bivariate causal discovery; the use of the information bottleneck principle, with applications in the generalization of machine learning models; and the analysis of the causal structure of deep neural networks with information theory. 

The goal of ITCI’22 was to bring together researchers working at the intersection of information theory, causal inference, and machine learning in order to foster new collaborations and provide a venue for brainstorming new ideas; to present causal inference and discovery to the information theory community as an application area and highlight important technical challenges motivated by practical ML problems; to draw the attention of the wider machine learning community to the problems at the intersection of causal inference and information theory; and to demonstrate the utility of information-theoretic tools for tackling causal ML problems. 

No formal report was filed by the organizers for this workshop. 

 

Information Theory for Deep Learning (W23) 

With the rapid development of advanced techniques at the intersection of information theory and machine learning, such as neural network-based and matrix-based mutual information estimators, tighter information-theoretic generalization bounds, deep generative models, and causal representation learning, information-theoretic methods can provide new perspectives on the central deep learning issues of generalization, robustness, and explainability, and can offer new solutions for deep learning-related AI applications.
This workshop aimed to bring together both academic researchers and industrial practitioners to share visions on the intersection between information theory and deep learning, and their practical usages in different AI applications. 
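
As a concrete, minimal example of the kind of quantity these methods build on, the sketch below computes a plug-in (histogram) estimate of mutual information from paired samples; the neural and matrix-based estimators mentioned above can be viewed as more scalable replacements for this naive estimator (the code is our own illustration, not material from the workshop):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in estimate of I(X; Y) in nats from paired 1-D samples."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginal of X
    py = pxy.sum(axis=0, keepdims=True)        # marginal of Y
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
print(mutual_information(x, x + 0.5 * rng.normal(size=x.size)))  # dependent: clearly > 0
print(mutual_information(x, rng.normal(size=x.size)))            # independent: near 0
```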

No formal report was filed by the organizers for this workshop. 

 

Interactive Machine Learning (W24) 

With the increasing popularity and pervasiveness of AI, interest has recently grown in Interactive Machine Learning (IML). The workshop accepted 13 high-quality contributions on IML, including 11 regular papers and 2 long abstracts, and brought together about 50 researchers and practitioners from AI and related areas. Three renowned experts (Cynthia Rudin, Simone Stumpf, and Andreas Holzinger) contributed thought-provoking presentations on recent developments and lessons learned. Together with the other presentations, these sparked discussion on various IML topics, from reinforcement learning to explainability, white-box vs. black-box models, and various forms of supervision. Overall, the workshop contributed to raising awareness of the centrality of interaction in ML and AI and explored directions for applying IML to impactful, real-world problems. 

Recent years have witnessed growing interest in the interface between human endeavors and AI systems, with the increasing realization that machines can indeed meet the objectives they are given; the real question is whether they have been given the right objectives. A central topic in this area is Interactive Machine Learning (IML), which is concerned with developing algorithms that enable machines to cooperate with human agents to solve a shared prediction, learning, or teaching task. 

Despite its potential, knowledge transfer among the different subtopics of IML, and between research and applications, has been limited. Given the recent advancements in explainable technologies and the growing attention to interaction between human and artificial agents, now seems to be the time to fill this gap by bringing together researchers from industry and academia and from different disciplines in AI and surrounding areas. 

The workshop contained several exciting keynote talks from leading researchers in the field. Simone Stumpf (University of Glasgow) presented key lessons from several years of pioneering research in this field. Her research indicates that it is in fact important to allow users to ‘edit’ what was learnt via the process of giving feedback. In his keynote talk, Andreas Holzinger (Medical University of Graz) presented motivations, important previous and recent work, and challenges around Human-Centered AI for fostering explainability and robustness in trustworthy AI. Finally, in the last keynote talk, Cynthia Rudin (Duke University) presented her latest research on interpretable neural networks, which argues, against a commonly held opinion, that combining deep learning with interpretability need not come at the cost of reduced accuracy. 

The workshop contained an excellent set of accepted publications, covering the wide spectrum of IML in the contexts of ML, continual learning, and reinforcement learning. User feedback and interaction were examined in the forms of specifying preferences, natural language, human demonstrations for tricky situations, and even facial emotion recognition. 

During the workshop, several important issues were raised and discussed in the questions following paper presentations as well as in the interactive session. One of these issues is trust in IML. For fruitful interactions between human users and an AI model, the explanations of the model must be trustworthy, i.e., the explanations should truly depict the decision process of the model. Another issue that remains somewhat open is the faithfulness of the user’s feedback: how can, and how should, a model handle noisy or even biased feedback from the user? Apart from this, designing explanations that users can easily understand and provide feedback on remains one of the challenges in IML. This challenge requires designing explanations and user interfaces that consider the domain specifics, the level of end users’ expertise, and methods to infuse feedback into the ML model’s decision-making process. Finally, which class of models is best suited for IML, i.e., white-box versus concept-based versus black-box models, remains an open topic in terms of both explainability and supported interaction protocols. 

In conclusion, this workshop brought together researchers from both industry and academia with the realization that IML has a significant role to play in making AI truly impactful in real-world problems. Given the success of the workshop, we plan to organize a follow-up workshop on the topic of IML next year. All workshop chairs contributed to this report: Öznur Alkan, Research Scientist at IBM Research; Elizabeth Daly, STSM at IBM Research, Ireland; Wolfgang Stammer, PhD student at the Technical University of Darmstadt; and Stefano Teso, Assistant Professor at the University of Trento. 

 

Knowledge Discovery from Unstructured Data in Financial Services (W25) 

Knowledge discovery from various data sources has gained the attention of many practitioners in recent decades. Its capabilities have expanded from processing structured data (e.g., DB transactions) to unstructured data (e.g., text, images, and videos). Despite substantial research focusing on discovery from news, web, and social media data, applications to datasets in professional settings, such as legal documents, financial filings, and government reports, still present huge challenges. Possible reasons are that the precision and recall requirements for extracted knowledge to be used in business processes are stringent, and that the signals gathered in these knowledge discovery tasks are usually very sparse, making the generation of supervision signals quite challenging. In the financial services industry in particular, a large amount of financial analysts’ work requires knowledge discovery and extraction from different data sources, such as SEC filings, loan documents, and industry reports, before they can conduct any analysis.  

This workshop focused on research into the use of AI techniques to extract knowledge from unstructured data in financial services. The workshop featured three keynotes covering both academic and industry perspectives. The first keynote was given by Dr. William Wang from UC Santa Barbara. Dr. Wang presented technical challenges of deep question answering over textual and numerical data in financial statements. To advance this research direction, his team created a new large-scale dataset, FinQA, annotated by financial experts. Dr. Xiaodan Zhu from Queen’s University further discussed collecting facts and evidence from semi-structured tables in addition to unstructured texts. He also highlighted the recent trend of leveraging the complementary strengths of neural networks and symbolic models for this task. Lastly, Dr. Prabhanjan Kambadur presented the NLP ecosystem at Bloomberg, which supports instantaneous discovery of financial information. He also discussed practical designs for training, deploying, and maintaining a large suite of NLP models.  

Seven research papers were accepted and presented during the workshop, organized around three main themes. (1) Applications to semi-structured data: most of the research reported in the past two KDF workshops focused on processing unstructured data like text; this year, we observed the emergence of studies on semi-structured data, such as clickstream data and natural language with semi-structured representations like JSON. This trend also echoed two of our keynotes, which stepped into the domain of reasoning over both texts and tables. (2) Graph-based modeling: after detecting and extracting information, a further step of knowledge discovery is to create relationships by connecting the information, as in knowledge graphs. Several presentations discussed novel models for creating knowledge graphs from financial documents and reasoning over graphs. (3) Multi-modal learning: to automate some analytical tasks of financial professionals, relying on single-modality data is not sufficient. One paper incorporated trading data (numerical) and social media signals (textual) for stock prediction, a good example of fusing multiple data types. We look forward to seeing fusions of different AI model genres, such as vision and language, in future literature.  

This workshop was the third event in this series at AAAI since 2020. We continue to observe strong research interest and great progress in knowledge discovery in financial services. Xiaomo Liu, Zhiqiang Ma, Sameena Shah, Armineh Nourbakhsh, Gerard de Melo, and Le Song co-chaired this workshop, and Xiaomo Liu and Zhiqiang Ma wrote this report.  

 

Learning Network Architecture during Training (W26) 

The goal of this workshop was to share ideas about how to choose or evolve a network architecture suitable for the learning problem at hand. We specifically wanted to explore methods that evolve a suitable network during a training run, or that adjust the architecture between one training run and the next.  

A fundamental problem in present-day use of artificial neural networks is that, given a problem (an objective and a collection of training data), the first step is generally to guess what network architecture is likely to produce a good-enough result, without too much wasted structure and training effort. There is little relevant theory to guide this choice. Often many candidate networks must be tried, trained, and discarded, at great computational cost, before a good architecture is found. 

This workshop, a follow-up to our workshop on the same topic at the AAAI-2021 conference, explored ideas for addressing this problem more effectively by developing an appropriate architecture during training, or by using knowledge derived from each training run to suggest a better architecture to try next. Both workshops highlighted approaches that are qualitatively different from currently popular Network Architecture Search (NAS) methods.  

In addition to sharing ideas and results, one of our goals was to help build a more-connected community of researchers in this area. 

This workshop featured three invited talks, plus five talks chosen from submissions. Due to the virtual nature of the conference, some of the talks were submitted as pre-recorded videos. The invited talks were by Lars Eidnes (NTNU), “Training Neural Networks with Local Error Signals”; Sarkhan Badirli (Purdue University), “Gradient Boosting Neural Networks: GrowNet”; and Edouard Oyallon (Sorbonne University, CNRS ISIR), “Two training sins: greedy and lazy?” 

Among the ideas addressed: 

  • Variations and theoretical analysis of the original Cascade-Correlation architecture (Fahlman & Lebiere, NeurIPS 1990), which begins with no hidden units and adds and freezes hidden units one at a time, as needed during training (a minimal sketch follows this list). 
  • Training the network incrementally, with local error signals, freezing each layer once it is trained, and adding new ones as needed. 
  • Assembling a network gradually from weak learners, reducing residual error at each step – a form of boosting. 
  • Greedy training of individual layers with explicitly supplied objective functions for each layer. 
  • Several variations of Network Architecture Search (NAS) to structure the search space, making the search more efficient and effective. Some of these make use of the results of previous training to guide the search for the next candidate architecture. 
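
To make the first idea above concrete, here is a minimal, illustrative growth loop in the spirit of Cascade-Correlation: fit the output weights, then repeatedly add and freeze one hidden unit whose activation correlates with the current residual error. This is our simplified sketch (random candidate units instead of gradient-trained ones), not the original algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1]               # toy regression target

F = np.hstack([X, np.ones((len(X), 1))])        # inputs + bias as initial features
for step in range(8):
    # Train the output layer on all current features (least squares here).
    w, *_ = np.linalg.lstsq(F, y, rcond=None)
    residual = y - F @ w
    print(f"hidden units={step}, mse={np.mean(residual ** 2):.4f}")
    # Candidate pool: random hidden units fed by all existing features;
    # keep the one whose output correlates best with the residual, then freeze it.
    candidates = [np.tanh(F @ rng.normal(size=F.shape[1])) for _ in range(50)]
    scores = [abs(np.corrcoef(h, residual)[0, 1]) for h in candidates]
    best = candidates[int(np.argmax(scores))]
    F = np.hstack([F, best[:, None]])           # cascade: the new unit becomes a feature
```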

The organizers of the workshop were Scott E. Fahlman (Carnegie Mellon University), Edouard Oyallon (Sorbonne University), and Dean Alderucci (Carnegie Mellon University). Scott E. Fahlman and Dean Alderucci wrote this report.  

Papers, slides, and/or videos of the talks are posted on the conference web page: https://www.cmu.edu/epp/patents/events/aaai22/. 

 

Papers, slides, and/or videos from the AAAI-21 workshop on the same topic are here: https://www.cs.cmu.edu/~sef/AAAI-2021-Workshop.htm. 

 

Machine Learning for Operations Research (W27) 

The AAAI Workshop on Machine Learning for Operations Research (ML4OR) built on the momentum that has developed over the past five years, in both the OR and ML communities, towards establishing modern ML methods as a “first-class citizen” at all levels of the OR toolkit. ML4OR served as an interdisciplinary forum for researchers in both fields to discuss technical issues at this interface and to present ML approaches that apply to basic OR building blocks (e.g., integer programming solvers) or to specific applications. 

No formal report was filed by the organizers for this workshop. 

 

Optimal Transport and Structured Data Modeling (W28) 

The last few years have seen the rapid development of mathematical methods for modeling structured data coming from biology, chemistry, network science, natural language processing, and computer vision applications. Recently developed tools and cutting-edge methodologies coming from the theory of optimal transport have proved to be particularly successful for these tasks. A striking feature of much of this recent work is the application of new theoretical and computational techniques for comparing probability distributions defined on spaces with complex structures, such as graphs, Riemannian manifolds and more general metric spaces. The 1st Workshop on Optimal Transport and Structured Data Modeling (OTSDM, co-organized with AAAI 2022) provided a premier interdisciplinary forum for researchers in different communities to discuss the most recent trends, innovations, applications, and challenges of optimal transport and structured data modeling. More details of the workshop can be found at https://ot-sdm.github.io. 

This workshop invited six top-tier researchers to present their research in the field of optimal transport and structured data modeling, resulting in three long talks and three short talks. 

 

Long Talks 

–  Optimal Transport in Single-Cell Biology: Challenges and Opportunities. Dr. Caroline Uhler from MIT introduced her team’s work on optimal transport-based single-cell modeling and analysis, especially OT-based solutions for tracking cells and comparing their similarity as they evolve. 

–  Scaling Optimal Transport for High Dimensional Learning. Dr. Gabriel Peyré from CNRS and Ecole Normale Supérieure introduced recent advances in computational optimal transport, including methods that reduce the computational complexity and improve the scalability of optimal transport problems. 
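
A cornerstone of this computational line of work is entropic regularization solved with Sinkhorn iterations. The sketch below is a minimal, generic illustration of that idea (our own toy example, not code from the talk):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, iters=500):
    """Entropic-regularized OT plan between histograms a and b with cost matrix C."""
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)                 # alternating scaling updates
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]    # transport plan

x = np.linspace(0, 1, 50)
a = np.exp(-((x - 0.2) ** 2) / 0.01); a /= a.sum()   # source histogram
b = np.exp(-((x - 0.7) ** 2) / 0.02); b /= b.sum()   # target histogram
C = (x[:, None] - x[None, :]) ** 2                    # squared-distance cost
P = sinkhorn(a, b, C)
print("regularized OT cost:", (P * C).sum())
```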

– Weisfeiler-Lehman meets Gromov-Wasserstein. Dr. Yusu Wang from UCSD shared her recent work on computational topology and graph analysis, which connects the classic Weisfeiler-Lehman test for graph isomorphism with the Gromov-Wasserstein distance. 

 

Short Talks 

–  The (Fused) Gromov-Wasserstein Framework as a Tool for Learning on Structured Data. Dr. Titouan Vayer from ENS Lyon introduced his work on the fused Gromov-Wasserstein (FGW) distance, which provides a new algorithmic framework for attributed graph analysis. He presented the theory of the FGW distance and its applications to graph clustering and classification. 

 

–  On Scalability of Optimal Transport with Tree/Graph Metric. Dr. Tam Le from RIKEN shared his systematic work on scalable optimal transport distances based on tree structures. A detailed introduction of the method and various applications were presented. 

 

–  Differentiable Hierarchical Optimal Transport for Robust Multi-view Learning. Dr. Dixin Luo from Beijing Institute of Technology introduced her recent work on applying hierarchical optimal transport to multi-modal learning for unaligned multi-modal data, which improves the feasibility of multi-modal learning in practical scenarios. 

 

Besides the invited talks above, this workshop accepted seven submissions, which can be found on the workshop website (https://ot-sdm.github.io/). 

In summary, this workshop brought together leading computer scientists, mathematicians, AI researchers, and practitioners from theoretical and applied communities to share ideas, promote advanced work, and foster collaboration. We would like to thank all the presenters, the reviewers, and the organizers for their contributions to the OTSDM workshop. The organizers of the OTSDM workshop were Dr. Hongteng Xu from Renmin University of China, Dr. Julie Delon from Université de Paris, Dr. Facundo Mémoli from Ohio State University, and Dr. Tom Needham from Florida State University. Hongteng Xu, Facundo Mémoli, Julie Delon, and Tom Needham wrote this report.  

 

Practical Deep Learning in the Wild (W29) 

Deep learning has achieved significant success for artificial intelligence (AI) in multiple fields. However, research also shows that the performance of deep models in the wild is far from practical, owing to a lack of model efficiency and of robustness to open-world data and scenarios. Regarding efficiency, it is impractical to train a neural network containing billions of parameters and then deploy it to an edge device. Regarding robustness, noisy input data frequently occurs in open-world scenarios, which presents critical challenges for building robust AI systems in practice. Existing research also suggests that there is a trade-off between the robustness and accuracy of deep learning models.  

No formal report was filed by the organizers for this workshop. 

 

Privacy-Preserving Artificial Intelligence (W30) 

The availability of massive amounts of data, coupled with high-performance cloud computing platforms, has driven significant progress in artificial intelligence (AI) systems. It has profoundly impacted several areas, including computer vision, natural language processing, and transportation. However, the use of rich data sets also raises significant privacy concerns: they often reveal personal, sensitive information that can be exploited, without the knowledge and/or consent of the involved individuals, for various purposes, including monitoring, discrimination, and illegal activities.  

The goal of PPAI-22 was to provide a platform for researchers, AI practitioners, and policymakers to discuss technical and societal issues and present solutions related to privacy in AI applications. The workshop focused on both the theoretical and practical challenges related to the design of privacy-preserving AI systems. 

PPAI-22 was a full-day event that included a rich collection of contributed and invited talks, a tutorial, poster presentations, and a panel discussion. The workshop brought together researchers from a variety of subfields of AI and security and privacy, including optimization, machine learning (ML), differential privacy, and secure computation. 

The predominant theme of the contributed and invited talks was the development of privacy-preserving algorithms, often based on the framework of differential privacy, for private data release or private machine learning. The workshop accepted six spotlight talks and sixteen poster presentations.  

PPAI-22 included three invited talks on this research theme. Adam Smith (Boston University) discussed the phenomenon of memorization in machine learning models and its connections with privacy, Claire McKay Bowen (Urban Institute) provided insights on how policymakers view and adopt privacy-preserving technologies, and Damien Desfontaines (Tumult Labs) outlined an approach to bring differential privacy to a widespread audience.  

The workshop also featured a tutorial: “Differentially Private Deep Learning, Theory, Attacks, and PyTorch Implementation,” by Ilya Mironov, Alexandre Sablayrolles, and Igor Shilov (all from Responsible AI, Meta), which discussed the topic of differential privacy in deep learning, from the fundamentals up to its implementations.
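
For readers new to the area, the core differential-privacy idea running through the day’s program can be illustrated with the classic Laplace mechanism for a counting query; this is a generic textbook sketch of ours, not material from the tutorial:

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng):
    """epsilon-DP count: a counting query has sensitivity 1 (one person can
    change it by at most 1), so adding Laplace(1/epsilon) noise satisfies
    epsilon-differential privacy."""
    true_count = sum(predicate(x) for x in data)
    return true_count + rng.laplace(scale=1.0 / epsilon)

rng = np.random.default_rng(0)
ages = rng.integers(18, 90, size=1_000)          # toy dataset
print(laplace_count(ages, lambda a: a > 65, epsilon=0.5, rng=rng))
```
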
The workshop panel, featuring Gerome Miklau (University of Massachusetts, Amherst and Tumult Labs), John M. Abowd (US Census Bureau), Steven Wu (Carnegie Mellon University), and Claire McKay Bowen (Urban Institute), focused on the theme “Differential Privacy and disparate impacts in downstream decisions and learning tasks.” The panelists discussed the importance of raising awareness of the privacy risks associated with deployments of differential privacy algorithms, the pressure that companies face to use privacy-preserving technologies, and the need to better communicate the goals and unintended effects of privacy-preserving algorithms. 

PPAI-22 was extremely engaging and featured an outstanding program. The recordings of all contributed talks, invited talks, the tutorial, and the panel discussion are available online at https://aaai-ppai22.github.io/. PPAI-22 was organized by Ferdinando Fioretto, Alexandra Korolova, and Pascal Van Hentenryck. This report was written by Ferdinando Fioretto. 

 

Reinforcement Learning for Education: Opportunities and Challenges (W31) 

The RL4ED workshop was organized to facilitate tighter connections between researchers and practitioners interested in the broad areas of reinforcement learning (RL) and education (ED). The workshop focused on two thrusts: 1) exploring how we can leverage recent advances in RL methods to improve state-of-the-art technology for ED; and 2) identifying unique challenges in ED that can help nurture technical innovations and the next breakthroughs in RL. 

Reinforcement learning (RL) is a computational framework for modeling and automating goal-directed learning and sequential decision-making. Given the centrality of sequential student-teacher interactions in education (ED), there has been a surge of interest in applying RL to improve the state-of-the-art technology for ED. While promising, it is typically very challenging to apply out-of-the-box RL methods to ED. Further, many problem settings in ED have unique challenges that make the current RL methods inapplicable. To this end, we organized this workshop to facilitate tighter connections between researchers and practitioners interested in the broad areas of RL and ED. 

The workshop focused on two thrusts, namely RL->ED and ED->RL, each covering several topics of interest. The topics in the RL->ED thrust focused on leveraging recent advances in RL methods for ED problem settings, including: (i) survey papers summarizing recent advances in RL with applicability to ED; (ii) developing toolkits, datasets, and challenges for applying RL methods to ED; (iii) using RL for online evaluation and A/B testing of different intervention strategies in ED; and (iv) novel applications of RL for ED problem settings. The topics in the ED->RL thrust focused on unique challenges in ED problem settings for nurturing the next breakthroughs in RL methods, including: (i) using pedagogical theories to narrow the policy space of RL methods; (ii) using the RL framework as a computational model of students in open-ended domains; (iii) developing novel offline RL methods that can efficiently leverage historical student data; and (iv) combining the statistical power of RL with symbolic reasoning to ensure robustness in ED. 

The workshop was structured around invited talks, contributed papers, and spotlight presentations. We invited a set of people from academia and industry to cover various topics of interest and achieve a balance across different perspectives and disciplines. In total, the workshop had six invited talks, each about 25 minutes long. Given the workshop’s focus on community-building and networking, we solicited submissions of two types. The first type (“Research Track”) included papers reporting the results of ongoing or new research that had not been published before. The second type (“Encore Track”) included papers that had recently been published or accepted for publication in a conference or journal. In total, the workshop had 13 contributed papers that were presented by the authors as spotlight presentations. 

The Q/A time after the invited talks and spotlight presentations provided ample opportunity for discussion among workshop participants. The talks and discussions at the workshop highlighted excitement in the community around different topics in the RL for ED area. To further increase the accessibility of the workshop content and support community building, the video recordings are available on the workshop website (https://rl4ed.org/aaai2022/). 

RL for ED is an important research area that may lead to new advances in reinforcement learning and practical improvements in education. The need for multiple perspectives and the unique challenges raised by educational applications requires continued fostering of community in this area through events similar to our RL4ED workshop. 

Neil T. Heffernan, Andrew S. Lan, Anna N. Rafferty, and Adish Singla organized this workshop. Adish Singla wrote this report.  

 

Reinforcement Learning in Games (W32) 

The goal of the workshop series is to develop and support a community of researchers interested in theoretical and practical aspects of reinforcement learning with multiple agents interacting in a shared environment. This year, the workshop brought together a very diverse range of topics, from novel bots for specific games, through equilibrium computation, to a study of emergent communication.

Games provide an abstract and formal model of environments in which multiple agents interact: each player has a well-defined goal, and rules describe the effects of interactions among the players. The first achievements in playing these games at a super-human level were attained with methods that relied on and exploited manually designed domain expertise (e.g., chess, checkers). In recent years, we have seen examples of general approaches that learn to play these games via self-play reinforcement learning (RL), as first demonstrated in Backgammon and later in Go, Poker, and real-time strategy games. While progress has been impressive, we believe we have just scratched the surface of what is possible, and much work remains to be done to truly understand the algorithms and learning processes within these environments.

The objective of this workshop series is to bring together researchers to discuss ideas, preliminary results, and ongoing research in the field of reinforcement learning in games, the related disciplines of computational game theory, opponent modeling and their practical applications. Due to uncertainties caused by the Covid-19 pandemic, the workshop was fully virtual.

Interesting highlights of the workshop were the invited talks by Karl Tuyls on applying recent results from reinforcement learning in sports analytics, Peter Stone on outracing champion Gran Turismo drivers with deep reinforcement learning, and Fei Fang on applying reinforcement learning in real-world security games.

The main technical program was composed of four oral paper presentations, thirty-nine posters, and a discussion about the future of the Hidden Information Games Competition. Interest in the competition is limited; hence, we will likely not organize it next year.

The technical papers covered a wide range of topics, including equilibrium computation, theoretical analysis of solution concepts, learning efficient communication, opponent modeling, and sports analytics, but also a few papers on single-agent reinforcement learning, since it is the basis of many multi-agent RL (MARL) algorithms.

Viliam Lisy (CTU), Noam Brown (FAIR), and Martin Schmid (DeepMind) served as co-chairs of this workshop.

Robust Artificial Intelligence System Assurance (W33) 

The AAAI-22 workshop on Robust Artificial Intelligence System Assurance (RAISA) brought together researchers and practitioners from government, academia, and industry to explore ways in which the robustness of Artificial Intelligence (AI) can be assured at the system architecture level. 

RAISA took place as a half-day virtual workshop on February 28, 2022. This workshop represented an evolution of a prior workshop that was held at the European Conference on AI in 2020 on the topic of Robustness of Artificial Intelligence Systems Against Adversarial Attacks (RAISA3). AAAI-22’s RAISA opened the aperture to consider not only adversarial attacks, but also robustness to natural changes in data, as well as the broader ecosystem in which AI systems are developed and deployed. 

The workshop featured five paper talks and three keynote presentations. Discussions covered a variety of application areas (e.g., computer vision, natural language processing, game playing, robotics), learning tasks (e.g., supervised, reinforcement, federated, path planning), and robustness concerns (e.g., adversarial examples, natural perturbations, environmental changes, distribution shift), underscoring the widespread need for AI assurance solutions across all domains in which AI and machine learning (ML) are being applied. Proposed solutions spanned the AI development pipeline, and included frameworks for better understanding how training data and feature extractors may impact performance and robustness, techniques for designing and training models to exhibit desired robustness properties, and methods to ensure systems operate properly during deployment (e.g., out-of-distribution detection, contingent planning). A few of the talks also touched on aspects of AI ethics, including methods to enable fair incentivization and preserve the privacy of user data.  

While many of the paper talks focused on specific algorithms, the keynote speakers provided a broader perspective and reinforced the workshop’s key theme of robustness assurance at the system architecture level. Dr. Sarah H. Miller from the Office of the Director of National Intelligence kicked off the workshop with a high-level call to action, emphasizing that AI system assurance is critical for responsible AI adoption. A key takeaway from her talk was the importance of considering trade-offs between state-of-the-art AI that has not yet matured and more traditional ML approaches that are often better understood and potentially more robust. The need for “mission-salient benchmarks,” which closely align with real applications, was another topic from Dr. Miller’s talk that resonated with workshop attendees.  

In the afternoon keynote, Professor Aleksander Madry from MIT argued that while much of robustness and explainable AI research has been focused on understanding AI models, we are neglecting a key piece of the puzzle: the training data. He introduced a new framework, called “datamodels,” which aims to answer the question, how do training data and learning algorithms combine to make predictions? Such a framework opens new possibilities for uncovering potential model vulnerabilities and issues with the training data, and supports RAISA’s theme of considering the broader ecosystem in which AI models are developed. 

The workshop concluded with a closing keynote by Dr. Rachel Dzombak from Carnegie Mellon University’s Software Engineering Institute, which pivoted the discussion beyond research and into the field of AI Engineering. Dr. Dzombak reminded us that the core task for engineers is to understand how and why systems fail, and her talk highlighted a variety of challenges that must be overcome to engineer robust and secure AI systems. Key takeaways included a need to move away from static benchmarks and establish principled processes for test and evaluation, and a need to cultivate a better understanding of the trade-offs between various design goals. All of this, she claimed, should also be accompanied by workforce-level shifts in mindsets and skillsets. 

Overall, the RAISA workshop represented a snapshot of the current state of robust AI and touched on directions for the future. Research efforts continue to progress to address isolated robustness vulnerabilities of individual algorithms, but the community is also gaining a greater awareness of the complex engineering needs and open challenges that must be tackled to enable robust AI system assurance in real-world applications. 

The RAISA workshop was organized by Olivia Brown (Chair, MIT Lincoln Laboratory), Rajmonda Caceres (MIT Lincoln Laboratory), Tina Eliassi-Rad (Northeastern), Sanjeev Mohindra (MIT Lincoln Laboratory), and Elham Tabassi (National Institute of Standards and Technology). The workshop proceedings can be found at https://arxiv.org/abs/2202.04787. Olivia Brown wrote this report.  

 

Scientific Document Understanding (W34) 

Scientific documents such as books, papers, and patents are one of the major sources of human knowledge that are being published at a growing rate. In this workshop, we studied some of the major challenges and methods to process scientific documents using artificial intelligence. 

The Scientific Document Understanding (SDU) workshop was held in two tracks: (1) Research, in which authors submitted their work on various topics, including information extraction, document classification, information veracity, influence analysis, and summarization; and (2) Shared Task, in which SDU organized two shared tasks on multi-domain and multilingual acronym extraction and disambiguation. In total, 20 teams participated in the two tasks. The SDU workshop received 42 submissions and accepted 31 papers, of which 14 were system papers in the shared-task track. 

In the research track, SDU accepted 10 long papers and 7 short papers. The major topics covered by the submitted papers include (1) Information Extraction: 5 papers studied different tasks of information extraction (IE) in the scientific domain, including key-phrase extraction, named entity recognition, co-reference resolution, knowledge base enrichment, and biomedical relation extraction. For example, in the scientific domain it could be helpful to directly enrich a knowledge base with the new information provided in scientific text, or to build an automatic librarian (e.g., by mapping key-phrases to concepts in a schema); (2) Document Classification: 4 papers at SDU studied the task of classifying scientific documents, employing various deep learning methods to identify the relevant category for a given document; (3) Information Veracity and Influence Analysis: 5 submitted papers studied how AI-based methods could be used to verify the information provided in scientific papers or to assess their importance in the field; (4) Summarization and Recommendation: 2 research papers proposed novel methods for summarizing or recommending scientific documents using transformer-based models, which can help end-users organize a huge set of scientific documents; and (5) Scientific Image Processing: while most of the research papers focused on the textual part of scientific documents, 1 accepted paper studied the visual aspect of documents, in particular segmenting figures in US patents. 

In addition to the research track, SDU proposed two shared tasks on acronym extraction and disambiguation, conducted in multi-domain and multilingual settings. Concretely, two domains (research articles and legal documents) and six languages (English, Spanish, French, Danish, Persian, and Vietnamese) were studied. For acronym extraction, the goal is to recognize mentions of acronyms and their long forms in text. To this end, the organizers prepared a dataset of 27,200 sentences spanning the two domains and six languages; 9 teams participated in this task. For the second task, acronym disambiguation, the goal is to identify the correct expanded form of a given ambiguous acronym (i.e., an acronym with multiple long forms). For this task, a dataset of 2,562 unique acronyms in three languages (English, French, and Spanish) was prepared; 11 teams participated. 
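
To make the disambiguation task concrete, here is a deliberately trivial baseline that picks the candidate expansion with the largest word overlap with the surrounding context. Participating systems used far stronger (typically pretrained transformer-based) models; the names and data below are our own illustration:

```python
def disambiguate(acronym, context, expansions):
    """Trivial baseline: choose the expansion whose words overlap most
    with the sentence context (real systems use learned encoders)."""
    ctx = set(context.lower().split())
    return max(expansions[acronym],
               key=lambda exp: len(ctx & set(exp.lower().split())))

expansions = {"CNN": ["convolutional neural network", "cable news network"]}
print(disambiguate("CNN", "We train a deep neural network on images", expansions))
# -> "convolutional neural network"
```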

Scientific document understanding is an integral requirement for knowledge acquisition and could be fundamentally helpful for fostering future research in various domains. SDU 2022 provided a venue for introduction and discussion on important tasks for this topic. Amir Pouran Ben Veyseh, Thien Huu Nguyen, Franck Dernoncourt, Walter Chang, and Viet Dac Lai organized this workshop. Amir Pouran Ben Veyseh wrote this report. 

 

Self-Supervised Learning for Audio and Speech Processing (W35) 

Babies learn their first language through listening, talking, and interacting with adults. Can AI achieve the same goal without much low-level supervision? Inspired by this question, there is a trend in the machine learning community to adopt self-supervised approaches to pre-train deep networks. Self-supervised learning (SSL) utilizes proxy supervised learning tasks, for example, distinguishing parts of the input signal from distractors, or generating masked input segments conditioned on the unmasked ones, to obtain training signals from unlabeled corpora. These approaches make it possible to use the tremendous amount of unlabeled data available on the web to train large networks and solve complicated tasks. BERT and GPT in NLP, and SimCLR and BYOL in computer vision, are famous examples in this direction. Recently, SSL for speech/audio processing has also been gaining attention. There were two workshops on similar topics hosted at ICML 2020 (https://icml-sas.gitlab.io/) and NeurIPS 2020 (https://neurips-sas-2020.github.io/), where overwhelming participation was observed. We were excited to host this workshop to continue promoting innovation in self-supervision for the speech/audio processing fields and to inspire these fields to contribute to the general machine learning community. 
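
The “distinguish parts of the input from distractors” family of pretext tasks is typically implemented as a contrastive (InfoNCE-style) objective. The following is a minimal, generic sketch with random vectors standing in for encoder outputs (our illustration, not tied to any particular system from the workshop):

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Contrastive loss: each anchor must identify its own positive
    among all positives in the batch (the distractors)."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                  # pairwise similarity matrix
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))             # correct pairs sit on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(32, 128))                      # stand-in for encoder outputs
print(info_nce(z, z + 0.1 * rng.normal(size=z.shape)))  # aligned views: low loss
print(info_nce(z, rng.normal(size=z.shape)))             # random pairs: about log(32)
```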

  

There were 15 accepted papers contributed by the community. Below are some highlights of the findings from the papers presented in the workshop.  

  • New Applications: SSL speech models are used in more applications than before, including code-switching speech recognition, voice conversion, and depression detection from speech.  
  • Research of Pretext Tasks: New pretext tasks and data augmentation approaches are proposed, and a method for selecting a group of pretext tasks among a set of candidates is introduced.   
  • More Understanding: The impact of data bias on SSL is investigated. The SSL models show a preference toward a slower speech rate, and some of them seem to develop a universal speech perception space that is not language-specific. 
  • Beyond Accuracy: It is widely known that the SSL models improve the performance of downstream tasks. Now researchers care about factors other than accuracy, including their vulnerability to adversarial attacks and privacy attacks. The researchers also try to compress the SSL models to make them greener.   
  • SUPERB Benchmark: A new benchmark for speech SSL models, SUPERB, is introduced. The SUPERB benchmark evaluates the performance of the SSL speech model from different aspects on a wide range of speech processing tasks.  

For an overall survey of SSL speech models, one can refer to Lasse Borgholt et al.’s overview paper presented in the workshop. 

There were 11 invited talks. James Glass published “Towards Unsupervised Speech Processing” more than 10 years ago, foreseeing advances in unsupervised speech processing technology. He outlined the history and current state of learning from unlabeled speech data. Because human babies learn not only from hearing but also from sight, Kristen Grauman and Wei-Ning Hsu talked about how audio and video data can improve understanding of each other. Speech data has a hierarchical structure, but that structure is not directly available from unlabeled data; Herman Kamper and Jan Chorowski talked about finding and leveraging such hierarchies. Odette Scharenborg talked about unsupervised subword modeling, i.e., learning acoustic feature representations that can distinguish the subword units of a target language. Sakriani Sakti presented a series of works on speech chains, where the machine listens while speaking. Yu Wu introduced WavLM, the state-of-the-art model on the SUPERB leaderboard. Yu Zhang presented a novel approach for representation learning. Danqi Chen talked about how to train SSL models more efficiently. Alexei Baevski presented a general representation learning framework for text, images, and speech. 

The workshop brought together researchers interested in SSL for audio/speech processing and reviewed recent progress. Benefiting from the recent popularity of SSL and the diverse audience at AAAI, the workshop saw larger participation from the community compared to the previous ones, with an average audience of 30 throughout the entire day. Based on the community’s active participation and positive feedback, we will continue the workshop efforts to foster more connection, collaboration, and innovation in the area. The contributions of the organizers and chairs (Abdelrahman Mohamed, Hung-yi Lee, Shinji Watanabe, Tara Sainath, Karen Livescu, Shang-Wen Li, Ewan Dunbar, and Emmanuel Dupoux) were essential to the success of the workshop. Hung-yi Lee wrote this report. 

 

Trustable, Verifiable and Auditable Federated Learning (W36) 

The International Workshop on Trustable, Verifiable, and Auditable Federated Learning in conjunction with AAAI 2022 (FL-AAAI-22) was held virtually on March 2, 2022, due to the extraordinary conditions generated by COVID-19. This workshop aimed to bring together Federated Learning (FL) researchers and practitioners to address the additional security and privacy threats and challenges in FL, so as to enable its mass adoption and widespread acceptance in the community. 

Federated learning is a promising machine learning approach that trains a collective machine learning model across data owned by various parties. It leverages many emerging privacy-preserving technologies (SMC, homomorphic encryption, differential privacy, etc.) to protect the privacy of data owners, and it has gained popularity in domains such as image classification, speech recognition, smart cities, and healthcare. However, FL also faces multiple challenges that may limit its applications in real-world scenarios: privacy-specific threats such as training- and inference-phase attacks; data and model poisoning; handling non-IID data without hurting model performance; a lack of trust from FL users; gaining confidence by interpreting FL models; schemes for attributing contributions and rewards to FL users who improve an FL model; social and corporate responsibility towards the adoption of FL; imbalanced data among FL users; methods to verify and prove the correctness of FL computations; and many more. 
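
As background, the canonical FL training loop is federated averaging (FedAvg): each client updates the model locally on its own data, and the server aggregates only the parameters, weighted by local dataset size, so raw data never leaves the clients. A minimal sketch with simulated client updates (our illustration, not a specific system from the workshop):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Server step of federated averaging: a weighted mean of client
    model parameters, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
global_model = np.zeros(4)
for rnd in range(3):
    # Each client takes the global model and performs a local update (simulated here).
    updates = [global_model + rng.normal(scale=0.1, size=4) for _ in range(5)]
    global_model = fed_avg(updates, client_sizes=[100, 50, 200, 80, 120])
    print(f"round {rnd}: {global_model.round(3)}")
```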

The accepted papers presented at the workshop covered applied and theoretical research in many of these areas. The workshop award committee selected three papers for the best paper, best student paper, and best application paper awards. The best research paper award went to Chaoyang He et al., “SSFL: Tackling Label Deficiency in Federated Learning via Personalized Self-Supervision,” which proposed a self-supervised learning framework with a series of algorithms to tackle two challenges in FL: data heterogeneity and label deficiency. The best student paper award went to Chen Chen et al., “GEAR: A Margin-based Federated Adversarial Training Approach,” which proposed GEAR to tackle the problem of adversarial robustness in FL; GEAR maintains both natural accuracy and robust accuracy. Finally, the best application paper award went to Chengyi Yang et al., “WT-Shapley: Efficient and Effective Incentive Mechanism in Federated Learning for Intelligent Safety Inspection,” which proposed an FL framework that enables several natural gas companies from different cities to jointly train an object-detection computer vision model to identify potential hazards. 

The workshop also included five invited talks on the workshop theme. Dusit Niyato (Nanyang Technological University, Singapore) focused on reliable Federated Learning for mobile networks. Bingsheng He (National University of Singapore) gave a talk on “Federated Learning Systems: A New Holy Grail for System Research in Data Privacy and Protection?” Lingjuan Lyu (Sony AI, Japan) discussed how to build a private, robust, and fair Federated Learning system. Dacheng Tao (JD.com, China) outlined a sparse training approach for communication-efficient personalized Federated Learning. The final talk, by Nicholas Carlini (Google Brain, USA), discussed the challenges of privately distributing training data. 

The workshop participants discussed how interest in Federated Learning is growing in academia and industry, although many open problems in FL still need to be addressed. They agreed that the workshop helped bring FL researchers and practitioners together to address the security and privacy threats and challenges in FL, and they shared approaches for implementing FL solutions that are more accurate, robust, and interpretable, thereby gaining FL users’ trust. Many participants would like to attend a future workshop with the same focus.  

Qiang Yang ([email protected]) served as the steering chair of this workshop, and Sin G. Teo ([email protected]), Han Yu ([email protected]), and Lixin Fan ([email protected]) served as co-chairs. The workshop papers were invited for submission to a special issue of IEEE Transactions on Big Data. Sin G. Teo, Han Yu, and Lixin Fan authored this report.  

 

Trustworthy AI for Healthcare (W37) 

In this workshop, we aimed to address the trustworthiness of clinical AI solutions. We brought together researchers in AI, healthcare, medicine, NLP, social science, and related fields to facilitate discussion and collaboration on developing trustworthy AI methods that are reliable and more acceptable to physicians. Previous healthcare-related workshops focused on developing AI methods that improve the accuracy and efficiency of clinical decision-making, including diagnosis, treatment, and triage; the trustworthiness of clinical AI methods was not discussed. In our workshop, we focused specifically on trustworthiness in AI for healthcare, aiming to make clinical AI methods more reliable in real clinical settings and willingly used by physicians. 

No formal report was filed by the organizers for this workshop. 

 

Trustworthy Autonomous Systems Engineering (W38) 

Advances in AI technology, particularly in perception and planning, have enabled unprecedented advances in autonomy, with autonomous systems playing an increasingly important role in day-to-day life in applications including IoT, drones, and autonomous vehicles. In nearly all applications, the reliability, safety, and security of such systems are critical considerations. While there have been extensive independent research threads on the safety and reliability of specific sub-problems in autonomy, such as robust control, as well as recent work on robust AI-based perception, there has been considerably less research investigating robustness and trust in end-to-end autonomy, where AI-based perception is integrated with planning and control in a closed loop. The workshop on Trustworthy Autonomous Systems Engineering (TRASE) offered an opportunity to highlight state-of-the-art research in trustworthy autonomous systems, as well as to provide a vision for future foundational and applied advances in this critical area at the intersection of AI and Cyber-Physical Systems, by bringing together researchers from multiple engineering disciplines working in this broad area. 

A total of 241 AAAI attendees registered for the TRASE workshop, which featured two distinguished keynote speakers and twelve 15-minute presentations of papers accepted into the workshop program. The first keynote speaker, Professor George Pappas from the University of Pennsylvania, offered a compelling research vision for trustworthy control involving deep neural network components for perception. A central issue in such systems is assuring robustness even in the presence of complex neural network components, and an important first step is improving our ability to verify those components. To this end, Professor Pappas presented scalable approaches for robustness verification of deep neural networks, as well as important advances in robust robotic control and in incorporating semantic information into localization and mapping. The second keynote speaker, Professor Ben Zhao, explored his research on attacks and defenses for machine learning systems. His broader observation was that, at the moment, attackers appear to have the upper hand. He therefore offered an interesting alternative: post-attack forensics, where we start with the assumption that an attack has already been detected and then use the attack instance to identify the method behind it. In the context of poisoning attacks, Professor Zhao showed that an observed attack instance can be used to successfully identify poisoned inputs in the training data. 

In addition to the two keynote talks, the workshop program was divided into four technical sessions. The first technical session, comprising five talks, focused on trustworthy planning and control. This session was bookended by two vision talks: the first describing research challenges in combining autonomy, AI, and real-time assurance, and the second presenting reference architectures for resilient on-orbit servicing, assembly, and manufacturing. The three presentations in the technical portion of this session focused on handling learning components in robust model predictive control and robust nonlinear control, as well as on multi-attribute planning. 

The second technical session, comprising three talks, was devoted to adversarial testing and robustness certification in autonomous systems. In one talk, the authors presented an approach for adversarial testing of autonomous cars in simulation using a gradient-based approximation. The two remaining talks in this session focused on certifying the robustness of reinforcement learning. 

The third technical session featured three talks on ethics, fairness, and regulation. The first talk explored ensuring fairness of dynamic ride-matching algorithms in terms of the geographic distribution of service quality. The second talk discussed explainability and regulation in autonomous driving. The third talk presented an approach that uses adaptive stress testing of autonomous driving with the goal of avoiding ethical-dilemma situations. The final talk of the program, the lone talk in the computer vision session, considered the problem of seatbelt detection and usage monitoring in real-time driving scenarios. 

The TRASE-22 workshop was co-chaired by Yevgeniy Vorobeychik, Bruno Sinopoli, and Jinghan Yang from Washington University in St. Louis, Atul Prakash from the University of Michigan, and Bo Li from the University of Illinois Urbana-Champaign. Yevgeniy Vorobeychik wrote this report.  

 

Video Transcript Understanding (W39) 

The AAAI Workshop on Video Transcript Understanding was held online on February 28, 2022. The goal of this workshop was to facilitate the study of video transcripts in future intelligent systems, with a particular focus on remote working, education, and social media. 

Videos have become an omnipresent source of knowledge: courses, presentations, conferences, documentaries, live streams, meeting recordings, and vlogs. This has created a strong demand for transcript understanding, even though transcripts contain a substantial amount of error and noise from spoken language. For example: how can we make the best of the knowledge that all these videos contain? How can we optimally present a transcript to the user, summarize the content of a video, extract the main events, and answer the user’s questions about the video? To address these major challenges, it is critical that we develop more reliable video transcript understanding systems that can handle the inherent imperfections of transcripts.  

The AAAI Workshop on Video Transcript Understanding brought together researchers from domains such as computer vision, speech recognition, text processing, and education to explore artificial intelligence solutions for this emerging medium. The workshop started with an application of video transcripts to helping students improve their reading skills, presented by Dr. Beigman Klebanov, a senior researcher at Educational Testing Service (ETS). It is very promising that transcript understanding has already helped beginning readers improve their reading ability. 

Since video transcript understanding has not yet attracted much attention from the community, fundamental research resources are either poor or nonexistent. One major theme of the workshop papers was therefore the development of resources for transcript studies. These papers described an annotation tool for creating audio/video-assisted datasets for tasks such as text classification and text segmentation of transcripts. Moreover, several fundamental transcript datasets were introduced for the first time at the workshop to standardize and facilitate research on both human speech and text. The second major theme of the workshop was the development of machine learning-based models for transcript-based applications such as tutorial video recommendation, key phrase extraction, and video recommendation from live-streaming videos on the Internet. These papers reminded the audience that understanding videos can benefit people’s lives in many ways, such as displaying video subtitles more meaningfully and providing short summaries of videos. 

The workshop participants discussed the practical difficulties of studying video transcripts and spoken language. The participants shared a positive belief that the workshop was useful in promoting this underrepresented field.  

Franck Dernoncourt and Viet Dac Lai served as co-chairs of this workshop. The workshop papers were published in the CEUR Workshop Proceedings. Viet Dac Lai wrote this report.  

 

Biographies 

 

David W. Aha is at the Navy Center for Applied Research in AI.  

 

Dean Alderucci is a Ph.D. Student at Carnegie Mellon University. 

 

Öznur Alkan is a Research Scientist at IBM Research. 

 

Guy Barash is at Western Digital.  

 

Ahmad Beirami is at Facebook AI.  

 

Simone Bianco is a PI and Director at BAI Computational Innovation Hub.  

 

Olivia Brown is a member of the Technical Staff in MIT Lincoln Laboratory’s AI Technology Group.  

 

Mauricio Castillo-Effen is a Senior Researcher and the Technical Leader of the Trustworthy Autonomous Systems Focus Area at Lockheed Martin’s Advanced Technology Laboratories in Arlington, VA. 

 

Chi-Hua Chen is a Professor at Fuzhou University. 

 

Xin Cynthia Chen is a Researcher on AI Safety at the University of Hong Kong, China.  

 

Elizabeth Daly is a Senior Technical Staff Member (STSM) at IBM Research.  

 

Julie Delon is at Université de Paris.  

 

Huáscar Espinoza is a Project Officer at KDT Joint Undertaking, in the European Commission, Belgium. 

 

Ali Etemad is at Queen’s University.  

 

Scott E. Fahlman is a Professor Emeritus at Carnegie Mellon University.  

 

Lixin Fan is a Principal Scientist in WebBank, China.  

 

Eitan Farchi is a Distinguished Engineer at IBM Research, Haifa.  

 

Ferdinando Fioretto is an Assistant Professor at Syracuse University.  

 

Behnam Hedayatnia is at Amazon.  

 

Chinmay Hegde is an Assistant Professor at New York University.  

 

José Hernández-Orallo is a Professor at the Universitat Politècnica de València, Spain. 

 

Seán S. Ó hÉigeartaigh is the Executive Director of Cambridge University’s Centre for the Study of Existential Risk and Programme Director at the Leverhulme Centre for the Future of Intelligence, UK. 

 

James Holt is at the Laboratory for Physical Sciences, USA. 

 

Xiaowei Huang is a Professor of Computer Science at the University of Liverpool, UK.  

 

Mark Keane is at University College Dublin, Dublin, Ireland.  

 

Parisa Kordjamshidi is an Assistant Professor at Michigan State University. 

 

Tarun Kumar is at IIT Madras.  

 

Viet Dac Lai is a Ph.D. Student at the University of Oregon. 

 

Hung-yi Lee is an Associate Professor at National Taiwan University.  

 

Chia-Yu Lin is an Assistant Professor at Yuan Ze University. 

 

Viliam Lisy is an Associate Professor in the Department of Computer Science, FEE, Czech Technical University in Prague.

 

Xiaomo Liu is at JP Morgan AI Research. 

 

Zhiqiang Ma is at JP Morgan AI Research. 

 

Prashan Madumal is at the University of Melbourne, Parkville, Australia.  

  

Richard Mallah is Director of AI Projects at the Future of Life Institute, US. 

 

Deepak Maurya is at Purdue University.  

 

John McDermid is Professor of Software Engineering at the University of York and Director of the Assuring Autonomy International Programme (AAIP), UK. 

 

Martin Michalowski is at the University of Minnesota.  

 

Facundo Mémoli is at The Ohio State University. 

 

Shane Moon is at Meta.  

 

Tom Needham is at Florida State University.  

 

Gabriel Pedroza is a Senior Research Engineer and Project Manager at CEA. 

 

Kuan-Chuan Peng is a Research Scientist at the Mitsubishi Electric Research Laboratories (MERL) in Cambridge, MA.  

 

Edward Raff is at Booz Allen Hamilton, USA. 

 

Balaraman Ravindran is at IIT Madras.  

 

Ahmad Ridley is at the National Security Agency.  

 

Dennis Ross is at the MIT Lincoln Laboratory, USA.  

 

Pritam Sarkar is at Queen’s University.  

 

Arash Shaban-Nejad is at the University of Tennessee Health Science Center-Oak Ridge National Laboratory. 

 

Onn Shehory is at Bar Ilan University.  

 

Adish Singla is at the Max Planck Institute for Software Systems.  

 

Arunesh Sinha is at Singapore Management University, Singapore.  

 

Diane Staheli is at the MIT Lincoln Laboratory, USA.  

 

Wolfgang Stammer is a Ph.D. Student at the Technical University of Darmstadt. 

 

Sin G. Teo is a Research Scientist at the Institute for Infocomm Research, A*STAR, Singapore.  

 

Stefano Teso is an Assistant Professor at the University of Trento.  

 

Silvia Tulli is at Sorbonne University, Paris, France.  

 

Amir Pouran Ben Veyseh is at the University of Oregon. 

 

Yevgeniy Vorobeychik is at Washington University in St. Louis. 

 

Segev Wassekrug is at IBM Research. 

 

Allan Wollaber is at the MIT Lincoln Laboratory, USA. 

 

Ling Wu is an Associate Professor at Fuzhou University. 

 

Ziyan Wu is a Principal Expert Scientist at United Imaging Intelligence in Cambridge, MA. 

 

Hongteng Xu is at Renmin University of China.  

 

Han Yu is a Nanyang Assistant Professor at Nanyang Technological University, Singapore.