Goonmeet Bajaj, Tarun Kumar, Zitao Liu, Deepak Maurya, Nitendra Rajput, Balaraman Ravindran, Maneet Singh, Biplav Srivastava, Shuohang Wang, Wenhao Yu
The Workshop Program of the Association for the Advancement of Artificial Intelligence’s 37th Conference on Artificial Intelligence (AAAI-23) was held in Washington, DC, USA on February 13-14, 2023. There were 32 workshops in the program: AI for Agriculture and Food Systems, AI for Behavior Change, AI for Credible Elections: A Call to Action with Trusted AI, AI for Energy Innovation, AI for Web Advertising, AI to Accelerate Science and Engineering, AI4EDU: AI for Education, Artificial Intelligence and Diplomacy, Artificial Intelligence for Cyber Security (AICS), Artificial Intelligence for Social Good (AI4SG), Artificial Intelligence Safety (SafeAI), Creative AI Across Modalities, Deep Learning on Graphs: Methods and Applications (DLG-AAAI’23), DEFACTIFY: Multimodal Fact-Checking and Hate Speech Detection, Deployable AI (DAI), DL-Hardware Co-Design for AI Acceleration, Energy Efficient Training and Inference of Transformer Based Models, Graphs and More Complex Structures for Learning and Reasoning (GCLR), Health Intelligence (W3PHIAI-23), Knowledge-Augmented Methods for Natural Language Processing, Modelling Uncertainty in the Financial World (MUFin’23), Multi-Agent Path Finding, Multimodal AI for Financial Forecasting (Muffin), Practical Deep Learning in the Wild (Practical-DL), Privacy-Preserving Artificial Intelligence, Recent Trends in Human-Centric AI, Reinforcement Learning Ready for Production, Scientific Document Understanding, Systems Neuroscience Approach to General Intelligence, Uncertainty Reasoning and Quantification in Decision Making (UDM’23), User-Centric Artificial Intelligence for Assistance in At-Home Tasks, and When Machine Learning Meets Dynamical Systems: Theory and Applications. This report contains summaries of the workshops, which were submitted by some, but not all, of the workshop chairs.
AI for Agriculture and Food Systems (W1)
An increasing world population, coupled with finite arable land, changing diets, and the growing expense of agricultural inputs, is poised to stretch our agricultural systems to their limits. By the end of this century, the earth’s population is projected to increase by 45% while available arable land decreases by 20%, along with changes in which crops that land can best support; this creates an urgent need to enhance agricultural productivity by 70% before 2050. Current rates of progress are insufficient, making it impossible to meet this goal without a technological paradigm shift. There is increasing evidence that enabling AI technology has the potential to aid in this paradigm shift. This AAAI workshop aims to bring together researchers from the core AI/ML, robotics, sensing, cyber-physical systems, agricultural engineering, plant sciences, genetics, and bioinformatics communities to facilitate the increasingly synergistic intersection of AI/ML with agriculture and food systems. Outcomes include outlining the main research challenges in this area, potential future directions, and cross-pollination between AI researchers and domain experts in agriculture and food systems.
AI for Behavior Change (W2)
In decision-making domains as wide-ranging as medication adherence, vaccination uptake, college enrollment, financial savings, and energy consumption, behavioral interventions have been shown to encourage people to make better choices. AI can play an important, and in some cases crucial, role in these areas by motivating and helping people take actions that maximize welfare. It is also important to be cognizant of any unintended consequences of leveraging AI in these fields, such as the problems of bias that algorithmic approaches can introduce, replicate, and/or exacerbate in complex social systems. A number of research trends are informing insights in this field. First, large data sources, both those conventionally used in the social sciences (EHRs, health claims, credit card use, college attendance records) and the relatively unconventional (social networks, wearables, mobile devices), are now available and are increasingly used to personalize interventions. These datasets can be leveraged to learn individuals’ behavioral patterns, identify individuals at risk of making sub-optimal or harmful choices, and target them with behavioral interventions to prevent harm or improve well-being. Second, psychological experiments that measure behavioral outcomes, conducted in laboratories and in the field in partnership with technology companies, are increasingly used to inform intervention design. Third, there is increasing interest in AI in moving beyond traditional supervised learning approaches toward learning causal models, which can support the identification of targeted behavioral interventions and flexible estimation of their effects. At the intersection of these trends is also the question of fairness: how to design or evaluate interventions fairly. These research trends motivate exploring the intersection of AI with behavioral science and causal inference, and how they can come together for applications in the social and health sciences. This workshop will build upon the success of the last two editions of the AI for Behavior Change workshop, and will focus on advances in AI and ML that aim to (1) study equitable exploration for unbiased behavioral interventions, (2) design and target optimal interventions, and (3) exploit datasets in domains spanning mobile health, social media use, electronic health records, college attendance records, fitness apps, etc. for causal estimation in behavioral science.
AI for Credible Elections: A Call to Action with Trusted AI (W3)
This brief report presents highlights from the day-long workshop at AAAI-23 on how Artificial Intelligence (AI)-related technologies can be used to address challenges in conducting elections, exploring issues at the intersection of AI, cybersecurity, and journalism.
The second workshop on “AI for Credible Elections: A Call to Action” [1] was held at the AAAI 2023 conference on February 14, 2023 in physical-first hybrid mode, exactly 15 months after the first workshop, held virtually at NeurIPS 2021 on December 14, 2021 [2]. The workshop explored the challenges of credible elections globally in an academic setting, with apolitical discussion of significant issues at the intersection of Artificial Intelligence (AI), security, and journalism. The invited speakers (3), panels (3), and peer-reviewed papers (6) discussed current and best practices, gaps, and likely AI-based interventions, with speakers from the United States (US, representing one of the world’s oldest democracies), India (representing the largest democracy), Ireland, Canada, and Brazil. The event also drew interest and participation from the Election Commissions of India and Ireland, the US National Institute of Standards and Technology (NIST), and the US National Science Foundation (NSF).
A few highlights of the day are discussed below; see the recording for details [1]. In his invited talk, Prof. Anupam Joshi of UMBC, USA, observed that although misinformation has long been a part of warfare, influence operations today mix facts, opinions, and mis/disinformation to push a particular narrative. Detecting such narratives helps us tackle and respond to adversaries whose direct target is “hearts and minds”. Prof. Nicole Goodman of Brock University, Canada gave an insightful talk about digital elections in Canada, how issues and risks around them are managed, and how related incidents affect people’s perception of the trustworthiness of elections. The third invited talk was by Prof. Marcos Antonio Simplicio Junior of Universidade de São Paulo, Brazil. He described the election process in the country and how electronic voting has been questioned despite efforts to make the systems transparent, including making source code available for review.
The first panel discussion was moderated by Prof. Anita Nikolich of UIUC, USA and featured Dr. Jeremy Epstein from NSF, Dr. Ashish Kundu from Cisco Research, and Prof. Uwe Serdult from Ritsumeikan University, Japan and the University of Zurich, Switzerland. They discussed AI trends, security gaps in elections, and the lack of a standard secure stack for building trusted data-driven election applications. In particular, Uwe talked about voting advice applications (VAAs) that can help voters decide on a candidate, Jeremy pointed out that NSF has been funding more technology efforts such as AI for designing ballots, and Ashish discussed supply chain security for election voting machines. The second panel discussion was moderated by Prof. Biplav Srivastava of the University of South Carolina, USA and featured Dr. Neeta Verma from India’s Election Commission, Dr. Deepak P. from Queen’s University Belfast, Northern Ireland, and Prof. Dan Wallach from Rice University, USA. They discussed how AI and technology are being used to make the election process work and how it can be improved. Neeta talked about the challenges of holding elections and keeping information up to date, Deepak argued that AI should not undermine human agency, and Dan pointed to DEFCON, where events related to voting security take place. From the audience, Mary Clare of Ireland’s Election Commission requested the report on election integrity and technology that was mentioned. The third panel discussion was moderated by Prof. Andrea Hickerson of the University of Mississippi, USA and featured Prof. Sagar Samtani of Indiana University, USA, Maurice Turner of Turner Consulting, USA, and Dave Levinthal of Raw Story, USA. They discussed the changing role of journalists in the age of AI and what policy steps are needed to adopt technology for a better-informed citizenry. Sagar compared the impact of AI on journalism to its impact on other professions involved in content generation, Maurice discussed processes that help when content is wrong and how these need to be further strengthened with AI, and Dave raised the issue of plagiarism and the need for accountability.
Six peer-reviewed papers were presented, covering topics such as the role of AI in election processes, validating votes, detecting political affiliation, and disseminating official information. Some have been accepted for a forthcoming AI Magazine special issue on the theme of the workshop series.
It is instructive to revisit the lessons from the first workshop. The research gaps identified there included creating precise definitions for credible elections, conducting more studies on how people should be informed of election information, bringing transparency and standardization to the assessment of the election process, and building decision-support tools to help voters navigate candidates and issues. There has been some progress in these areas since the first workshop, and the discussion in the papers and talks reflected it.
We now combine the lessons of the two workshops to synthesize a common message. Much effort is needed to improve a democracy, and elections are a key part of that. Democracy means empowering the voter with the right to choose, which entails multiple capabilities, including knowing about candidates, campaign finance, voting, processing votes, etc. People around the world are worried about their democracies. In this regard, the workshop considered the challenges and opportunities of using technology, especially AI, from multiple perspectives around the world.
● A multi-pronged solution is needed: process, people, and technology.
● Many problems have already been tackled at some scale (voter identification, voting technologies), and solutions can be easily reused across geographies.
● How a technology is designed, and how issues around its usage are handled, can affect voters’ trust in it. Electronic voting has a transparency problem, which a paper trail helps mitigate somewhat.
● The different jurisdictions in the US have a lot of freedom in organizing elections but, as a result, also chaos. It is an open question whether this is desirable for credible elections.
● AI can specifically help elections by disseminating official information (e.g., about candidates and the electoral process) personalized to a voter’s cognitive needs at scale, in their language and format. When AI helps prevent mis- and dis-information, the impact can be far-reaching.
Biplav Srivastava (University of South Carolina), Anita Nikolich (University of Illinois Urbana-Champaign), Andrea Hickerson (University of Mississippi), Tarmo Koppel (Tallinn University), Chris Dawes (New York University), and Sachindra Joshi (IBM Research) served as cochairs of the workshop. The recording, photos, and papers of the workshop are available at the workshop site (https://sites.google.com/view/aielections/).
AI for Energy Innovation (W4)
In light of pressing and transformative global needs for equitable and secure access to clean, affordable, and sustainable energy, as well as the significant investment provided by governments and industries, the alignment of R&D efforts on automation and AI across the entire spectrum, from fundamental to applied energy sciences, is timelier than ever. Despite recent monumental AI progress and widespread interest, there may be disconnects between the AI frontier and energy-focused research. We envision a near future where energy systems are as intelligent as the most adept AI systems in existence, with energy resources equipped with smart functionalities to operate effectively under uncertainty, volatility, and threats; where communities empower their lives with reliable and sustainable energy; and where the entire AI community undertakes the challenge of providing solutions and inspiration for sustained energy innovation. This workshop will invite AAAI-23 attendees, researchers, practitioners, sponsors, and vendors from academia, government agencies, and industry, who will present diverse views and engage in fruitful conversations on how innovation in all aspects of AI may support and propel further energy innovation.
AI for Web Advertising (W5)
With the popularity of various forms of e-commerce, web advertising has become a prominent channel that businesses use to reach customers. It leverages the Internet to promote products and services to audiences and has become an important revenue source for many Internet companies, such as online social media platforms and search engines.
AI techniques have been used extensively throughout the pipeline of a web advertising system, including retrieval, ranking, and bidding. Despite this remarkable progress, many unsolved and emerging issues remain in applying state-of-the-art AI techniques to web advertising, such as the “cold-start” problem; the trade-off between serving accuracy and efficiency in online AI systems; data privacy protection; and big data management.
This workshop targets these and other relevant issues, aiming to create a platform for people from academia and industry to communicate their insights and recent results.
AI to Accelerate Science and Engineering (W6)
Scientists and engineers in diverse application domains are increasingly relying on computational and artificial intelligence (AI) tools to accelerate scientific discovery and engineering design. AI, machine learning, and reasoning algorithms are useful for building models and making decisions toward this goal. We have already seen several success stories of AI in applications such as materials discovery, ecology, wildlife conservation, and molecule design optimization. This workshop aims to bring together researchers from AI and diverse science/engineering communities to achieve the following goals:
- Identify and understand the challenges in applying AI to specific science and engineering problems.
- Develop, adapt, and refine AI tools for novel problem settings and challenges.
- Community-building and education to encourage collaboration between AI researchers and domain area experts.
AI4EDU: AI for Education (W7)
The AI4EDU workshop aims to discuss recent research progress and advances in handling the AI challenges encountered in education. We engaged AI in Education (AIED) enthusiasts through keynote talks, workshop paper presentations, and a worldwide AIED challenge to encourage the sharing of insights and the development of practical, large-scale AIED methods. More than 100 academic researchers and industrial practitioners from well-known universities and companies participated in the workshop, online and in person.
Specifically, the workshop aimed to present the latest research progress in the application of AI to education and to discuss recent advances in handling challenges that arise in AI educational practice. It brought together members of the AI community to discuss the potential of AI in education and the unique challenges it poses, such as data sparsity, lack of labeled data, and privacy issues. The workshop was built upon past successful AAAI workshops, symposiums, and tutorials on AI for Education.
The workshop used three distinct channels to engage AIED enthusiasts from all over the world. First, we invited established researchers from the AIED community to give talks that described their visions for connecting AIED communities, summarized well-developed AIED research areas, or presented promising ideas for new AIED research directions. Second, we called for workshop paper submissions and cross-submissions related to a wide range of AI domains for education. Finally, we hosted a global challenge on CodaLab to enable a fair comparison of state-of-the-art knowledge tracing models, and requested technical reports from the winning teams. These initiatives provided a platform for researchers to share their cutting-edge insights on AIED and promoted the development of practical, large-scale AIED methods with a lasting impact.
Specifically, we had the honor of inviting two distinguished AI pioneers, Professor Tom Mitchell and Professor Sidney D’Mello, as our keynote speakers. Professor Tom Mitchell is the Dean of the School of Computer Science at Carnegie Mellon University in the United States, and he gave an insightful keynote speech on the topic of “Where Can AI Take Education by 2030.” Professor Sidney D’Mello is the director of the “Teacher-Computer-Student” collaborative AI research institute at the University of Colorado Boulder in the United States, and he presented on the topic of “From Autonomy to Synergy: Envisioning Next Generation Human-AI Partnerships.” Both speakers delivered excellent presentations, leaving the audience with valuable insights and ideas to ponder. In addition to these keynote talks, we invited academic experts from top universities and practitioners from industry to share their cutting-edge AI progress in education scenarios. Furthermore, we accepted 15 of 28 submitted papers this year. All submissions were peer-reviewed, and the accepted papers were presented orally during the workshop.
Meanwhile, we hosted a global knowledge tracing challenge that called on researchers and practitioners worldwide to investigate opportunities for improving student assessment via knowledge tracing approaches with rich side information. The competition attracted 37 organizations from 8 countries, including the United States, the United Kingdom, Australia, and Singapore, with a total of 116 participants. This AAAI 2023 Global Knowledge Tracing Challenge took a first step in the exploration of AI-powered personalized learning and set a successful example for future AI-related events.
In conclusion, the AAAI2023 AI4EDU workshop provided an excellent platform for AIED enthusiasts to share their latest research progress and practical experiences. The workshop brought together a diverse group of academic and industry experts from around the world, who shared valuable insights on the opportunities and challenges of applying AI to education. The success of this workshop sets a precedent for future events related to AI in education and reinforces the importance of continued research and development in this field.
Zitao Liu, Weiqi Luo, Shaghayegh Sahebi, Yu Lu, Richard Tong, Jiahao Chen, and Qiongqiong Liu served as cochairs of this workshop. This report was written by Zitao Liu from the Guangdong Institute of Smart Education, Jinan University, China.
This workshop was supported in part by the National Key R&D Program of China, under Grant No. 2020AAA0104500; in part by Beijing Nova Program (Z201100006820068) from Beijing Municipal Science & Technology Commission; in part by Key Laboratory of Smart Education of Guangdong Higher Education Institutes, Jinan University (2022LSYS003); and in part by the United States National Science Foundation, under Grant Numbers 204750 and 1917949.
Artificial Intelligence and Diplomacy (W8)
Advances in AI and advanced data analytics are having considerable policy-related, geopolitical, economic, societal, legal, and security impacts. Recent global challenges, such as the COVID-19 pandemic, concerns related to representative governments and associated democratic processes, and the importance of advanced data analytics and the potential use of AI-enabled systems in conflicts such as the war in Ukraine, underscore the importance of the topic of AI and diplomacy. Diplomats, ambassadors, and other government representatives may lack the technical understanding of AI and advanced data analytics needed to address challenges in these domains, while the technical AI and data communities often lack a sophisticated understanding of the diplomatic processes and opportunities necessary for addressing AI challenges internationally. This workshop will explore the impact of advances in both artificial intelligence and advanced data analytics. This includes considering the broad impact of AI as well as of data collection and curation globally, focusing especially on the impact that AI and data have on the conduct and practice of diplomacy.
Artificial Intelligence for Cyber Security (AICS) (W9)
The workshop will focus on the application of AI to problems in cyber-security. Cyber systems generate large volumes of data, and utilizing it effectively is beyond human capabilities. Additionally, adversaries continue to develop new attacks. The workshop will address AI technologies and their security applications, including machine learning, game theory, natural language processing, knowledge representation, automated and assistive reasoning, and human-machine interaction.
This year, AICS will emphasize practical considerations in the real world, with a special focus on social attacks, that is, attacks on the human in the loop to gain access to critical systems.
In general, AI techniques are still not widely adopted in many real-world cyber-security settings. There are many reasons for this, including practical constraints (power, memory, etc.), the lack of formal guarantees within a practical real-world model, and the lack of meaningful, trustworthy explanations. Moreover, in response to improved automated systems security (better hardware security, better cryptographic solutions), cyber criminals have amplified their efforts on social attacks such as phishing and spreading misinformation. These large-scale attacks are cheap and need only succeed for a tiny fraction of all attempts to be effective. Thus, AI assistive techniques that are robust to human error and resistant to manipulation can be very beneficial.
Artificial Intelligence for Social Good (AI4SG) (W10)
The field of Artificial Intelligence stands at an inflection point, and there are many different directions in which the future of AI research could unfold. Accordingly, there is growing interest in ensuring that current and future AI research is used in a responsible manner for the benefit of humanity (i.e., for social good). To achieve this goal, a wide range of perspectives and contributions are needed, spanning the full spectrum from fundamental research to sustained real-world deployments.
This workshop will explore how AI research can contribute to solving challenging problems faced by current-day societies. For example, what role can AI research play in promoting health, sustainable development, and infrastructure security? How can AI initiatives be used to achieve consensus among a set of negotiating self-interested entities (e.g., finding resolutions to trade talks between countries)? To address such questions, this workshop will bring together researchers and practitioners across different strands of AI research and a wide range of important real-world application domains. The objective is to share the current state of research and practice, explore directions for future work, and create opportunities for collaboration. The workshop will be a natural complement to the AAAI Special Track on AI for Social Impact, as it will provide a forum where researchers interested in this area can connect in a more direct way.
The workshop complements the objectives of the main conference by providing a forum for AI algorithm designers, such as those working in agent-based modelling, machine learning, spatio-temporal models, deep learning, explainable AI, fairness, social choice, non-cooperative and cooperative game theory, convex optimization, and planning under uncertainty, to engage with innovative and impactful real-world applications. Specifically, the workshop serves two purposes. First, it will provide an opportunity to showcase real-world deployments of AI research. More often than not, unexpected practical challenges emerge when solutions developed in the lab are deployed in the real world, which makes it difficult to utilize complex and well-thought-out computational/modeling advances. Learning about the challenges faced in these deployments will help us understand the lessons of moving from the lab to the real world. Second, the workshop will provide opportunities to showcase AI systems that dynamically adapt to changing environments, are robust to errors in execution and planning, and handle the uncertainties of different kinds that are common in the real world. Addressing these challenges requires collaboration among different communities, including machine learning, game theory, operations research, social science, and psychology. This workshop is structured to encourage a lively exchange of ideas among members of these communities. We encourage submissions from: (i) computer scientists who have used (or are currently using) their AI research to solve important real-world problems for society’s benefit in a measurable manner; (ii) interdisciplinary researchers combining AI research with various disciplines (e.g., social science, ecology, climate, health, psychology, and criminology); and (iii) engineers and scientists from organizations who aim for social good and look to build real systems.
Artificial Intelligence Safety (SafeAI) (W11)
The accelerated developments in the field of Artificial Intelligence (AI) hint at the need to consider safety as a design principle rather than an option. However, theoreticians and practitioners of AI and safety are confronted with different levels of safety, different ethical standards and values, and different degrees of liability, which force them to examine a multitude of trade-offs and alternative solutions. These choices can only be analyzed holistically if the technological and ethical perspectives are integrated into the engineering problem, considering both the theoretical and practical challenges of AI safety. A new and comprehensive view of AI safety must cover a wide range of AI paradigms, including systems that are application-specific as well as those that are more general, and must consider potentially unanticipated risks. In this workshop, we want to explore ways to bridge short-term with long-term issues, idealistic with pragmatic solutions, operational with policy issues, and industry with academia, to build, evaluate, deploy, operate, and maintain AI-based systems that are demonstrably safe.
This workshop seeks to explore new ideas on AI safety with particular focus on addressing the following questions:
- What is the status of existing approaches in ensuring AI and Machine Learning (ML) safety, and what are the gaps?
- How can we engineer trustable AI software architectures?
- How can we make AI-based systems more ethically aligned?
- What safety engineering considerations are required to develop safe human-machine interaction?
- What AI safety considerations and experiences are relevant from industry?
- How can we characterize or evaluate AI systems according to their potential risks and vulnerabilities?
- How can we develop solid technical visions and new paradigms about AI Safety?
- How do metrics of capability and generality, and the trade-offs with performance affect safety?
The main interest of the workshop is a new perspective on systems engineering in which multiple disciplines, such as AI and safety engineering, are viewed as a larger whole, while considering ethical and legal issues, in order to build trustable intelligent autonomy.
Creative AI Across Modalities (W12)
For the past few years, we have witnessed eye-opening generation results from AI foundation models such as GPT-3 and DALL-E 2. These models have established great infrastructure for new types of creative generation across modalities such as language (e.g., story generation), images (e.g., text-to-image generation, fashion design), and audio (e.g., lyrics-to-music generation). Researchers in these fields encounter many similar challenges, such as how to use AI to help professional creators, how to evaluate the creativity of an AI system, how to boost the creativity of AI, and how to avoid negative social impact. There have been various workshops focusing on particular aspects of AI generation. This workshop aims to bring together researchers and practitioners from NLP, computer vision, music, ML, and other computational fields in the first workshop on “Creative AI Across Modalities”.
Deep Learning on Graphs: Methods and Applications (DLG-AAAI’23) (W13)
Deep learning models are at the core of artificial intelligence research today. It is well known that the deep learning techniques that were disruptive for Euclidean data such as images, or sequence data such as text, are not immediately applicable to graph-structured data. This gap has driven a wave of research on deep learning for graphs, addressing tasks such as graph representation learning, graph generation, and graph classification. New neural network architectures for graph-structured data have achieved remarkable performance on these tasks when applied to domains such as social networks, bioinformatics, and medical informatics.
This one-day workshop aims to bring together academic researchers and industrial practitioners from different backgrounds and perspectives to address the above challenges. The workshop will consist of contributed talks, contributed posters, and invited talks on a wide variety of methods and applications; work-in-progress papers, demos, and visionary papers are also welcome. The workshop intends to share visions for investigating new approaches and methods at the intersection of graph neural networks (GNNs) and real-world applications, and to foster discussion of a wide range of topics of emerging importance for GNNs.
Deployable AI (DAI) (W15)
Deploying AI models in the real world requires addressing several fundamental research questions and issues spanning algorithmic, systemic, and societal aspects. It is crucial to carry out progressive research in this domain and to study the deployability of AI models in ways that ensure positive impacts on society. In this workshop, we focus on research that proposes models usable as real-world solutions and on techniques/strategies that enable and ensure the ideal deployment of AI models while adhering to various standards.
DL-Hardware Co-Design for AI Acceleration (W16)
As deep learning (DL) continues to permeate all areas of computing, algorithm engineers are increasingly relying on hardware system design solutions to improve the efficiency and performance of deep learning models. However, the vast majority of DL studies rarely consider limitations such as the power/energy, memory footprint, and model size of real-world computing platforms, and consider even less the computational speed of hardware systems and their computational characteristics. Addressing all of these metrics is critical if advances in DL are to be widely used on real device platforms and scenarios, especially those with high requirements for computational efficiency, such as mobile devices and AR/VR. It is therefore desirable to design and optimize both the DL models and the hardware computing platforms. The workshop provides a venue for the international research community to share challenges and solutions spanning deep neural network learning and computing system platforms, with a focus on accelerating AI technologies on real system platforms through DL-hardware co-design.
Energy Efficient Training and Inference of Transformer Based Models (W17)
Transformers are the foundation of today’s large language models, and recent successes of Transformer-based models in image classification and action prediction indicate their wide applicability. In this workshop, we want to focus on leading Transformer-based models such as Google’s PaLM: the key observations on model performance, optimizations for inference, and the power consumption of both mixed-precision inference and training.
The goal of this workshop is to provide a forum for researchers and industry experts who are exploring novel ideas, tools, and techniques to improve the energy efficiency of machine learning and deep learning, as practiced today and as it will evolve in the next decade. We envision that only through close collaboration between industry and academia will we be able to address the difficult challenges and opportunities of reducing the carbon footprint of AI and its uses. We have tailored our program to best serve the participants in a fully digital setting. Our forum facilitates an active exchange of ideas through:
- Keynotes, invited talks and discussion panels by leading researchers from industry and academia
- Peer-reviewed papers on latest solutions including works-in-progress to seek directed feedback from experts
- Independent publication of proceedings through IEEE CPS
Graphs and More Complex Structures for Learning and Reasoning (GCLR) (W18)
The third workshop on Graphs and More Complex Structures for Learning and Reasoning (GCLR) was conducted with the objective of promoting interdisciplinary discussion among researchers from diverse fields such as computer science, mathematics, statistics, physics, and beyond. The event met with a positive response from participants around the globe, demonstrating the importance of the subject matter.
In the opening keynote, Prof. Madhav Marathe, Distinguished Professor in Biocomplexity, director of the Network Systems Science and Advanced Computing (NSSAC) Division of the Biocomplexity Institute and Initiative, and tenured Professor of Computer Science at the University of Virginia, delivered a talk on graphical dynamical systems (GDS) for modeling and representing large co-evolving bio-social habitats. These habitats are often represented as co-evolving complex networks, whose size and co-evolutionary nature make reasoning complicated. The opening talk set the momentum for discussion between speakers and participants that continued until the end of the workshop. Prof. Aditya Prakash, Associate Professor of Computing at the Georgia Institute of Technology, Atlanta, continued the discussion on complex networks by presenting models that can learn latent graphs from multivariate time series, and went into depth on autoregressive models for graph generation and stochastic algorithms for network design. Prof. Nitesh Chawla of the University of Notre Dame, Indiana, shared his group’s work on representing higher-order dependencies in different types of networks. Specifically, he presented work on Higher-Order Network (HON) representation learning, which embeds dependencies of various orders in a structured network. Whereas most representation learning methods are limited to first-order Markovian processes, HONs can discover trends at different orders of a network. He also shared many real-world examples where HONs led to discoveries that existing network analysis methods could not accomplish.
Dr. Nesreen Ahmed, a senior member of the research staff at Intel Labs, presented her work on reasoning about relationships in network data, with applications in knowledge graphs, social networks, and systems problems. Reasoning about these relationships is central to understanding such systems and has a huge economic impact across many applications.
The workshop also featured two exciting talks on deep learning-based approaches for different complex graphical structures. Dr. Sanjukta Krishnagopal, a UC presidential postdoctoral fellow at UC Berkeley and UCLA, presented her work on learning mechanisms in graph neural networks; her use of the neural tangent kernel unwraps the weight dynamics of a wide graph neural network during learning. Dr. Saket Gurukar, a staff researcher at Samsung Research, presented his work on obtaining embeddings for large-scale heterogeneous graphs, with a case study of the pin-board graph at Pinterest. His approach, which breaks a heterogeneous graph into multiple disjoint bipartite graphs and then combines their signals in a novel, data-efficient MultiBiSage model, is the state-of-the-art methodology for such tasks.
In addition to the talks, the workshop received many high-quality submissions. Our program committee consisted of more than 60 researchers with diverse areas of expertise. All paper submissions received at least three constructive reviews, and many received five. Based on the reviews, 10 high-quality papers were accepted, and their authors presented their work in the poster session.
The audience was very attentive and asked several interesting questions during the keynote talks and the poster session, which made this hybrid event very interactive. We believe some attendees made new friends at the GCLR workshop, which may lead to future collaborations.
The GCLR workshop was co-organized by Balaraman Ravindran (IIT Madras), Ginestra Bianconi (Queen Mary University of London), Philip S. Chodrow (Middlebury College), Srinivasan Parthasarathy (Ohio State University), Tarun Kumar (Hewlett-Packard Labs), Deepak Maurya (Purdue University), Anasua Mitra (Eli Lilly & Co.), M S B Roshan (IIT Madras) and Goonmeet Bajaj (Ohio State University).
Health Intelligence (W3PHIAI-23) (W19)
The integration of information from now widely available -omics and imaging modalities, at multiple time and spatial scales, with personal health records has become the standard of disease care in modern public health. Moreover, given the ever-increasing role of the World Wide Web as a source of information in many domains, including healthcare, accessing, managing, and analyzing its content has brought new opportunities and challenges. Advances in web science and technology for data management, integration, mining, classification, filtering, and visualization have given rise to a variety of applications representing real-time data on epidemics.
Furthermore, to tackle and overcome several issues in personalized healthcare, information technology will need to evolve to improve communication, collaboration, and teamwork among patients, their families, healthcare communities, and care teams involving practitioners from different fields and specialties. All these changes require novel solutions, and the AI community is well-positioned to provide both theoretical- and application-based methods and frameworks.
Knowledge-Augmented Methods for Natural Language Processing (W20)
Knowledge is vital for intelligent NLP, but current large-scale models like ChatGPT often hallucinate because they only learn from language modeling and ignore external knowledge (e.g., world facts, news). This workshop brought together researchers from academia and industry to discuss how to augment NLP with knowledge, and drew over 50 in-person and 20 virtual attendees, making it a popular AAAI workshop.
The field of NLP has seen remarkable advancements in recent years, as demonstrated by large-scale models such as ChatGPT. Large language models have proven effective at capturing linguistic patterns in text and generating high-quality, context-aware representations. However, their training method, which relies solely on input-output pairs, limits their ability to incorporate external knowledge (e.g., updated world facts, trending news), often leading to hallucinations in generated content. To reach higher levels of intelligence, knowledge is a crucial component that cannot be obtained through statistical learning of input text patterns alone.
At AAAI 2023 (February 7-14, 2023), six researchers (Chenguang Zhu, Shuohang Wang, Meng Jiang, Wenhao Yu, Lu Wang, and Huan Sun) from four institutions (Microsoft Cognitive Research, University of Notre Dame, University of Michigan, and Ohio State University) held the first workshop on knowledge-augmented methods for NLP. The workshop attracted more than 50 people in person and 20 virtually, making it one of the most popular events at AAAI.
The workshop invited four keynote speakers from academia and industry: Dr. Scott Wen-tau Yih (Meta AI – FAIR), Prof. Amit Sheth (University of South Carolina), Prof. Jordan Boyd-Graber (University of Maryland), and Prof. Chandan Reddy (Virginia Tech). In addition, we held a panel discussion on various topics spanning knowledge, NLP, and large language models, in which five panelists shared their insights and experiences on how to incorporate knowledge into NLP models to make them more efficient, scalable, and intelligent.
The workshop received 35 submissions from 50 institutions (the top three being Amazon, UIUC, and Stanford), and 26 papers were accepted, covering a wide range of topics including retrieval-, knowledge graph-, and commonsense-augmented models, knowledge-enhanced language model pre-training, new benchmark datasets, and surveys.
At the start of the event, Prof. Amit Sheth delivered a keynote speech on the topic of “From NLP to NLU: Why we need varied, comprehensive, and stratified knowledge, and how to use it for Neuro-symbolic AI”. In his talk, he emphasized the three crucial dimensions of Why, What, and How in the utilization of knowledge in neuro-symbolic AI systems.
Next, Prof. Chandan Reddy’s presentation focused on “Deep Learning for Code Understanding and Generation: Challenges and Opportunities”. He discussed the potential of pre-trained programming language models (PLMs) on large code repositories for various code-related tasks.
Prof. Jordan Boyd-Graber then gave a talk “Raw Knowledge vs. Understanding: What Adversarial QA Reveals about the Limits of AI”. He examined recent research on adversarial QA systems, putting professional writers in front of the system for question writing and fact-checking.
Later in the afternoon, Dr. Scott Yih gave a presentation on “Efficient & Scalable NLP through Retrieval-Augmented Language Models”. He emphasized that information retrieval is a crucial aspect in developing the next generation of AI, which can efficiently access and incorporate various forms of knowledge through a compact core model and retrieval system.
The panel discussion was hosted by Chenguang Zhu; the five panelists, Rachel Rudinger (University of Maryland), Niket Tandon (Allen Institute for AI), Rui Zhang (Penn State University), Thomas Pellissier Tanon (Lexistems), and Emily Ching (Microsoft), shared their perspectives on five questions related to knowledge-augmented NLP models, including the usefulness of additional knowledge in the era of large language models and the benefits of multimodal knowledge.
Overall, the workshop provided a comprehensive overview of the current state-of-the-art in knowledge-augmented natural language processing and highlighted future directions for research in this field. The presentations and discussions provided a valuable platform for researchers and practitioners to exchange ideas and collaborate on future work.
Chenguang Zhu, Shuohang Wang, Meng Jiang, Wenhao Yu, Huan Sun, and Lu Wang served as co-chairs of this workshop. This report was written by Wenhao Yu (PhD student in the Department of Computer Science and Engineering, University of Notre Dame) and Shuohang Wang (senior researcher on the Knowledge and Language Team, Microsoft Cognitive Services Research).
Modelling Uncertainty in the Financial World (MUFin’23) (W21)
Among other things, Covid-19 provided stark proof that uncertainty is real, and it is here to stay. Perhaps nothing is more sensitive to uncertainty than the financial world. Compounding this, while Artificial Intelligence techniques are used to predict the future state of events, their performance is significantly impacted by disruptions not captured in past data. It is thus imperative for the research community to explore, identify, analyse, and address such uncertainties to develop robust models applicable in real-world scenarios. To this effect, the goal of this workshop is to bring academics and industry experts together to discuss this important, timely, and yet-unsolved area of modelling uncertainties in the financial world.
MUFin’23 was a full-day workshop. The program included nine accepted paper presentations and two keynotes from experts in the financial domain, from both industry and academia. The day started by setting the context and defining the need for such a workshop: although the world is now largely past the pandemic, the past few years made it clear that uncertainty is real and can be very disruptive. There were many examples where the performance of AI systems was impacted as spending patterns in the consumer world and churn in the merchant world changed significantly. This setting gave the workshop an opportunity to bring focus to problems and solutions in the financial world that relate to uncertainty.
Continuing with this setting, the first talk was a keynote by Tucker Balch from J.P. Morgan Research, who provided a glimpse into the uncertainty-related AI work in his group. As a large investment and retail bank, his organization deals with quite a bit of uncertainty. In his talk, he presented solutions using time-series prediction methods that account for uncertainty, synthetic data approaches, models of behavior, and simulation. He also noted that conducting comparative research in the financial world is restricted by the lack of publicly available data; in that context, he pointed to several datasets that his team has released for researchers to use, available at https://www.jpmorgan.com/synthetic-data.
This was followed by the first set of accepted paper presentations. Five papers were presented in this session, covering a broad range of topics from fraud detection to price prediction and bias mitigation, all with a flavour of uncertainty and the AI techniques to handle it. The second session of four accepted papers looked at uncertainty arising from misinformation, economic conditions, the general stock market, and global conditions. The accepted papers provided strong evidence that uncertainty, AI, and finance form an important combination that is already being actively worked on.
The workshop concluded with the second invited talk, by Prof. Anita Raja of Hunter College at the City University of New York, on mitigating uncertainty in mission-critical AI systems. She highlighted sources of uncertainty and bias inherent in domains such as finance and health, and then discussed the ethical concerns and the types of bias and uncertainty in the objective functions of AI systems designed to handle these applications at scale. This motivated the need for Responsible AI as it relates to accountability, liability, and culpability.
The workshop was very engaging, with insightful comments and questions from the audience.
Bonnie Buchanan, Karamjit Singh, Maneet Singh, Nitendra Rajput, Shraddha Pandey, and Srijan Kumar served as co-chairs of this workshop. This report was written by Nitendra Rajput and Maneet Singh.
Multi-Agent Path Finding (W22)
Multi-Agent Path Finding (MAPF) requires computing collision-free paths for multiple agents from their current locations to given destinations in a known environment. Example applications range from robot coordination to traffic management. In recent years, researchers from artificial intelligence, robotics, and theoretical computer science have explored different variants of the MAPF problem, as well as various approaches with different properties. The purpose of this workshop is to bring these researchers together to present their research, discuss future research directions, and cross-fertilize the different communities.
Multimodal AI for Financial Forecasting (Muffin) (W23)
Financial forecasting is an essential task that helps investors make sound investment decisions and create wealth. With increasing public interest in trading stocks, cryptocurrencies, bonds, commodities, currencies, and non-fungible tokens (NFTs), there have been several attempts to utilize unstructured data for financial forecasting. Unparalleled advances in multimodal deep learning have made it possible to utilize multimedia such as textual reports, news articles, streaming video content, audio conference calls, user social media posts, and customer web searches to identify profit-creation opportunities in the market. For example, how can we leverage new and better information to predict movements in stocks and cryptocurrencies well before others? However, there are several hurdles to realizing this goal: (1) large volumes of chaotic data; (2) the non-trivial combination of text, audio, video, social media posts, and other modalities; (3) long media contexts spanning hours, days, or even months; (4) stock/crypto price movement and volatility driven by user sentiment and media hype; (5) the difficulty of automatically capturing market-moving events using traditional statistical methods; and (6) misinformation and the non-interpretability of financial systems, which can lead to massive losses and bankruptcies.
To address these major challenges, the workshop on Multimodal AI for Financial Forecasting (Muffin) at AAAI 2023 aims to bring together researchers from the natural language processing, computer vision, speech recognition, machine learning, statistics, and quantitative trading communities to expand research at the intersection of AI and financial time-series forecasting. To further motivate work on unsolved problems in this domain, the workshop is organizing two shared tasks: (1) Stock Price and Volatility Prediction post Monetary Conference Calls and (2) Cryptocurrency Bubble Detection.
Practical Deep Learning in the Wild (Practical-DL) (W24)
Deep learning has achieved great success for artificial intelligence (AI) in many advanced tasks, such as computer vision, natural language processing, and robotics. However, research also shows that the performance of these models in the wild, on open-world data and scenarios, is far from practical. Beyond accuracy, which is the usual concern in deep learning, this phenomenon relates closely to model efficiency and robustness, which we abstract as Practical Deep Learning in the Wild (Practical-DL).
Regarding model efficiency, in contrast to an idealized environment, it is impractical to train a huge neural network containing billions of parameters on a large-scale, high-quality dataset and then deploy it to an edge device. Meanwhile, considering model robustness, noisy input data occur frequently in open-world scenarios, which presents critical challenges for building robust AI systems in practice. Moreover, existing research shows that there is a trade-off between the robustness and accuracy of deep learning models, and in the context of efficient deep learning with limited resources, it is even more challenging to achieve a good trade-off while satisfying efficiency requirements. These complex demands have profound implications and generate an explosion of interest in the topic of this Practical-DL workshop at AAAI 2023: building practical AI with efficient and robust deep learning models.
Privacy-Preserving Artificial Intelligence (W25)
The availability of massive amounts of data, coupled with high-performance cloud computing platforms, has driven significant progress in artificial intelligence and, in particular, machine learning and optimization. It has profoundly impacted several areas, including computer vision, natural language processing, and transportation. However, the use of rich data sets also raises significant privacy concerns: They often reveal personal sensitive information that can be exploited, without the knowledge and/or consent of the involved individuals, for various purposes including monitoring, discrimination, and illegal activities. In its fourth edition, the AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI-23) provides a platform for researchers, AI practitioners, and policymakers to discuss technical and societal issues and present solutions related to privacy in AI applications. The workshop will focus on both the theoretical and practical challenges related to the design of privacy-preserving AI systems and algorithms and will have strong multidisciplinary components, including soliciting contributions about policy, legal issues, and the societal impact of privacy in AI.
Recent Trends in Human-Centric AI (W26)
Human-Centric Artificial Intelligence is the notion of developing and using AI systems to help enhance, augment, and improve the quality of human life. Naturally, this paradigm involves two major components: (1) human-centered computing and representation learning, and (2) responsible AI in human-centric applications.
The first component revolves around tasks such as user authentication, activity recognition, pose estimation, affective computing, and health analytics, which often rely on modeling data with specific spatiotemporal properties, for instance, human activity images/videos, audio signals,
sensor-based time-series (e.g., PPG, ECG, EEG, IMU, clinical/medical data), and more. In recent years, learning effective representations for computer vision and natural language has revolutionized the effectiveness of solutions in these domains. Nonetheless, other data modalities, especially human-centric ones, have been largely under-served in terms of research and development. For these under-served domains, the general attitude has been to take advances from the ‘vision’ or ‘NLP’ communities and adapt them where possible. We argue, however, that a more original and stand-alone perspective on human-centric data can be highly beneficial and can lead to new and exciting advancements in the area. While the first component of this workshop mostly covers interpretation of people by AI, the second key component of the workshop is centered around interpretation of AI by people. This means aiding humans to investigate AI systems to facilitate responsible development, prioritizing concepts such as explainability, fairness, robustness, and security. We argue that identifying potential failure points and devising actionable directions for improvement is imperative for responsible AI and can benefit from translating model complexities into a language that humans can interpret and act on. Hence, this workshop also aims to cover recent advances in the area of responsible AI in human-centric applications.
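Returning to the first component, the sketch below shows what a sensor-first representation learner might look like: a small 1D-convolutional encoder that maps raw multi-channel sensor windows to fixed-size embeddings. The architecture, channel counts, and window size are illustrative assumptions, not a method from the workshop.

```python
# A minimal sketch of an encoder tailored to sensor-based time series
# (e.g., IMU or ECG windows) rather than images or text. All sizes are
# illustrative assumptions.
import torch
import torch.nn as nn

class SensorEncoder(nn.Module):
    """Maps a (batch, channels, time) window to a fixed-size embedding."""
    def __init__(self, in_channels=6, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over the time axis
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, x):
        h = self.net(x).squeeze(-1)    # (batch, 64)
        return self.proj(h)            # (batch, embed_dim)

# Toy usage: a batch of 2-second, 50 Hz, 6-axis IMU windows.
windows = torch.randn(8, 6, 100)
embeddings = SensorEncoder()(windows)
print(embeddings.shape)  # torch.Size([8, 64])
```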
In the R2HCAI workshop, we aim to bring together researchers broadly interested in Representation Learning for Responsible Human-Centric AI to discuss recent and novel findings in the intersection of these communities.
Reinforcement Learning Ready for Production (W27)
The 1st Reinforcement Learning Ready for Production workshop, held at AAAI 2023, focuses on understanding reinforcement learning trends and algorithmic developments that bridge the gap between theoretical reinforcement learning and production environments.
Scientific Document Understanding (W28)
Scientific documents such as research papers, patents, books, or technical reports are some of the most valuable resources of human knowledge. At the AAAI-23 Workshop on Scientific Document Understanding (SDU@AAAI-23), we aim to gather insights into the recent advances and remaining challenges in scientific document understanding. Researchers from related fields are invited to submit papers on the recent advances, resources, tools, and upcoming challenges for SDU.
Systems Neuroscience Approach to General Intelligence (W29)
AI technology and neuroscience have progressed to the point where it is again prudent to look to the brain as a model for AI. By examining current artificial neural networks, theoretical computer science, and systems neuroscience, this workshop will uncover gaps in our knowledge of the brain and of models of intelligence.
Bernard Baars modeled the brain's cognitive processes as a Global Workspace. This idea was elaborated in network neuroscience as the Global Neuronal Workspace, and in theoretical computer science as the Conscious Turing Machine (CTM) [1]. The CTM is a substrate-independent model of consciousness. AI researchers have proposed variations and extensions of the Global Workspace, connecting the CTM to Transformers [2] and using workspaces to communicate among specialist modules [3].
Meanwhile, neuroscience has identified large-scale brain circuits that bear a striking resemblance to patterns found in contemporary AI architectures such as Transformers. This workshop aims to map the Global Workspace and the CTM onto AI systems, using the brain's architecture as a guide. We hypothesize that this approach can achieve general intelligence and that high-resolution recordings from the brain can be used to validate its models.
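As a minimal sketch of the shared-workspace idea in the spirit of [3], the code below lets specialist modules compete to write a small set of workspace slots via attention, after which the workspace contents are broadcast back to every module; the dimensions, slot count, and single write/read step are illustrative assumptions.

```python
# A minimal sketch of a Global-Workspace-style bottleneck: specialist
# modules write to a few shared slots, which are then broadcast back.
# Sizes and the single attention round are illustrative assumptions.
import torch
import torch.nn as nn

class GlobalWorkspace(nn.Module):
    def __init__(self, dim=64, n_slots=4, n_heads=4):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(n_slots, dim))  # shared memory
        self.write = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.read = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, module_states):
        # module_states: (batch, n_modules, dim), one vector per specialist.
        batch = module_states.size(0)
        slots = self.slots.unsqueeze(0).expand(batch, -1, -1)
        # Write: workspace slots attend over the competing specialists.
        slots, _ = self.write(slots, module_states, module_states)
        # Broadcast: each module reads the updated workspace contents.
        updated, _ = self.read(module_states, slots, slots)
        return updated

states = torch.randn(2, 8, 64)          # 8 specialist modules
print(GlobalWorkspace()(states).shape)  # torch.Size([2, 8, 64])
```

The small number of slots acts as the communication bottleneck that the Global Workspace theory posits: only information that wins the write-attention competition is broadcast globally.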
The goal of this workshop is to bring together a multidisciplinary group comprising AI researchers, systems neuroscientists, algorithmic information theorists, and physicists to identify gaps in this larger agenda and to take stock of what is known, and what is still needed, to build thinking machines.
References:
[1] https://doi.org/10.1073/pnas.2115934119
[2] https://researcher.draco.res.ibm.com/researcher/view_group.php?id=11044
[3] https://arxiv.org/abs/2103.01197
Uncertainty Reasoning and Quantification in Decision Making (UDM’23) (W30)
Deep neural networks (DNNs) have received tremendous attention and achieved great success in applications such as image and video analysis, natural language processing, recommendation systems, and drug discovery. However, inherent uncertainties, arising from different root causes, remain serious hurdles for DNNs seeking robust and trustworthy solutions to real-world problems. Failing to account for such uncertainties can lead to unnecessary risk: an autonomous car may misclassify a person on the road, and a deep learning-based medical assistant may misdiagnose cancer as a benign tumor. Uncertainty has therefore attracted growing attention from academia and industry, especially in decision-making problems such as autonomous driving and diagnosis systems. The resulting wave of research at the intersection of uncertainty reasoning and quantification in data mining and machine learning has also influenced other fields of science, including computer vision, natural language processing, reinforcement learning, and social science.
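As one concrete example, the sketch below estimates predictive uncertainty with Monte Carlo dropout, a common baseline for uncertainty quantification in DNNs; the toy model, dropout rate, and sample count are illustrative assumptions rather than a method endorsed by the workshop.

```python
# A minimal sketch of Monte Carlo dropout for uncertainty estimation.
# The toy model and sample count are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 3),
)

def mc_dropout_predict(model, x, n_samples=50):
    """Keep dropout active at test time and average stochastic predictions."""
    model.train()  # leaves dropout enabled during inference
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean = probs.mean(dim=0)   # predictive distribution
    std = probs.std(dim=0)     # per-class spread as an uncertainty signal
    return mean, std

x = torch.randn(4, 16)
mean, std = mc_dropout_predict(model, x)
print(mean.argmax(dim=-1), std.max(dim=-1).values)
```

In a decision-making pipeline, inputs whose predictions show high spread across samples can be flagged for abstention or human review, which is precisely the kind of safeguard motivating the self-driving and medical examples above.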
User-Centric Artificial Intelligence for Assistance in At-Home Tasks (W31)
Recent advancements in AI and ML have enabled these technologies to enhance and improve our daily lives; however, existing solutions are often based on simplified formulations and abstracted datasets, which makes them difficult to apply in complex and personalized household domains. Furthermore, any household solution requires expertise not only in algorithmic AI but also in interaction design, socio-technical issues, and the problem space itself. Because the solutions touch on so many different fields, the relevant research community is spread across different conferences. The workshop is designed to bring together AI experts who, while coming from different subfields, share the vision of using AI technologies to solve user problems at home. Participants will have the opportunity to share their experience and progress in using AI technologies to assist and empower users at home, as well as to learn from and engage with our expert speakers and panelists. More information and submission details can be found on our website: https://ai4athome.github.io/
When Machine Learning Meets Dynamical Systems: Theory and Applications (W32)
The recent wave of using machine learning to analyze and manipulate real-world systems has inspired many research topics at the interface of machine learning and dynamical systems. However, real-world applications are diverse and complex, with vulnerabilities such as simulation divergence or violations of prior knowledge. As ML-based dynamical models are deployed in real-world systems, a series of challenges arises, including scalability, stability, and trustworthiness.
Through this workshop, we aim to provide an informal and cutting-edge platform for research and discussion on the co-development of machine learning models and dynamical systems. We welcome contributions on ML-based applications and theory for dynamical systems, as well as solutions to ML problems from a dynamical-systems perspective; the sketch below illustrates one of the challenges at stake.
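As a minimal illustration of the simulation-divergence issue mentioned above, the sketch below rolls out a slightly misidentified linear dynamics model and measures its drift from the true system; the toy system and the size of the identification error are illustrative assumptions.

```python
# A minimal sketch of rollout instability: a learned one-step dynamics
# model applied recursively can drift from the true system, even when
# the one-step error is small. The toy system is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

# True dynamics: a stable 2-D linear system x_{t+1} = A x_t.
A = np.array([[0.9, 0.2], [-0.2, 0.9]])

# "Learned" model: the true dynamics plus a small identification error.
A_hat = A + 0.02 * rng.standard_normal((2, 2))

x_true = x_model = np.array([1.0, 0.0])
for t in range(50):
    x_true = A @ x_true
    x_model = A_hat @ x_model   # recursive rollout compounds model error

print(np.linalg.norm(x_true - x_model))  # drift after 50 steps
```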
Author Bios
Goonmeet Bajaj is affiliated with Ohio State University.
Zitao Liu is from the Guangdong Institute of Smart Education, Jinan University, China.
Deepak Maurya is affiliated with Purdue University.
Nitendra Rajput is SVP at AI Garage, Mastercard.
Balaraman Ravindran is affiliated with Indian Institute of Technology, Madras.
Maneet Singh is Director at AI Garage, Mastercard.
Biplav Srivastava is a Professor in the AI Institute at the University of South Carolina, where he works on goal-oriented human-machine collaboration via natural interfaces using domain and user models, learning, and planning.
Shuohang Wang is a senior researcher on the Knowledge and Language Team at Microsoft Cognitive Services Research.
Wenhao Yu is a PhD student in the Department of Computer Science and Engineering at the University of Notre Dame.