The online interactive magazine of the Association for the Advancement of Artificial Intelligence

Latest from AI Magazine

Winter 2021: Innovative Applications of AI

Vol 42, No 4: Winter 2021 | Published: 2022-01-12


Article Previews

Will AI Write Scientific Papers in the Future?

Yolanda Gil

In this presidential address, I would like to start with a personal reflection on the field and then share with you the research directions I am pursuing and my excitement about the future of AI. In my personal research to advance AI while advancing scientific discoveries, one question that I have been pondering for some years now is whether AI will write scientific papers in the future. I want to reflect on this question, and look back at the many accomplishments in our field that can make us very hopeful that the answer will be yes, and that it may happen sooner than we might expect.


Large Scale Multilingual Sticker Recommendation In Messaging Apps

Abhishek Laddha, Mohamed Hanoosh, Debdoot Mukherjee, Parth Patwa, Ankur Narang

Stickers are popularly used while messaging to visually express nuanced thoughts. We describe a real-time sticker recommendation (SR) system. We decompose SR into two steps: predict the message that is likely to be sent, and substitute that message with an appropriate sticker. To address the challenges caused by transliteration of messages from users’ native languages into the Roman script, we learn message embeddings by employing a character-level CNN in an unsupervised manner. We use them to cluster semantically similar messages. Next, we predict the message cluster instead of the message. Except for validation, our system does not require human-labeled data, leading to a fully automatic tuning pipeline. We propose a hybrid message prediction model, which can easily run on low-end phones. We discuss message-cluster-to-sticker mapping, addressing the multilingual needs of our users, and automated tuning of the system, and also propose a novel application of a community detection algorithm. As of November 2020, our system contains 100k+ stickers, has been deployed for 15+ months, and is being used by millions of users.
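The two-step decomposition the abstract describes can be sketched in a few lines. This is an illustrative toy only: the paper learns message embeddings with a character-level CNN, which we replace here with a trivial character-bigram bag, and the cluster centroids and sticker names are invented.

```python
# Toy sketch of the two-step sticker recommendation pipeline:
# (1) map an incoming message to a cluster of semantically similar
# messages, (2) map that cluster to a sticker.
from collections import Counter
import math

def embed(message: str) -> Counter:
    """Character-bigram bag of features (stand-in for the char-CNN embedding)."""
    m = message.lower()
    return Counter(m[i:i + 2] for i in range(len(m) - 1))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical cluster centroids (one representative message each)
# and a cluster -> sticker mapping, both invented for illustration.
CLUSTERS = {"greeting": embed("good morning"), "laugh": embed("hahaha lol")}
STICKERS = {"greeting": "sticker_sun.png", "laugh": "sticker_rofl.png"}

def recommend(message: str) -> str:
    """Step 1: pick the closest message cluster; step 2: return its sticker."""
    cluster = max(CLUSTERS, key=lambda c: cosine(embed(message), CLUSTERS[c]))
    return STICKERS[cluster]
```

Note how a misspelled or transliterated variant ("gud morning") still lands in the right cluster because the match is on character features rather than exact words, which is the motivation the abstract gives for character-level embeddings.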


On the Care and Feeding of Virtual Assistants: Automating Conversation Review with AI

Ian Beaver, Abdullah Mueen

With the rise of intelligent virtual assistants (IVAs), there is a necessary rise in human effort to identify conversations containing misunderstood user inputs. These conversations uncover errors in natural language understanding and help prioritize improvements to the IVA. As human analysis is time consuming and expensive, prioritizing the conversations where misunderstanding has likely occurred reduces costs and speeds IVA improvement. In addition, fewer conversations reviewed by humans means less user data is exposed, increasing privacy. We describe Trace AI, a scalable system for automated conversation review based on the detection of conversational features that can identify potential miscommunications. Trace AI provides IVA designers with suggested actions to correct understanding errors, prioritizes areas of language model repair, and can automate the review of conversations. We discuss the system design and report its performance at identifying errors in IVA understanding compared to that of human reviewers. Trace AI has been commercially deployed for over 4 years and is responsible for significant savings in human annotation costs as well as accelerating the refinement cycle of deployed enterprise IVAs.
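The idea of scoring conversations by miscommunication features so that humans review the riskiest first can be illustrated with a minimal sketch. The feature phrases and weights below are invented for illustration; the deployed system's actual features are not described at this level in the abstract.

```python
# Toy sketch of feature-based conversation review: score each
# conversation by simple miscommunication signals, then order the
# queue so human reviewers see the riskiest conversations first.

# Hypothetical signal phrases and weights (invented for illustration).
SIGNALS = {
    "that's not what i meant": 3.0,
    "talk to a human": 3.0,
    "i already said": 2.0,
    "no,": 1.0,
}

def miscommunication_score(user_turns):
    """Sum signal weights over all user turns in one conversation."""
    score = 0.0
    for turn in user_turns:
        t = turn.lower()
        score += sum(w for phrase, w in SIGNALS.items() if phrase in t)
    return score

def prioritize(conversations):
    """Order conversations for human review, riskiest first."""
    return sorted(conversations, key=miscommunication_score, reverse=True)
```

A conversation that trips no signals scores zero and sinks to the bottom of the review queue, which is how such a system reduces both annotation cost and the volume of user data exposed to reviewers.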


Feedback-Based Self-Learning in Large-Scale Conversational AI Agents

Pragaash Ponnusamy, Alireza Roshan Ghias, Yi Yi, Benjamin Yao, Chenlei Guo, Ruhi Sarikaya

Today, most large-scale conversational AI agents, such as Alexa, Siri, or Google Assistant, are built using manually annotated data to train the different components of the system, including automatic speech recognition (ASR), natural language understanding (NLU), and entity resolution (ER). Typically, the accuracy of the machine learning models in these components is improved by manually transcribing and annotating data. As the scope of these systems increases to cover more scenarios and domains, manual annotation to improve the accuracy of these components becomes prohibitively costly and time consuming. In this paper, we propose a system that leverages customer/system interaction feedback signals to automate learning without any manual annotation. Users of these systems tend to modify a previous query in hopes of fixing an error in the previous turn to get the right results. These reformulations are often preceded by defective experiences caused by errors in ASR, NLU, ER, or the application. In some cases, users may not properly formulate their requests (e.g., providing a partial title of a song), but gleaning across a wider pool of users and sessions reveals the underlying recurrent patterns. Our proposed self-learning system automatically detects the errors, generates reformulations, and deploys fixes to the runtime system to correct different types of errors occurring in different components of the system. In particular, we propose leveraging an absorbing Markov chain model as a collaborative filtering mechanism in a novel attempt to mine these patterns, coupling it with a guardrail rewrite-selection mechanism that reactively evaluates these fixes using feedback friction data. We show that our approach is highly scalable and able to learn reformulations that reduce Alexa-user errors by pooling anonymized data across millions of customers. The proposed self-learning system achieves a win-loss ratio of 11.8 and reduces the defect rate by more than 30 percent on utterance-level reformulations in our production A/B tests. To the best of our knowledge, this is the first self-learning large-scale conversational AI system in production.
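The absorbing Markov chain idea can be made concrete with a minimal sketch: user utterances are transient states, successful final interpretations are absorbing states, and a defective utterance is rewritten to the absorbing state it most probably reaches. The utterances and transition counts below are invented, and the guardrail rewrite-selection step is omitted.

```python
# Toy sketch of mining rewrites with an absorbing Markov chain.
# transitions[s] maps a state to {next_state: count}, with counts pooled
# across user sessions; a state with no outgoing transitions is absorbing
# (the session ended there successfully). Data is invented.
transitions = {
    "play imagine dragon": {"play imagine dragons": 8,
                            "play imagine dragon": 2},   # self-loop: retries
    "play imagine dragons": {},  # absorbing: successful final form
}

def absorption_probs(transitions, n_iter=50):
    """Iterate B <- R + Q.B (in dict form) to get absorption probabilities:
    B[s][a] = probability that transient state s is eventually absorbed at a."""
    absorbing = {s for s, nxt in transitions.items() if not nxt}
    B = {s: {} for s in transitions if s not in absorbing}
    for _ in range(n_iter):
        for s in B:
            total = sum(transitions[s].values())
            row = {}
            for t, c in transitions[s].items():
                p = c / total
                if t in absorbing:
                    row[t] = row.get(t, 0.0) + p       # direct absorption (R)
                else:
                    for a, q in B[t].items():           # via transient t (Q.B)
                        row[a] = row.get(a, 0.0) + p * q
            B[s] = row
    return B

def best_rewrite(utterance):
    """Rewrite an utterance to its most probable absorbing (successful) form."""
    probs = absorption_probs(transitions).get(utterance, {})
    return max(probs, key=probs.get) if probs else utterance
```

Pooling counts across millions of sessions is what makes this a collaborative filtering mechanism: no single user need fix the defect, but the aggregate chain concentrates probability on the successful reformulation.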


Do We Need a Hippocratic Oath for Artificial Intelligence Scientists?

Nikolaos M. Siafakas

Artificial intelligence (AI) has been beneficial for humanity, improving many human activities. However, there are now significant dangers that may increase when AI reaches a human level of intelligence or superintelligence. It is paramount to focus on ensuring that AI is designed in a manner that is robustly beneficial for humans. The ethics and personal responsibilities of AI scientists could play an important role in continuing the constructive use of AI in the future. Lessons can be learnt from the long and successful history of medical ethics. Therefore, a Hippocratic Oath for AI scientists may increase awareness of the potential lethal threats of AI, enhance efforts to develop safe and beneficial AI, prevent corrupt practices and manipulations, and invigorate ethical codes. The Hippocratic Oath in medicine, using simple universal principles, is a basis of human ethics; in an analogous way, the proposed oath for AI scientists could enhance morality beyond biological consciousness and spread ethics across the universe.


Avoiding Negative Side Effects Due to Incomplete Knowledge of AI Systems

Sandhya Saisubramanian, Shlomo Zilberstein, Ece Kamar

Autonomous agents acting in the real-world often operate based on models that ignore certain aspects of the environment. The incompleteness of any given model – handcrafted or machine acquired – is inevitable due to practical limitations of any modeling technique for complex real-world settings. Due to the limited fidelity of its model, an agent’s actions may have unexpected, undesirable consequences during execution. Learning to recognize and avoid such negative side effects (NSEs) of an agent’s actions is critical to improve the safety and reliability of autonomous systems. Mitigating NSEs is an emerging research topic that is attracting increased attention due to the rapid growth in the deployment of AI systems and their broad societal impacts. This article provides a comprehensive overview of different forms of NSEs and the recent research efforts to address them. We identify key characteristics of NSEs, highlight the challenges in avoiding NSEs, and discuss recently developed approaches, contrasting their benefits and limitations. The article concludes with a discussion of open questions and suggestions for future research directions.


Agents of Exploration and Discovery

Pat Langley

Autonomous agents have many applications in familiar situations, but they also have great potential to help us understand novel settings. In this paper, I propose a new challenge for the AI research community: developing embodied systems that not only explore new environments but also characterize them in scientific terms. Illustrative examples include autonomous rovers on planetary surfaces and unmanned vehicles on undersea missions. I review two relevant paradigms: robotic agents that explore unknown areas and computational systems that discover scientific models. In each case, I specify the problem, identify component functions, describe current abilities, and note remaining limitations. Finally, I discuss obstacles that the community must overcome before it can develop integrated agents of exploration and discovery.


Looking Back, Looking Ahead: Symbolic versus Connectionist AI

Ashok K. Goel

The ongoing debate between symbolic and connectionist AI attends to some of the most fundamental issues in the field. In this column, I briefly review the evolution of the unfolding discussion. I also point out that there is a lot more to intelligence than the symbolic and connectionist views of AI.


    Recent Posts

    The Role of Open-Source Software in Artificial Intelligence

    By Jim Spohrer

    With this publication, we launch a new column for AI Magazine on the role of open-source software in artificial intelligence. As the column editor, I would like to extend my welcome and invite AI Magazine readers to send short articles for future columns, which may appear in the traditional print version of AI Magazine, or on the AI Magazine interactive site currently under development. This introductory column serves to highlight my interests in open-source software and to propose a few topics for future columns.

    The Case Against Registered Reports

    By Odd Erik Gundersen, Norwegian University of Science and Technology

    Registered reports have been proposed as a way to move from eye-catching and surprising results and toward methodologically sound practices and interesting research questions. However, none of the top-twenty artificial intelligence journals support registered reports, and no traces of registered reports can be found in the field of artificial intelligence. Is this because they do not provide value for the type of research that is conducted in the field of artificial intelligence?

    Betting on Bets

    Chris Welty, Google Research, USA
    Praveen Paritosh, Google Research
    Kurt Bollacker, LongNow Foundation

    The AI bookies have spent a lot of time and energy collecting scientific bets from AI researchers since the birth of this column three years ago. While we have met with universal approval of the idea of scientific betting, we have likewise met with nearly universal silence in our acquisition of bets: we have collected only a very few in this column over the past two years. In our first column we published the “will voice interfaces become the standard” bet, as well as a set of 10 predictions from Eric Horvitz that we proposed as bets awaiting challengers. No challengers have emerged.

    Engagement During Pandemic Teaching

    By Michael Wollowski, Rose-Hulman Institute of Technology, USA

    In this panel, AI faculty with experience teaching online and blended classes were asked to share their experiences teaching online classes. The panel was composed of Ashok Goel, Georgia Institute of Technology; Ansaf Salleb-Aouissi, Columbia University; and Mehran Sahami, Stanford University. The panelists were asked to describe which tools and methods work well to help instructors engage and bond with students online. They were furthermore asked to share their insights into which components of a course can be done best online and which ones are best accomplished in person. The panel took place as part of the 2021 Symposium on Educational Advances of AI, which was co-located with AAAI-21. The panel was attended by about 55 people and included a vigorous Q/A portion.

    Remembering Jaime Carbonell

    By Yolanda Gil

    Joining the incoming PhD class at Carnegie Mellon in the late 1980s, I was lucky to have incredible opportunities for faculty advisors and mentors in AI. Jaime Carbonell was among the more junior faculty, continuing the research that he started in his PhD combining natural language, planning, and machine learning.


    AI experts answer your questions! See all of our AMAs. Your questions will be submitted to these guests and a video will be recorded with their answers and posted on the Interactive AI Magazine and in the weekly AI Alert.


    For our fourth AMA, we have Dr. David Leake, Professor of Computer Science in the Luddy School of Informatics, Computing and Engineering at Indiana University, where he served as Executive Associate Dean from 2012 to 2021. He received his PhD from Yale University in 1990. His research is in artificial intelligence and cognitive science, including contributions in case-based reasoning, explanation, intelligent information systems, intelligent user interfaces, and introspective learning. He has authored/edited over 200 publications with over 8,500 Google Scholar citations. He played a key role in developing the field of case-based reasoning and is a five-time winner of best paper awards at the International Conference on Case-Based Reasoning (ICCBR). He is Editor in Chief Emeritus of AI Magazine, the official magazine of the Association for the Advancement of Artificial Intelligence (AAAI), after 17 years as Editor in Chief. In 2014 he received the AAAI Distinguished Service Award. He is a Senior Member of AAAI.
