The online interactive magazine of the Association for the Advancement of Artificial Intelligence

Vol. 42 No. 4: Winter 2021 | Published: 2022-01-12

 

Article Previews

Will AI Write Scientific Papers in the Future?

Yolanda Gil

In this presidential address, I would like to start with a personal reflection on the field and then share with you the research directions I am pursuing and my excitement about the future of AI. In my own research to advance AI while advancing scientific discoveries, one question that I have been pondering for some years now is whether AI will write scientific papers in the future. I want to reflect on this question and look back at the many accomplishments in our field that can make us very hopeful that the answer will be yes, and that it may happen sooner than we might expect.

PDF

Large Scale Multilingual Sticker Recommendation In Messaging Apps

Abhishek Laddha, Mohamed Hanoosh, Debdoot Mukherjee, Parth Patwa, Ankur Narang

Stickers are popularly used while messaging to visually express nuanced thoughts. We describe a real-time sticker recommendation (SR) system. We decompose SR into two steps: predict the message that is likely to be sent, and substitute that message with an appropriate sticker. To address the challenges caused by the transliteration of messages from users' native languages into the Roman script, we learn message embeddings by employing a character-level CNN in an unsupervised manner. We use them to cluster semantically similar messages. Next, we predict the message cluster instead of the message. Except for validation, our system does not require human-labeled data, leading to a fully automatic tuning pipeline. We propose a hybrid message prediction model that can easily run on low-end phones. We discuss the message-cluster-to-sticker mapping, addressing the multilingual needs of our users, and automated tuning of the system, and we also propose a novel application of a community detection algorithm. As of November 2020, our system contains 100k+ stickers, has been deployed for 15+ months, and is being used by millions of users.
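
As a rough sketch of the two-step pipeline the abstract describes, the following Python snippet embeds messages with a small character-level CNN and clusters the embeddings with k-means, then looks up stickers by cluster. The encoder architecture, dimensions, and cluster-to-sticker mapping are invented for illustration (the encoder is untrained here, shown only for data flow), and should not be read as the authors' implementation.

```python
# Hypothetical sketch: character-level message embeddings -> message clusters
# -> sticker lookup. All names and parameters are illustrative assumptions.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class CharCNNEncoder(nn.Module):
    """Maps a sequence of character ids to a fixed-size message embedding."""
    def __init__(self, vocab_size=128, char_dim=32, emb_dim=64, kernel=3):
        super().__init__()
        self.chars = nn.Embedding(vocab_size, char_dim)
        self.conv = nn.Conv1d(char_dim, emb_dim, kernel_size=kernel, padding=1)

    def forward(self, char_ids):                  # (batch, seq_len)
        x = self.chars(char_ids).transpose(1, 2)  # (batch, char_dim, seq_len)
        x = torch.relu(self.conv(x))              # (batch, emb_dim, seq_len)
        return x.max(dim=2).values                # max-pool over characters

encoder = CharCNNEncoder()  # untrained; real training would be unsupervised

# Toy transliterated messages; note the spelling variants of the same intent.
messages = ["good morning", "gud mrng", "ok bye", "okk byee"]
ids = [torch.tensor([min(ord(c), 127) for c in m]) for m in messages]
batch = nn.utils.rnn.pad_sequence(ids, batch_first=True)

with torch.no_grad():
    embeddings = encoder(batch).numpy()

# Cluster semantically similar messages; at runtime the system predicts a
# cluster id for the typed text and recommends that cluster's stickers.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(embeddings)
STICKERS_BY_CLUSTER = {0: ["sticker_sun.webp"], 1: ["sticker_wave.webp"]}
for msg, c in zip(messages, clusters):
    print(msg, "->", STICKERS_BY_CLUSTER[int(c)])
```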

PDF

On the Care and Feeding of Virtual Assistants: Automating Conversation Review with AI

Ian Beaver, Abdullah Mueen

With the rise of intelligent virtual assistants (IVAs), there is a necessary rise in human effort to identify conversations containing misunderstood user inputs. These conversations uncover errors in natural language understanding and help prioritize improvements to the IVA. As human analysis is time-consuming and expensive, prioritizing the conversations where misunderstanding has likely occurred reduces costs and speeds IVA improvement. In addition, fewer conversations reviewed by humans mean less user data is exposed, increasing privacy. We describe Trace AI, a scalable system for automated conversation review based on the detection of conversational features that can identify potential miscommunications. Trace AI provides IVA designers with suggested actions to correct understanding errors, prioritizes areas of language-model repair, and can automate the review of conversations. We discuss the system design and report its performance at identifying errors in IVA understanding compared with that of human reviewers. Trace AI has been commercially deployed for over four years and is responsible for significant savings in human annotation costs, as well as accelerating the refinement cycle of deployed enterprise IVAs.
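
As a hypothetical illustration of feature-based conversation triage in this spirit, the sketch below scores conversations by simple signals of likely misunderstanding (explicit fallback responses, near-duplicate user rephrasings) and ranks them for human review. The features, weights, and data are invented for illustration and are not Trace AI's actual method.

```python
# Invented example of prioritizing conversations for human review by
# misunderstanding signals; not the Trace AI feature set.
from dataclasses import dataclass, field

@dataclass
class Conversation:
    conv_id: str
    turns: list = field(default_factory=list)  # (speaker, text) pairs

FALLBACKS = {"sorry, i didn't understand", "can you rephrase that?"}

def misunderstanding_score(conv: Conversation) -> float:
    """Higher scores mean the conversation more likely contains errors."""
    score = 0.0
    user_turns = [t for s, t in conv.turns if s == "user"]
    iva_turns = [t.lower() for s, t in conv.turns if s == "iva"]
    score += 2.0 * sum(t in FALLBACKS for t in iva_turns)  # explicit fallbacks
    # Consecutive near-duplicate user turns suggest rephrasing after an error.
    for a, b in zip(user_turns, user_turns[1:]):
        shared = set(a.lower().split()) & set(b.lower().split())
        if len(shared) >= max(1, min(len(a.split()), len(b.split())) // 2):
            score += 1.0
    return score

convs = [
    Conversation("c1", [("user", "track my order"),
                        ("iva", "Sorry, I didn't understand"),
                        ("user", "track order 123")]),
    Conversation("c2", [("user", "store hours"),
                        ("iva", "We are open 9-5.")]),
]
for c in sorted(convs, key=misunderstanding_score, reverse=True):
    print(c.conv_id, misunderstanding_score(c))
```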

PDF

Feedback-Based Self-Learning in Large-Scale Conversational AI Agents

Pragaash Ponnusamy, Alireza Roshan Ghias, Yi Yi, Benjamin Yao, Chenlei Guo, Ruhi Sarikaya

Today, most large-scale conversational AI agents, such as Alexa, Siri, or Google Assistant, are built using manually annotated data to train the different components of the system, including automatic speech recognition (ASR), natural language understanding (NLU), and entity resolution (ER). Typically, the accuracy of the machine learning models in these components is improved by manually transcribing and annotating data. As the scope of these systems increases to cover more scenarios and domains, manual annotation to improve the accuracy of these components becomes prohibitively costly and time-consuming. In this paper, we propose a system that leverages customer/system interaction feedback signals to automate learning without any manual annotation. Users of these systems tend to modify a previous query in hopes of fixing an error in the previous turn to get the right results. These reformulations are often preceded by defective experiences caused by errors in ASR, NLU, ER, or the application. In some cases, users may not properly formulate their requests (e.g., providing a partial title of a song), but looking across a wider pool of users and sessions reveals the underlying recurrent patterns. Our proposed self-learning system automatically detects the errors, generates reformulations, and deploys fixes to the runtime system to correct different types of errors occurring in different components of the system. In particular, we propose leveraging an absorbing Markov chain model as a collaborative filtering mechanism in a novel attempt to mine these patterns, coupling it with a guardrail rewrite-selection mechanism that reactively evaluates these fixes using feedback friction data. We show that our approach is highly scalable and able to learn reformulations that reduce Alexa-user errors by pooling anonymized data across millions of customers. The proposed self-learning system achieves a win-loss ratio of 11.8 and effectively reduces the defect rate by more than 30 percent on utterance-level reformulations in our production A/B tests. To the best of our knowledge, this is the first self-learning large-scale conversational AI system in production.
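
The absorbing Markov chain idea can be illustrated with the textbook absorption computation: treat defective utterances as transient states and successful final utterances as absorbing states, then pick the rewrite with the highest absorption probability. The sketch below uses invented session counts and utterances, and omits the production system's guardrail rewrite-selection mechanism entirely.

```python
# Toy illustration of mining reformulations with an absorbing Markov chain.
# Session data and state names are invented; not the production model.
import numpy as np

# Transient states: defective utterances users tend to rephrase.
transient = ["play maj and dragons", "play imagine dragon"]
# Absorbing states: utterances after which sessions end successfully.
absorbing = ["play imagine dragons"]

# Transition counts estimated from (anonymized) session logs: rows are the
# current utterance, columns the next utterance in the session.
counts = np.array([
    #  t0   t1   a0
    [  2.,  5.,  3.],   # from "play maj and dragons"
    [  0.,  1.,  9.],   # from "play imagine dragon"
])
P = counts / counts.sum(axis=1, keepdims=True)  # row-normalize to probabilities

Q = P[:, :len(transient)]   # transient -> transient block
R = P[:, len(transient):]   # transient -> absorbing block

# Fundamental matrix N = (I - Q)^{-1}; B[i, j] is the probability that a
# session starting at transient state i is eventually absorbed at state j.
N = np.linalg.inv(np.eye(len(transient)) - Q)
B = N @ R

for i, utt in enumerate(transient):
    j = int(np.argmax(B[i]))
    print(f"rewrite {utt!r} -> {absorbing[j]!r} (absorption prob {B[i, j]:.2f})")
```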

PDF

Do We Need a Hippocratic Oath for Artificial Intelligence Scientists?

Nikolaos M. Siafakas

Artificial intelligence (AI) has been beneficial for humanity, improving many human activities. However, there are now significant dangers that may increase when AI reaches a human level of intelligence or superintelligence. It is paramount to ensure that AI is designed in a manner that is robustly beneficial for humans. The ethics and personal responsibilities of AI scientists could play an important role in continuing the constructive use of AI in the future. Lessons can be learnt from the long and successful history of medical ethics. Therefore, a Hippocratic Oath for AI scientists may increase awareness of the potentially lethal threats of AI, enhance efforts to develop safe and beneficial AI, prevent corrupt practices and manipulations, and invigorate ethical codes. The Hippocratic Oath in medicine, using simple universal principles, is a basis of human ethics; in an analogous way, the proposed oath for AI scientists could enhance morality beyond biological consciousness and spread ethics across the universe.

PDF

Avoiding Negative Side Effects Due to Incomplete Knowledge of AI Systems

Sandhya Saisubramanian, Shlomo Zilberstein, Ece Kamar

Autonomous agents acting in the real world often operate based on models that ignore certain aspects of the environment. The incompleteness of any given model, whether handcrafted or machine acquired, is inevitable due to the practical limitations of any modeling technique for complex real-world settings. Due to the limited fidelity of its model, an agent's actions may have unexpected, undesirable consequences during execution. Learning to recognize and avoid such negative side effects (NSEs) of an agent's actions is critical to improving the safety and reliability of autonomous systems. Mitigating NSEs is an emerging research topic that is attracting increased attention due to the rapid growth in the deployment of AI systems and their broad societal impacts. This article provides a comprehensive overview of different forms of NSEs and the recent research efforts to address them. We identify key characteristics of NSEs, highlight the challenges in avoiding NSEs, and discuss recently developed approaches, contrasting their benefits and limitations. The article concludes with a discussion of open questions and suggestions for future research directions.

PDF

Agents of Exploration and Discovery

Pat Langley

Autonomous agents have many applications in familiar situations, but they also have great potential to help us understand novel settings. In this paper, I propose a new challenge for the AI research community: developing embodied systems that not only explore new environments but also characterize them in scientific terms. Illustrative examples include autonomous rovers on planetary surfaces and unmanned vehicles on undersea missions. I review two relevant paradigms: robotic agents that explore unknown areas and computational systems that discover scientific models. In each case, I specify the problem, identify component functions, describe current abilities, and note remaining limitations. Finally, I discuss obstacles that the community must overcome before it can develop integrated agents of exploration and discovery.

PDF

Looking Back, Looking Ahead: Symbolic versus Connectionist AI

Ashok K. Goel

The ongoing debate between symbolic and connectionist AI attends to some of the most fundamental issues in the field. In this column, I briefly review how this discussion has unfolded. I also point out that there is a lot more to intelligence than the symbolic and connectionist views of AI.

PDF