The online interactive magazine of the Association for the Advancement of Artificial Intelligence

Latest from AI Magazine

Winter 2021: Innovative Applications of AI

Vol. 42, No. 4: Winter 2021 | Published: 2022-01-12

 

Video Summaries

Article Previews

Will AI Write Scientific Papers in the Future?

Yolanda Gil

In this presidential address, I would like to start with a personal reflection on the field and then share with you the research directions I am pursuing and my excitement about the future of AI. In my personal research to advance AI while advancing scientific discoveries, one question that I have been pondering for some years now is whether AI will write scientific papers in the future. I want to reflect on this question and look back at the many accomplishments in our field that can make us very hopeful that the answer will be yes, and that it may happen sooner than we might expect.

PDF

Large Scale Multilingual Sticker Recommendation In Messaging Apps

Abhishek Laddha, Mohamed Hanoosh, Debdoot Mukherjee, Parth Patwa, Ankur Narang

Stickers are popularly used while messaging to visually express nuanced thoughts. We describe a real-time sticker recommendation (SR) system. We decompose SR into two steps: predict the message that is likely to be sent, and substitute that message with an appropriate sticker. To address the challenges caused by transliteration of messages from users’ native languages to the Roman script, we learn message embeddings by employing a character-level CNN in an unsupervised manner. We use them to cluster semantically similar messages. Next, we predict the message cluster instead of the message. Except for validation, our system does not require human-labeled data, leading to a fully automatic tuning pipeline. We propose a hybrid message prediction model, which can easily run on low-end phones. We discuss message-cluster-to-sticker mapping, addressing the multilingual needs of our users, and automated tuning of the system, and we also propose a novel application of a community detection algorithm. As of November 2020, our system contains 100k+ stickers, has been deployed for 15+ months, and is being used by millions of users.
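
To make the two-step pipeline concrete, here is a minimal, hypothetical Python sketch. The toy messages, the placeholder cluster-to-sticker map, and the character n-gram TF-IDF features (a rough stand-in for the paper's unsupervised character-level CNN embeddings) are illustrative assumptions, not the deployed implementation.

# Hypothetical sketch of the two-step sticker recommendation idea above.
# Character n-gram TF-IDF stands in for the paper's character-level CNN
# embeddings; the data and the cluster-to-sticker map are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Transliterated variants ("gud mrng") should land near their canonical forms.
messages = ["good morning", "gud mrng", "good night", "gd nite",
            "happy birthday", "hbd"]

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3))
X = vec.fit_transform(messages)

# Cluster semantically similar messages; the system predicts the cluster,
# not the exact message.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Placeholder mapping; in the real system each cluster is mapped to stickers offline.
cluster_to_stickers = {int(c): [f"sticker_for_cluster_{int(c)}"]
                       for c in set(kmeans.labels_)}

def recommend(draft):
    cluster = int(kmeans.predict(vec.transform([draft]))[0])
    return cluster_to_stickers[cluster]

print(recommend("gud morning"))  # stickers for the "good morning" cluster

Note that in production, per the abstract, the hybrid model predicts the cluster of the message likely to be sent next rather than re-embedding a finished draft as this toy does.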

PDF

On the Care and Feeding of Virtual Assistants: Automating Conversation Review with AI

Ian Beaver, Abdullah Mueen

With the rise of intelligent virtual assistants (IVAs), there is a necessary rise in human effort to identify conversations containing misunderstood user inputs. These conversations uncover errors in natural language understanding and help prioritize improvements to the IVA. As human analysis is time consuming and expensive, prioritizing the conversations where misunderstanding has likely occurred reduces costs and speeds IVA improvement. In addition, fewer conversations reviewed by humans means less user data is exposed, increasing privacy. We describe Trace AI, a scalable system for automated conversation review based on the detection of conversational features that can identify potential miscommunications. Trace AI provides IVA designers with suggested actions to correct understanding errors, prioritizes areas of language model repair, and can automate the review of conversations. We discuss the system design and report its performance at identifying errors in IVA understanding compared to that of human reviewers. Trace AI has been commercially deployed for over four years and is responsible for significant savings in human annotation costs, as well as accelerating the refinement cycle of deployed enterprise IVAs.
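
As a toy illustration of feature-based conversation triage in this spirit, the sketch below flags IVA fallback responses and repeated user inputs, then ranks conversations for human review; the feature set, weights, and data are hypothetical stand-ins, since the actual Trace AI features are not detailed here.

# Hypothetical feature-based triage; not the actual Trace AI feature set.
from dataclasses import dataclass

@dataclass
class Turn:
    user: str   # what the user said
    iva: str    # how the IVA responded

FALLBACKS = {"sorry, i didn't get that.", "can you rephrase that?"}  # toy signals

def misunderstanding_score(conv):
    # IVA explicitly failed to understand
    score = sum(1.0 for t in conv if t.iva.lower() in FALLBACKS)
    # user repeated the same input, a common sign of miscommunication
    score += sum(0.5 for a, b in zip(conv, conv[1:])
                 if a.user.lower() == b.user.lower())
    return score

conversations = {
    "conv_a": [Turn("track my order", "Your order ships Friday.")],
    "conv_b": [Turn("track my order", "Sorry, I didn't get that."),
               Turn("track my order", "Sorry, I didn't get that.")],
}

# Review likely misunderstandings first; low scorers may skip human review.
for cid, conv in sorted(conversations.items(),
                        key=lambda kv: misunderstanding_score(kv[1]),
                        reverse=True):
    print(cid, misunderstanding_score(conv))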

PDF

Feedback-Based Self-Learning in Large-Scale Conversational AI Agents

Pragaash Ponnusamy, Alireza Roshan Ghias, Yi Yi, Benjamin Yao, Chenlei Guo, Ruhi Sarikaya

Today, most large-scale conversational AI agents, such as Alexa, Siri, or Google Assistant, are built using manually annotated data to train the different components of the system, including automatic speech recognition (ASR), natural language understanding (NLU), and entity resolution (ER). Typically, the accuracy of the machine learning models in these components is improved by manually transcribing and annotating data. As the scope of these systems increases to cover more scenarios and domains, manual annotation to improve the accuracy of these components becomes prohibitively costly and time consuming. In this paper, we propose a system that leverages customer/system interaction feedback signals to automate learning without any manual annotation. Users of these systems tend to modify a previous query in hopes of fixing an error in the previous turn to get the right results. These reformulations are often preceded by defective experiences caused by errors in ASR, NLU, ER, or the application. In some cases, users may not properly formulate their requests (e.g., providing a partial title of a song), but aggregating across a wider pool of users and sessions reveals the underlying recurrent patterns. Our proposed self-learning system automatically detects the errors, generates reformulations, and deploys fixes to the runtime system to correct different types of errors occurring in different components of the system. In particular, we propose leveraging an absorbing Markov chain model as a collaborative filtering mechanism in a novel attempt to mine these patterns, coupling it with a guardrail rewrite-selection mechanism that reactively evaluates these fixes using feedback friction data. We show that our approach is highly scalable and able to learn reformulations that reduce Alexa-user errors by pooling anonymized data across millions of customers. The proposed self-learning system achieves a win-loss ratio of 11.8 and effectively reduces the defect rate by more than 30 percent on utterance-level reformulations in our production A/B tests. To the best of our knowledge, this is the first self-learning large-scale conversational AI system in production.
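
To sketch the absorbing-Markov-chain idea on toy data, the hypothetical example below treats the final utterance of each reformulation session as a successful, absorbing state, estimates a transition matrix from consecutive utterances, and reads candidate rewrites off the absorption probabilities B = (I - Q)^-1 R; the sessions and the success criterion are invented for illustration, and the guardrail rewrite-selection step is not shown.

# Hypothetical absorbing-Markov-chain sketch; toy sessions, not Alexa data.
import numpy as np

# Each session is a chain of reformulations; the last utterance is treated
# as successful (absorbing) for this toy example.
sessions = [
    ["play maj and dragons", "play imagine dragons"],
    ["play maj and dragons", "play major dragons", "play imagine dragons"],
    ["play maj and dragons", "play imagine dragons"],
]

states = sorted({u for s in sessions for u in s})
idx = {u: i for i, u in enumerate(states)}
n = len(states)

# Count transitions between consecutive utterances.
P = np.zeros((n, n))
for s in sessions:
    for a, b in zip(s, s[1:]):
        P[idx[a], idx[b]] += 1

# Utterances with no outgoing transitions are absorbing (self-loop).
absorbing = [i for i in range(n) if P[i].sum() == 0]
for i in absorbing:
    P[i, i] = 1.0
P /= P.sum(axis=1, keepdims=True)               # row-normalize to probabilities

transient = [i for i in range(n) if i not in absorbing]
Q = P[np.ix_(transient, transient)]             # transient -> transient
R = P[np.ix_(transient, absorbing)]             # transient -> absorbing
N = np.linalg.inv(np.eye(len(transient)) - Q)   # fundamental matrix
B = N @ R                                       # absorption probabilities

src = transient.index(idx["play maj and dragons"])
best = absorbing[int(np.argmax(B[src]))]
print("candidate rewrite:", states[best])       # -> play imagine dragons

Here B[i, j] is the probability that sessions starting from transient utterance i eventually end at successful utterance j; the highest-probability column gives a candidate rewrite, which the guardrail mechanism described in the abstract would then vet against live feedback before deployment.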

PDF

Do We Need a Hippocratic Oath for Artificial Intelligence Scientists?

Nikolaos M. Siafakas

Artificial intelligence (AI) has been beneficial for humanity, improving many human activities. However, there are now significant dangers that may increase when AI reaches a human level of intelligence or superintelligence. It is paramount to focus on ensuring that AI is designed in a manner that is robustly beneficial for humans. The ethics and personal responsibilities of AI scientists could play an important role in continuing the constructive use of AI in the future. Lessons can be learnt from the long and successful history of medical ethics. Therefore, a Hippocratic Oath for AI scientists may increase awareness of the potential lethal threats of AI, enhance efforts to develop safe and beneficial AI, prevent corrupt practices and manipulations, and invigorate ethical codes. The Hippocratic Oath in medicine, using simple universal principles, is a basis of human ethics, and in an analogous way, the proposed oath for AI scientists could enhance morality beyond biological consciousness and spread ethics across the universe.

PDF

Avoiding Negative Side Effects Due to Incomplete Knowledge of AI Systems

Sandhya Saisubramanian, Shlomo Zilberstein, Ece Kamar

Autonomous agents acting in the real world often operate based on models that ignore certain aspects of the environment. The incompleteness of any given model – handcrafted or machine-acquired – is inevitable due to practical limitations of any modeling technique for complex real-world settings. Due to the limited fidelity of its model, an agent’s actions may have unexpected, undesirable consequences during execution. Learning to recognize and avoid such negative side effects (NSEs) of an agent’s actions is critical to improving the safety and reliability of autonomous systems. Mitigating NSEs is an emerging research topic that is attracting increased attention due to the rapid growth in the deployment of AI systems and their broad societal impacts. This article provides a comprehensive overview of different forms of NSEs and the recent research efforts to address them. We identify key characteristics of NSEs, highlight the challenges in avoiding them, and discuss recently developed approaches, contrasting their benefits and limitations. The article concludes with a discussion of open questions and suggestions for future research directions.

PDF

Agents of Exploration and Discovery

Pat Langley

Autonomous agents have many applications in familiar situations, but they also have great potential to help us understand novel settings. In this paper, I propose a new challenge for the AI research community: developing embodied systems that not only explore new environments but also characterize them in scientific terms. Illustrative examples include autonomous rovers on planetary surfaces and unmanned vehicles on undersea missions. I review two relevant paradigms: robotic agents that explore unknown areas and computational systems that discover scientific models. In each case, I specify the problem, identify component functions, describe current abilities, and note remaining limitations. Finally, I discuss obstacles that the community must overcome before it can develop integrated agents of exploration and discovery.

PDF

Looking Back, Looking Ahead: Symbolic versus Connectionist AI

Ashok K. Goel

The ongoing debate between symbolic and connectionist AI attends to some of the most fundamental issues in the field. In this column, I briefly review the evolution of the unfolding discussion. I also point out that there is a lot more to intelligence than the symbolic and connectionist views of AI.

PDF

Recent Posts

Letter from the Editor

By Ashok Goel

We are delighted to bring you the brand-new Interactive AI Magazine, a digital and expanded version of AI Magazine.

Remembering Nils Nilsson

By Karen Myers

“Hello! This is Nils Nilsson calling!!” Or so the booming voice on the other end of the line claimed, as I skeptically held the phone in my hand on a Sunday afternoon mid-way through my final semester as an undergraduate in Toronto. I had been in my dorm room preparing for a mid-term in my Introduction to Artificial Intelligence course, in fact reading through the textbook which Nils himself had authored.

Meet AI: Vol. 1, Ed. 2

“Meet AI” is a comic that deals with questions like: Is the existential threat from AI real? Why is there a gap between media coverage of advancements and the ground reality of research? Why do models misbehave in the real world, and how do we go about solving this problem? How did we end up in an AI bubble, and why is there such a gold-rush mentality in ML scholarship right now?

Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity — An Interview with Squirrel AI’s Richard Tong

By Ashok Goel & Ida Camacho

The Association for the Advancement of Artificial Intelligence (AAAI) and Squirrel AI Learning announced the establishment of a new $1M annual award for societal benefits of AI. The award will be sponsored by Squirrel AI Learning as part of its mission to promote the use of artificial intelligence with lasting positive effects for society. The new Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity was announced jointly by Derek Haoyang Li, Founder and Chairman of Squirrel AI Learning, and Yolanda Gil, President of AAAI, at the 2019 conference on AI for Adaptive Education (AIAED) in Beijing.

Patrick Henry Winston: In Memoriam

By Mark Finlayson, Florida International University 

“What do you say?” Patrick’s enunciated greeting would ring out, ritual-like, as I presented myself at the threshold to 32-251. His blond hair poking up from behind his monitor, I could hear in his voice whether he wore his characteristic wry smile. My visits were unscheduled: I would come over from my neighboring office when I saw his light on and door open. I spoke with him almost every work day for nearly twelve years, in conversations long and short, mostly about research: science, engineering, academics, artificial intelligence, cognition, or the latest paper or proposal we were writing.

Ask-Me-Anything

AI experts answer your questions! See all of our AMAs. Your questions will be submitted to our guests, and a video with their answers will be recorded and posted on Interactive AI Magazine and in the weekly AI Alert.

 

For our fourth AMA, we have Dr. David Leake, Professor of Computer Science in the Luddy School of Informatics, Computing and Engineering at Indiana University, where he served as Executive Associate Dean from 2012 to 2021. He received his PhD from Yale University in 1990. His research is in artificial intelligence and cognitive science, including contributions in case-based reasoning, explanation, intelligent information systems, intelligent user interfaces, and introspective learning. He has authored/edited over 200 publications with over 8,500 Google Scholar citations. He played a key role in developing the field of case-based reasoning and is a five-time winner of best paper awards at the International Conference on Case-Based Reasoning (ICCBR). He is Editor in Chief Emeritus of AI Magazine, the official magazine of the Association for the Advancement of Artificial Intelligence (AAAI), after 17 years as Editor in Chief. In 2014 he received the AAAI Distinguished Service Award. He is a Senior Member of AAAI.

Call for Nominations for the 2023 AAAI Award for Artificial Intelligence for the Benefit of Humanity! The award is accompanied by a $25,000 prize and travel expenses. Financial support for the award is provided by Squirrel AI. https://aaai.org/Awards/ai-award.php

The AAAI community is deeply saddened and mourns the loss of Professor Fahiem Bacchus, an AAAI fellow, who passed away September 22. Fahiem was a brilliant researcher and a kind person. Heartfelt condolences to his family, friends, and all who knew him. He will be greatly missed.

The AAAI/EAAI Outstanding Educator Award is given to a person (or group) who has made major contributions to AI education. It includes a $1,000 honorarium, a one-year AAAI membership, and registration to the EAAI and AAAI conferences.