Vol 40 No 3: Fall 2019 | Published: 2019-09-30
The Case for Teamwork
We were delighted to see No AI Is an Island: The Case for Teaming Intelligence highlighted in the latest AI Magazine. The case for teamwork is, however, neither new nor as neglected as this article suggests; it is a case we and many others have made for three decades.
Philip R. Cohen
Barbara J. Grosz
Candace L. Sidner
Worcester Polytechnic Institute
University of Melbourne
NASA Ames Research Center
Artificial intelligence, Autonomy, and Human-Machine Teams — Interdependence, Context, and Explainable AI
Because information in military situations, as well as in self-driving cars, must be processed faster than humans can manage, the computational determination of context, also known as situation assessment, is increasingly important. In this article, we introduce the topic of context, and we discuss what is known about the heretofore intractable research problem of the effects of interdependence, present in the best of human teams; we close by proposing that interdependence must be mastered mathematically to operate human-machine teams efficiently, to advance theory, and to make the machine actions directed by AI explainable to team members and society. The special topic articles in this issue and a subsequent issue of AI Magazine review ongoing mature research and operational programs that address context for human-machine teams.
In 1983, William Lawless blew the whistle on Department of Energy (DOE) mismanagement of military radioactive wastes. After his PhD, he joined DOE’s citizen advisory board at its Savannah River Site, where he coauthored over 100 recommendations on its cleanup. His research today is on interdependence for teams (human-machine teams). He advised the Naval Research and Development Enterprise’s Applied Artificial Intelligence Summit in 2018; coedited four books; published widely; and co-organized eight AAAI symposia at Stanford, with a ninth in 2019 on shared context.
Ranjeev Mittu is the Branch Head for the Information Management and Decision Architectures Branch within the Information Technology Division, US Naval Research Laboratory. His research expertise is in multi-agent systems, artificial intelligence, machine learning, data mining, pattern recognition, and anomaly detection. He has won an award for transitioning technology solutions to the operational community, and has coauthored one book, coedited four books, and written numerous book chapters, articles, and conference publications. He has served on scientific exchanges as a subject matter expert and on Technology Evaluation Boards. He has an MS in Electrical Engineering from The Johns Hopkins University.
Donald Sofge is a computer scientist and roboticist at the U.S. Naval Research Laboratory (NRL) with more than 30 years of experience in AI and Control Systems R&D. He has served as PI or Co-PI on dozens of federally funded R&D programs, and has numerous publications on autonomy, intelligent control, and quantum computing, including 5 books and one patent. He leads the Distributed Autonomous Systems Group at NRL, where he develops nature-inspired computing solutions to problems in sensing, AI, and autonomous robotic systems control, including autonomous teams or swarms of robotic systems for Navy missions.
Laura Hiatt is a research scientist at the U.S. Naval Research Laboratory. She received her BS in symbolic systems from Stanford University and her PhD in computer science from Carnegie Mellon University. Hiatt’s work has primarily focused on ways in which humans and robots can effectively work together as teammates. The research involves issues of planning and reasoning, human situational awareness, and team-based task communication strategies. Much of her work has also involved developing computational cognitive models of human cognition, and leveraging them to improve the ability of robots to team with humans and accomplish their tasks.
Recent Trends in Context Exploitation for Information Fusion and AI
AI is related to information fusion (IF). Many methods in AI that use perception and reasoning align to the functionalities of high-level IF (HLIF) operations that estimate situational and impact states. To achieve HLIF sensor, user, and mission management operations, AI elements of planning, control, and knowledge representation are needed. Both AI reasoning and IF inferencing and estimation exploit context as a basis for achieving deeper levels of understanding of complex world conditions. Open challenges for AI researchers include achieving concept generalization, response adaptation, and situation assessment. This article presents a brief survey of recent and current research on the exploitation of context in IF and discusses the interplay and similarities between IF, context exploitation, and AI. In addition, it highlights the role that contextual information can provide in the next generation of adaptive intelligent systems based on explainable AI. The article describes terminology, addresses notional processing concepts, and lists references for readers to follow up and explore ideas offered herein.
Lauro Snidaro received his PhD (2006) in computer science from the University of Udine, where he is now an associate professor. His main interests include data/information fusion, computer vision, machine learning, and multimedia. Appointed Italian member of several NATO Research Task Groups on information fusion since 2003, Snidaro was the lead editor of Context-Enhanced Information Fusion – Boosting Real-World Performance with Domain Knowledge, published by Springer in 2016. He is editor of the fusion for signal/image processing and understanding area for the Elsevier Information Fusion journal, and of the high-level fusion area for the ISIF Journal of Advances in Information Fusion.
Jesus Garcia is an associate professor in the Computer Science Department at the Universidad Carlos III de Madrid. His main research interests are computational intelligence, sensor and information fusion, machine vision, traffic management systems, and autonomous vehicles. Within these areas, including theoretical and applied aspects, he has coauthored more than 10 book chapters, 60 journal papers, and 180 conference papers. He has served on several advisory and program committees for IEEE, ISIF, and NATO organizations. He has been chair of the Spanish IEEE Chapter on Aerospace and Electronic Systems since 2013 and is an appointed Spanish member of several NATO-STO Research Groups.
James Llinas is a research professor emeritus and founding director of the Center for Multisource Information Fusion at the State University of New York at Buffalo, New York, USA. He is an internationally recognized expert in sensor, data, and information fusion, coauthored the first integrated book Multi-Sensor Data Fusion, and has taught and lectured internationally for more than 20 years on this topic. He has also coedited and coauthored the Handbook of Data Fusion, Distributed Fusion for Net-Centric Operations, and most recently, Context Enhanced Information Fusion.
Erik Blasch is a program officer at the Air Force Office of Scientific Research. He received a BS in mechanical engineering from the Massachusetts Institute of Technology; MS degrees in mechanical, health science, and industrial engineering from the Georgia Institute of Technology; MS degrees in electronics, economics, and business from Wright State University; and a PhD in electrical engineering from Wright State University. He has compiled more than 750 papers, 21 patents, and 5 books focusing on robotics, information fusion, and man-machine systems, starting with the 1994 AAAI Robot Competition Winner (AI Magazine 16(2), 1995). He is an ISIF member, AIAA Associate Fellow, SPIE Fellow, and IEEE Fellow.
Integrating Context into Artificial Intelligence: Research from the Robotics Collaborative Technology Alliance
Applying context to a situation, task, or system state provides meaning and advances understanding that can affect future decisions or actions. Although people are naturally good at perceiving context and inferring missing pieces of information from various alternative sources, this process is difficult for AI systems or robots, especially in high-uncertainty, unstructured operations. Integration of context-driven AI is important for future robotic capabilities to support the development of situation awareness, calibrate appropriate trust, and improve team performance in collaborative human-robot teams. This article highlights advances in context-driven AI for human-robot teaming by the Army Research Laboratory’s Robotics Collaborative Technology Alliance. Avenues of research discussed include how context enables robots to fill in the gaps to make effective decisions more quickly, supports more robust behaviors, and augments robot communications to suit the needs of the team under a variety of environments and team organizations and across missions.
Kristin E. Schaefer is an engineer with the CCDC Army Research Laboratory. She received her PhD in modeling and simulation from the University of Central Florida in 2013. Her research interests lie primarily in the areas of artificial intelligence and modeling and simulation approaches to enhance the development of bidirectional communication and trust in human-robot teams.
Jean Oh is a systems scientist (research faculty) at the Robotics Institute at Carnegie Mellon University. Her current research is focused on the intersection among vision, language, and planning in robotics. She is passionate about creating persistent robots that can coexist with humans in shared environments, learning to improve themselves over time through continuous training, exploration, and interactions.
Derya Aksaray is an assistant professor in the Aerospace Engineering and Mechanics Department at the University of Minnesota. She received her PhD degree in aerospace engineering from the Georgia Institute of Technology in 2014. She then held post-doctoral researcher positions at Boston University from 2014 to 2016 and at the Massachusetts Institute of Technology from 2016 to 2017. Her research interests lie primarily in the areas of control theory, formal methods, and machine learning with applications to autonomous systems, robotics, and human-robot teaming.
Daniel Barber is a research assistant professor at the University of Central Florida. He has extensive experience in the field of robotics, simulation development, training environments, and human state assessment. His current research focus is on human system interaction and training assessment including multimodal communication, user interaction devices, teaming, physiological assessment, and adaptive systems.
Context-Driven Proactive Decision Support for Hybrid Teams
A synergy between AI and the Internet of Things (IoT) will significantly improve sense-making, situational awareness, proactivity, and collaboration. However, the key challenge is to identify the underlying context within which humans interact with smart machines. Knowledge of the context facilitates proactive allocation among members of a human–smart machine (agent) collective that balances autonomy with human interaction, without displacing humans from their supervisory role of ensuring that the system goals are achievable. In this article, we address four research questions as a means of advancing toward proactive autonomy: how to represent the interdependencies among the key elements of a hybrid team; how to rapidly identify and characterize critical contextual elements that require adaptation over time; how to allocate system tasks among machines and agents for superior performance; and how to enhance the performance of machine counterparts to provide intelligent and proactive courses of action while considering the cognitive states of human operators. The answers to these four questions help us to illustrate the integration of AI and IoT applied to the maritime domain, where we define context as an evolving multidimensional feature space for heterogeneous search, routing, and resource allocation in uncertain environments via proactive decision support systems.
University of Connecticut
U.S. Naval Research Laboratory
University of Connecticut
University of Connecticut
Krishna R. Pattipati
University of Connecticut
AI Bookie: Will a Self-Authorizing AI-Based System Take Control from a Human Operator?
The AI Bookie column documents highlights from AI Bets, an online forum for the creation of adjudicatable predictions and bets about the future of AI. While it is easy to make a prediction about the future, this forum was created to help researchers craft predictions whose accuracy can be clearly and unambiguously judged when they come due. The bets will be documented online, and regularly in this publication in The AI Bookie. We encourage bets that are rigorously and scientifically argued. We discourage bets that are too general to be evaluated, or too specific to an institution or individual. The goal is not to continue to feed the media frenzy and pundit predictions about AI, but rather to curate and promote bets whose outcomes will provide useful feedback to the scientific community. Place your bets! Please go to ai.sciencebets.org.
William Lawless blew the whistle on Department of Energy (DOE) mismanagement of military radioactive wastes. After his PhD, he joined DOE’s citizen advisory board at its Savannah River Site, where he coauthored over 100 recommendations on its cleanup. His research today is on interdependence for teams (human-machine teams). He is also a professor at Paine College.
Ranjeev Mittu is the Branch Head for the Information Management and Decision Architectures Branch within the Information Technology Division, US Naval Research Laboratory. His research expertise is in multi-agent systems, artificial intelligence, machine learning, data mining, pattern recognition, and anomaly detection.
Donald Sofge is a computer scientist and roboticist at the U.S. Naval Research Laboratory (NRL) researching teams and swarms of autonomous robotic systems.