The online interactive magazine of the Association for the Advancement of Artificial Intelligence

AI Magazine cover, Winter 2019

Vol 40 No 4: Winter 2019 | Published: 2020-01-02

 

Reflections on Successful Research in Artificial Intelligence: An Introduction

This editorial introduces the special topic articles on reflections on successful research in artificial intelligence. Consisting of a combination of interviews and full-length articles, the special topic articles examine the meaning of success and metrics of success from a variety of perspectives. Our editorial team is especially excited about this topic, because we are in an era when several of the aspirations of early artificial intelligence researchers and futurists seem to be within reach of the general public. This has spurred us to reflect on, and re-examine, our social and scientific motivations for promoting the use of artificial intelligence in governments, enterprises, and in our lives.

Contributors

Ching-Hua Chen is a research staff member at the T.J. Watson Research Center in Yorktown Heights, New York. She manages the Health Behavior and Decision Science group within the Center for Computational Health. She graduated from Penn State University with a dual-title PhD in Business Administration and Operations Research.

Jim Hendler is the Tetherless World Professor of Computer, Web and Cognitive Sciences at Rensselaer Polytechnic Institute. He was the recipient of the 2017 Association for the Advancement of Artificial Intelligence Distinguished Service Award and is a fellow of the Association for the Advancement of Artificial Intelligence, the Association for Computing Machinery, the Institute of Electrical and Electronics Engineers, and the National Academy of Public Administration.

Sabbir Rashid is a graduate student working with Deborah McGuinness at Rensselaer Polytechnic Institute on research related to the semantic web. Rashid has contributed to technologies involving data annotation and harmonization, using semantic data dictionaries to semantically represent and integrate several publicly available datasets. His current work includes the application of deductive and abductive reasoning techniques over linked health data. This research is being applied as part of the Health Empowerment by Analytics, Learning, and Semantics project to help explain the actions of physicians, as well as adverse drug reactions of patients, in the context of chronic diseases such as diabetes.

Oshani Seneviratne is the director of health data research at the Institute for Data Exploration and Applications at the Rensselaer Polytechnic Institute. Seneviratne’s research interests lie at the intersection of decentralized systems and health applications. At the Rensselaer Institute for Data Exploration and Applications, Seneviratne is involved in the Health Empowerment by Analytics, Learning, and Semantics project, and leads the Smart Contracts Augmented with Analytics, Learning, and Semantics project. Seneviratne obtained her PhD in computer science from the Massachusetts Institute of Technology under the supervision of Sir Tim Berners-Lee. Before Rensselaer, Seneviratne worked at Oracle, specializing in distributed systems, provenance, and healthcare-related research and applications.

Daby Sow is a principal research staff member at IBM Research. Since August 2017, he has managed the Biomedical Analytics and Modeling group, part of the IBM Research Center for Computational Health. In this role, he is leading a team of AI scientists developing novel AI and machine learning solutions for various open healthcare research problems. These problems range from modeling the progression of complex chronic conditions, to pharmacovigilance with the development of signal detection algorithms for early adverse drug reaction detection from real-world evidence data (electronic health records, claims, spontaneous reporting systems), to the generation of time-varying treatment strategies using data collected during clinical practice. Sow is an alumnus of Columbia University, where he received a PhD degree in electrical engineering in 2000.

Biplav Srivastava is a distinguished data scientist and master inventor at IBM’s Chief Analytics Office. With over two decades of research experience in AI, services computing, and sustainability, most of it at IBM Research, Biplav is also an Association for Computing Machinery Distinguished Scientist and Distinguished Speaker, and an Institute of Electrical and Electronics Engineers Senior Member. Srivastava is exploring new approaches for goal-oriented, ethical, human-machine collaboration via natural interfaces using domain and user models, learning, and planning. He is leading efforts for the adoption of AI technologies in a large-scale global business context and for understanding their impact on the workforce. Srivastava received his MS and PhD from Arizona State University, and a BTech from the Indian Institute of Technology, India, all in computer science.

Reflections on Successful Research in Artificial Intelligence: An Interview with Yolanda Gil

This article contains the observations of Yolanda Gil, director of knowledge technologies and research professor at the Information Sciences Institute of the University of Southern California, USA, and president of the Association for the Advancement of Artificial Intelligence, who was recently interviewed about the factors that could influence successful AI research. The interviewers included members of the special track editorial team from IBM Research (Biplav Srivastava and Ching-Hua Chen) and Rensselaer Polytechnic Institute (Oshani Seneviratne).

Contributors

Yolanda Gil is director of knowledge technologies and research professor at the Information Sciences Institute of the University of Southern California, USA, and president of AAAI.

Biplav Srivastava is a distinguished data scientist and master inventor at IBM’s Chief Analytics Office.

Ching-Hua Chen is a research staff member at the IBM T.J. Watson Research Center in Yorktown Heights, New York.

Oshani Seneviratne is the director of health data research at the Institute for Data Exploration and Applications at the Rensselaer Polytechnic Institute.

Standing on the Feet of Giants — Reproducibility in AI

A recent study implies that research presented at top artificial intelligence conferences is not documented well enough for the research to be reproduced. My objective was to investigate whether the quality of the documentation is the same for industry and academic research or whether differences actually exist. My hypothesis is that industry and academic research presented at top artificial intelligence conferences is equally well documented. A total of 325 International Joint Conferences on Artificial Intelligence and Association for the Advancement of Artificial Intelligence research papers reporting empirical studies were surveyed. Of these, 268 were conducted by academia, 47 were collaborations, and 10 were conducted by industry. A set of 16 variables, which specifies how well the research is documented, was reviewed for each paper, and each variable was analyzed individually. Three reproducibility metrics were used for assessing the documentation quality of each paper. The findings indicate that academic research scores higher than industry and collaborations on all three reproducibility metrics. Academic research also scores highest on 15 of the 16 surveyed variables. The result is statistically significant for 3 of the 16 variables, but for none of the reproducibility metrics. The conclusion is that the results are not statistically significant, but they still indicate that my hypothesis probably should be refuted. This is surprising, as the conferences use double-blind peer review and all research is judged according to the same standards.
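The survey methodology described in this abstract — scoring each paper on 16 documentation variables and comparing affiliation groups — can be sketched as follows. The papers, scores, and group means below are invented for illustration; they are not the study's actual survey data, and the function names are hypothetical.

```python
# Illustrative sketch of scoring papers on documentation variables
# and comparing affiliation groups. Toy data only.

def doc_score(variables):
    """Fraction of the 16 documentation variables a paper satisfies."""
    return sum(variables) / len(variables)

# Toy records: (affiliation, 16 boolean documentation variables)
papers = [
    ("academia",      [1] * 12 + [0] * 4),
    ("academia",      [1] * 14 + [0] * 2),
    ("industry",      [1] * 9  + [0] * 7),
    ("collaboration", [1] * 11 + [0] * 5),
]

# Group per-paper scores by affiliation
by_group = {}
for group, variables in papers:
    by_group.setdefault(group, []).append(doc_score(variables))

# Compare mean documentation quality across groups
for group, scores in sorted(by_group.items()):
    print(f"{group}: mean documentation score = {sum(scores) / len(scores):.2f}")
```

With the real 325-paper sample, each group mean would additionally be accompanied by a significance test, as the abstract describes.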

Contributors

Odd Erik Gundersen (PhD, Norwegian University of Science and Technology) is the Chief AI Officer at the renewable energy company TrønderEnergi AS and an Adjunct Associate Professor at the Department of Computer Science at the Norwegian University of Science and Technology. Gundersen has applied AI in industry, mostly for startups, since 2006. Currently, he is investigating how AI can be applied in the renewable energy sector and for driver training, and how AI can be made reproducible.

Reflections on the Ingredients for Success in AI Research: An Interview with Arvind Gupta

This article contains the observations of Arvind Gupta, who has over 22 years of experience in leadership, policy, and entrepreneurial roles in both Silicon Valley and India. Gupta was recently interviewed about the factors that could influence successful artificial intelligence research. At the time of the interview, Gupta was the chief executive officer of MyGov, India. During our interview, he shared with the editorial team his perspectives on investing in artificial intelligence innovations for business and society in India. The interviewers included members of the special track editorial team from IBM (Biplav Srivastava, Daby Sow, and Ching-Hua Chen) and Rensselaer Polytechnic Institute (Oshani Seneviratne).

Contributors

Arvind Gupta
MyGov, Government Of India

Biplav Srivastava
IBM

Daby Sow
IBM T.J. Watson Research Center

Ching-Hua Chen
IBM T. J. Watson Research Center

Oshani Seneviratne
Rensselaer Polytechnic Institute

Identifying Critical Contextual Design Cues Through a Machine Learning Approach

Given the rise of autonomous systems in the transportation, medical, and manufacturing industries, there is an increasing need to understand how such systems should be designed to promote effective interactions between one or more humans working in and around these systems. Because human-in-the-loop studies are costly and time-consuming, practitioners need an analytical strategy that helps them determine whether their designs are capturing their planned intent. A traditional top-down, hypothesis-driven experiment that examined whether external displays mounted on autonomous cars could effectively communicate with pedestrians led to the conclusion that the displays had no effect on safety. However, by first taking a bottom-up, data-driven machine learning approach, those segments of the population that were most affected by the external displays were identified. Then, a hypothesis-driven, within-subjects analysis of variance revealed that an external display mounted on an autonomous car that provided the vehicle’s speed, as opposed to commanding a go/no-go decision, provided an additional 4 feet of safety for early adopters. One caveat to this approach is that the selection of a specific algorithm can significantly influence the results, and more work is needed to determine the sensitivity of this approach with seemingly similar machine learning classification approaches.
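The two-stage strategy this abstract describes — bottom-up clustering to surface the subgroups most affected by a design, followed by targeted hypothesis testing within those subgroups — can be illustrated with a minimal one-dimensional k-means sketch. The "safety margin" values, the two-cluster choice, and the function name below are hypothetical assumptions for illustration, not the authors' data, code, or algorithm.

```python
# Stage 1 of the two-stage strategy: cluster participants by an
# observed behavioral measure to find subgroups worth testing.
# Minimal 1-D two-cluster k-means; toy data only.

def kmeans_1d(values, iters=20):
    """Two-cluster 1-D k-means; returns (assignments, centers)."""
    centers = [min(values), max(values)]  # spread initial centers apart
    assign = [0] * len(values)
    for _ in range(iters):
        # Assign each value to its nearest center
        assign = [0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
                  for v in values]
        # Recompute each center as the mean of its members
        for c in (0, 1):
            members = [v for v, a in zip(values, assign) if a == c]
            if members:
                centers[c] = sum(members) / len(members)
    return assign, centers

# Hypothetical per-pedestrian "safety margin" (feet) under a speed display
margins = [2.0, 2.5, 3.0, 7.5, 8.0, 8.5]
assign, centers = kmeans_1d(margins)
print("cluster centers (feet):", centers)
```

Stage 2 would then run a conventional within-subjects test (such as the analysis of variance the abstract mentions) only on the subgroup the clustering identifies, rather than on the pooled population where the effect washes out.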

Contributors

Mary L. “Missy” Cummings
Duke University

Alexander Stimpson
American Haval Motor Technology

Uncertain Context: Uncertainty Quantification in Machine Learning

Machine learning and artificial intelligence will be deeply embedded in the intelligent systems humans use to automate tasking, optimize planning, and support decision-making. However, many of these methods can be challenged by dynamic computational contexts, resulting in uncertainty in prediction errors and overall system outputs. Therefore, it will be increasingly important for uncertainties in underlying learning-related computer models to be quantified and communicated. The goal of this article is to provide an accessible overview of computational context and its relationship to uncertainty quantification for machine learning, as well as to provide general suggestions on how to implement uncertainty quantification when doing statistical learning. Specifically, we will discuss the challenge of quantifying uncertainty in predictions using popular machine learning models. We present several sources of uncertainty and their implications on statistical models and subsequent machine learning predictions.
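One widely applicable technique in the spirit of the article's suggestions for implementing uncertainty quantification is bootstrap resampling: refitting a model on resampled data to obtain an interval, rather than a point estimate, for what it has learned. The toy data, the through-the-origin regression model, and the function names below are illustrative assumptions, not the article's method.

```python
# Sketch of bootstrap uncertainty quantification for a learned
# model parameter. Toy data and model; illustrative only.
import random

def fit_slope(xs, ys):
    """Least-squares slope for the model y = b * x (through the origin)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def bootstrap_slopes(xs, ys, n_boot=1000, seed=0):
    """Refit the slope on n_boot resamples; returns sorted slope estimates."""
    rng = random.Random(seed)
    n = len(xs)
    slopes = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # sample with replacement
        slopes.append(fit_slope([xs[i] for i in idx], [ys[i] for i in idx]))
    return sorted(slopes)

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]   # roughly y = 2x plus noise
slopes = bootstrap_slopes(xs, ys)
lo, hi = slopes[25], slopes[-26]  # approximate 95% interval
print(f"slope ~ {fit_slope(xs, ys):.2f}, 95% interval ~ [{lo:.2f}, {hi:.2f}]")
```

Reporting the interval alongside the point estimate is one simple way to communicate model uncertainty to downstream decision-makers, which is the article's central concern.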

Contributors

Brian Jalaian
U.S. Army Research Laboratory

Stephen Russell
U.S. Army Research Laboratory

Methods of AI for Multimodal Sensing and Action for Complex Situations

Artificial intelligence (AI) seeks to emulate human reasoning, but is still far from achieving such results for actionable sensing in complex situations. Instead of emulating human situation understanding, machines can amplify intelligence by accessing large amounts of data, filtering unimportant information, computing relevant context, and prioritizing results (for example, answers to human queries) to provide human–machine shared context. Intelligence support can come from many contextual sources that augment data reasoning through physical, environmental, and social knowledge. We propose decisions-to-data, a multimodal sensing-and-action approach in which contextual agents (human or machine) seek, combine, and make sense of relevant data. Decisions-to-data combines AI computational capabilities with human reasoning to manage data collections, perform data fusion, and assess complex situations (that is, context reasoning). Five areas of AI development for context-based AI that cover decisions-to-data include: (1) situation modeling (data at rest), (2) measurement control (data in motion), (3) statistical algorithms (data in collect), (4) software computing (data in transit), and (5) human–machine AI (data in use). A decisions-to-data example is presented of a command-guided swarm requiring contextual data analysis, systems-level design, and user interaction for effective and efficient multimodal sensing and action.

Contributors

Erik Blasch
Air Force Office of Scientific Research

Robert Cruise
Naval Surface Warfare Center

Alexander J. Aved
Air Force Research Laboratory

Uttam K. Majumder
Air Force Research Laboratory

Todd V. Rovito
Air Force Research Laboratory

Truly Autonomous Machines Are Ethical

There is widespread concern that as machines move toward greater autonomy, they may become a law unto themselves and turn against us. Yet the threat lies more in how we conceive of an autonomous machine than in the machine itself. We tend to see an autonomous agent as one that sets its own agenda, free from external constraints, including ethical constraints. A deeper and more adequate understanding of autonomy has evolved in the philosophical literature, specifically in deontological ethics. It teaches that ethics is an internal, not an external, constraint on autonomy, and that a truly autonomous agent must be ethical. It tells us how we can protect ourselves from smart machines by making sure they are truly autonomous rather than simply beyond human control.

Contributors

John Hooker
Carnegie Mellon University

Tae Wan Kim
Carnegie Mellon University

Experiments in Social Media

Social media platforms like Facebook and Twitter permit experiments to be performed at minimal cost on populations of a size that scientists might previously have dreamed about. For instance, one experiment on Facebook involved more than 60 million subjects. Such large-scale experiments introduce new challenges, as even small effects, when multiplied by a large population, can have a significant impact. Recent revelations about the use of social media to manipulate voting behavior compound such concerns. It is believed that the psychometric data used by Cambridge Analytica to target US voters was collected by Dr. Aleksandr Kogan of Cambridge University using a personality quiz on Facebook. There is a real risk that researchers wanting to collect data and run experiments on social media platforms in the future will face a public backlash that hinders such studies from being conducted. We suggest that stronger safeguards be put in place to help prevent this and to ensure that the public retains confidence in scientists using social media for behavioral and other studies.

Contributors

Toby Walsh
National ICT Australia Ltd.

A Year in K-12 AI Education

The time is ripe to consider what 21st-century digital citizens should know about artificial intelligence (AI). Efforts are under way in the USA, China, and many other countries to promote AI education in kindergarten through high school (K–12). The past year has seen the release of new curricula and online resources for the K–12 audience, and new professional development opportunities for K–12 teachers to learn the basics of AI. This column surveys the current state of K–12 AI education and introduces the work of the AI4K12 Initiative, which is developing national guidelines for AI education in the USA.

Contributors

David Touretzky
Carnegie Mellon University

Christina Gardner-McCune
University of Florida

Cynthia Breazeal
Massachusetts Institute of Technology

Fred Martin
University of Massachusetts, Lowell

Deborah Seehorn
Computer Science Teachers Association
