The online interactive magazine of the Association for the Advancement of Artificial Intelligence

Aaron Adler, Jacob W. Crandall, Michael A. Goodrich, Knut Hinkelmann, Mayank Kejriwal, Eric Kildebeck, Andreas Martin, Abhinav Shrivastava 

The Association for the Advancement of Artificial Intelligence’s 2022 Spring Symposium Series was held at Stanford University in Palo Alto, California, March 21-23, 2022. There were nine symposia in the program: AI Engineering: Creating Scalable, Human-Centered and Robust AI Systems; Artificial Intelligence for Synthetic Biology; Can We Talk? How to Design Multi-Agent Systems In the Absence of Reliable Communications; Closing the Assessment Loop: Communicating Proficiency and Intent in Human-Robot Teaming; Designing Artificial Intelligence for Open Worlds; Ethical Computing: Metrics for Measuring AI’s Proficiency and Competency for Ethical Reasoning; How Fair is Fair? Achieving Wellbeing AI; Machine Learning and Knowledge Engineering for Hybrid Intelligence (AAAI-MAKE); and Putting AI in the Critical Loop: Assured Trust and Autonomy in Human-Machine Teams. This report contains summaries of the symposia. 

AI Engineering: Creating Scalable, Human-Centered and Robust AI Systems (S1) 

While both industry and research communities focus substantial work on AI, the development of new AI technology and the implementation of AI systems are two different challenges. Current AI solutions often undergo limited testing in controlled environments, and their performance is difficult to replicate, verify, and validate. To improve reliable deployment of AI and enable trust and confidence in AI systems, implementers need access to leading practices, processes, tools, and frameworks. The goal of this symposium is to establish and grow a community of researchers and practitioners focused on the discipline of AI Engineering, a field that combines the principles of systems engineering, software engineering, computer science, and human-centered design to create AI systems in accordance with human needs for stakeholder outcomes. By sharing lessons learned and practical experiences, we can expand the AI Engineering body of knowledge and progress from advancing individual tools to the development of systems. 

No formal report was filed by the organizers for this symposium. 

Artificial Intelligence for Synthetic Biology (S2) 

The fourth Artificial Intelligence (AI) for Synthetic Biology AAAI symposium was held virtually on March 21-22, 2022. Our primary goal for this symposium was to continue to connect and build mutually beneficial collaborations between the AI and the synthetic biology communities. The theme of the symposium was “New AI or New Data.”   

 Synthetic Biology integrates biology and engineering — mixing theoretical and experimental biology; engineering principles; and chemistry, physics, and mathematics. As the field grows, it is apparent that there are many opportunities, and a need, to apply AI techniques to several complex problem areas in the field. The symposium consisted of a mix of seven technical talks, two keynote talks, and discussion sessions.   

 Prof. Elizabeth Libby (Northeastern University) gave a keynote talk titled “Designing Programmable Biosensors.” Dr. Tristan Bepler (Simons Machine Learning Center, New York Structural Biology Center) gave a keynote talk titled “Learning the Protein Language: Evolution, Structure, and Function.”  

 A panel of Geoff Baldwin (Imperial College London), Pat Langley (Institute for the Study of Learning and Expertise, Stanford’s Center for Design Research), and Jacob Beal (Raytheon BBN) discussed “New AI or New Data.”   

 During the symposium we also discussed the Communications of the ACM article (https://doi.org/10.1145/3500922) that was an outcome of the 2019 symposium.   

 Talks addressed specific AI and machine learning techniques including deep reinforcement learning and neural networks. Several talks discussed the need for data curation. Other talks addressed the need for rigorous experimentation to gather needed data. Highlighted applications ranged from biological threat detection to drug discovery.   

The symposium concluded with a discussion of next steps and future meeting venues.  Aaron Adler (Raytheon BBN), Fusun Yaman (Raytheon BBN), Rajmonda Caceres (MIT Lincoln Laboratory), and Mohammed Eslami (Netrias, LLC) served as co-chairs of the symposium.   

More information is available on the symposium website: https://www.ai4synbio.org.   

Can We Talk? How to Design Multi-Agent Systems In the Absence of Reliable Communications (S3) 

Existing research on multi-agent autonomous systems is unable to solve an important class of real problems. At the root of many Multi-Agent Systems (MAS) approaches is the assumption of pervasive, predictable, reliable, and free communications.  These assumptions do not hold in practice; agents need to plan to communicate, reason about the time and resources needed to communicate, and respond to changes in the world requiring unplanned communications.   Uncertainty in the ability to communicate profoundly complicates multi-agent systems applications, and the combination of deployment in remote environments, poorly characterized environments, and the fragility of many aerospace and robotic systems, makes this an important problem to address. When agents are responsible for communication relay duties while simultaneously performing other tasks as part of a multi-agent system, fault management also takes on a new importance. 

The symposium featured ten papers and four invited talks focused on current and emerging methods for handling multi-agent systems problems over one and a half days.   

The invited talks explored both applications of MAS and theoretical work. Dr. Jonathan Stock of USGS described the challenges of unmeasured rivers and of geology aloft (dust, particulates, and biology) that can benefit from uncrewed air systems (UAS) capable of coordinating scientific measurements. Dr. Jaewoo Jung of NASA described the future of air traffic management, in which UAS, unpiloted air taxis, and numerous other vehicles in airspace negotiate with each other; such an airspace requires communications with automated air traffic control systems, which also must communicate and coordinate effectively with each other. Dr. Jan Faigl described multi-robot systems capable of exploration in challenging underground environments; these robots deploy their own communications infrastructure, enabling communications between robots and from robots to human operators. Robots make effective use of low bandwidth by exchanging only the most useful summaries of their current state, namely position information and local maps. Dr. Ankur Mehta of UCLA described strategies to co-develop multi-agent systems and their communication systems to best effect. In one line of work, each agent estimates the complete state of a system, and agents exchange and combine states in order to reduce uncertainty, thus blending single and distributed Kalman filter approaches; in a second line of work, multiple agents needing to communicate with a single base station featuring multiple receivers build motion plans to minimize communication interference and maximize bandwidth. 
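
To make the state-exchange idea concrete, here is a minimal sketch (our illustration, not code from the talk) of fusing two independent Gaussian state estimates in information form; the fused covariance is smaller than either input, which is the sense in which exchanging estimates reduces uncertainty. The function name and the toy numbers are assumptions made purely for illustration.

import numpy as np

def fuse_estimates(x1, P1, x2, P2):
    # Fuse two (assumed independent) Gaussian estimates of the same state.
    # x1, x2 are mean vectors; P1, P2 are covariance (uncertainty) matrices.
    I1 = np.linalg.inv(P1)       # information matrix held by agent 1
    I2 = np.linalg.inv(P2)       # information matrix held by agent 2
    P = np.linalg.inv(I1 + I2)   # fused covariance: "tighter" than either input
    x = P @ (I1 @ x1 + I2 @ x2)  # information-weighted average of the means
    return x, P

# Toy example: two agents hold slightly different 2-D position estimates.
x_a, P_a = np.array([1.0, 2.0]), np.diag([0.5, 0.5])
x_b, P_b = np.array([1.2, 1.9]), np.diag([0.2, 0.8])
x_f, P_f = fuse_estimates(x_a, P_a, x_b, P_b)
print(x_f, np.trace(P_f))        # fused uncertainty is below both inputs

When estimates become correlated, as they typically do after repeated exchanges, a more conservative fusion rule such as covariance intersection would be needed in place of this simple information-form combination.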

Problems: Papers covered a wide range of problem areas. Most papers focused on how to ensure a multi-agent team can perform specific tasks in the presence of barriers to communication. Tasks included localization (Frank et al.), movement while maintaining formation (Svancara et al.), information gathering (Selva et al., Freedman and Kutur), and teamwork in manufacturing (Xiao et al., Schader and Luke). While most papers assumed communication could take place in the open, (Mueller and Kutur) described the desire for non-traditional communication modalities, trading bandwidth for secrecy. Finally, (Kempa et al.) describe the problem of effective communication of a multi-agent system’s aggregate state to an operator. 

Barriers to Communications: A wide range of issues arise from the need to communicate, or more generally coordinate, multi-agent systems while overcoming barriers to pervasive, predictable, reliable, and free communications. Space-based applications (Frank et al., Kempa et al., Selva et al.) require reasoning about changing communication topologies as agents pass in and out of view of each other, the time and resources needed to communicate, and communication latency. Other papers highlighted different assumptions regarding communications in multi-agent problems; synchronization may be needed but can arise when agents exchange information (Morgan et al., Schader and Luke), while in other scenarios, agents may need to craft motion plans while maintaining communications continuously (Svancara et al.). 

Communication vs Coordination: While communication was the specific focus of the symposium, several papers treated the problem as one of coordination, which can be achieved by either communication or observation of other agents’ actions or the environment (Tuisov et al., Xiao et al., Schader and Luke, Morgan et al.).  

Building Blocks of Solutions: Several papers focused on solution methods allowing agents in a team to act independently while observing other agents’ effects on the environment, other agents directly, or both. Some approaches learn optimal agent behavior policies (Xiao et al.); other approaches use social laws based on observations to decouple agents (Tuisov et al.), or higher-level control via planning to decouple agents (Schader and Luke), while still other approaches devise strategies for one agent ensuring a second agent will always be able to act satisfactorily regardless of its partner’s plan (Morgan et al.). Other approaches use programmable metareasoning as the foundation of agent behaviors (Freedman and Kutur). Some papers used classic optimization techniques for coordination (Frank et al., Selva et al., Morgan et al.). Formal methods provide the foundation for describing automatically computable agent summary techniques in (Kempa et al.). Finally, detectable perturbations of an agent’s plan or state form the means of covert channel communication strategies (Mueller and Kutur). 

Closing the Assessment Loop: Communicating Proficiency and Intent in Human-Robot Teaming (S4) 

An AAAI Symposium on the topic of communicating proficiency and intent in human-robot teams was held at Stanford University, March 21-23, 2022.  The symposium was designed to bring together researchers from disparate fields to investigate how to create AI systems that effectively communicate regarding proficiency and intent with human partners. 

Major barriers to effective human-robot partnerships involve the communication of intent and proficiency.  Not only must robots be capable of performing tasks effectively, they must (1) know or identify the intentions of their human partners, (2) assess their own proficiency in carrying out these intentions, and (3) communicate these proficiency self-assessments back to their human partners.  Example challenge questions include: How should a robot convey predicted ability on a new task?  How should it report performance on a task that was just completed?  How should a robot adapt its proficiency criteria based on human intentions and values?  Communities in AI, robotics, HRI, and cognitive science have addressed related questions, but there are no agreed upon standards for evaluating proficiency and intent-based interactions.   

Assessing and communicating proficiency is a pressing challenge for human-robot interactions for several reasons. Prior work has shown that a robot that can assess its performance can alter human perception of the robot and decisions on control allocation. There is also substantial evidence in robotics that accurately setting human expectations is critical, especially when proficiency is below human expectations. Moreover, proficiency assessment depends on context and intent, and a human teammate might increase or decrease performance standards, adapt tolerance for risk and uncertainty, demand predictive assessments that affect attention allocation, or otherwise reassess or adapt intent. 

This symposium brought together researchers from the subfields of AI, HRI, robotics, and psychology to investigate how to create robots that possess proficiency assessment and communication capabilities in future AI technologies and systems.  One major theme of the papers presented at the symposium was assessment, both of individual robot proficiency and of the proficiency of teams of individuals.  Additional presentations and discussions focused on how reports of proficiency self-assessment affect human trust in these systems and on measuring the performance of human-robot teams. 

Another major theme of accepted papers presented at the symposium centered around how robots can and should communicate with human partners regarding their intentions and proficiency self-assessments.  For example, if a robot fails to achieve an outcome, how can it determine the cause of that failure?  Researchers discussed how imitating human behavior learning can provide insights into how to identify and communicate failure.  Another presentation investigated grand challenges in robot communication in shared control scenarios.  When sharing control with a human partner, robots must be able to engage in context-aware communication in novel situations at appropriate times and in appropriate ways. 

In addition to presentations of accepted papers, the symposium had four invited (keynote) speakers, a panel on “explainability,” and a break-out session in which groups of researchers discussed the design of futuristic AI systems.  These talks and activities addressed fundamental topics related to the symposium themes.  Dorsa Sadigh (Stanford) spoke on learning latent intent in multi-agent systems.  Hadas Kress-Gazit (Cornell) discussed how linear temporal logic can be used to perform robot proficiency self-assessment.  Bertram Malle (Brown), speaking in a joint session with the symposium on Ethical Computing, presented work on how robots explain, justify, and show norm awareness.  Finally, Jason Stack (ONR), also speaking in that joint session, discussed the process used by the US Department of Defense to attempt to ensure ethical use of AI systems.  Each talk generated significant discussions and insights into human-robot communication of proficiency and intent. 

The lively discussions in the symposium highlighted the general consensus that much additional work is needed to reach the lofty goal of robots that effectively communicate with their human partners about proficiency and intent. 

Michael Goodrich, Holly Yanco, Jacob Crandall, and Aaron Steinfeld served as co-chairs of the symposium.  Accepted papers were published as a collection on arXiv. 

Designing Artificial Intelligence for Open Worlds (S5) 

Open world learning has taken on new importance in recent years as AI systems continue to be applied and transitioned to real-world settings, where unexpected events can, and do, occur. Designing AI that can thrive in open world environments is a complex problem still in the early stages of research. This symposium assembled over fifty AI researchers and practitioners from across academia, government, and industry in a program that included more than twenty contributed and invited papers, three distinguished speakers, two panels, and a townhall to discuss the unique challenges and issues related to designing AI for open worlds. 

Designing intelligent systems that can reliably handle unexpected situations is critical if we are to transition them into complex environments. Historically, AI advances have often been piloted and fine-tuned on well-understood benchmark datasets, or in tightly controlled simulation environments where there is limited room for truly novel occurrences. In contrast with such testbeds, an open world learning framework must be prepared to deal with the unexpected following deployment. For example, a self-driving car originally trained in a simulator that assumes good visibility and clear highways should also be able to deal with fog and traffic jams, if it has been designed to handle open world challenges. 

While longstanding work in the machine learning community on concept drift and anomaly detection provides some guidance on open world learning, its assumptions are too limited for genuine open worlds. Concept drift, for example, assumes relatively slow change in the input distribution, whereas open worlds can involve sudden ‘structural’ shifts, such as a car crash on the highway. AI systems deployed in such worlds essentially face a ‘do or die’ situation; they are not given many examples to learn from in the moment. 
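
As a minimal illustration of that point (ours, not a method presented at the symposium), the sketch below flags a single input as out-of-distribution the moment it arrives, rather than waiting for a windowed drift statistic to accumulate evidence; the feature dimensionality and the threshold are illustrative assumptions.

import numpy as np

def fit_reference(train_features):
    # Summarize the training distribution with its mean and inverse covariance.
    mu = train_features.mean(axis=0)
    cov = np.cov(train_features, rowvar=False)
    return mu, np.linalg.inv(cov)

def is_novel(x, mu, cov_inv, threshold=16.0):
    # Flag one input as novel if its squared Mahalanobis distance from the
    # training distribution exceeds a hand-chosen threshold. Unlike a drift
    # monitor averaged over a window, this fires on the very first example.
    d2 = float((x - mu) @ cov_inv @ (x - mu))
    return d2 > threshold

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 3))   # in-distribution training features
mu, cov_inv = fit_reference(train)
print(is_novel(np.array([0.2, -0.1, 0.4]), mu, cov_inv))  # False: familiar input
print(is_novel(np.array([8.0, 9.0, -7.0]), mu, cov_inv))  # True: sudden structural shift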

 An ideal open world AI system would continuously learn from its experience, have a certain level of meta-learning ability and self-awareness (e.g., know what it does not know), and overcome changes in its environment through an application of fundamental, rather than incremental, learning principles. Hence, the symposium was organized with the agenda of synthesizing the current state-of-the-art on open world learning, including promising approaches from academia, industry, and government research programs that significantly expand upon the more traditional approaches of anomaly detection, lifelong learning, neurobiological mechanisms of machine learning, concept drift, and similar areas. 

To accomplish this goal, we designed a highly interactive program with a judicious mix of talks, panels, plenary sessions and a concluding townhall to discuss the future of open world learning. Important highlights include: 

Industry insights: Industry participants (including from Meta, Charles River Analytics, and Allen Institute for AI, among others)  emphasized the critical importance of reliability, and AI that ‘knows what it does not know.’ Typical failure rates in research environments (e.g., 5%) are often wholly unacceptable in deployed AI, with a representative from Google noting that a system failing even 0.01% of the time could translate to tens of thousands of failures per day. Such a system cannot be deployed in practice. The expectation here from deployed AI systems is not that they never fail, but that they know when the probability of failure is high so they can be used reliably. 
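
To make the scale argument concrete, a back-of-the-envelope calculation is shown below; the daily request volume is our illustrative assumption, not a figure quoted at the symposium.

failure_rate = 0.0001             # 0.01% of requests fail
requests_per_day = 500_000_000    # assumed traffic for a large deployed service
failures_per_day = failure_rate * requests_per_day
print(f"{failures_per_day:,.0f} failures per day")   # 50,000 failures per day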

Contributed research pushing the state-of-the-art in open world learning: Significant research progress was presented on novelty detection and adaptation, which is an important milestone in open world learning. Of particular interest to the research community were multiple presentations describing and utilizing new testbeds for novelty research (e.g., Science Birds Novelty from Jochen Renz – Australian National University, NovGrid grid world from Mark Riedl – Georgia Tech, Smart Home testbed from Larry Holder – Washington State University, and a Monopoly novelty testbed from Mayank Kejriwal – University of Southern California) and a testbed for lifelong learning (L2Explorer from Gautam Vallabha – Johns Hopkins University). These resources and testbeds will likely prove important in mobilizing the efforts of the (growing) open world learning research community.  

Connections between open world learning and other paradigms: While a robust system is defined as one that degrades gracefully when faced with adversity, an anti-fragile system is one that improves performance through learned experience and adaptation. An excellent example in the natural world is the immune system. The two panels in the symposium, the three plenary speakers, and a vibrant discussion from symposium attendees, provided critical insight into the link between anti-fragility and open world learning. Participants also realized that much theoretical and empirical work remains to be done to understand better such links, including the links between open world learning and related fields such as meta-learning and lifelong learning. 

In summary, the symposium highlighted the potential for AI systems to move beyond artificial tasks and simulations into truly complex open worlds. Although challenges remain, the AI of the future may well be capable of fundamentally accommodating structural novelties on its own, and in real time. 

 The “Designing Artificial Intelligence for Open Worlds” symposium was co-chaired by Mayank Kejriwal (University of Southern California), Eric Kildebeck (University of Texas at Dallas), and Abhinav Shrivastava (University of Maryland, College Park). Open-access versions of the papers, and details on presenters, may be found on the website https://usc-isi-i2.github.io/AAAI2022SS/ 

Ethical Computing:  Metrics for Measuring AI’s Proficiency and Competency for Ethical Reasoning (S6) 

The prolific deployment of Artificial Intelligence (AI) across different applications has introduced novel challenges for AI developers and researchers. AI is permeating decision making for the masses: from self-driving automobiles, to financial loan approval, to military applications. Ethical decisions have largely been made by humans with a vested interest in, and close temporal and geographical proximity to, the decision points. With AI making decisions, those ethical responsibilities are now being pushed to AI designers who may be far removed from how, where, and when the ethical dilemma occurs. Such systems may deploy global “ethical” rules with unanticipated or unintended local effects, or vice versa. 

While explainability is desirable, it is likely not sufficient for creating “ethical AI”, i.e. machines that can make ethical decisions. These systems will require the invention of new evaluation techniques around the AI’s proficiency and competency in its own ethical reasoning. Using traditional software and system testing methods on ethical AI algorithms may not be feasible because what is considered “ethical” often consists of judgements made within situational contexts. The question of what is ethical has been studied for centuries. This symposium invites interdisciplinary methods for characterizing and measuring ethical decisions as applied to ethical AI. 

No formal report was filed by the organizers for this symposium. 

How Fair is Fair? Achieving Wellbeing AI (S7) 

What are the ultimate goals and outcomes of AI? AI has incredible potential to help make humans happy, but it also risks causing unintentional harm. This symposium aims to combine humanistic perspectives with technical AI issues and to discover new success metrics for wellbeing AI, rather than metrics tied to productivity, exponential growth, or economic and financial supremacy. 

We call for AI challenges on new forms of human-AI collaboration that discuss desirable human-AI partnerships for providing meaningful solutions to social problems from a humanistic perspective. This challenge is inspired by the “AI for social good” movement, which pursues the positive social impact of AI in support of the Sustainable Development Goals (SDGs), a set of seventeen objectives for making the world more equitable, prosperous, and sustainable. In particular, we focus on two perspectives: wellbeing and fairness. 

The first perspective is “Wellbeing.” We define “Wellbeing AI” as artificial intelligence that aims to promote psychological wellbeing (that is, happiness) and maximize human potential. Our environment escalates stress, provides unlimited caffeine, distributes nutrition-free fast food, and encourages unhealthy sleep behavior. To address these issues, wellbeing AI provides a way to understand how our digital experience affects our emotions and our quality of life, and how to design a better wellbeing system that puts humans at the center. 

The second perspective is “Fairness.” AI has the potential to help humans make fair decisions, but we need to tackle the “bias” problem in AI (and in humans) to achieve fairness. As big data becomes increasingly personal, AI technologies that manipulate the cognitive biases inherent in people’s minds have evolved, for example in social media such as Twitter and Facebook and in commercial recommendation systems. The “echo chamber effect” is known to make it easy for people who share the same opinions to form insular communities. Recently, there has been a movement to exploit such cognitive biases in the political world as well. Advances in big data and machine learning should not overlook these new threats to enlightenment thought. 

No formal report was filed by the organizers for this symposium. 

Machine Learning and Knowledge Engineering for Hybrid Intelligence (S8) 

The AAAI 2022 Spring Symposium on Machine Learning and Knowledge Engineering for Hybrid Intelligence (AAAI-MAKE 2022) gathered researchers and practitioners in a mixed on-site and remote/virtual setting. Researchers from machine learning and knowledge engineering collaborated on explainable and knowledge-based hybrid intelligence, adding the human into the loop of AI. 

The AAAI 2022 Spring Symposium on Machine Learning and Knowledge Engineering for Hybrid Intelligence (AAAI-MAKE 2022) brought together researchers and practitioners of the two fields to reflect on advances in combining them, and to present their early work and initial results in creating hybrid intelligence with the two AI methods. AAAI-MAKE 2022 is the fourth edition of the symposium on combining machine learning and knowledge engineering, with a changing focus as an emergent field. This time, the symposium was organized as a hybrid event due to the ongoing pandemic, with people on-site at Stanford University and people remotely connected. The focus was on hybrid intelligence as an emerging theme within AI and information systems research. Twenty-eight papers were presented as a contribution to the combination of knowledge engineering, machine learning, and hybrid intelligence. The remarkable number of presentations showed the enormous need for combined, manageable, and explainable AI approaches that contribute to hybrid intelligence in research and business, reflecting the symposium’s goal of developing knowledgeable systems that can involve humans in the loop of AI. 

AAAI Fellow Natasha Noy held the opening keynote address. She works at Google Research and is well known for her significant contributions to ontology and knowledge engineering and for her widely recognized Ontology 101 tutorial. She presented insights and lessons learned from her and her team’s development of Google’s Dataset Search, a dedicated search service for datasets. In particular, she emphasized that despite the presence of an ML-focused AI ecosystem, there is a need to incorporate rich, semantic metadata into datasets and web resources in general. 

Hybrid AI combines two prominent AI approaches, symbolic and sub-symbolic AI. In such hybrid architectures, agents using different types of AI work together to solve problems where separate approaches do not provide satisfactory results, e.g., in terms of explainability and data efficiency. Explainability is needed to complement human intelligence in the AI loop, and data efficiency is needed for learning in domains where data availability is limited. Hybrid approaches that combine machine learning with the use of logic can explain inferences and increase data efficiency. Combining machine learning and knowledge engineering opens up new possibilities for redesigning knowledge work at the interface between humans and machines, intending to combine complementary strengths. Knowledge workers without strong AI skills can contribute to hybrid teams where humans and machines work synergistically to achieve common goals better in collaboration than separately. More efforts need to be made to democratize the combination of machine learning and knowledge engineering and unleash their complementary strengths. 
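
As a toy sketch of this hybrid pattern (ours, not a system presented at the symposium), the snippet below lets a learned model propose labels while explicitly represented knowledge filters out candidates that violate a known constraint, and the rule that fired doubles as a human-readable explanation. The classifier scores and the tiny knowledge base are fabricated purely for illustration.

def classifier_scores(image):
    # Stand-in for a trained neural network's label scores (sub-symbolic part).
    return {"cat": 0.48, "dog": 0.47, "submarine": 0.05}

# Explicitly represented background knowledge (symbolic part).
KNOWLEDGE = {
    "cat": {"is_a": "animal", "habitat": "land"},
    "dog": {"is_a": "animal", "habitat": "land"},
    "submarine": {"is_a": "vehicle", "habitat": "water"},
}

def hybrid_predict(image, context):
    # Keep only labels consistent with the known context, then take the
    # highest-scoring survivor; the rule applied serves as the explanation.
    scores = classifier_scores(image)
    consistent = {label: s for label, s in scores.items()
                  if KNOWLEDGE[label]["habitat"] == context["habitat"]}
    label = max(consistent, key=consistent.get)
    reason = f"kept only labels with habitat = {context['habitat']}"
    return label, reason

print(hybrid_predict("photo.jpg", {"habitat": "land"}))  # ('cat', explanation)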

The submissions again showed a huge demand for combined/hybrid AI approaches that address hybrid intelligence. The proceedings offer a collection of papers that contribute to the symposium’s aim of combining machine learning and knowledge engineering, hybrid intelligence / intelligent systems, and hybrid AI and neuro-symbolic approaches/methods. It became evident that there are a variety of applications and demands for combining machine learning and explicitly represented knowledge, be it to make sense of objects perceived in images, validate conceptual models by learning the appropriateness of attributes and relations, improve language translations, explain decisions in juridical cases, deliver targeted news, or improve medical diagnosis, to mention just a few. In the final and concluding discussion, the participants agreed to continue the exchange by establishing a community; furthermore, a follow-up AAAI spring symposium was suggested that would include a dedicated MAKE challenge in addition to regular paper presentations of early work. 

Andreas Martin, Hans-Georg Fill, Aurona Gerber, Knut Hinkelmann, Frank van Harmelen, Doug Lenat, and Reinhard Stolle served as organizers of the symposium. The papers of the symposium are published as CEUR Workshop Proceedings, Volume 3121. This report was written by Andreas Martin and Knut Hinkelmann. 

Putting AI in the Critical Loop: Assured Trust and Autonomy in Human-Machine Teams (S9) 

There will always be interactions between machines and humans. When the machine has a high level of autonomy and the human-machine relationship is close, there will be underpinning, implicit assumptions about behavior and mutual trust. The performance of the human-machine team will be maximized when a partnership is formed that is based on providing mutual benefits. Designing systems that include human-machine partnerships requires an understanding of the rationale of any such relationship, the balance of control, and the nature of autonomy. Essential first steps are to understand the nature of human-machine cooperation, to understand synergy, interdependence, and discord within such systems, and to understand the meaning and nature of “collective intelligence.” 

The reasons why it can be hard to combine machines and humans, attributable to their distinctively different characteristics and features, are also central to why they have the potential to work so well together, ideally overcoming each other’s weaknesses. Across the widest range of applications, these topics remain persistent as a major concern of system design and development. Intimately related to these topics are the issues of human-machine trust and “assured” performance and operation of these complex systems, the focal topics of this year’s symposium. 

Recent discussions on trust emphasize that, with regard to human-machine systems, trust is bidirectional and two-sided (as it is in humans); humans need to trust AI technology, but future AI technology may need to trust human inputs and guidance as well. In the absence of an adequately high level of autonomy that can be relied upon, substantial operator involvement is required, which not only severely limits operational gains, but creates significant new challenges in the areas of human-machine interaction and mixed initiative control. The meaning of assured operation of a human-machine system also needs considerable specification; assurance has historically been approached through design processes by following rigorous safety standards in development, and by demonstrating compliance through system testing, but largely in systems of bounded capability and where human roles were similarly bounded. These intersecting themes of collective intelligence, bidirectional trust, and continual assurance form the challenging and extraordinarily interesting themes of this symposium. 

No formal report was filed by the organizers for this symposium. 

 

Biographies 

Aaron Adler is a Senior Scientist at Raytheon BBN in Columbia, MD.     

  

Jacob W. Crandall is a professor in the Computer Science Department at Brigham Young University.  

  

Michael A. Goodrich is a professor in the Computer Science Department at Brigham Young University.  

  

Knut Hinkelmann is a professor at the FHNW University of Applied Sciences and Arts Northwestern Switzerland and a research associate at the University of Pretoria.  

  

Mayank Kejriwal is a Research Assistant Professor of Industrial & Systems Engineering at the University of Southern California.  

  

Eric Kildebeck is a Research Scientist in the Center for Engineering Innovation at the University of Texas at Dallas.  

  

Andreas Martin is a lecturer and senior researcher in information systems and artificial intelligence at the FHNW University of Applied Sciences and Arts Northwestern Switzerland.  

  

Abhinav Shrivastava is an Assistant Professor of Computer Science and UMIACS at the University of Maryland, College Park.
 

 
