Aaron Adler, Christopher Amato, Rajmonda Caceres, Eric Darve, Hans-Georg Fill, Steven Greidinger, Russell Greiner, Knut Hinkelmann, Neeraj Kumar, Jonghyun Lee, Zitao Liu, James Llinas, Andreas Martin, Vivek Nallur, Mohammed Eslami, Frans A Oliehoek, Shayegan Omidshafiei, Rajan Puri, Samira Rahimi, Anand Rao, Selma Sabanovic, Karl Tuyls, Xuesu Xiao, Fusun Yaman, Xiao Zhai
The Association for the Advancement of Artificial Intelligence’s 2021 Spring Symposium Series was held virtually from March 22-24, 2021. There were ten symposia in the program: Applied AI in Healthcare: Safety, Community, and the Environment, Artificial Intelligence for K-12 Education, Artificial Intelligence for Synthetic Biology, Challenges and Opportunities for Multi-Agent Reinforcement Learning, Combining Machine Learning and Knowledge Engineering, Combining Machine Learning with Physical Sciences, Implementing AI Ethics, Leveraging Systems Engineering to Realize Synergistic AI/Machine-Learning Capabilities, Machine Learning for Mobile Robot Navigation in the Wild, and Survival Prediction: Algorithms, Challenges and Applications. This report contains summaries of all the symposia.
Applied AI in Healthcare: Safety, Community, and the Environment (S1)
The two-day international virtual symposium included invited speakers, presentations of research papers, and breakout discussions with attendees from around the world. Registrants joined from many countries and cities, including the US, Canada, Central America, Switzerland, Melbourne, Paris, Berlin, Lisbon, Beijing, and Amsterdam. We had active discussions about solving health-related, real-world issues in various emerging, ongoing, and underrepresented areas using innovative technologies, including artificial intelligence and robotics. We focused primarily on AI-assisted and robot-assisted healthcare, with particular attention to improving safety, the community, and the environment through the latest technological advances in our respective fields.
The day was kicked off by Raj Puri, Physician and Director of Strategic Health Initiatives & Innovation at Stanford University, who spoke about a novel, automated sentinel surveillance system his team built to mitigate COVID-19 and its integration into their public-facing dashboard of clinical data and metrics. Selected paper presentations during both days were wide ranging. They included a talk by Oliver Bendel, a professor from Switzerland, and his Swiss colleague Alina Gasser on co-robots in care and support, providing the latest information on technologies for human-robot interaction and communication. Yizheng Zhao, Associate Professor at Nanjing University, and her colleagues from China discussed views of ontologies with applications to logical difference computation in the healthcare sector. Pooria Ghadiri from McGill University, Montreal, Canada, discussed his research on AI enhancements to health-care delivery for adolescents with mental health problems in the primary care setting. Invited talks included an insightful discussion by Samira Rahimi, Assistant Professor at McGill, on her research involving applied AI in community-based primary health care. Nathan Ensmenger (Associate Professor) and Selma Šabanović (Associate Dean for Graduate Education) from Indiana University enlightened the audience about their thoughtful research into the wide-ranging complexities of dementia care and its integration with AI and robotics. Highlighted breakout and spontaneous discussions included conversations about novel and emerging technologies in healthcare and about defining priorities and next steps in AI.
The symposium ended with attendees appreciating the unique insights and conversations on rapidly evolving topics and agreeing to continue sharing related experiences and developing further collaborations beyond the event.
Rajan Puri, Samira Rahimi, and Selma Šabanović cochaired the symposium. This report was written by Rajan Puri, Samira Rahimi, and Selma Šabanović.
Artificial Intelligence for K-12 Education (S2)
Artificial intelligence (AI), one of today's transformative technologies, is making its way into K-12 education, driven by increasingly digitalized education tools and the popularity of online learning. Despite its great potential and bright prospects, introducing AI into K-12 education still poses many unique challenges. The symposium therefore presented research progress and discussed recent advances in handling the challenges of applying AI in K-12 education. It also sought to bring AI community members together to exchange problems and solutions and to build future collaborations, continuing our earlier efforts.
Technology has transformed over the last few years, turning futuristic ideas into today's reality. Artificial intelligence (AI) is one of these transformative technologies; it is now achieving great success in various real-world applications and has made our lives more convenient and safer. AI is shaping the way businesses, governments, and educational institutions do things and is making its way into K-12 classrooms, schools, and districts across many countries.
In fact, increasingly digitalized education tools and the popularity of online learning have produced an unprecedented amount of data that provides invaluable opportunities for applying AI in K-12 education. Recent years have witnessed growing efforts from the AI research community devoted to advancing education, and promising results have been obtained for various critical problems in K-12 education. For example, AI tools have been built to ease the workload of teachers: instead of grading each piece of work individually, which can take up a great deal of extra time, intelligent scoring tools allow teachers to have their students' work graded automatically. Moreover, various AI-based models trained on massive student behavioral and exercise data can take note of a student's strengths and weaknesses and identify where they may be struggling. These models can also generate instant feedback to instructors and help them improve their teaching effectiveness.
Although these gratifying achievements have demonstrated the great potential and bright prospects of introducing AI into K-12 education, developing and applying AI technologies in educational practice is fraught with unique challenges, including, but not limited to, extreme data sparsity, lack of labeled data, and privacy issues. Hence, this symposium presented research progress on applying AI to K-12 education and discussed recent advances in handling the challenges encountered in AI educational practice. The symposium built upon continued efforts (an AAAI'20 workshop, an IJCAI'20 tutorial, and a KDD'20 tutorial) to bring AI community members together around these themes. It brought together AI researchers, learning scientists, educators, and policymakers to exchange problems and solutions and to build future collaborations.
In the two-day symposium, invited speakers from North America, Asia, and Europe shared their research advances on how AI empowers K-12 education, the directions of future technology, and related methodologies and principles. In addition, industrial researchers showcased some of the latest applications in K-12 education. In the live Q&A session after each talk, audiences from all over the world interacted with the speakers. The success of the two-day symposium owes much to the hard work of the symposium chairs: Zitao Liu, Jiliang Tang, Yi Chang, Xiangen Hu, and Diane Litman. This report was written by Xiao Zhai and Zitao Liu.
Artificial Intelligence for Synthetic Biology (S3)
The Artificial Intelligence (AI) for Synthetic Biology AAAI symposium was held virtually on March 22-24, 2021. The primary goal of the symposium was to understand how the integration of AI and synthetic biology has changed in a post-COVID era. The intersection of these two fields is rich with problems that, if addressed, can significantly help the world cope with future pandemics. We focused our discussions, keynotes, and presentations on how a research group can deliver on the promise of AI for synthetic biology and go from idea to impact.
Synthetic biology integrates biology and engineering, mixing theoretical and experimental biology; engineering principles; and chemistry, physics, and mathematics. The Artificial Intelligence for Synthetic Biology symposium sought to introduce the challenges and opportunities of enhancing the engineering goals of synthetic biology with AI techniques. The symposium consisted of a mix of 8 technical talks, 2 invited talks, 2 discussion sessions, and a panel that spanned industry and government.
Participants in the symposium had different backgrounds in the two fields. Paper presentations covered models that incorporate prior biological knowledge, challenges with representing and integrating data from automated cloud labs, explainable data-driven biological models, and the transition of a research project into a deployable COVID diagnostic test. The papers all compared the presented technologies and models with the state of the art for both the design and learn stages of the engineering cycle. The primary challenges highlighted, even with structured automated labs, were the consolidation and integration of experimental datasets, as each lab represents its data in different formats and reports measurements in different units and scales. Technology was presented that can overcome these challenges to enhance a model's predictive performance.
The symposium also included a panel that discussed funding opportunities and challenges at the intersection of AI and synthetic biology, focused on health and defense. Panelists included Šeila Selimović (BARDA), Peter Carr (MIT Lincoln Laboratory), Hector Munoz-Avila (NSF), and Vanessa Varaljay (AFRL). The panelists presented their views on future opportunities at the intersection of the two fields. Most notable were methods that focus on the application and bring the biological researcher into the design of both the experiment and the model. Whether the application of a model enhances a researcher's capability beyond the state of the art was the primary criterion they looked for in technology decisions.
The symposium also included two keynote talks, from Professor Pamela Silver, Harvard Medical School, and Professor Timothy Lu, Massachusetts Institute of Technology. Professor Silver's keynote discussed the design of biosensors and applications of both synthetic biology and AI to engineer solutions that delay the degradation of living things and aid their persistence and recovery under extreme conditions (DARPA Biostasis program). Another topic was searching for homologs (co-evolutionary sites) across all of evolution to discover whether proteins with similar functions exist in other organisms; the goal is then to build statistical models that can be leveraged for engineering and design. Prof. Silver described EVcouplings, an evolutionary coupling analysis platform that allowed them to search over 100 million protein sequences using hidden Markov methods.
Professor Lu's keynote was titled "Using Synthetic Biology to Decipher and Diagnose Disease Biology." He highlighted the perspective that biology is a network science: reprogramming cells involves several genes, and target-drug combinations often have synergistic effects. Prof. Lu discussed both the experimental and computational challenges of mapping combinatorial interactions in human cells and the role that high-throughput synthetic biology methods, as well as AI, can play in addressing these challenges.
The symposium kicked off with breakout sessions and ended with a discussion session focused on the new challenges and opportunities of merging the two domains in a post-COVID era. The breakout sessions were led by the symposium co-chairs and can be summarized as follows:
- Data Collection and Curation: Languages and standards are required to keep up with the rate at which data is being made available. Research publications on bioRxiv and peer-reviewed venues such as Nature began to feel like Twitter in their publishing of COVID-related articles. Protocols and experimental reporting need a standardized framework. There are opportunities to collect and process metabolomic data at the cell level in order to more closely connect design features to system-level impacts. There are also opportunities to invest in the collection of diverse datasets that still support transfer learning of knowledge.
- Composable Tools and Models: Dependencies, inconsistent labels, disparate data repositories, and formats all make composable systems of existing tools difficult to leverage for rapid analyses. The lack of a central biological model repository or support was also felt across labs during this time. The challenge of working with multi-scale problems (omics and other phenotypic assays) makes it difficult to compose analyses into a bigger picture. We have only begun to scratch the surface of design tools for synthetic biology; we need common representations of labels, clear constraints on the design space, a comprehensive list of available materials and models, and, of course, good discovery models to allow people to iterate on their designs in real time.
- Cyber/Biosecurity: Several aspects were discussed, including how to screen DNA sequence orders, the development and use of data standards and common knowledge bases, and the security of machine learning models. There was discussion of various threats and of drawing on lessons learned in other domains. This is a topic of increasing importance that is seldom discussed in synthetic biology.
- Social/Ethics: The focus of this breakout session was on communicating results and the discoveries that were made. Misinformation led to a host of conspiracy theories about the origins of the virus, its transmissibility, and methods of protection. Surprisingly, the group noticed that skeptics become more skeptical with more information. The conclusion was to express ourselves in a more accessible way and to properly represent positive stories and outcomes. Finally, as always when thinking about synthetic biology, the dual-use nature of the field was discussed. The opinion of the biologists, who recognize the difficulty of engineering robust organisms, is that it is very hard to weaponize biology through engineering. It was noted in the session that while there is no evidence that the virus was engineered, the vaccines that are going to end the pandemic are engineered. The new vaccines are very different from historic vaccines, which were based on natural organisms. The success story of the COVID-19 vaccines should be the frontline story for promoting synthetic biology.
The symposium concluded with a discussion of next steps, target publications, and future meeting venues. Aaron Adler (BBN Technologies), Rajmonda Caceres (MIT Lincoln Lab), Mohammed Eslami (Netrias, LLC), and Fusun Yaman (BBN Technologies) served as co-chairs of the symposium. Some papers and talks are available on the symposium website: https://www.ai4synbio.org/. This report was written by Mohammed Eslami, Rajmonda Caceres, Aaron Adler, and Fusun Yaman.
Challenges and Opportunities for Multi-Agent Reinforcement Learning (S4)
We live in a multi-agent world, and to be successful in that world, intelligent agents will need to learn to take into account the agency of others. At COMARL, the participants presented and discussed challenges and novel opportunities for such intelligent agents.
The Challenges and Opportunities for Multiagent Reinforcement Learning (COMARL) symposium was originally planned for spring 2020, targeting a very interactive format. Due to the pandemic-related circumstances, we instead held a virtual meeting in spring 2021 without many of the interactive elements. Nevertheless, we had a strong program with 20 contributed papers and 4 invited speakers: Romuald Elie (DeepMind), Thore Graepel (DeepMind), Edward Lockhart (DeepMind), and Georgios Piliouras (SUTD). Additionally, in the lead-up to the symposium, we organized a seminar series on COMARL with 5 invited speakers.
The spring symposium itself took place over two days and followed a more standard scientific organization, with talks interleaved with some time for questions. Many of the talks highlighted the challenges the field of MARL faces (e.g., nonstationarity, multiagent dynamics, representations, large-scale environments, equilibrium computation, inter-agent communication, transfer, and multi-agent evaluation). Concurrently, the speakers also highlighted at least as many opportunities, both in terms of overcoming the challenges and in terms of envisioning the impact of multiagent learning in widely different fields of application. Attendees agreed that, while all the talks were connected through the techniques of MARL, the topics touched upon were quite diverse and stimulating, which led to several interesting discussions and potential new research collaborations.
More precisely, a number of talks focused on the idea of language formation and the means by which MARL can further support the theories and practices in the field of language learning. Several other talks addressed the interface of MARL researchers with much wider communities; for instance, Wolfram Barfus (Tuebingen AI Center) proposed that to bring together vastly different communities that all study multiagent learning (biology, economics, computer science, physics, etc.), it would be useful to have a set of benchmarks. Ed Lockhart (DeepMind) gave an overview of the OpenSpiel toolbox that makes a step in that direction using games, and Alexander Shmakov (UC Irvine) presented a framework that specifically focuses on multiplayer (n>2) benchmarks.
Some of the promising applications of (and accompanying challenges for) MARL that were identified were: fully autonomous multiagent aviation, large scale air traffic management, supporting humans and humanity to achieve better outcomes in social dilemmas, resource allocation problems, learning in social networks, and computer games. On the more foundational side, challenges were raised concerning learning institutions and norms, transferring behaviors from few to many agents (and vice versa), dealing with possibly outdated information, and randomization of continuous actions.
Overall, despite the logistical challenges we faced due to the pandemic, the symposium was a success in bringing together these various groups of MARL researchers and highlighting important challenges and potential opportunities for interaction with other fields. In the concluding session of the symposium, the attendees expressed their interest in continuing to participate in such forums, especially once in-person symposia can resume in subsequent years.
The organizers, Chris Amato (Northeastern), Frans Oliehoek (TU Delft), Shayegan Omidshafiei (DeepMind), and Karl Tuyls (DeepMind), welcome researchers interested in co-organizing such future events to get in touch. This report was written by Christopher Amato, Frans A Oliehoek, Shayegan Omidshafiei, and Karl Tuyls.
Combining Machine Learning and Knowledge Engineering (S5)
The AAAI 2021 Spring Symposium on Combining Machine Learning and Knowledge Engineering virtually assembled researchers and practitioners from the fields of machine learning and knowledge engineering to work together on explainable and knowledge-based AI.
The symposium on combining machine learning and knowledge engineering brought together researchers and practitioners from the machine learning and knowledge engineering domains to reflect, two years on, on the progress in combining the two fields since the topic was first raised in the 2019 AAAI Spring Symposium Series. The 2021 edition was held as a virtual event with 47 presentations, including nine from the canceled 2020 symposium, a keynote, and a panel discussion. The remarkable number of presentations showed the tremendous demand for combined, hybrid, manageable, and explainable AI approaches in research and business, reflecting the symposium's aim of building knowledgeable systems capable of both learning and reasoning.
Although machine/deep learning approaches alone can handle data-intensive learning tasks and help solve complex tasks with great success, there are still challenges. Machine learning is best suited for building AI systems when knowledge is unknown or tacit. Knowledge-based systems, on the other hand, make expert knowledge explicit and accessible, are often based on logic, and can explain their conclusions, in contrast to purely probabilistic approaches. Combined approaches of symbolic machine learning and ontology learning promise to reduce initial effort and deliver explainable capabilities, which was the rationale of this symposium.
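As a purely illustrative sketch of this rationale (one possible pattern, not a method presented at the symposium), the following Python fragment combines a stubbed statistical scorer with hand-authored, knowledge-engineered rules; the domain, rule names, threshold, and the learned_score stub are all hypothetical. The knowledge layer can override the learned component and always returns a human-readable explanation.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str                          # identifier used in the explanation
    applies: Callable[[dict], bool]    # explicit, engineered condition
    verdict: str                       # decision forced when the rule fires

# Hypothetical knowledge base: explicit rules authored by domain experts.
RULES = [
    Rule("missing_consent", lambda case: not case.get("consent", False), "reject"),
    Rule("age_below_threshold", lambda case: case.get("age", 0) < 18, "refer_to_human"),
]

def learned_score(case: dict) -> float:
    # Stand-in for a trained statistical model (e.g., a gradient-boosted tree).
    return 0.2 + 0.6 * float(case.get("history_ok", False))

def decide(case: dict) -> tuple:
    for rule in RULES:                 # knowledge layer is consulted first
        if rule.applies(case):
            return rule.verdict, f"rule '{rule.name}' fired"
    score = learned_score(case)        # otherwise defer to the learned component
    verdict = "accept" if score >= 0.5 else "reject"
    return verdict, f"learned score {score:.2f} vs. threshold 0.50"

print(decide({"consent": True, "age": 30, "history_ok": True}))   # ('accept', ...)
print(decide({"consent": True, "age": 16, "history_ok": True}))   # ('refer_to_human', ...)

In this toy arrangement the symbolic rules provide explainability and safety constraints while the learned scorer supplies the statistical judgment; the combined systems discussed at the symposium are of course far richer, for example learning the rules or ontologies themselves.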
In his keynote, Doug Lenat presented insights and lessons from 50 years of research in large-scale machine learning and large-scale knowledge engineering. He showed ways to combine machine learning and knowledge engineering to exploit their latent synergies.
In a panel discussion, six researchers from Europe, Africa, and North America gave insights into their views on how to combine machine learning and knowledge engineering and which challenges will have to be tackled in the future. Frank van Harmelen put forward the two traditional aspects in computer science of modularity and abstraction for making progress in combined solutions. Reinhard Stolle stated that the contributions so far are mostly pragmatic engineering solutions and raised the question of whether theories are required for the further advancement of the field. Doug Lenat reminded the audience to make use of the wide range of solution approaches available in machine learning, knowledge engineering, databases, and other fields, instead of focusing just on techniques from one of them. Hans-Georg Fill pointed to the aspect of trust in systems combining machine learning and knowledge engineering and the question of provenance and responsibility for results. Aurona Gerber highlighted the fact that approaches in the two fields rely on different paradigms and processes, i.e., conceptualization with rules and relationships in knowledge engineering versus experience from data and observations in machine learning, which need to be aligned through modular design and the specification of interfaces. Andreas Martin concluded with the question of which topics in hybrid artificial intelligence should enter study programs and research initiatives in order to advance the design of new types of information systems that use knowledge, experience, and common sense in learning and execution. Following the panelists' statements, a lively discussion developed in which several participants from the audience contributed further insights.
The remarkable number of submissions showed the significant demand for combined/hybrid AI approaches. Furthermore, the symposium proceedings provide a collection of papers that contribute to the symposium's goal of combining machine learning and knowledge engineering, as well as to hybrid AI and neuro-symbolic approaches and methods. Although the participants were delighted with the symposium's virtual format, the informal discussions in the breakouts, and the overall organization, there was still a strong desire to meet physically in the future. Consequently, in the final and concluding discussion, the participants agreed to continue the exchange by establishing a community and organizing occasional presentations and networking events; furthermore, the wish for a follow-up on-site symposium or workshop was clearly stated.
Andreas Martin, Hans-Georg Fill, Aurona Gerber, Knut Hinkelmann, Frank van Harmelen, Doug Lenat, and Reinhard Stolle served as organizers of the symposium. The papers of the symposium were published as CEUR Workshop Proceedings, Volume 2846. This report was written by Andreas Martin, Knut Hinkelmann, and Hans-Georg Fill.
Combining Machine Learning with Physical Sciences (S6)
The AAAI 2021 Spring Symposium on Combining Artificial Intelligence and Machine Learning with Physical Sciences brought together researchers and scientists from diverse areas to present the current state of the art and to identify opportunities and gaps in AI/ML-based physical science.
With recent advances in scientific data acquisition and high-performance computing, AI and ML have shown great potential to leverage scientific domain knowledge to support new scientific discoveries and to enhance the development of physical models and scientific understanding for complex natural and engineering systems. However, a rigorous understanding of when AI/ML is the right approach is largely lacking: for what class of problems, underlying assumptions, available data sets, and constraints are these new methods best suited? The lack of interpretability in AI-based modeling and related scientific theories makes them insufficient for high-impact, safety-critical applications such as medical diagnoses, national security, and environmental contamination. The symposium aimed to discuss challenges and opportunities for increasing the scale, rigor, robustness, and reliability of physics-informed AI necessary for routine use in science and engineering applications, and to discuss potential researcher-AI collaborations that could significantly advance diverse scientific areas and transform the way science is done.
More than 100 participants contributed to intense discussion during the presentation of 49 extended abstracts and short papers. Presented topics included 1) state-of-the-art learning frameworks that can seamlessly synthesize models, governing equations, and data; 2) architectural and algorithmic improvements for scalable physics-informed learning; 3) stability and error analysis for physics-informed learning; 4) software development facilitating the inclusion of physics domain knowledge in learning; and 5) the discovery of physically interpretable laws from data. Applications included fluid mechanics, quantum mechanics, material sciences, and chemistry, and showcased recent efforts in incorporating domain knowledge into machine learning. The participants had the opportunity to attend five invited talks by Surya Ganguli (Stanford University), Ben Adcock (Simon Fraser University), Animashree Anandkumar (Caltech/NVIDIA), Nathan Kutz (University of Washington), and Jan S Hesthaven (EPFL) about their recent achievements in AI/ML for the physical sciences.
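To make the notion of seamlessly combining governing equations with data a little more concrete, here is a minimal, illustrative physics-informed training loop in PyTorch for the toy ODE du/dt = -k u with u(0) = 1 on [0, 1]; the equation, constant k, network size, and optimizer settings are assumptions chosen only for illustration and are not drawn from any symposium paper.

import torch

k = 2.0  # hypothetical decay constant for the toy ODE du/dt = -k * u, u(0) = 1

# Small fully connected network approximating u(t).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    t = torch.rand(128, 1, requires_grad=True)                 # collocation points in [0, 1]
    u = net(t)
    du_dt = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    physics_loss = ((du_dt + k * u) ** 2).mean()               # residual of the governing equation
    data_loss = ((net(torch.zeros(1, 1)) - 1.0) ** 2).mean()   # "data" term: initial condition u(0) = 1
    loss = physics_loss + data_loss                            # equations and data enter one objective
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained network should approximate the analytic solution u(t) = exp(-k * t).
print(net(torch.tensor([[0.5]])).item(), torch.exp(torch.tensor(-k * 0.5)).item())

A single loss combining a differential-equation residual with observed (here, boundary) data is the basic mechanism behind the physics-informed learning frameworks in topic 1 above; in practice the data term would use real measurements and the residual a far more complex governing model.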
Jonghyun Lee, Eric F. Darve, Peter K. Kitanidis, Michael W. Mahoney, Anuj Karpatne, Matthew Farthing, and Tyler Hesser were part of the organizing team of this symposium and served as session chairs. Jonghyun Lee and Eric Darve wrote this report.
Implementing AI Ethics (S7)
The symposium brought together academic researchers, corporate practitioners, and policymakers from diverse backgrounds and multiple disciplines to discuss a broad range of topics, from the philosophical questions that society needs to ask itself, to designs and techniques for implementing ethics, to the role of the corporate and public sectors in creating a better and safer society that can successfully incorporate advanced AI. Held over three days and spanning multiple time zones, it included close to one hundred participants from around the world interacting, questioning, discussing, and creating research, corporate, and policy agendas to make ethical implementations of AI a reality.
With increasing research on ethics in AI, detailed policy making in the area of automated decision making and AI by professional associations and government bodies, and adoption of machine learning, natural language processing and other AI models by corporations to assist and augment human decision making, there is a growing need for multi-disciplinary groups of experts across industry, academia, and governments to address practical and pragmatic issues around implementing AI ethics. This AAAI Spring Symposium met for a total of twenty-six hours spread over three days from March 22 to March 24 in an online format. The symposium’s principal aim was to facilitate a deeper discussion on how intelligence, agency and ethics may intermingle in socio-technical systems as well as organizations.
The first day and a half consisted of twelve foundational sessions covering nine distinct topic areas. Sessions on machine ethics, automating machine ethics, and paper presentations focused on:
- Can ethical behavior be formulated as rules, values, quantitative measures, regulations?
- How can we take advantage of technologies to build more interpretable, explainable, safe AI?
Sessions on AI principles, public sector AI and standardization and certification focused on:
- How to balance potentially conflicting AI principles and how to apply them across multiple industry sectors and use cases across a range of decisions?
- How to develop safe and secure AI in military and public sector applications?
- How to develop national AI strategies and what role should regulations play?
- What are some of the standardization, certification, and self-regulation efforts that make the practical application of high-level AI principles a reality?
Sessions on Enterprise AI and Operationalizing AI ethics focused on:
- What are some of the best practices and challenges in implementing AI ethics within large organizations?
- What end-to-end and top-down governance mechanisms are required in organizations to ensure the responsible use of AI?
Insights from these eight sessions were synthesized in two working sessions on the research and policy agenda and the corporate agenda. These sessions used online collaborative tools to gather input from all the participants and engage in an active discussion on key ideas and next steps.
The final day consisted of three summary sessions on the research, policy, and corporate agendas. The research agenda included further research into modeling socio-technical systems, formally verifiable systems, ethical optimization, and trade-off techniques. The policy agenda focused on the need for re-thinking the spectrum of regulations from self-governance to regulatory sandboxes, building public trust in AI, and global institutions to track and monitor national AI ethical efforts. The corporate agenda focused on governance, training and upskilling of staff and customers, development processes and tools, and sharing of best practices and benchmarks.
The Symposium also surfaced multiple, long-term efforts to create AI systems that themselves perform ethical reasoning. The developers expressed interest in building these systems into robots, vehicles, and other autonomous machines to ensure that they operate safely and according to norms and ethical rules. The researchers also discussed designs for inserting ethics analysis engines into software systems that help manage salespeople, support corporate and government budgeting and human resources decisions, and fight government corruption. Concepts on display included advanced logical, Bayesian, optimization, and game-theoretic methods.
Participants in the symposium included academics, students, software practitioners, members of policy-making bodies, technology providers, and standards organizations. These participants aim to continue the discussions and potentially meet regularly in a newly created Slack channel. Readers who would like to be included in the Slack channel can email the organizers at [email protected].
The Organizing Committee for the symposium consisted of Kay Firth-Butterfield, Virginia Dignum, Graham Finlay, Steven Greidinger, Vivek Nallur, and Anand Rao. This report was written by Anand Rao, Global AI Lead, PwC, Vivek Nallur, University College Dublin, and Steven Greidinger.
Leveraging Systems Engineering to Realize Synergistic AI/Machine Learning Capabilities (S8)
Following last year's AAAI Spring (and Fall Replacement) 2020 Symposium, which explored how systems engineering (SE) can be employed to improve the development of artificial intelligence (AI) systems (a theme labeled "SE4AI"), this year's 2021 Spring Symposium continued to explore how SE can be leveraged to achieve synergies among the interdependent processes existing in real-world AI/machine learning (ML) systems. Our inspiration was derived from Jay Forrester's thoughts on system dynamics [Forrester, 2009], when he remarked that we "live in a complex world of nested feedback loops" involving cascading interdependencies across loops that vary in complexity, space, and time. Many, if not most, of the current AI/ML and data and information fusion processes (and perhaps other methods) attempt to estimate conditions (situations, contexts) in this complex world that Forrester described, and thus face the challenges of dealing with nested feedback loops and their associated interdependencies.
Any current review of the evolving literature related to the design and development of AI/ML software in systems illustrates that the communities of AI/ML technologists, developers, and users are all expressing concern about how these systems are being designed, developed, tested, and used. In parallel with these concerns are those from the SE community, which is also reexamining its own practices and standards for the construction and deployment of AI/ML systems. There, the view of its innovators is that AI/ML technologies may help improve SE practices (a paradigm called "AI4SE"), where another set of interdependencies cascades across systems engineering practices and methods. Finally, both the AI/ML and SE communities are concerned with the overarching efficiencies obtained by combining development and deployment practices to control and minimize the life-cycle costs of complex system deployments; this paradigm, called "DevOps," attempts to achieve such synergies. The bottom line is that engineering practices for, and from, AI/ML development are undergoing considerable rethinking from multiple perspectives, motivating the evolution of both artificial intelligence and systems engineering.
Our Spring 2021 Symposium gathered 18 speakers with papers that addressed the broad mix of topics related to our theme. We had 8 invited speakers who gave longer, 60-minute presentations and 8 regular speakers whose presentations were 30 minutes long, plus two opening speakers, one from the Defense Advanced Research Projects Agency (DARPA) and one from University College London; these two opening talks addressed, respectively, measurement and testing of AI systems and the authority to act in non-deterministic systems. The latter talk addressed yet another looming but complex topic for the AI/ML communities: the ethical control and execution of autonomous systems.
Overall, the symposium was a success in addressing all of the topics related to our theme. We encourage readers to take a look at the symposium program at https://sites.google.com/view/systems-engineering-ai-ml/program. There you will see a balance of topics, including: interdependence and vulnerability in AI systems; being human in an artificial system (one of a few papers that addressed the important issue of human roles in AI/ML systems); the rising concern for ethics in AI/ML system design; combining AI and optimization techniques in engineering designs; the management and deployment of DevOps processes; and the engineering problem of dealing with explanation in, and by, AI/ML processes in the laboratory matched against results from the field. We plan to publish the proceedings in Springer's LNCS series later this year.
The SE4AI and AI4SE challenges, as well as those related to maturing the DevOps process, will not go away any time soon. It can be anticipated that future symposia addressing these many and similarly complex topics will be required for quite some time to come as the engineering methods for AI/ML systems continue to evolve and mature, and as new applications, technologies and practices are discovered and fielded.
W.F. Lawless (Paine College), Ranjeev Mittu (U.S. Naval Research Laboratory), Don Sofge (U.S. Naval Research Laboratory), Thomas Shortell (Lockheed Martin Company), Thomas McDermott (Stevens Institute of Technology), James Llinas (University at Buffalo, North Campus), and Julie L. Marble (Johns Hopkins University) served as cochairs of this symposium. This report was written by James Llinas.
Machine Learning for Mobile Robot Navigation in the Wild (S9)
The Machine Learning for Mobile Robot Navigation in the Wild AAAI Spring Symposium was held virtually via Zoom on March 22, 23, and 24, 2021. The goal of this symposium was to bridge together researchers who are interested in using machine learning to enable mobile robot navigation in the wild and to provide a shared platform to discuss learning fundamental navigation (sub)problems, despite different application scenarios.
Decades of research efforts have enabled classical navigation systems to move robots from one point to another while observing system and environmental constraints. However, navigation outside a controlled test environment, i.e., navigation in the wild, remains a challenging problem: an extensive amount of engineering is necessary to enable robust navigation in a wide variety of environments, e.g., to calibrate perception or to fine-tune navigational parameters, and classical map-based navigation is usually treated as a purely geometric problem, without considering other sources of information, e.g., terrain, risk, and social norms. On the other hand, advancements in machine learning provide an alternative avenue for developing navigation systems, and arguably an "easier" way to achieve navigation in the wild. Vision input, semantic information, terrain stability, social compliance, and more have become new modalities of world representation to be learned for navigation beyond pure geometry. Learned navigation systems can also largely reduce the engineering effort of developing and tuning classical techniques. However, despite the extensive application of machine learning techniques to navigation problems, it remains a challenge to deploy mobile robots in the wild in a safe, reliable, and trustworthy manner. This symposium focused on navigation in the wild, as opposed to navigation in a controlled, well-engineered, sterile environment such as a lab or factory; in the wild, mobile robots may face a variety of real-world scenarios, other robot or human companions, challenging terrain types, and unstructured or confined environments. Through this symposium, we wanted to answer questions about why, where, and how to apply machine learning for navigation in the wild, summarize lessons learned, identify open questions, and point out future research directions.
We received 18 submissions, of which five two-page abstracts and nine six-page full papers were accepted. From the submissions, we identified three themes in using machine learning for mobile robot navigation in the wild: 1. Navigation in Unstructured Environments, 2. Navigation in Social Contexts with Other Human or Robotic Agents, and 3. Mobile Robot Navigation: Applications. All full papers fell under the first two themes and were presented as a 20-minute prerecorded presentation with a 10-minute live Q&A during the first two days, while all abstract papers fell under the third theme and were presented as a 10-minute prerecorded presentation with a 5-minute live Q&A. More than 40 people attended the virtual session on Zoom.
In addition to the paper presentations, we had four invited speakers: Dr. Pratap Tokekar from the University of Maryland, Dr. Srikanth Saripalli from Texas A&M University, Dr. Chris Mavrogiannis from the University of Washington, and Dr. Ji Zhang from Carnegie Mellon University. Each invited speaker gave a 60-minute talk, including Q&A, on their recent research.
Furthermore, we invited six industrial partners, Hydronalix, Independent Robotics, HEBI Robotics, Bosch, iRobot, and Clearpath Robotics, to participate in our symposium. Each industrial partner gave a 15-minute spotlight talk.
Finally, we organized a virtual social event on Gather.town for all participants to interact with each other in an informal way.
The AAAI Spring Symposium 2021 on Machine Learning for Mobile Robot Navigation in the Wild was a successful event that brought together academia and industry, researchers and students, and the fields of mobile robot navigation and machine learning. The cochairs consisted of Xuesu Xiao, Harel Yedidsion, Reuth Mirsky, Justin Hart, Peter Stone, Ross Knepper, Hao Zhang, Jean Oh, Davide Scaramuzza, Vaibhav Unhelkar, Michael Everett, and Gregory Dudek. This report was written by Xuesu Xiao, Ph.D.
Survival Prediction: Algorithms, Challenges and Applications (S10)
A survival analysis model estimates, for an individual, the time until a specified event will happen in the future (or some related survival measure). This event could be the time to death or relapse of a patient, the time until an employee leaves a company, the time until the failure of a mechanical system, etc. The key challenge in learning effective survival models is that this time-to-event is censored for some individuals, which limits the direct use of standard regression techniques. This symposium focused on approaches for learning models that estimate survival measures from such survival datasets, which include censored instances. Its objective was to push the state of the art in survival prediction algorithms and address fundamental issues that hinder their applicability for solving complex real-world problems. A few interdisciplinary collaborations were established, and new research directions were identified through 22 paper presentations, 6 invited talks, and 6 discussion sessions held during the symposium.
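As a small, concrete illustration of how censored instances enter such estimation (a standard textbook construction, not a method from any particular symposium paper), the Python sketch below computes a Kaplan-Meier survival curve for a hypothetical set of follow-up times; individuals whose event was not observed are censored and simply leave the risk set without contributing an event.

from collections import Counter

# Hypothetical toy data: follow-up time (months) and whether the event
# (e.g., death or relapse) was observed (1) or the record was censored (0).
durations = [2, 3, 3, 5, 6, 7, 7, 9, 11, 12]
observed  = [1, 0, 1, 1, 0, 1, 0, 1, 0,  1]

def kaplan_meier(durations, observed):
    """Return (time, S(time)) pairs of the Kaplan-Meier estimate."""
    events = Counter(t for t, e in zip(durations, observed) if e)  # observed events per time
    exits = Counter(durations)                                     # all subjects leaving per time
    at_risk, surv, curve = len(durations), 1.0, []
    for t in sorted(exits):
        d = events.get(t, 0)
        if d:                                  # only observed events change the estimate
            surv *= 1.0 - d / at_risk          # S(t) = prod_i (1 - d_i / n_i)
            curve.append((t, surv))
        at_risk -= exits[t]                    # event and censored subjects leave the risk set
    return curve

for t, s in kaplan_meier(durations, observed):
    print(f"t = {t:2d}  S(t) = {s:.3f}")

Censoring is exactly what prevents treating the durations as ordinary regression targets: for a censored subject we only know that the true event time exceeds the recorded time, so the estimator uses that subject in the risk sets up to the censoring time and nothing more.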
This symposium included over 70 registrants, academic and industrial researchers with diverse backgrounds in statistics, machine learning, biomedical engineering, and mathematics, 37% of whom were students. The call for papers attracted 28 submissions, each receiving two reviews; 16 were accepted as podium presentations and 6 as posters. Authors of 20 of the 22 accepted papers provided a 3-minute promotional video, released a week before the symposium to preview the material. The symposium papers presented ideas and approaches from deep learning, statistics, multi-state and dynamic models, temporal point processes, Bayesian networks, and more for solving a wide variety of survival prediction tasks. We also discussed many challenges in applying survival prediction models to real-world applications, including evaluation methods (calibration versus discrimination), interpretability, etc.
This symposium featured six invited talks. First, Russell Greiner (University of Alberta) discussed ways to learn "individual survival distributions" (personalized survival prediction models that estimate a subject's survival probability at each time point) from survival datasets. He also discussed several metrics for evaluating and comparing survival prediction models, including a novel approach. Next, Rajesh Ranganath (New York University) discussed several survival tasks: the application of deep learning algorithms to time-to-event analysis, and challenges in using deep learning for survival prediction, including model calibration, missing data, and interpretability. Third, Mihaela van der Schaar (Cambridge University) discussed automated machine learning and transfer learning approaches for survival prediction tasks while addressing challenges including competing risks and time-varying covariates. She also discussed several healthcare applications of such machine learning based survival models. Mark van der Laan (UC Berkeley) then discussed one-step targeted maximum likelihood estimation of the causal effect of treatment on survival, and the resulting causal inference methods for survival data based on discrete and continuous time-to-event data. Fifth, Thomas Alexander Gerds (University of Copenhagen) discussed a general framework for comparing the performance of survival models that predict the risk that a medical event occurs within t years, in the presence of competing risks and based on right-censored time-to-event data. Finally, Rob Tibshirani and Erin Craig (Stanford University) presented a "stacking" method for treating survival analysis as a classification problem.
One of the major challenges of organizing a virtual symposium was enabling the casual interaction between participants that happens naturally in physical meetings, during coffee breaks, lunch, or dinner. We tried to simulate such interactions through six virtual discussion sessions spread across the symposium. Each discussion group was centered around a theme: (1) Evaluation of survival models, (2) Counterfactual reasoning and causality, (3) Competing risks, comorbidities, and multiple events, (4) An international survival prediction competition, (5) New research directions, novel applications, and extending survival prediction, and (6) A mentorship session for graduate students.
Overall, the symposium was very successful, with attendees participating enthusiastically in the discussion groups and in the Q&As following presentations. The symposium papers will be available online in the Proceedings of Machine Learning Research (http://proceedings.mlr.press/), and all symposium recordings are now available on the symposium website.
Russell Greiner (University of Alberta) served as the symposium chair with Neeraj Kumar (University of Alberta), Thomas Alexander Gerds (University of Copenhagen), and Mihaela van der Schaar (Cambridge University) as co-organizers. This report was written by Dr. Neeraj Kumar.
Notes
[Forrester, 2009] Forrester, J. W. Some Basic Concepts in System Dynamics. Report No. D-4894, Massachusetts Institute of Technology, Sloan School of Management, January 2009.
Biographies
Aaron Adler works at BBN Technologies.
Christopher Amato works in the College of Computer and Information Science at Northeastern University.
Rajmonda Caceres works at MIT in the Lincoln Lab.
Eric Darve is a Professor at Stanford University.
Hans-Georg Fill works in the Research Group Digitalization and Information Systems at University of Fribourg.
Steven Greidinger is Founder/CEO at Timely Data Science.
Russell Greiner is at the University of Alberta.
Knut Hinkelmann is at FHNW University of Applied Sciences and Arts Northwestern Switzerland, School of Business and University of Pretoria, Department of Informatics.
Neeraj Kumar is at the University of Alberta.
Jonghyun Lee is an Assistant Professor at the University of Hawaii at Manoa.
Zitao Liu is with TAL Education Group in Beijing, China.
James Llinas works at the University at Buffalo, North Campus.
Andreas Martin is at FHNW University of Applied Sciences and Arts Northwestern Switzerland, School of Business.
Vivek Nallur works at the University College Dublin.
Mohammed Eslami works for Netrias.
Frans A Oliehoek works in the Department of Intelligent Systems at Delft University of Technology.
Shayegan Omidshafiei is at Google DeepMind, Paris.
Rajan Puri, MD, MPH, works at Stanford University.
Samira Rahimi works at McGill University in Canada.
Anand Rao is the Global AI Lead at PwC.
Selma Sabanovic works at Indiana University in Bloomington.
Karl Tuyls is at Google DeepMind, Paris.
Xuesu Xiao is at The University of Texas at Austin.
Fusun Yaman works at BBN Technologies.
Xiao Zhai is with TAL Education Group in Beijing, China.