
The 10th AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2022) will be held November 6–10, 2022, as a virtual conference.

HCOMP is the premier venue for disseminating the latest research findings on human computation and crowdsourcing. While artificial intelligence (AI) and human-computer interaction (HCI) represent traditional mainstays of the conference, HCOMP believes strongly in fostering and promoting broad, interdisciplinary research. Our field is unique in the diversity of disciplines it draws upon and contributes to, including human-centered qualitative studies and HCI design, social computing, artificial intelligence, economics, computational social science, digital humanities, policy, and ethics. We promote the exchange of advances in human computation and crowdsourcing not only among researchers but also among engineers and practitioners, encouraging dialogue across disciplines and communities of practice.

This year, we especially encourage work that generates new insights into the connections between human computation and crowdsourcing, and humanity. For example, how can we support the well-being and welfare of participants in human-in-the-loop systems? How can we promote diversity and inclusion in the crowd workforce? How can crowdsourcing be used for social good, e.g., to address societal challenges and improve people’s lives? How can human computation and crowdsourcing studies advance the design of trustworthy, ethical, and responsible AI? How can crowd science inform the development of AI that extends human capabilities and augments human intelligence?

HCOMP 2022 builds on a successful history of past meetings: nine HCOMP conferences (2013–2021) and four earlier workshops held at the AAAI Conference on Artificial Intelligence (2011–2012) and the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (2009–2010).

Data Excellence (W1)

Human-annotated data is crucial for operationalizing empirical ways of evaluating, comparing, and assessing the progress of ML/AI research. Because human-annotated data is the compass on which the entire ML/AI community relies, the human computation (HCOMP) research community has a multiplicative effect on the progress of the field. Optimizing the cost, size, and speed of data collection has attracted significant attention from HCOMP and related research communities. In the first-to-market rush with data, however, the maintainability, reliability, validity, and fidelity of datasets are often overlooked. We want to turn this way of thinking on its head and highlight examples, case studies, and methodologies for excellence in data collection. Data excellence happens organically given appropriate support, expertise, diligence, commitment, pride, community, and the like. We will invite speakers and submissions exploring such case studies in data excellence, focusing on empirical and theoretical methodologies for the reliability, validity, maintainability, and fidelity of data. Goals of the workshop:

  • Gather case studies of data excellence;
  • Help define data excellence;
  • Build a catalog of best practices for data excellence;
  • Collect empirical and theoretical methodologies for the reliability, validity, maintainability, and fidelity of data (a brief illustrative sketch follows this list);
  • Discuss how we can invest more, not less, in data, and how to justify that investment.
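
As one concrete, deliberately minimal illustration of an empirical reliability methodology, the Python sketch below computes Cohen's kappa, a chance-corrected agreement score, for two hypothetical annotators labeling the same items. The annotator names and labels are invented for illustration; real data-quality audits typically use richer measures (e.g., Krippendorff's alpha) and many annotators.

    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        """Chance-corrected agreement between two annotators on the same items."""
        assert len(labels_a) == len(labels_b)
        n = len(labels_a)
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        # Agreement expected by chance if each annotator labeled independently
        # at their own observed label frequencies.
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / (n * n)
        return (observed - expected) / (1 - expected)

    # Hypothetical labels from two crowd annotators on ten items.
    annotator_1 = ["pos", "neg", "pos", "pos", "neg", "pos", "neg", "neg", "pos", "pos"]
    annotator_2 = ["pos", "neg", "pos", "neg", "neg", "pos", "neg", "pos", "pos", "pos"]
    print(f"Cohen's kappa: {cohens_kappa(annotator_1, annotator_2):.2f}")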

Rigorous Evaluation of AI Systems (W2)

Human-annotated datasets have emerged as the primary mechanism for operationalizing empirical ways of evaluating, comparing, and assessing progress in machine learning, AI, and related fields. Crowdsourcing has helped solve the issue of scale in human-annotated datasets, but it has also increased the impact of the variability of humans as the instruments providing these annotations. In recent research and applications where evaluations rely on human-annotated datasets or methodologies, we are interested in the meta-questions around the characterization of those methodologies. Some of the expected activities in the workshop include:

  • Invited and contributed presentations from the evaluation, crowdsourcing, HCI, and AI communities;
  • Group discussion on human-centered evaluation metrics, including their similarities to and differences from dataset annotation;
  • A focus on metascience, which aims to overtly and iteratively make science better through a reflective investment in evaluation, measurement, replicability, and contribution, thereby attempting to build a substantial foundation of science for long-term, fundamental improvements to the scientific method;
  • Collecting, examining, and sharing current evaluation and replication efforts, whether comprehensive studies of one system or competitive comparisons of multiple systems, with the goal of critically evaluating the evaluations themselves (see the brief sketch after this list);
  • Developing an open repository of existing evaluations with relevant methodology fully documented and raw data and outcomes available for public scrutiny.
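
As a lightweight illustration of critically examining an evaluation rather than reporting a single score, the sketch below (with hypothetical numbers) computes a percentile-bootstrap confidence interval for accuracy on a human-annotated test set, making explicit the uncertainty that comes from a finite, human-labeled sample.

    import random

    def bootstrap_accuracy_ci(correct_flags, n_resamples=10_000, alpha=0.05, seed=0):
        """Percentile-bootstrap confidence interval for accuracy on a labeled test set."""
        rng = random.Random(seed)
        n = len(correct_flags)
        # Resample the per-item outcomes with replacement and recompute accuracy each time.
        resampled = sorted(
            sum(rng.choices(correct_flags, k=n)) / n for _ in range(n_resamples)
        )
        lower = resampled[int((alpha / 2) * n_resamples)]
        upper = resampled[int((1 - alpha / 2) * n_resamples) - 1]
        return sum(correct_flags) / n, (lower, upper)

    # Hypothetical per-item correctness of a model judged against human-annotated gold labels.
    flags = [1] * 172 + [0] * 28  # 86% observed accuracy on 200 items
    accuracy, (low, high) = bootstrap_accuracy_ci(flags)
    print(f"accuracy = {accuracy:.3f}, 95% CI = ({low:.3f}, {high:.3f})")

Reporting such intervals alongside point estimates is one simple way to make an evaluation easier to replicate and to compare across systems.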