Chris Welty, Google Research
Praveen Paritosh, Google Research
Kurt Bollacker, LongNow Foundation
The AI bookies have spent a lot of time and energy collecting scientific bets from AI researchers since the birth of this column three years ago. While we have met with universal approval of the idea of scientific betting, we have likewise met with nearly universal silence in our acquisition of bets: we have collected only a handful in this column over the past two years. In our first column we published the “will voice interfaces become the standard” bet, as well as a set of 10 predictions from Eric Horvitz that we proposed as bets awaiting challengers. No challengers have emerged.
We have also published bets on autonomous weapons and on whether AI will ever outgrow human labeling. All of the full-fledged bets were fascinating to behold as they evolved from opinions and predictions into rigorously defensible statements that could verifiably go one way or the other. Each bet took several months to write, and the process differed from writing a collaborative research paper because the authors disagreed with each other and shared the goal of clearly articulating that disagreement. Often this process changed the disagreement, and ultimately the positions, of the stakeholders. Another important lesson from these early bets is a generalizable methodology for adjudicating bets1: we use the proportion of research papers at major AI conferences that support one or the other side of the bet as a surrogate for the research community’s perspective on it.
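As a rough illustration of that adjudication surrogate, the decision reduces to a simple proportion over a sample of labeled papers. The sketch below is ours alone; the labeling scheme, threshold, and function names are assumptions for illustration, not part of the published methodology.

```python
# Minimal sketch of the adjudication surrogate: given a sample of conference
# papers that judges have labeled as supporting the "pro" side, the "con" side,
# or neither, estimate which way the research community leans.
# (Labels, threshold, and names here are illustrative assumptions.)

def adjudicate(paper_labels, decision_threshold=0.5):
    """paper_labels: iterable of 'pro', 'con', or 'neutral' judgments,
    one per sampled paper from major AI conferences."""
    pro = sum(1 for label in paper_labels if label == "pro")
    con = sum(1 for label in paper_labels if label == "con")
    relevant = pro + con
    if relevant == 0:
        return None  # no relevant papers yet; the bet cannot be adjudicated
    pro_share = pro / relevant
    leaning = "pro" if pro_share > decision_threshold else "con"
    return {"pro_share": pro_share, "papers_considered": relevant, "leaning": leaning}

# Example: 12 of 20 relevant papers support the "pro" side -> pro_share = 0.6
print(adjudicate(["pro"] * 12 + ["con"] * 8))
```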
What we have done has resulted in the publication of some excellent bets, but not in our hoped-for, self-propelled momentum of betting in the community at large. We recognize that this is really a bootstrapping problem: once folks are aware of the value of making bets, they will initiate bets themselves. Our main mechanism for increasing awareness so far has been the publication of actual bets, and this has not scaled as we hoped. The three full bets we have published were obtained through direct, in-person arm-twisting: the first involved one of the bookies as a bettor; the second arose from recruitment at the 2018 AAAI Spring Symposium by our erstwhile AI Magazine editor, Ashok; and the third was the result of incubation by the bookies at the WWW conference in 2019. In this last case, we took over a workshop for an hour, gave participants ideas for bets, broke them into groups, and had them take sides and draft arguments for each side. One of these groups had enough energy and interest in both sides of the bet (whether ML will outgrow human labeling) to continue with it for nearly a year, through to publication [Schaekerman et al., 2021].
Given the success of these three highly curated bets, and the continued feedback that this column is a great idea, why do our efforts to solicit bets continue to come up short? What are the obstacles? One possible obstacle is the a priori 50% chance of public “shaming” when one of the two sides is proven wrong. Such a high chance of being publicly wrong is unusual in the practice of science, but such is the nature of a productive disagreement. This collaborative adversarial framework is precisely the goal of the bookies column, as stated in our first call [Bollacker et al., 2018]. The science of AI is proceeding with a nearly exclusive focus on positive results, which means we do not share our losses and lessons learned. The AI field has adopted a style of scientific writing in which success is achieved simply by ranking better than the other 80% (e.g., of conference submissions), which is sometimes achieved in ways that many, the bookies included, think are harmful. Direct adversarial feedback, e.g., a colleague who assumes the role of doubting proposed results, is an altogether different process and leads to better, more reproducible science.
Another obstacle to collecting more bets, of course, is time and credit. Most researchers work on a pay scale measured in traditional forms of publication and citation, as well as the more recently introduced leaderboard ranks. Though it may not carry the prestige of a journal publication, a bet published through the AI bookies column is a true publication: archival and peer reviewed. In fact, the adversarial approach to bets makes the review process in some ways more rigorous than that of any journal or conference.
The final obstacle to making scientific bets that we address here is coming up with controversial ideas. Scientists are trained to think in terms of a hypothesis and the experiments that will prove it, but are not trained to find areas of true disagreement. Indeed, this is very hard, and to help usher more brave souls past that obstacle, we will be proposing bets ourselves and recruiting AI researchers to take sides. The researchers will clarify the terms of the bets and make them adjudicatable. Scientists with their own bet ideas still can, and should, propose them independently; this remains our ideal contribution. But we hope to widen our reach with a first batch of seed bets of broad interest to the research community.
Note that we do not take any particular side on these seed bets; we identify them only as topics of current activity. We invite you to take a position on one or more of them (or to propose a new bet of your own creation) for the coming issue. You can make the bet statement more general or specific; change the timeline, judgment criteria, or other parameters; or add more context to enrich your argument. There is almost no limit to the number and diversity of bets that could be made about the progress of AI in science. Bets connect what we know with what we think will happen. This makes them a powerful tool for advancing scientific knowledge, because they embody the parts of scientific opinion that are otherwise not easily captured.
To participate, you need only three things: a) a bet statement, b) your position (pro or con), and c) a brief rationale of up to 500 words. Optionally, you can suggest candidate challengers. You will then be able to refine the bet with a challenger to craft a strong, relevant story that others will find valuable. Don’t worry if your ideas still seem nascent or not fully fleshed out, or if you don’t yet have a challenger; we can help.
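For concreteness, the pieces of a submission can be thought of as one small record. The sketch below is purely illustrative; the class and field names are our own choices, not a required format.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class BetProposal:
    """Illustrative container for a bet submission; not a required schema."""
    statement: str                            # (a) the bet statement
    position: str                             # (b) "pro" or "con"
    rationale: str                            # (c) a brief rationale, up to 500 words
    challengers: Optional[List[str]] = None   # optional: candidate challengers

# A skeletal example; the statement and rationale are placeholders, not a real submission.
proposal = BetProposal(
    statement="Voice interfaces will become the standard ...",
    position="pro",
    rationale="...",
)
```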
One way to get started is to go to www.sciencebets.org, where you can browse a list of potential bets that you could claim, or suggest a bet of your own to us ([email protected]). We will work with you to refine your ideas and assemble all of the pieces into a well-formed bet, ready for publication in AI Magazine.