By Ian Beaver
Research interest in Conversational AI has grown massively over the last few years, and several recent advancements have enabled systems to produce rich and varied conversational turns, much as humans do. However, this apparent creativity also creates a real challenge for the objective evaluation of such systems: authors are becoming reliant on crowd worker opinions as the primary measure of success, and, so far, few papers report everything necessary for others to replicate or compare against in their own crowd experiments. This challenge is not unique to ConvAI; it demonstrates that as AI systems mature into more "human" tasks involving creativity and variation, evaluation strategies must mature with them.
By Ashok Goel; School of Interactive Computing, Georgia Institute of Technology
Like much of the AI community, I have watched the ongoing debate between symbolic AI and connectionist AI with fascination. While symbolic AI posits that the use of knowledge in reasoning and learning is critical to producing intelligent behavior, connectionist AI postulates that learning associations from data (with little or no prior knowledge) is crucial for understanding behavior. The recent debate between the two AI paradigms has been prompted by advances in connectionist AI since the turn of the century that have led to significant applications.
By Odd Erik Gundersen, Norwegian University of Science and Technology
Registered reports have been proposed as a way to move away from eye-catching and surprising results and toward methodologically sound practices and interesting research questions. However, none of the top twenty artificial intelligence journals support registered reports, and no trace of registered reports can be found in the field of artificial intelligence. Is this because they do not provide value for the type of research conducted in the field?
By Chris Welty, Google Research, USA; Praveen Paritosh, Google Research; Kurt Bollacker, Long Now Foundation
The AI bookies have spent a lot of time and energy collecting scientific bets from AI researchers since the birth of this column three years ago. While we have met with nearly universal approval of the idea of scientific betting, we have likewise met with nearly universal silence in our solicitation of bets: we have collected only a very few in this column over the past two years. In our first column we published the "will voice interfaces become the standard" bet, as well as a set of 10 predictions from Eric Horvitz that we proposed as bets awaiting challengers. No challengers have emerged.
By Michael Wollowski, Rose-Hulman Institute of Technology, USA
In this panel, AI faculty with experience teaching online and blended classes were asked to share their experiences teaching online. The panel was composed of Ashok Goel, Georgia Institute of Technology; Ansaf Salleb-Aouissi, Columbia University; and Mehran Sahami, Stanford University. The panelists were asked to describe which tools and methods work well for helping instructors engage and bond with students online. They were furthermore asked to share their insights into which components of a course are done best online and which are best accomplished in person. The panel took place as part of the 2021 Symposium on Educational Advances in Artificial Intelligence, which was co-located with AAAI-21. About 55 people attended, and the panel included a vigorous Q&A portion.