Looking Back, Looking Ahead: Humans, Ethics, and AI

By Ashok Goel

Concerns about the ethics of AI are older than AI itself. The phrase “artificial intelligence” was first used by McCarthy and colleagues in 1955 (McCarthy et al. 1955). However, in 1920 Čapek had already published his science fiction play R.U.R., in which robots suffering abuse rebelled against human tyranny (Čapek 1920), and by 1942, Asimov had proposed his famous three “laws of robotics,” requiring robots not to harm humans, to obey human orders, and to protect their own existence (Asimov 1942). During much of the last century, when AI was mostly confined to research laboratories, concerns about the ethics of AI were largely limited to futurist writers of fiction and fantasy. In this century, as AI has begun to penetrate almost all aspects of life, worries about AI ethics have started permeating mainstream media. In this column, I briefly examine three broad classes of ethical concerns about AI, and then highlight another concern that has not yet received as much attention.

Looking Back, Looking Ahead: Symbolic versus Connectionist AI

By Ashok Goel; School of Interactive Computing, Georgia Institute of Technology

Like much of the AI community, I have watched the ongoing debate between proponents of symbolic AI and connectionist AI with fascination. While symbolic AI posits that the use of knowledge in reasoning and learning is critical to producing intelligent behavior, connectionist AI postulates that learning associations from data (with little or no prior knowledge) is crucial for producing it. The recent debate between the two paradigms has been prompted by advances in connectionist AI since the turn of the century, advances that have found significant practical applications.