An autonomous agent that can engage in competitive debate with humans is reported in this week’s Nature. Although the human debaters were judged to have won the competition, the demonstration suggests that artificial intelligence may be capable of participating in complex human activities.
Artificial intelligence (AI) makes it possible to build machines that perform human tasks. Applying AI to language-related tasks has had mixed results: predicting the sentiment of a sentence has been demonstrated successfully, whereas more complex tasks, such as summarizing text and engaging in dialogue, have proved challenging.
An autonomous system named Project Debater that can debate with humans in a meaningful way is presented by Noam Slonim and colleagues. The system scans an archive of 400 million newspaper articles and Wikipedia pages to form opening statements and counter-arguments. Its performance against humans (including expert debaters) and existing AI technology was evaluated over a range of topics, such as subsidizing preschool. The debates were judged blind by a virtual audience of people who were given transcripts and asked to score the statements. Project Debater scored highly in producing opening statements, beaten only by the expert debaters, but it has yet to win a debate. The authors conclude that debating humans resides outside the AI comfort zone, for now.
After the embargo ends, the full paper will be available at: https://www.nature.com/articles/s41586-021-03215-w
Engineering: Earmuffs measure blood alcohol levels through the skin (Scientific Reports)
Physics: Modelling improvements to ride-sharing adoption (Nature Communications)
Biomedical engineering: Sound compression in hearing aids may make them worse (Nature Biomedical Engineering)