The goal of AI is to build machines that think like humans, act like humans, and do things like humans. The Turing Test holds that if we cannot tell, when talking to “someone” on the other side of a wall, whether that is a real person or a machine, we have achieved artificial intelligence.
Enter fake news. With today’s AI advances, fake news can be produced by an AI program that sounds so human-like and trustworthy that it is difficult for us to detect. We need to be ready for a future that brings both the good and the bad of AI.
Researchers at the Allen Institute and the University of Washington have created “Grover,” a program that both creates convincing fake articles and detects them. The essential idea is that “machines that generate fake text leave a trace or signature in the way they predict word combinations” and “a neural network constructed in the same way as the network that makes the fake text automatically spots those idiosyncratic artifacts”.
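To make the intuition concrete, here is a minimal toy sketch of the idea that a detector built the same way as the generator can spot the generator’s statistical fingerprints. It uses a simple bigram model in place of Grover’s large neural network; all function names and data are illustrative and not from the Grover codebase.

```python
import math
from collections import Counter, defaultdict

def train_bigram(corpus_tokens):
    """Estimate bigram probabilities (with add-one smoothing) from the
    generator's own output, mirroring how the generator picks words."""
    counts = defaultdict(Counter)
    for sent in corpus_tokens:
        for a, b in zip(sent, sent[1:]):
            counts[a][b] += 1
    vocab = {t for sent in corpus_tokens for t in sent}

    def prob(a, b):
        return (counts[a][b] + 1) / (sum(counts[a].values()) + len(vocab))

    return prob

def perplexity(prob, tokens):
    """Score a text under the generator's statistics: machine-written text
    that shares the generator's word-combination habits scores lower."""
    logp = sum(math.log(prob(a, b)) for a, b in zip(tokens, tokens[1:]))
    return math.exp(-logp / max(len(tokens) - 1, 1))

# Illustrative "generator" output used to train the mirrored detector.
generator_output = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
]
prob = train_bigram(generator_output)

machine_text = ["the", "cat", "sat", "on", "the", "rug"]
human_text = ["quantum", "flux", "over", "purple", "noise"]
# Text matching the generator's idiosyncrasies gets lower perplexity,
# so the detector can flag it as likely machine-generated.
```

In this toy setup, `perplexity(prob, machine_text)` comes out lower than `perplexity(prob, human_text)`, which is the one-dimensional analogue of Grover spotting its own idiosyncratic artifacts.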
The authors have decided to release all their code to the public. “At first, it would seem like keeping models like Grover private would make us safer,” they observe. But, “If generators are kept private, then there will be little recourse against adversarial attacks.” More information about the Grover project can be found here.
The AIWS has also made efforts to address fake news and its impact. In 2018, its parent organizations, the Boston Global Forum and the Michael Dukakis Institute, organized the 4th Annual Global Cybersecurity Day on December 12, featuring an event entitled ‘AI Solve Disinformation’ that explored the current state of cyber issues, the threat posed by disinformation and fake news, and effective defenses against these activities. In 2017, the BGF also wrote a policy proposal on fake news for consideration at the G-7 Summit in Taormina, Italy.