Trick Me If You Can

Seeing how computers “think” helps humans stump machines and reveals AI weaknesses. That is the direction a research group at the University of Maryland is taking as it develops a technique for reliably generating questions that challenge computers.

Using this technique, Eric Wallace and his co-authors developed a dataset of 1,213 questions that are easy for people to answer yet beyond the capabilities of the best modern question answering systems. The work was recently published in the journal Transactions of the Association for Computational Linguistics.

An AI question answering system typically recognizes certain word patterns as a person types a question and returns an answer based on those patterns. The proposed technique, an interactive interface for human-in-the-loop adversarial generation, lets question writers trick the AI by editing exactly those words. Consequently, a system that learns to master these questions would have a better grasp of language than any system currently in existence. The dataset could also be used to train improved machine learning algorithms.
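To illustrate the idea, here is a toy sketch (not the authors' system; the mini knowledge base and keyword-overlap scoring are hypothetical) of a naive pattern-matching "QA" model. Paraphrasing away the trigger words leaves the question easy for a human but removes all the evidence the model relies on:

```python
# Hypothetical mini knowledge base: answer -> trigger keywords.
KNOWLEDGE = {
    "George Washington": {"first", "president", "united", "states"},
    "Abraham Lincoln": {"civil", "war", "emancipation", "president"},
}

def pattern_qa(question):
    """Answer by keyword overlap; return (best answer, overlap score)."""
    words = set(question.lower().replace("?", "").split())
    scores = {ans: len(keys & words) for ans, keys in KNOWLEDGE.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

original = "Who was the first president of the United States?"
# Adversarial edit: same meaning to a human reader, but the surface
# cues ("first", "president", "united states") are paraphrased away.
edited = "Who led the fledgling American republic at its founding?"

print(pattern_qa(original))  # strong keyword overlap with Washington
print(pattern_qa(edited))    # overlap drops to zero; the model is guessing
```

Real neural QA systems are far more robust than this keyword matcher, but the interactive interface works on the same principle: it shows question writers which words the model is leaning on, so they can rewrite those words until the model fails while humans still answer easily.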

The research is valuable because it addresses one of the persistent challenges of machine learning: understanding why systems fail.