A year ago, Gary Marcus, a frequent critic of deep learning approaches to AI, and Yoshua Bengio, a leading proponent of deep learning, faced off in a two-hour debate about AI at the Montreal headquarters of Bengio’s MILA institute.
Wednesday evening, Marcus was back, albeit virtually, to open the second installment of what is now planned to be an annual debate on AI, under the title “AI Debate 2: Moving AI Forward.”
Vincent Boucher, president of the organization Montreal.AI, who had helped to organize last year’s debate, opened the proceedings, before passing the mic to Marcus as moderator.
Marcus said 3,500 people had pre-registered for the evening, and 348 were watching live on Facebook at the start. Last year’s debate had drawn 30,000 viewers by the end of the night, Marcus noted.
Bengio was not in attendance, but the evening featured presentations from sixteen scholars: Ryan Calo, Yejin Choi, Daniel Kahneman, Celeste Kidd, Christof Koch, Luís Lamb, Fei-Fei Li, Adam Marblestone, Margaret Mitchell, Robert Osazuwa Ness, Judea Pearl, Francesca Rossi, Ken Stanley, Rich Sutton, Doris Tsao and Barbara Tversky.
Fei-Fei Li, the Sequoia Professor of computer science at Stanford University, started off the proceedings. Li talked about what she called the “north star” of AI, the goal that should guide the discipline. One such north star over the past five decades, said Li, has been the scientific realization that object recognition is a critical function of human cognition. That realization led to AI benchmark breakthroughs such as the ImageNet competition (which Li helped create).
Next came Pearl, who has written numerous books about causal reasoning, including the Times best-seller, The Book of Why. His talk was titled, “The domestication of causality.”
“We are sitting on a goldmine,” said Pearl, referring to deep learning. “I am proposing the engine that was constructed in the causal revolution to represent a computational model of a mental state deserving of the title ‘deep understanding’.”
Deep understanding, he said, would be the only kind of system capable of answering the three questions “What is?” “What if?” and “If only?”
Next up was Robert Ness, on the topic of “causal reasoning with (deep) probabilistic programming.”
Ness said he thinks of himself as an engineer who is interested in building things. “Probabilistic programming will be key,” he said, to solving causal reasoning. With probabilistic programming, one could build agents that reason counterfactually, something Ness said was “near and dear to my heart.” That, he said, could address Pearl’s third question, the “If only?”
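Ness did not walk through code on stage, but the idea he was pointing to can be shown in a minimal sketch. In the toy model below (the variables, probabilities, and names are illustrative assumptions, not taken from the talk), a structural causal model is written as an ordinary generative program with explicit noise variables, and Pearl’s three questions are answered by observing simulated runs, intervening on the program, and applying the abduction-action-prediction recipe for counterfactuals:

```python
# Minimal sketch of counterfactual reasoning in a probabilistic-program style.
# The model, variable names and probabilities are illustrative assumptions.
import random

def model(u_t=None, u_r=None, do_t=None):
    """A tiny structural causal model written as a generative program.

    u_t, u_r are exogenous noise variables; do_t, if given, overrides the
    structural equation for T (an intervention in the sense of Pearl's do-operator).
    """
    if u_t is None:
        u_t = random.random() < 0.5          # noise behind the treatment choice
    if u_r is None:
        u_r = random.random()                # noise behind recovery
    t = bool(do_t) if do_t is not None else u_t
    r = u_r < (0.9 if t else 0.5)            # recovery is likelier under treatment
    return {"u_t": u_t, "u_r": u_r, "t": t, "r": r}

N = 100_000

# Rung 1 -- "What is?": P(recovery | treated), estimated from observed runs.
obs = [model() for _ in range(N)]
treated = [s for s in obs if s["t"]]
print("P(R | T=1)            ~", sum(s["r"] for s in treated) / len(treated))

# Rung 2 -- "What if?": P(recovery | do(treated)), by intervening on T.
# (In this toy model there is no confounding, so rungs 1 and 2 agree.)
interv = [model(do_t=True) for _ in range(N)]
print("P(R | do(T=1))        ~", sum(s["r"] for s in interv) / N)

# Rung 3 -- "If only?": a unit was untreated and did not recover; would it
# have recovered had it been treated? Abduction (keep only the noise values
# consistent with the observation, here by rejection), action (do(T=1)),
# prediction (re-run the program with that noise).
posterior_noise = [s["u_r"] for s in obs if not s["t"] and not s["r"]]
cf = [model(u_r=u, do_t=True)["r"] for u in posterior_noise]
print("P(R_{T=1} | T=0, R=0) ~", sum(cf) / len(cf))
```

Because the noise variables are first-class values in the program, answering the “If only?” question amounts to inferring the noise consistent with what was observed and re-running the program under the intervention, which is the kind of counterfactual reasoning Ness was describing.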
The article was originally posted here.
In the field of AI applications of causal reasoning, Professor Judea Pearl is a pioneer, having developed a theory of causal and counterfactual inference based on structural models. In 2011, Professor Pearl won the Turing Award, computer science’s highest honor, for “fundamental contributions to artificial intelligence through the development of a calculus of probabilistic and causal reasoning.” In 2020, the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Boston Global Forum (BGF) honored Professor Pearl as a World Leader in AI World Society (AIWS.net). Professor Pearl is currently a Mentor of AIWS.net and Head of its Modern Causal Inference section, one of the important AIWS.net topics on AI Ethics.