War is terrible. But it has often played a pivotal role in advancing technology. And Russia’s invasion of Ukraine is shaping up to be a key proving ground for artificial intelligence, for ill and, perhaps in a few instances, for good, too.
Civil society groups and A.I. researchers have grown increasingly alarmed in recent years about the advent of lethal autonomous weapons systems—A.I.-enabled weapons that can select targets and kill people without human oversight. That alarm has spurred a concerted effort at the United Nations to ban, or at least restrict, the use of such systems, but the talks have so far produced little progress.
Meanwhile, the development of autonomous weapons has continued at a quickening pace. Right now, those weapons are still in their infancy. We won’t see humanitarian groups’ worst nightmares about swarms of “slaughterbot” drones realized in the Ukraine conflict. But weapons with some degree of autonomy are likely to be deployed by both sides.
Already, Ukraine has been using the Turkish-made Bayraktar TB2 drone, which can take off, land, and cruise autonomously, although it still relies on a human operator to decide when to drop the laser-guided bombs it carries. (The drone can also use lasers to guide artillery strikes.) Russia, meanwhile, has a “kamikaze” drone with some autonomous capabilities called the Lancet, which it reportedly used in Syria and could use in Ukraine. The Lancet is technically a “loitering munition” designed to attack tanks, vehicle columns, or troop concentrations. Once launched, it circles a predesignated geographic area until it detects a preselected target type, then crashes into the target and detonates its warhead.
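Abstractly, the behavior just described is a simple autonomy loop: loiter inside a geofenced area, scan sensor returns for a preselected target class, and commit once a match is found. The toy Python sketch below illustrates only that generic control flow; the `Phase` states, `Detection` class, and random “sensor” feed are invented for illustration and bear no relation to the Lancet’s actual software.

```python
from dataclasses import dataclass
from enum import Enum, auto
import math
import random


class Phase(Enum):
    LOITER = auto()   # circle the designated area, scanning
    COMMIT = auto()   # a matching target was found; begin terminal dive
    DONE = auto()


@dataclass
class Detection:
    kind: str         # hypothetical classifier label, e.g. "tank" or "truck"
    x: float
    y: float


def inside_area(x: float, y: float, cx: float, cy: float, radius: float) -> bool:
    """True if (x, y) lies within the circular loiter area."""
    return math.hypot(x - cx, y - cy) <= radius


def run_mission(target_kind: str, cx: float, cy: float, radius: float) -> None:
    """Simulate the loiter-detect-commit loop described in the text."""
    phase = Phase.LOITER
    for step in range(100):
        if phase is Phase.LOITER:
            # Stand-in for an onboard sensor/classifier feed (random here).
            det = Detection(random.choice(["tank", "truck", "none"]),
                            cx + random.uniform(-radius, radius),
                            cy + random.uniform(-radius, radius))
            if det.kind == target_kind and inside_area(det.x, det.y, cx, cy, radius):
                print(f"step {step}: matched '{det.kind}' at ({det.x:.0f}, {det.y:.0f}); committing")
                phase = Phase.COMMIT
        elif phase is Phase.COMMIT:
            print(f"step {step}: terminal dive (simulated)")
            phase = Phase.DONE
        else:
            break


run_mission("tank", cx=0.0, cy=0.0, radius=500.0)
```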
Russia has made A.I. a strategic priority. Vladimir Putin, the country’s president, said in 2017 that whoever becomes the leader in A.I. “will become the ruler of the world.” But at least one recent assessment, from researchers at the U.S. government–funded Center for Naval Analyses, says Russia lags the U.S. and China in developing A.I. defense capabilities.
A.I. might also play a vital role in the information war. Many fear that deepfakes—highly realistic fake videos generated with A.I.—will supercharge Russian disinformation campaigns, although so far there is no evidence deepfakes have been used in this conflict. Machine learning can also help detect disinformation. The large social media platforms already deploy such systems, although their track record of accurately identifying and removing disinformation is spotty at best.
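As a purely illustrative sketch of the detection side, the snippet below trains a toy text classifier of the sort that might serve as one weak signal in a disinformation-flagging pipeline. The four-example dataset and the TF-IDF-plus-logistic-regression setup are assumptions made for the sake of a small runnable example; the platforms’ real systems combine many more signals, from account behavior to media forensics.

```python
# Minimal, illustrative disinformation classifier: TF-IDF features feeding
# a linear model. Dataset and labels are invented for demonstration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = flagged as disinformation, 0 = benign.
texts = [
    "Secret lab documents PROVE the attack was staged!!!",
    "Officials confirmed the bridge reopened to traffic on Monday.",
    "Share before they delete this: the footage is fake!!!",
    "The ministry published updated casualty figures today.",
]
labels = [1, 0, 1, 0]

# Word and bigram TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new post; real systems would act on much richer evidence.
print(model.predict(["BREAKING: leaked memo PROVES it was all staged!!!"]))
```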
The original article was published by Fortune.