AI technology is advancing rapidly, and with it the risk of its being hijacked for malicious purposes. 25 experts recently published the Malicious AI Report, which explores threats to digital, political, and physical security that AI could pose within the next five years. Among these are AI's ability to automate cyberattacks and to generate false images, audio, and even entire personas to manipulate public opinion. One such example was last year's "deepfake" incident, in which celebrity faces were realistically (but deceitfully) edited into pornographic videos. Because this technology is already being misused today, it is crucial to establish frameworks and ethical guidelines to keep it secure. To that end, AIWS will announce a 7-layer Ethical Framework for AI this April at our BGF-G7 Summit conference at Harvard University.
Related content
Silicon Valley Takes Artificial General Intelligence Seriously—Washington Must Too
Expert Warns UN’s Role in AI Regulation Could Lead to Safety Overreach
An AI script editor could help decide what films get made in Hollywood
AI-powered art puts ‘digital environmentalism’ on display at UN Headquarters