AI technology is advancing rapidly and, with it, the risk of its being hijacked for malicious purposes. Twenty-five experts recently published the Malicious AI Report, which explores threats to digital, political, and physical security that AI technology might bring about within the next five years. Among these are AI's ability to automate cyberattacks and to generate false images, audio, and even entire personas to manipulate public opinion. One such example was last year's "deepfake" incident, in which celebrity faces were realistically (but deceptively) edited into pornographic videos. Because this technology is already being misused today, it is crucial to create frameworks and ethical guidelines to keep it secure. To that end, AIWS will announce a seven-layer Ethical Framework for AI this April at our BGF-G7 Summit conference at Harvard University.
- Google says it has cracked a quantum computing challenge with new chip
- Silicon Valley Takes Artificial General Intelligence Seriously—Washington Must Too
- Expert Warns UN’s Role in AI Regulation Could Lead to Safety Overreach
- An AI script editor could help decide what films get made in Hollywood
- AI-powered art puts ‘digital environmentalism’ on display at UN Headquarters
- The Shinzo Abe Initiative 2nd Conference: Japan’s Prominence in the New Age of Global Enlightenment
- Happy New Year 2023 – Celebrate 90th Birthday of Governor Michael Dukakis by “AIWS Actions to create an Age of Global Enlightenment”
- Boston Global Forum honors UN Secretary-General’s Envoy on Technology Amandeep Gill with World Leader in AIWS Award
- Peace in the Age of Global Enlightenment: Technology for Peace
- Former Gov. Michael Dukakis on how Shinzo Abe’s assassination is resonating in Massachusetts