FRAMEWORK FOR PEACE AND SECURITY IN THE 21ST CENTURY

 

SPEAKERS AND DISCUSSANTS

  • Governor Michael Dukakis, Co-founder and Chairman of the Boston Global Forum, Co-Chair
  • Stratos Efthymiou, Consul General of Greece in Boston, Co-Chair
  • Prof. Stephen Walt, Harvard Kennedy School
  • Prof. Nazli Choucri, MIT
  • Prof. Thomas Patterson, Harvard Kennedy School
  • Prof. David Silbersweig, Harvard Medical School
  • Prof. Thomas Creely, Naval War College
  • Barry Nolan, Adviser to the US Congress
  • Nguyen Anh Tuan, Co-founder and CEO of the Boston Global Forum
  • Prof. Constantine Arvanitopoulos, Karamanlis Chair at the Fletcher School of Law and Diplomacy and Professor of International Relations, Tufts University
  • Prof. Christo Wilson, Northeastern University, Harvard Law School Fellow, Michael Dukakis Leadership Fellow
  • Nguyen Phan Nguyet Minh, AI World Society Young Leader

You can download the detailed agenda here

Trick Me If You Can

Seeing how computers "think" helps humans stump machines and reveals AI weaknesses. That is the direction a research group at the University of Maryland is taking in pursuit of a technique for reliably generating questions that challenge computers.

Using this technique, Eric Wallace and his co-authors have developed a dataset of more than 1,200 questions that are easy for people to answer yet beyond the capabilities of the best modern question-answering systems. The work was recently published in the journal Transactions of the Association for Computational Linguistics.

An AI answering system typically recognizes certain word patterns as a person types a question and returns an answer based on those patterns. The proposed technique, an interactive interface for human-in-the-loop adversarial generation, tricks the AI program by editing exactly those words. Consequently, a system that learns to master these questions would have a better understanding of language than any system currently in existence. The dataset could also be used to train improved machine learning algorithms.
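
As a rough sketch of the idea, the loop below pairs a human question writer with a generic extractive question-answering model, here the default model behind Hugging Face's transformers pipeline (an assumed stand-in; the Maryland interface exposes richer model interpretations than the single confidence score used here). The writer rewords the same question until the model's answer breaks even though a person would still answer it easily.

```python
# Minimal human-in-the-loop adversarial question writing, sketched with
# a generic extractive QA model (not the system from the paper).
from transformers import pipeline

qa = pipeline("question-answering")  # downloads a default SQuAD-tuned model

context = (
    "The Eiffel Tower, completed in 1889, was designed by the engineering "
    "firm of Gustave Eiffel for the Paris World's Fair."
)
gold_answer = "Gustave Eiffel"

# A human writer proposes successive rewordings of the same question,
# watching the model's answer and confidence after each edit.
candidate_questions = [
    "Who designed the Eiffel Tower?",                    # easy phrasing
    "Whose firm drew up the plans for the 1889 tower?",  # edited wording
    "Which engineer's company authored the landmark's design?",
]

for question in candidate_questions:
    pred = qa(question=question, context=context)
    stumped = gold_answer.lower() not in pred["answer"].lower()
    status = "model stumped, keep this edit" if stumped else "model still correct"
    print(f"{question!r} -> {pred['answer']!r} "
          f"(score={pred['score']:.2f}, {status})")
```

Edits that keep the question easy for humans but break the model are exactly the kind collected into the dataset.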

The research is useful because it helps address one of the challenges of machine learning: knowing why systems fail. The full article is here.

PROFESSOR PATRICK WINSTON, FORMER DIRECTOR OF MIT’S ARTIFICIAL INTELLIGENCE LABORATORY, DIES AT 76

(From left to right: Governor Michael Dukakis, President of Estonia Toomas Hendrik Ilves, and Professor Patrick Winston at the AI World Society's first meeting, December 12, 2017.)

Patrick Winston, a beloved professor and computer scientist at MIT, died on July 19 at Massachusetts General Hospital in Boston. He was 76.

A professor at MIT for almost 50 years, Winston was director of MIT's Artificial Intelligence Laboratory from 1972 to 1997, before it merged with the Laboratory for Computer Science to become MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL).

A devoted teacher and cherished colleague, Winston led CSAIL's Genesis Group, which focused on developing AI systems with human-like intelligence, including the ability to perceive, comprehend, and tell stories. He believed that such work could help illuminate aspects of human intelligence that scientists don't yet understand.

His Genesis project aimed to faithfully model computers after human intelligence in order to fully grasp the inner workings of our own motivations, rationality, and perception. Using MIT research scientist Boris Katz’s START natural language processing system and a vision system developed by former MIT PhD student Sajit Rao, Genesis can digest short, simple chunks of text, then spit out reports about how it interpreted connections between events.

Winston’s dedication to teaching earned him many accolades over the years, including the Baker Award, the Eta Kappa Nu Teaching Award, and the Graduate Student Council Teaching Award. He was also renowned for his accessible and informative lectures, and gave a hugely popular talk every year during the Independent Activities Period called “How to Speak.”

A past president of the Association for the Advancement of Artificial Intelligence (AAAI), Winston also wrote and edited numerous books, including a seminal textbook on AI that’s still used in classrooms around the world. Outside of the lab he also co-founded Ascent Technology, which produces scheduling and workforce management applications for major airports.

A pioneering researcher in Artificial Intelligence (AI), Professor Patrick Winston was also a key figure in the AI World Society (AIWS), established by the Michael Dukakis Institute for Leadership and Innovation (MDI). He was an intellectual and active contributor to AIWS and MDI from the very first days, including the AI World Society's first meeting on December 12, 2017, at the Harvard University Faculty Club.

The Michael Dukakis Institute for Leadership and Innovation (MDI) and the AI World Society (AIWS) express sincere condolences to Professor Patrick Winston's family. He will always be remembered in the AI World Society as an inspirational AI expert who promoted ethical norms and practices in the development and use of AI.

A MACHINE COULD ONE DAY BECOME YOUR BOSS

Automation is meant to achieve efficiency. But what if AI sees humanity itself as the thing to be optimized? The New York Times this week wrote about the possibility of robots replacing your bosses.

It is, in a sense, already happening. Amazon's complex algorithms already track worker productivity in its fulfillment centers and can automatically generate the paperwork to fire workers who don't meet their targets. IBM's AI platform, Watson, can predict the future performance of employees with almost 100 percent accuracy. Cogito is an AI supervisor for call centers and other workplaces; it gives workers feedback in real time.
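
To make concrete what firing "without any human involvement" looks like mechanically, here is a deliberately simplified sketch of the kind of threshold rule such a system might apply; the metric, target rate, and warning policy are invented for illustration and do not describe Amazon's actual system.

```python
# Hypothetical sketch of an automated productivity policy; all names,
# thresholds, and rules here are invented for illustration.
from dataclasses import dataclass

@dataclass
class WorkerStats:
    worker_id: str
    units_per_hour: float  # tracked productivity metric
    warnings: int          # prior automated warnings

TARGET_RATE = 100.0  # hypothetical fulfillment target
MAX_WARNINGS = 2

def review(stats: WorkerStats) -> str:
    """Return the action the system would generate, with no human review."""
    if stats.units_per_hour >= TARGET_RATE:
        return "no action"
    if stats.warnings < MAX_WARNINGS:
        return "issue automated warning"
    return "generate termination paperwork"

print(review(WorkerStats("w-042", units_per_hour=87.5, warnings=2)))
# -> "generate termination paperwork": the step critics argue a human should own
```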

But the use of AI programs to manage workers remains controversial. "It is surreal to think that any company could fire their own workers without any human involvement," said Marc Perrone, president of the United Food and Commercial Workers International Union. How do you resolve conflict between workers and the platforms serving as their supervisors?

Defenders of workplace AI argue that these systems are meant to make workers better. For example, there may be situations in which human bias skews decision-making, such as in hiring, and this is where AI can help.

Nevertheless, one should by all means avoid the temptation to abuse AI for Big Brother-style surveillance of workers. The full article from The New York Times is here.

LIVE SCHEDULE: UNITED NATIONS ACADEMIC IMPACT CHARTER DAY LECTURE

Wed, 26 Jun 2019, 10:00 AM – 1:00 PM (Eastern Time, US & Canada)

If you would like to join the discussion online you can watch the event live at webtv.un.org.

Our speaker will be Dr. David A. Bray, whose talk, "Artificial Intelligence, the Internet and the Future of Data: Where Will We Be in 2045?", will examine the impact of technology on the mission of the UN 100 years after its creation.

Dr. Bray has served as Executive Director of the People-Centered Internet Coalition, which provides support and expertise for community-based projects that measurably improve people's lives using the internet. Business Insider named him one of the top "24 Americans Who Are Changing the World" under 40, and he was named a Young Global Leader by the World Economic Forum for 2016-2021, a Marshall Memorial Fellow, and a Senior Fellow with the Institute for Human-Machine Cognition.

Dr. Bray’s talk will be followed by reflections of discussants and a larger conversation with the audience. The invited discussants include:

  • Fabrizio Hochschild, United Nations Under Secretary-General and Special Adviser on the Preparations for the Commemoration of the Seventy-Fifth Anniversary of the United Nations
  • David Silbersweig, Stanley Cobb Professor of Psychiatry at Harvard Medical School
  • Mariko Gakiya, Director, Global Leadership for Health, Peace and Human Security, Boston Global Forum
  • Nam Pham, Department of Business Development and International Trade, State of Massachusetts
  • Atefeh Riazi, UN Assistant Secretary-General and Chief Information Technology Officer, United Nations Office of Information and Communications Technology

INTERNATIONAL RELATIONS IN THE CYBER AGE: THE CO-EVOLUTION DILEMMA

Professor Nazli Choucri of MIT, Board Member of the Boston Global Forum and the Michael Dukakis Institute for Leadership and Innovation and a very active member of the AIWS Standards and Practice Committee, has launched a new book: "International Relations in the Cyber Age".

"The international system of sovereign states has evolved slowly since the seventeenth century, but the transnational global cyber system has grown rapidly in just a few decades. Two respected scholars – a computer scientist and a political scientist – have joined their complementary talents in a bold and important exploration of this crucial co-evolution."

– Joseph S. Nye, Harvard Kennedy School and author of The Future of Power

“Many have observed that the explosive growth of the Internet and digital technology have reshaped longstanding global structures of governance and cooperation. International Relations in the Cyber Age astutely recasts that unilateral narrative into one of co-evolution, exploring the mutually transformational relationship between international relations and cyberspace.”

– Jonathan Zittrain, George Bemis Professor of International Law and Professor of Computer Science, Harvard University

“Cyber architecture is now a proxy for political power. A leading political scientist and pioneering Internet designer masterfully explain how ‘high politics’ intertwine with Internet control points that lack any natural correspondence to the State. This book is a wake-up call about the collision and now indistinguishability between two worlds.”

– Laura DeNardis, Professor, American University, and author of The Global War for Internet Governance

“This book uniquely combines the perspectives of an Internet pioneer (Clark) and a leading political scientist with expertise in cybersecurity (Choucri) to produce a very rich account of how cyberspace impacts international relations, and vice versa. It is a valuable contribution to our understanding of Internet governance.”

– Jack Goldsmith, Henry Shattuck Professor, Harvard Law School

About this book

A foundational analysis of the co-evolution of the internet and international relations, examining resultant challenges for individuals, organizations, firms, and states.

In our increasingly digital world, data flows define the international landscape as much as the flow of materials and people. How is cyberspace shaping international relations, and how are international relations shaping cyberspace? In this book, Nazli Choucri and David D. Clark offer a foundational analysis of the co-evolution of cyberspace (with the internet as its core) and international relations, examining resultant challenges for individuals, organizations, and states.

The authors examine the pervasiveness of power and politics in the digital realm, finding that the internet is evolving much faster than the tools for regulating it. This creates a "co-evolution dilemma"—a new reality in which digital interactions have enabled weaker actors to influence or threaten stronger actors, including the traditional state powers. Choucri and Clark develop new methods of analysis. One, "control point analysis," examines control in the internet age; they apply it to a variety of situations involving major actors in the international and digital realms, including the United States, China, and Google. Another applies network analysis to international law for cyber operations. A third measures the propensity of states to expand their influence in the "real" world compared with the cyber domain. In doing so, they lay the groundwork for a new international relations theory that reflects the reality in which we live—one in which the international and digital realms are inextricably linked and evolving together.

Authors

Nazli Choucri

Nazli Choucri is Professor of Political Science at MIT, Faculty Affiliate at the MIT Institute for Data, Systems, and Society, Director of the Global System for Sustainable Development (GSSD), and the author of Cyberpolitics in International Relations (MIT Press).

David D. Clark

David D. Clark is a Senior Research Scientist at the MIT Computer Science and Artificial Intelligence Laboratory and has been a leader in the design of the Internet since the 1970s.

TECH COMPANIES SHAPING THE RULES GOVERNING AI

In early April, the European Commission published guidelines intended to keep any artificial intelligence technology used on the EU's 500 million citizens trustworthy. The bloc's commissioner for digital economy and society, Bulgaria's Mariya Gabriel, called them "a solid foundation based on EU values."

One of the 52 experts who worked on the guidelines argues that foundation is flawed—thanks to the tech industry. Thomas Metzinger, a philosopher from the University of Mainz, in Germany, says too many of the experts who created the guidelines came from or were aligned with industry interests. Metzinger says he and another member of the group were asked to draft a list of AI uses that should be prohibited. That list included autonomous weapons, and government social scoring systems similar to those under development in China. But Metzinger alleges tech’s allies later convinced the broader group that it shouldn’t draw any “red lines” around uses of AI.

Metzinger says that spoiled a chance for the EU to set an influential example that—like the bloc’s GDPR privacy rules—showed technology must operate within clear limits. “Now everything is up for negotiation,” he says.

When a formal draft was released in December, uses that had been suggested as requiring “red lines” were presented as examples of “critical concerns.” That shift appeared to please Microsoft. The company didn’t have its own seat on the EU expert group, but like Facebook, Apple, and others, was represented via trade group DigitalEurope. In a public comment on the draft, Cornelia Kutterer, Microsoft’s senior director for EU government affairs, said the group had “taken the right approach in choosing to cast these as ‘concerns,’ rather than as ‘red lines.’” Microsoft did not provide further comment. Cecilia Bonefeld-Dahl, director general for DigitalEurope and a member of the expert group, said its work had been balanced, and not tilted toward industry. “We need to get it right, not to stop European innovation and welfare, but also to avoid the risks of misuse of AI.”

The brouhaha over Europe’s guidelines for AI was an early skirmish in a debate that’s likely to recur around the globe, as policymakers consider installing guardrails on artificial intelligence to prevent harm to society. Tech companies are taking a close interest—and in some cases appear to be trying to steer construction of any new guardrails to their own benefit.

Harvard law professor Yochai Benkler warned in the journal Nature this month that "industry has mobilized to shape the science, morality and laws of artificial intelligence."

Benkler cited Metzinger's experience in that op-ed. He also joined other academics in criticizing a National Science Foundation program for research into "Fairness in Artificial Intelligence" that is co-funded by Amazon. The company will not participate in the peer review process that allocates the grants. But NSF documents say it can ask recipients to share updates on their work, and it will retain the right to a royalty-free license to any intellectual property developed.

Amazon declined to comment on the program; an NSF spokesperson said that tools, data, and research papers produced under the grants would all be made available to the public. Benkler says the program is an example of how the tech industry is becoming too influential over how society governs and scrutinizes the effects of AI. “Government actors need to rediscover their own sense of purpose as an indispensable counterweight to industry power,” he says.

Microsoft used some of its power when Washington state considered proposals to restrict facial recognition technology. The company’s cloud unit offers such technology, but it has also said that technology should be subject to new federal regulation.

In February, Microsoft loudly supported a privacy bill being considered in Washington’s state Senate that reflected its preferred rules, which included a requirement that vendors allow outsiders to test their technology for accuracy or biases. The company spoke against a stricter bill that would have placed a moratorium on local and state government use of the technology.

By April, Microsoft found itself fighting a House version of the bill it had supported, after the addition of firmer language on facial recognition. The House bill would have required companies to obtain independent confirmation that their technology worked equally well for all skin tones and genders before deploying it. Irene Plenefisch, Microsoft's director of government affairs, testified against that version of the bill, saying it "would effectively ban facial recognition technology [which] has many beneficial uses." The House bill stalled, and with lawmakers unable to reconcile their differing visions for the legislation, Washington's attempt to pass a new privacy law collapsed.

In a statement, a Microsoft spokesperson said that the company’s actions in Washington sprang from its belief in “strong regulation of facial recognition technology to ensure it is used responsibly.”

Shankar Narayan, director of the technology and liberty project of the ACLU’s Washington chapter, says the episode shows how tech companies are trying to steer legislators toward their favored, looser, rules for AI. But, Narayan says, they won’t always succeed. “My hope is that more policymakers will see these companies as entities that need to be regulated and stand up for consumers and communities,” he says. On Tuesday, San Francisco supervisors voted to ban the use of facial recognition by city agencies.

Washington lawmakers—and Microsoft—hope to try again for new privacy and facial recognition legislation next year. By then, AI may also be a subject of debate in Washington, DC.

Last month, Senators Cory Booker (D-New Jersey) and Ron Wyden (D-Oregon) and Representative Yvette Clarke (D-New York) introduced companion bills dubbed the Algorithmic Accountability Act. The legislation would require companies to assess whether their AI systems and training data have built-in biases, or could harm consumers through discrimination.

Mutale Nkonde, a fellow at the Data & Society Research Institute, participated in discussions during the bill's drafting. She is hopeful it will trigger discussion in DC about AI's societal impacts, which she says is long overdue.

The tech industry will make itself a part of any such conversations. Nkonde says that when talking with lawmakers about topics such as racial disparities in face analysis algorithms, some have seemed surprised, and said they have been briefed by tech companies on how AI technology benefits society.

Google is one company that has briefed federal lawmakers about AI. Its parent Alphabet spent $22 million, more than any other company, on lobbying last year. In January, Google issued a white paper arguing that although the technology comes with hazards, existing rules and self-regulation will be sufficient “in the vast majority of instances.”

Metzinger, the German philosophy professor, believes the EU can still break free of industry influence over its AI policy. The expert group that produced the guidelines is now devising recommendations for how the European Commission should invest the billions of euros it plans to spend in coming years to strengthen Europe's competitiveness.

Metzinger wants some of it to fund a new center to study the effects and ethics of AI, and similar work throughout Europe. That would create a new class of experts who could keep evolving the EU’s AI ethics guidelines in a less industry-centric direction, he says.

ARTIFICIAL INTELLIGENCE CAN NOW COPY YOUR VOICE: WHAT DOES THAT MEAN FOR HUMANS?

It takes just 3.7 seconds of audio to clone a voice. This impressive—and a bit alarming—feat was announced by Chinese tech giant Baidu. A year ago, the company's voice cloning tool, called Deep Voice, required 30 minutes of audio to do the same. This illustrates just how fast the technology for creating artificial voices is advancing. In a short time, the capabilities of AI voice generation have expanded and become more realistic, which makes the technology easier to misuse.

Capabilities of AI Voice Generation

As with all artificial intelligence algorithms, the more training data voice cloning tools such as Deep Voice receive, the more realistic the results. When you listen to several cloning examples, it's easier to appreciate the breadth of what the technology can do, including switching the gender of a voice and altering accents and styles of speech.

Google unveiled Tacotron 2, a text-to-speech system that leverages the company's deep neural network and speech generation method WaveNet. WaveNet analyzes a visual representation of audio called a spectrogram to generate audio, and it is used to generate the voice for Google Assistant. This iteration of the technology is so good that it's nearly impossible to tell which voice is AI-generated and which is human. The algorithm has learned how to pronounce challenging words and names that would once have been a tell-tale sign of a machine, as well as how to better enunciate words.
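
As a schematic illustration of that two-stage design, the sketch below mimics the shape of a Tacotron-style acoustic model feeding a WaveNet-style vocoder. Both functions are placeholder stand-ins that emit random arrays, not Google's models or any real API; they only show the intermediate mel-spectrogram representation the real systems pass between stages.

```python
# Schematic two-stage neural text-to-speech pipeline (Tacotron 2 + WaveNet
# style). The models are fake stand-ins; only the data flow is realistic.
import numpy as np

N_MELS = 80          # mel-frequency channels per spectrogram frame
FRAMES_PER_CHAR = 6  # rough frames-per-character rate, for illustration
HOP = 256            # waveform samples generated per spectrogram frame
SAMPLE_RATE = 22050  # audio samples per second

def acoustic_model(text: str) -> np.ndarray:
    """Stage 1 (Tacotron-style): map characters to a mel spectrogram.
    A real model is a sequence-to-sequence network with attention."""
    n_frames = FRAMES_PER_CHAR * len(text)
    return np.random.rand(n_frames, N_MELS)  # placeholder spectrogram

def vocoder(mel: np.ndarray) -> np.ndarray:
    """Stage 2 (WaveNet-style): condition on the spectrogram and emit raw
    waveform samples, one hop of audio per frame."""
    return np.random.uniform(-1.0, 1.0, size=mel.shape[0] * HOP)

mel = acoustic_model("Hello from a synthetic voice.")
audio = vocoder(mel)
print(f"{mel.shape[0]} spectrogram frames -> {audio.size / SAMPLE_RATE:.2f} s of audio")
```

Cloning a specific voice amounts to fitting or fine-tuning these stages on recordings of the target speaker, which is why more training audio yields more realistic output.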

These advances in Google's voice generation technology have allowed Google Assistant to offer celebrity cameos. John Legend's voice is now an option on any device in the United States with Google Assistant, such as Google Home, Google Home Hub, and smartphones. The crooner's voice will only respond to certain questions, such as "What's the weather" and "How far away is the moon," and is available to sing "Happy Birthday" on command. Google anticipates that we'll soon have more celebrity cameos to choose from.

In another example of just how precise the technology has become, an AI model of Jordan Peterson (the author of 12 Rules for Life) sounds just like him rapping Eminem's "Lose Yourself." The creator of the AI algorithm used just six hours of Peterson talking (taken from readily available recordings of him online) to train the machine learning model that created the audio. The system takes short audio clips and learns how to synthesize speech in the style of the speaker. Take a listen, and you'll hear just how successful it was.

This advanced technology opens the door for companies such as Lyrebird to provide new services and products. Lyrebird uses artificial intelligence to create voices for chatbots, audiobooks, video games, text readers and more. The company acknowledges on its website that "with great innovation comes great responsibility," underscoring how important it is for pioneers of this technology to take great care to avoid misuse.

How This Technology Could Be Misused

Like other new technologies, artificial voices can have many benefits but can also be used to mislead. As AI algorithms get better and it becomes more difficult to discern what's real and what's artificial, there will be more opportunities to use them to fabricate the truth.

According to research, our brains don't register significant differences between real and artificial voices. In fact, it's harder for our brains to detect fake voices than fake images.

Now that these AI systems require only a short amount of audio to train on in order to create a viable artificial voice that mimics an individual's speaking style and tone, the opportunity for abuse increases. So far, researchers have not been able to identify a neural signature for how the brain distinguishes real voices from fake ones. Consider how artificial voices might be used in an interview, news segment or press conference to make listeners believe they are listening to an authority figure in the government or the CEO of a company.

Raising awareness that this technology exists, and how sophisticated it is, will be the first step in safeguarding listeners from falling for artificial voices used to mislead us. The real fear is that people can be fooled into acting on something fake because it sounds like it comes from somebody real. Some people are attempting to find a technical solution to safeguard us, but no technical solution will be 100% foolproof. Our ability to critically assess a situation, evaluate the source of information and verify its validity will become increasingly important.