World Leadership Alliance – Club de Madrid took part in the United Nations’ conversation on preventing violent conflict

In March 2018, the World Leadership Alliance – Club de Madrid (WLA-CdM) took part in a conversation, held in Washington, D.C., between the United Nations and the World Bank on approaches to preventing violent conflict.

WLA-CdM, one of BGF and MDI’s closest partners in developing the AIWS initiative, and especially the AIWS 7-layer model, launched the United Nations’ and the World Bank’s joint study “Pathways for Peace: Inclusive Approaches to Preventing Violent Conflict”.

Attendees addressed the nature of contemporary conflict, the needs of states, the policies required to cope with crises, and possible outcomes. In general, participants recognized a cultural shift in the politics of prevention, one relevant to both its political and technical aspects.

The United Nations and the World Bank highlighted their partnership and their shared responsibility for keeping world peace.

Michael Dukakis Leadership Fellow 2018-2019 Announcement

Every year, we carefully select several outstanding scholars whose achievements in their fields make them promising leaders, and who have displayed an early commitment to promoting global peace and security. This year, our fellows come from a diversity of disciplines, including media, computer science/artificial intelligence, and psychology, but they share a common interest in promoting human well-being.

We are delighted to announce the four Michael Dukakis Leadership Fellows for 2018-2019:

Walter Langelaar

Programme Director of Media Design at Victoria University of Wellington, New Zealand

Co-founder of SAM, an AI Politician

Walter Langelaar is an artist and researcher whose work in media arts and design questions our digitally networked cultures and infrastructure through sculpture, installation, online performance and critical intervention. Walter studied Fine Art at the AKI in Enschede, graduating with a double major in Conceptual Art and New Media Art, before continuing his studies in Media Design (MA) at the Piet Zwart Institute in Rotterdam, under the supervision of Matthew Fuller and Florian Cramer.

In 2017, he co-founded SAM, an AI Politician, which aims to raise awareness of, and pose critical perspectives on, AI cloud infrastructure, blockchains and social media mining, while contextualising these tools in relation to contemporary Internet culture, political science and e-governance. His broader research agenda concerns the plethora of recording devices employed in the post-Snowden spheres of networked interaction design, and how we may subvert these devices from their initial state of surveillance towards new modes of awareness and cultural relevance.

Walter’s work has been shown in numerous venues across the European and international media arts scene, such as transmediale and CTM, Videotage, Medialab Prado, DEAF and v2, FILE Festival, Ars Electronica, iMAL, Montevideo/NiMK, Piksel.no, and in more traditional art institutes such as the MuseumsQuartier Vienna, the Hammer Museum, documenta, Jeu de Paume, Casino Luxembourg, Museo del Traje and Kunsthallen Nikolaj. Walter has received several awards for his personal and collaborative projects, including the Internet Society (ISOC) award for ‘Internet and the Arts’, the René Coelho Award, a Prix Ars Electronica and the Virtueel Platform ‘Best Practice Award’.

Angela Schoellig

Professor at the University of Toronto

Angela Schoellig leads the Dynamic Systems Lab at the University of Toronto, where she has developed algorithms that allow robots to learn together. Her algorithms are helping self-driving and self-flying vehicles move around more safely.

As a PhD student at the Swiss Federal Institute of Technology in Zurich, Schoellig worked with others to develop the Flying Machine Arena, a 10-cubic-meter space for training drones to fly together in an enclosed area.

In 2010, she created a performance involving a fleet of UAVs that flew synchronously to music. To do so, she developed algorithms that allowed the drones to adapt their movements to match the music’s tempo, and to coordinate with one another to avoid collisions, without manual control by researchers. In 2017, she was named one of MIT Technology Review’s 35 Innovators Under 35.

Sarah Cotterill

Secretary of AIWS Standards and Practice Committee, Michael Dukakis Institute for Leadership and Innovation

Harvard Fellow

Sarah Cotterill, PhD, is a College Fellow in Psychology at Harvard University. She conducts research on political misinformation, as well as on decision-making in the context of charitable giving, using experimental and machine learning techniques. She has an equal interest in statistics and methodology, including deep learning, regression (and variants of the generalized linear model), and survey/experimental design. Her written work, featuring insights from her research and from the broader field of psychology, has been published in The New York Times, The Boston Globe, and Psychology Today.

She is currently the Secretary of MDI’s AIWS Standards and Practice Committee, which was established to ensure the ethical development of AI worldwide.

Kevin Roose

Technology columnist, The New York Times

Kevin Roose is a technology columnist for The New York Times and a writer-at-large for The New York Times Magazine. His column, “The Shift,” examines the intersection of tech, business, and culture.

He has written numerous articles on technology and business, and is the New York Times bestselling author of two books, “Young Money” (2014) and “The Unlikely Disciple” (2009).

He has appeared on “The Daily Show with Jon Stewart,” CNN, NPR, MSNBC, CNBC, and many other TV and radio outlets, is a regular guest on “The Daily,” and has appeared on Longform and other podcasts. He is part of a team that won the 2018 Gerald Loeb Award for breaking news.

In 2015, he was named to Forbes’s “30 Under 30” list and Time’s list of the 140 best Twitter feeds, and his work has been featured in The Best American Business Writing, GQ, Esquire, Vanity Fair, and other publications.


Ethics are extremely important as technologies thrive

Mustafa Suleyman, co-founder of DeepMind, calls for awareness of ethical standards in technology development.

Rapid technological advances continue to contribute to society’s well-being, but we must also be mindful of public opinion. The DeepMind co-founder wrote in the RSA Journal, the journal of the Royal Society for the encouragement of Arts, Manufactures and Commerce, that public concern over technological change should not simply be dismissed, and warned that there is likely a gap between developers’ and users’ attitudes towards technology.

In the article, he suggests measures to address the societal challenges that might arise from technological advances. One challenge is the asymmetry of information between developers and the general public about how technology works, which can lead to conflict between governments, activists and technologists rather than cooperation among these groups. Hence, new technical solutions are needed that enable a wide range of stakeholders to understand how their data is used and how algorithms work.

The Michael Dukakis Institute for Leadership and Innovation (MDI) has developed the AIWS 7-Layer Model to allow future generations to address precisely the types of ethical concerns raised by DeepMind.

List of concrete problems in AI safety

As machine learning and artificial intelligence (AI) drive fast-paced innovation, the paper “Concrete Problems in AI Safety,” originally released two years ago, presents problems related to AI use and misuse.

Two years ago, researchers from Google, Stanford, UC Berkeley, and OpenAI published “Concrete Problems in AI Safety,” which remains one of the most important documents on AI safety. The problems it covers are avoiding negative side effects, avoiding reward hacking, scalable oversight, safe exploration, and robustness to distributional change. The paper uses the example of a cleaning robot to illustrate possible approaches to each of these problems.

1. Avoiding negative side effects

AI development can lead to possible negative side effects: while pursuing its objective, an agent may disturb its environment in ways its designers never intended. Two solutions are proposed to address this problem. First, the algorithm might penalize actions by the robot that have negative impacts on the environment while it completes its task; a toy version of such a penalty is sketched below. Second, developers might train the agent to recognize possible side effects in order to avoid them.
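
To make the first idea concrete, here is a minimal Python sketch of an impact penalty, in the spirit of the paper’s cleaning-robot example. The function names, the state encoding and the penalty weight are our own illustrative choices, not taken from the paper.

def environment_change(state_before, state_after):
    # Count how many parts of the world changed as a result of acting.
    return sum(1 for b, a in zip(state_before, state_after) if b != a)

def shaped_reward(task_reward, state_before, state_after, beta=0.5):
    # Task reward minus a penalty for disturbing the environment.
    return task_reward - beta * environment_change(state_before, state_after)

# A cleaning robot earns +10 for cleaning the floor. A route that also
# knocks over a vase changes one extra part of the world and scores lower.
careful = shaped_reward(10, ("vase_up", "dirty"), ("vase_up", "clean"))
careless = shaped_reward(10, ("vase_up", "dirty"), ("vase_down", "clean"))
print(careful, careless)  # 9.5 vs 9.0: the careful route is preferred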

2. Reward hacking

Similar to the negative side effects problem, this issue arises from objective misspecification: the agent finds a way to maximize the reward it is given without achieving the goal its designers intended. Developers should therefore ensure that algorithms do not exploit loopholes in the reward system, but rather complete the given objective. The toy example below shows how easily a proxy reward can be gamed.
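
A small Python illustration in the spirit of the paper’s cleaning-robot example: if the designer rewards the robot for seeing no mess (a proxy for the real goal), then covering its own camera scores as well as actually cleaning. The names and numbers below are purely illustrative.

def proxy_reward(visible_mess):
    # What the designer actually rewards: no mess visible to the robot.
    return 1.0 if visible_mess == 0 else 0.0

def true_objective(actual_mess):
    # What the designer really wants: a clean room.
    return 1.0 if actual_mess == 0 else 0.0

outcomes = {
    "clean_the_room":   {"actual_mess": 0, "visible_mess": 0},
    "cover_the_camera": {"actual_mess": 5, "visible_mess": 0},  # the hack
}
for policy, o in outcomes.items():
    print(policy,
          "proxy:", proxy_reward(o["visible_mess"]),
          "true:", true_objective(o["actual_mess"]))
# Both policies earn full proxy reward, but only one achieves the goal:
# the objective, not the agent, is misspecified.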

3. Scalable Oversight

This issue reflects the difficulty of supervising the training process at scale: the learning agent is not given sufficient feedback on the safety implications of its actions, because detailed (often human) feedback is expensive. One direction for tackling this problem is simply to provide a more informative view of the environment, with feedback for every action rather than only on performance for the entire task; another, sketched below, is to query the expensive oversight signal only occasionally.
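
The following is a rough Python sketch of that second direction, in the spirit of the semi-supervised reward learning the paper discusses: a cheap learned proxy stands in for the costly oversight signal on most steps. The query rate, the update rule and the stand-in “human” signal are all our simplifications.

import random

def human_feedback(step):
    # Stand-in for an expensive oversight signal, e.g. a human rating.
    return 1.0 if step % 3 == 0 else 0.0

proxy_estimate = 0.5  # cheap learned model of the oversight signal
QUERY_RATE = 0.1      # consult the expensive signal on ~10% of steps

random.seed(0)
for step in range(200):
    if random.random() < QUERY_RATE:
        true_r = human_feedback(step)                      # costly query
        proxy_estimate += 0.2 * (true_r - proxy_estimate)  # refine proxy
        reward = true_r
    else:
        reward = proxy_estimate  # cheap estimate on unsupervised steps
    # ...a learning agent would update its policy with `reward` here...

print("learned proxy:", round(proxy_estimate, 2))
# The proxy drifts toward the oversight signal's average (about 0.33).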

4. Safe exploration

The AI agent explores its environment as it learns, but during that exploration it might harm itself or the environment. One way to deal with this problem is to limit the extent of an agent’s exploration, or to confine risky exploration to a simulated environment.
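
As a minimal sketch of such a limit, assume a drone confined to a test cage: random exploratory moves are clipped into a range known to be safe. The bounds and the altitude metaphor are our illustration, not the paper’s.

import random

SAFE_MIN, SAFE_MAX = 0.0, 5.0  # e.g. altitude limits inside a test cage

def explore(altitude):
    # Propose a random move, then clip it into the safe envelope.
    proposal = altitude + random.uniform(-3.0, 3.0)
    return max(SAFE_MIN, min(SAFE_MAX, proposal))

random.seed(1)
altitude = 2.5
for _ in range(5):
    altitude = explore(altitude)
    print(round(altitude, 2))  # exploration never leaves [0.0, 5.0]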

5. Robustness to Distributional Change

Over the course of AI development, the agent might encounter a never-before-seen situation, which might lead it to take actions harmful to itself or the environment. Researchers might explore how to design systems that can safely transfer knowledge acquired in one environment to another, and that recognize when they are operating outside the distribution they were trained on.
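
As a rough sketch of that last idea, assuming a single scalar observation: the agent remembers the range of inputs it was trained on and defers to a conservative fallback when a new input falls outside it. The threshold test below is deliberately crude and purely illustrative.

training_inputs = [0.8, 1.1, 0.9, 1.3, 1.0]  # what the agent trained on
low, high = min(training_inputs), max(training_inputs)

def act(observation):
    if not (low <= observation <= high):
        # Unfamiliar input: fall back rather than act confidently.
        return "stop and request supervision"
    return "proceed with learned policy"

print(act(1.05))  # in distribution -> proceed with learned policy
print(act(4.20))  # out of distribution -> stop and request supervision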

How can we be empowered by AI?

Max Tegmark, professor at MIT and a member of MDI’s AIWS Standards and Practice Committee, who spoke at the BGF-G7 Summit of the Boston Global Forum in April 2018, was invited to speak about AI at TED 2018.

As AI research advances at a fast pace, many researchers expect AI to eventually outsmart humans at all tasks. Max Tegmark analyzed the opportunities and threats, and proposed measures that should be taken to ensure AI is used peacefully to advance societal well-being.

He argued that AI researchers and thinkers should carefully discuss how to keep AI beneficial to humanity, and should consider negative outcomes such as a possible AI arms race and lethal autonomous weapons. In addition, researchers should consider how to protect users from threats like hackers, malware, and software malfunctions.

In conclusion, if we steer the development of AI carefully and keep it under control, AI holds great promise to enhance societal well-being.