Ethics Review Board – What is it and Why do we need it?

Are we letting AI make Life-or-Death Judgments?

At the Cybernetic AI Self-Driving Car Institute, AI software for self-driving cars is being developed. One crucial aspect of the AI of self-driving cars is the need for the AI to make “judgments” in specific driving situations, some of which might lead to life-or-death outcomes.

While driving, we can encounter unexpected situations shaped by many factors: traffic, signs, pedestrians, and terrain. Many actions can be taken, each leading to a different outcome, and we have only seconds to decide with people’s lives in the balance. In an analysis by Dr. Lance Eliot, CEO of Techbrium Inc. and the AI Trends Insider, consider driving on the highway with cars behind us and in the other lanes when the shadowy figure of a pedestrian suddenly appears. In this case, no matter what action is taken, casualties and damage are inevitable, since we will either hit the pedestrian or other cars. An AI driver could face the same situation a human driver does. Some argue that self-driving cars will ensure nothing like this ever happens, but given the limitations of sensors and the immaturity of the technology, no such guarantee is possible.

Should we leave the decision making to AI?

If we do, we need to design tests for the ethical decision-making, or judgment, of AI to avoid potential harm. Since automated software lacks human reasoning, it may be unable to act appropriately when such a situation arises. We also face another ethical challenge when it comes to prioritizing factors: how can we tell which course of action is best when a critical moment occurs?

To avoid similar cases in the future, we should not leave this to technology developers alone; an Ethics Review Board should be used. Ethics Review Boards might be established at the federal and/or state level. They would be tasked with guiding what the AI should do when it encounters such moments (providing policies and procedures, rather than somehow “coding” such aspects). They might also be involved in assessing incidents in which AI self-driving cars go outside the scope of their learned knowledge.

As technology develops at an unprecedented speed, few people pay attention to its ethics. The automation of cars will likewise require careful consideration and study to work out monitoring regulations, as well as an ethical framework for AI, which is something the Michael Dukakis Institute’s experts, in partnership with AI Trends, are working on.

A smartphone app can evaluate your symptoms of depression

A recently developed app can analyze your phone-use habits to screen for mental illness. Mindstrong Health, a startup in Palo Alto, California, is seeking the link between depression and the device in our pocket.

The smartphone app gauges people’s cognition and emotional health by looking at how they use their smartphones. Once installed, it monitors things like the way the person types, taps, and scrolls while using the device. The data is encrypted and analyzed remotely using machine learning, and the results are shared with the patient and the patient’s doctor.

According to Mindstrong’s research, how you interact with your phone reveals important clues to your mental health, such as a relapse of depression. With the details collected from the app, care providers are alerted when something may be amiss and can then check in with the patient by sending a message through the app, and vice versa. A study of a hundred and fifty subjects found that these behaviors can tell a lot about your health.

“There were signals in there that were measuring, correlating—predicting, in fact, not just correlating with—the neurocognitive function measures that the neuropsychologist had taken,” said Paul Dagum, Mindstrong’s founder and CEO. For example, memory problems, which are common hallmarks of brain disorders, can be detected by examining typing speed and error frequency (such as how frequently you delete characters), as well as by how fast you scroll. Even when you’re just using the smartphone’s keyboard, you’re switching your attention from one task to another all the time, for example when you insert punctuation into a sentence.
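Mindstrong has not published its algorithms, but the kinds of signals Dagum describes, typing speed and deletion frequency, could be computed roughly as follows. This is a minimal sketch under stated assumptions: the event format, feature names, and sample session are illustrative, not Mindstrong’s actual pipeline.

```python
# Illustrative sketch only: the (timestamp, key) event format and the
# feature definitions are assumptions, not Mindstrong's actual system.

def keystroke_features(events):
    """Compute simple typing features from (timestamp_sec, key) events."""
    if len(events) < 2:
        return {"keys_per_sec": 0.0, "delete_rate": 0.0}
    duration = events[-1][0] - events[0][0]
    n_keys = len(events)
    # Deletions serve as a rough proxy for typing error frequency.
    n_deletes = sum(1 for _, key in events if key == "BACKSPACE")
    return {
        "keys_per_sec": n_keys / duration if duration > 0 else 0.0,
        "delete_rate": n_deletes / n_keys,
    }

# Hypothetical session: the user types "cat" and corrects one typo.
session = [(0.0, "c"), (0.4, "a"), (0.9, "x"),
           (1.5, "BACKSPACE"), (2.0, "t")]
feats = keystroke_features(session)
```

In a real system, features like these would be aggregated over days of use and fed into a machine-learning model trained against clinical assessments, as the article describes.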

The app could be a great innovation for people dealing with depression, schizophrenia, or substance abuse, with more benefits on the way. The company has data to back up its science and technology. It is continuing to run numerous studies, and this past March it began working with patients and doctors.

However, the fact that the app collects users’ health records could lead to violations, theft, or attacks if the data is not carefully supervised and protected. According to the first layer of the AIWS 7-Layer Model developed by the Michael Dukakis Institute, a set of ethical standards for technology, both the application and the design of the technology need to be transparent.

Can fake news be eradicated?

As Facebook and Google have become an essential part of our daily lives, tools we can no longer do without, researchers around the world have been working on ways to identify fake news. Facebook employs 20,000 people to monitor the content posted by its users, but there can never be enough of them to catch every phony story.

Scientists at many institutions, including MIT, the University of Michigan and Clemson University, are trying to train computers to spot such fakes using AI. Marten Risius, an assistant professor of management at Clemson, had the idea not long after the 2016 election. He and a colleague in Germany, Christian Janze, went to work on an automatic fake news detector.

Risius and Janze obtained a collection of more than 2,000 articles about the election posted on Facebook and compiled by the news site Buzzfeed.

Since fake news stories are often loosely based on the truth, they had their software look for common characteristics of such stories. Risius and Janze found that fake stories could be spotted by a large number of capitalized words and exclamation points. Clues can also be found in the way Facebook readers respond to the news: fake news tends to get more “love” or “laughter” reactions but fewer shares compared with true stories.

After its training, the software was run on another batch from the Buzzfeed compilation that it had not seen before, and Risius said it accurately identified the fakes 88 percent of the time.
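The stylistic cues the article mentions, heavy capitalization and exclamation points, can be sketched as a toy detector. This is not Risius and Janze’s actual model (which was trained on labeled data); the thresholds, feature definitions, and example headlines below are all illustrative assumptions.

```python
# Toy sketch of the stylistic-cue approach described in the article.
# Thresholds and examples are illustrative, not the researchers' model.

def style_features(text):
    """Count cues fake stories tend to over-use: all-caps words, exclamations."""
    words = text.split()
    caps = sum(1 for w in words if len(w) > 1 and w.isupper())
    return {
        "caps_ratio": caps / len(words) if words else 0.0,
        "exclamations": text.count("!"),
    }

def looks_fake(text, caps_threshold=0.2, excl_threshold=2):
    """Flag text when either stylistic cue exceeds its (assumed) threshold."""
    f = style_features(text)
    return f["caps_ratio"] > caps_threshold or f["exclamations"] > excl_threshold

real = "Candidate outlines tax policy in debate"
fake = "SHOCKING!!! You WON'T BELIEVE what happened next!!!"
```

In practice, a classifier would learn weights for many such features from a labeled corpus like the Buzzfeed set, then be evaluated on held-out articles, which is how the 88 percent figure above was measured.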

Another effort to deal with fake news comes from Ramy Baly, a postdoctoral researcher at MIT who is developing software to identify trustworthy news sources on the Internet. Baly has spotted a host of subtle indicators suggesting that a news site might not be on the level.

In December 2018, the Boston Global Forum and the Michael Dukakis Institute will also organize the Fourth Annual Global Cybersecurity Day, under the title ‘AI solve Disinformation’, to explore the current state of cyber issues and the threat posed by disinformation and fake news, as well as effective defense mechanisms against these activities.

Education for Shared Societies Policy Dialogue, Lisbon, Portugal

The Education for Shared Societies (E4SS) Policy Dialogue took place on October 16-17, 2018 in Lisbon, Portugal, through the cooperation of the World Leadership Alliance – Club de Madrid (WLA-CdM) and the Calouste Gulbenkian Foundation. Mr. Nguyen Anh Tuan, Director of the Michael Dukakis Institute, attended this event.

Mr. Nguyen Anh Tuan with Mr. José Manuel Barroso (right), former President of the European Commission, former Prime Minister of Portugal, and Adviser at BGF-G7 Summit Initiative 2016

The WLA-CdM began its Shared Societies Project (SSP) a decade ago with the hope of building peace and democracy across political, social, economic and environmental dimensions. The project has been working to foster a sense of belonging and shared responsibility in everyone in a shared society. This year, the organization focuses on educational engagement for all.

The three main topics of the E4SS address some of today’s top global issues: Refugees, Migrants and Internally Displaced People (IDPs); Preventing Violent Extremism (PVE); and Digital Resilience. With many leaders and duty bearers present, the policy discussion could translate into global action in the near future.

Governments’ preparation for cyber threats

Cybersecurity experts shared their experiences in combating cyber threats.

Estonia was the world’s first victim of a large-scale cyberattack, and since then many measures have been taken. Mr. Toomas Hendrik Ilves, President of Estonia from 2006 to 2016, previously worked as a diplomat and journalist and was awarded the World Leader in Cybersecurity Award by the Boston Global Forum and the Michael Dukakis Institute in 2017. In an interview with GovInsider, he and other Estonian experts shared their experience in dealing with cyberwarfare. As a first victim of fake news and information blackouts, he listed three key points:

Awareness of cybersecurity

Cybersecurity usually requires complex knowledge of how a system works, but a complicated process becomes simple once people understand its mechanism. Data stored in an outdated system can easily be hacked, as the case of Singapore shows. Ilves noted that methods of authentication such as passwords are no longer safe: “if we want minimal security, you need to go over to two-factor authentication.”

Collaboration

The second key point is collaboration. Nations need deeper discussion to figure out solutions to this global problem. Ilves suggested, “we need far more collaboration and cooperation in this field than we have seen up till now.”

To have a plan

Kevin Mandia, chief executive officer of FireEye, shared four main methods to tackle cyber threats.

The first is deterrence: a system to identify the time and place of an attack, as well as the attacker. “If you know who compromised you, that’s the only way to enact policy; it’s the only way to hold nations accountable,” said Mandia.

Secondly, collating all the information spread across the nation, to provide a shield during times of geopolitical tension for industries and systems that cannot defend against a cyberattack on their own.

Thirdly, establishing rules of engagement on the internet is a crucial step. “We have to start holding people accountable, and we have to make it so that nations that abide by the rules of engagement are all going to live with and have a good internet experience,” he added.

Finally, governments should prioritize protecting their own systems first, critical infrastructure next, and then the nation.

Nazli Choucri

Member of Boston Global Forum’s Board of Thinkers

Cyber-politics Director, Michael Dukakis Institute

Professor of Political Science, MIT

Nazli Choucri is Cyber-politics Director of The Michael Dukakis Institute for Leadership and Innovation and Professor of Political Science. Her work is in the area of international relations, most notably on sources and consequences of international conflict and violence. Professor Choucri is the architect and Director of the Global System for Sustainable Development (GSSD), a multi-lingual web-based knowledge networking system focusing on the multi-dimensionality of sustainability. As Principal Investigator of an MIT-Harvard multi-year project on Explorations in Cyber International Relations, she directed a multi-disciplinary and multi-method research initiative. She is Editor of the MIT Press Series on Global Environmental Accord and, formerly, General Editor of the International Political Science Review. She also previously served as the Associate Director of MIT’s Technology and Development Program.

The author of eleven books and over 120 articles, Dr. Choucri is a member of the European Academy of Sciences. She has been involved in research or advisory work for national and international agencies, and for a number of countries, notably Algeria, Canada, Colombia, Egypt, France, Germany, Greece, Honduras, Japan, Kuwait, Mexico, Pakistan, Qatar, Sudan, Switzerland, Syria, Tunisia, Turkey, United Arab Emirates and Yemen. She served two terms as President of the Scientific Advisory Committee of UNESCO’s Management of Social Transformation (MOST) Program.

Sarah Cotterill

Secretary of AIWS Standards and Practice Committee, Michael Dukakis Institute for Leadership and Innovation

Harvard Fellow

Sarah Cotterill, PhD, is a College Fellow in Psychology at Harvard University. She conducts research on political misinformation, as well as decision-making in the context of charitable giving, using experimental and machine learning techniques. She has an equal interest in statistics and methodology, including deep learning, regression (and variants of the generalized linear model), and survey/experimental design. Her written work, featuring insights from her research and from the broader field of psychology, has been published in The New York Times, The Boston Globe, and Psychology Today.

She is currently the Secretary of MDI’s AIWS Standards and Practice Committee, which was established to ensure the ethical development of AI worldwide.

Thomas Creely

Member of AIWS Standards and Practice Committee, Michael Dukakis Institute

Associate Professor of Ethics, U.S. Naval War College

Director of Ethics & Emerging Military Technology Graduate Program

Dr. Creely, Associate Professor of Ethics, is Director of the Ethics & Emerging Military Technology Graduate Program. He serves on a NATO Science and Technology Organization technical team, and he is the lead for leadership and ethics in Brown University’s Executive Master of Cybersecurity program. He also serves on The Conference Board’s Global Business Conduct Council, chairs Business Ethics for the Association for Practical & Professional Ethics, and sits on the board of the Robert S. Hartman Institute.


Research Contributions

Revue Internationale De La Compliance Et L’Éthique Des Affaires (International Review of Compliance and Business Ethics). Focus: “Ethics and Emerging Technology: Understanding the Emerging Threat.”
No. 13-14, 30 March 2017

Ethics and Technology: A Component to the Third Offset Strategy.
Fall 2016
Northern Plains Ethics Journal

The Impact of Ambient Intelligence Technologies on Individuals, Society and Warfare.
Fall 2016, Vol. 19
The Bridge: The Magazine of the Naval War College Foundation

Contributor to War and Religion: An Encyclopedia of Faith and Conflict (2016)
Santa Barbara: ABC-CLIO Publishers

Allan Cytryn

Member of AIWS Standards and Practice Committee, Michael Dukakis Institute

Representative of Boston Global Forum in New York

Allan M. Cytryn is with Risk Masters International, LLC, a consulting firm that advises clients on Risk Mitigation and Management, including business continuity planning, disaster recovery, and recovery from cyber attacks. He has been a senior Information Technology executive for more than 30 years. Prior to Risk Masters, Allan spent 15 years at Deloitte where he was a Director. His roles and responsibilities there included Regional CIO, National Director of Applications, and National Director of Technology for Audit and Enterprise Risk Services. Before joining Deloitte, Allan was the CIO of Simpson Thacher & Bartlett, a Vice President of Corporate Finance with Goldman Sachs, and a Vice-President of Information Technology with Bankers Trust. In all of these roles he led organizations through rapid operational and technological transformations and helped them adopt new and innovative technologies to support their core strategic objectives.

Allan additionally played a critical leadership role for Deloitte in managing the IT recovery from the September 11, 2001 attack in New York, and for Simpson Thacher & Bartlett in leading their recovery from the 1993 NatWest Tower bombing in London.

Allan earned a BS in Electrical Engineering and Computer Science and an MS in Operations Research & Applied Mathematics from Columbia Engineering, as well as a M.Arch from Columbia’s Graduate School of Architecture, Planning and Preservation. He is active in alumni affairs, serving as the Chair of the Alumni Board of Visitors of the Columbia University School of Engineering and Applied Sciences and as the Treasurer of The Society of Columbia Graduates.

His son, Steven, graduated from Columbia College in 2006.