Club de Madrid’s Open Letter to the G20: ‘We, not They’

On November 30, Maria Elena Aguero, Secretary-General of the World Leadership Alliance-Club de Madrid (WLA-CdM), together with her fellow members of the Mercy Corps European Leadership Council, wrote an open letter to G20 leaders calling for equal rights for refugees and migrants and for a reduction in polarization.

The signatories, who include former cabinet ministers from both major UK parties, an international footballer, academics, diplomats, journalists and entrepreneurs, share a common interest in protecting their descendants from the discrimination carried by the commonly used word 'they' in the age of globalization.

“Although we are different, there is something we share: we reject the word ‘they’,” they wrote.

In the hope of removing barriers and granting equal opportunities to everyone, the open letter calls on G20 leaders to:

  • Use respectful, tolerant and compassionate language to refer to refugees and migrants.
  • Take a stand against global tariffs and competition that embed global inequalities and inequities of opportunity.
  • Commit to ensuring that foreign policy is conducted with the well-being of civilians at the forefront.
  • Consider building into future G20 priorities a roadmap for how to reduce polarization and bring people together.

The open letter is also published on WLA-CdM’s website.

The upcoming Global Cybersecurity Day 2018

On December 12, 2018, the Global Cybersecurity Day 2018 will take place at Loeb House, Harvard University under the moderation of Governor Michael Dukakis, Chairman of Boston Global Forum (BGF) and Michael Dukakis Institute (MDI).

The internet boom of the last decade has transformed both our daily lives and the way businesses operate. This transformation has proved greatly beneficial, but it also poses threats to our privacy and safety.

The Global Cybersecurity Day was created to inspire the shared responsibility of the world’s citizens to protect the Internet’s safety and transparency. During the discussion, experts will explore the current state of cybersecurity and the threat posed by disinformation, anonymous sources and fake news as well as the role AI can play as an effective defense mechanism against these threats to truth and the principles of democracy.

The event will run from 8:30 am to 12:00 pm.

Governor Michael Dukakis will announce the recipient of World Leader for Peace and Cybersecurity Award 2018.

Japan's Minister for Foreign Affairs, Taro Kono, will give a speech on cybersecurity and disinformation.

The keynote speaker and AI World Society Distinguished Lecturer is Liam Byrne, Member of Parliament and Shadow Digital Minister of the United Kingdom. The agenda of Global Cybersecurity Day follows.

The Michael Dukakis Institute and the Boston Global Forum Announce Strategic Alliance with AI World Government

AI World Government will focus on AI Ethics and AI-Government as the public sector deployment of AI.

The Michael Dukakis Institute for Leadership and Innovation and the Boston Global Forum have announced that they will be a Strategic Alliance Host of the AI World Government Conference & Expo, being produced by Cambridge Innovation Institute. The event will be held on June 24-26, 2019 at the Ronald Reagan International Trade Center in Washington, DC.

According to conference founder and chair, Eliot Weinman, “We are pleased to continue our collaboration with The Michael Dukakis Institute for Leadership and Innovation and their Boston Global Forum (BGF). Former Governor Michael Dukakis has been an innovative global visionary for decades. For the past several years the BGF has conferred with government, research and technology experts to develop a framework for governments around the world to develop the proper AI ethics regulations. BGF, which created the concept of AI-Government on June 25, 2018 at Harvard University, and organized the first AI-government conference at Harvard September 20, 2018, will jointly develop a dedicated conference on this topic at AI World Government on June 24, 2019. AI World and AI Trends will also publish the continuing results of their research for our global audience.”

AI World Government Conference & Expo provides a forum to educate and inform federal, state, and local governments about the many benefits of deploying AI technologies. These agencies face challenges similar to those of the public and private "enterprises" that AI World is known for serving. Government adoption of AI is already in its early stages, with deployments across a wide variety of applications that have shown benefits. As AI continues to evolve at a rapid pace, the next generation of deployments will enable government agencies to:

• Provide better, enhanced services to their constituents
• Increase productivity and reduce costs
• Accelerate the overall digital transformation efforts underway throughout government agencies
• Introduce an AI-Government model

AI World Government is also the backdrop for shaping the discussion around ethics, safety, and regulatory requirements for machine learning, deep learning, computer vision, image and pattern recognition, and emerging intelligent automation solutions.

According to Nguyen Anh Tuan, CEO of the Boston Global Forum (BGF), “We have been collaborating with Mr. Weinman and his team at AI World throughout 2018 to support the event as an International Host. We have several activities already underway at AI World 2018, and will continue our strategic alliance for all AI World 2019 events. We will present an AI-Government model at AI World Government.”

About the Michael Dukakis Institute for Leadership and Innovation (MDI) (dukakis.bostonglobalforum.org)

The Michael Dukakis Institute for Leadership and Innovation (MDI) was co-founded by Governor Michael Dukakis and Nguyen Anh Tuan. MDI generates important AI initiatives, including the AI World Society 7-Layer Model, AI-Government, and the World Leader in AI World Society Award. MDI is the publisher of Shaping Futures magazine. MDI Innovators are prominent leaders and scholars from Harvard, MIT, Brown, Tufts, Google, and The New York Times, with a Board of Leaders composed of Chairman Michael Dukakis, Director Nguyen Anh Tuan, and professors Nazli Choucri, Thomas Patterson, David Silbersweig and John Savage.

At AI World Conference and Expo, December 4, 2018, MDI and BGF announced the AIWS Report about AI Ethics and the Government AIWS Ethics and Practices Index, the first AI ethics index for governments.

About AI World Government (AIWorldGov.com)

AI World Government Conference & Expo provides a forum to educate and inform public sector agencies (federal, state, and local governments) and their supply chains about the many benefits of deploying AI technologies. These agencies face challenges similar to those of the public and private "enterprises" that AI World is known for serving. Government adoption of AI is already in its early stages, with deployments across a wide variety of applications that have shown benefits. The event will be held at the Ronald Reagan International Trade Center in Washington, D.C.

Media Contacts:

Boston Global Forum
Dick Pirozzolo
Communication Manager
[email protected]

Nguyen Anh Tuan
CEO
[email protected]

AI World
Lisa Scimemi
Cambridge Innovation Institute
MARCOM Director
[email protected]

Published by EIN Presswire

AIWS Report about AI Ethics

Boston, December 3, 2018

By Michael Dukakis, Nguyen Anh Tuan,

Thomas Patterson, Thomas Creely, Nazli Choucri, Paul Nemitz, Derek Reveron, Hiroshi Ishiguro, Eliot Weinman, and Kazuo Yano

 

Governments of large countries have significant influence over the development of the world. Therefore, a lack of consistency and consensus in concepts, values and systems, as well as a lack of mutual trust and cooperation between governments, would likely endanger humanity in the Artificial Intelligence era.

AI can be a useful tool for humanity, helping humans develop and overcoming the weaknesses of existing political systems. Even political systems that have shown greater efficiency and better results possess limitations and shortcomings that need correction or examination. What, then, should be done to ensure cooperation between major governments under the uncertainty and complexity of the AI ecosystem? A unified vision of ethical AI is needed so that governments can use AI as an effective tool to build better political systems for the benefit of their citizens.

The concepts and principles used to create standards must serve the ultimate goals: the people, the human race, and the civilization and happiness of humanity. There must be common standards for an AI society around the world, spanning technology, laws, conventions and more, to guarantee interoperability among the different frameworks and approaches of countries. It is openness among countries that creates trust, grounded in unified values, laws and conventions; no country can interpret these in its own way, or invoke its own particularity, to deny respect for common standards. If we do not reach a common accord of respect for norms, laws, and conventions in the AI world, there will be no sustainable peace and security for humanity in the future. That is also the core content of the AI Accord between governments that Governor Michael Dukakis described to the Associated Press on August 9, 2018.

The AIWS Report about AI Ethics therefore proposes the model of the Government AIWS Ethics and Practices Index and examines the strategies, activities and progress of major governments in the field of AI, including the G7 countries (Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States) and other influential countries such as Russia, China, and India.

The Report has three main parts:

Part I: Introduction to the AIWS Report about AI Ethics

Part II: Overview of Government AIWS Ethics and Practices Index

Part III: Announcement of the Government AIWS Ethics Index at AIWS Festival 2019

Appendix: The situation of G7, the European Union and major governments

 

⇒ Read full AIWS Report

AIWS Report about AI Ethics: Government AIWS Ethics and Practices Index

The Michael Dukakis Institute recently conducted a study, the 'AIWS Report about AI Ethics', in an effort to reach a common accord of respect for norms, laws, and conventions in the AI world across the diversity of approaches and frameworks among countries. The report proposes a model called the Government AIWS Ethics and Practices Index. This Index measures the extent to which a government, in its AI activities, respects human values and contributes to the constructive use of AI.

The concept of AI-Government was developed by the Michael Dukakis Institute for Leadership and Innovation (MDI) and first presented at MDI's AIWS Conference in 2018. However, to guarantee interoperability among the different frameworks and approaches of governments, and to deal with normative differences among contexts and geographies, a model is needed to develop, measure, and track the progress of ethical AI policy-making and solution adoption among governments.

The AIWS Report about AI Ethics, then, proposes the model of the Government AIWS Ethics and Practices Index and examines the strategies, activities and progress of major governments in the field of AI.

On December 3, 2018, MDI will publish the report at AI World Conference and Expo, held by AI World in Boston. MDI hopes to present it to global government leaders, first and foremost the heads of G7 and OECD member states and countries with populations over 80 million, the pioneers of the industrial revolution now underway. Government leaders from G7 and OECD members can consider applying the Government AIWS Ethics and Practices Index, and in the near future the Index can contribute to a global consensus on AI development.

MDI also hopes to contribute to addressing these international AI problems through the United Nations. The United Nations plays a key role in regulating the actions of governments and people, with the aim of maintaining international peace and security and promoting cooperation between countries.

The concept and criteria of the AIWS Ethics and Practices Index

The AIWS Ethics and Practices Index of the Michael Dukakis Institute measures the extent to which a government, in its AI activities, respects human values and contributes to the constructive use of AI. The Index has four categories:

  1. Transparency: Substantially promotes and applies openness and transparency in the use and development of AI, including data sets, algorithms, intended impacts, goals, purposes.
  2. Regulation: Has laws and regulations that require government agencies to use AI responsibly; that are aimed at requiring private parties to use AI humanely and that restricts their ability to engage in harmful AI practices; and that prohibit the use of AI by government to disadvantage political opponents.
  3. Promotion: Invests substantially in AI initiatives that promote shared human values; refrains from investing in harmful uses of AI (e.g., autonomous weapons, propaganda creation and dissemination).
  4. Implementation: Seriously executes its AI laws and regulations toward beneficial ends; respects and commits to widely accepted principles and rules of international law.
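As an illustration only (the report does not publish a scoring formula), a four-category assessment could be represented as follows; the 0-10 scale, the equal weighting, and the `IndexScores` name are all assumptions for the sketch:

```python
from dataclasses import dataclass


@dataclass
class IndexScores:
    """Hypothetical per-category scores (0-10) for one government,
    following the four categories of the AIWS Ethics and Practices Index."""
    transparency: float
    regulation: float
    promotion: float
    implementation: float

    def composite(self) -> float:
        """Unweighted mean across the four categories (an assumed rule)."""
        return (self.transparency + self.regulation
                + self.promotion + self.implementation) / 4


# Example: assessing a fictional government
scores = IndexScores(transparency=6, regulation=5, promotion=7, implementation=4)
print(scores.composite())  # 5.5
```

Any real index would also need the per-category rubrics and time-adjusted expectations described in the methodology below.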

Methodology: Governments will be assessed in each category by the standards of the moment. AI is in an early stage, and governments are only beginning to address the issue through, for example, laws and regulations. Later on, as governments have more time to assess the implications of AI, more substantial efforts will be expected—for example, a more fully articulated set of AI-related laws and regulations.

The Index also sets out criteria for evaluating and controlling ethics in AI:

  • Data sets: how they are collected, where, from whom, for what purpose, and by what means; data sets used for AI require accuracy, validation and transparency
  • Algorithms: transparency, fairness, absence of bias
  • Intended impacts: for what, for whom, goals and purposes
  • Transparency in national resources
  • Refrains from investing in harmful uses of AI
  • Responsibility for mistakes
  • Transparency in decision making
  • Avoiding bias
  • Core ethical values
  • Data protection and IP
  • Mitigating social dislocation
  • Cybersecurity

Governor Michael Dukakis delivers opening remarks at AI World Conference and Expo 2018

AI World Conference and Expo, on "Accelerating Innovation in the Enterprise", held December 3-5, 2018 in Boston, focuses on the state of the practice of AI in the enterprise. Governor Michael Dukakis, Chairman of the Boston Global Forum and the Michael Dukakis Institute for Leadership and Innovation and co-founder of the AIWS Initiative, will deliver opening remarks.

The Michael Dukakis Institute is collaborating with AI World to publish reports and programs on AI-Government, including the AIWS Index and AIWS Products. The conference is a valuable opportunity for leaders and executives seeking knowledge of innovative implementations of AI in the enterprise through case studies and peer networking.

On December 1, 2018, AI World Conference and Expo officially opened its first session, the AI World Executive Summit and Workshop, with seminars on many subjects surrounding the application of AI. Governor Dukakis will have the honor of making the opening speech at the inauguration of AI World Conference and Expo 2018.

The Michael Dukakis Institute for Leadership and Innovation is an international sponsor of the event and is collaborating with AI World to publish reports and programs on AI-Government including AIWS Index and AIWS Products.

Please join Governor Michael Dukakis, honorary advisory board member and featured guest speaker, on Tuesday, December 4 at 8:55 am, along with thousands of global business executives at AI World.

For more information about the event, visit aiworld.com.

To register and receive a $200 discount off of a 2-3 day conference registration, click here and enter priority code 186800MDI.

To receive a complimentary pass to attend the expo, click here and enter priority code 186800XMDI.

An AI accelerator chip can transfer information at the speed of light

A startup called Lightelligence recently developed a new AI chip that powers machine learning using light instead of electrons.

Since its emergence, deep learning has proven to be of great use, giving machines the power to carry out sophisticated tasks such as labeling images and translating text. These algorithms require thorough training on huge amounts of data for AI to learn. Today, companies use this method to enhance their businesses.

If information can be transferred at the speed of light, AI algorithms could perform hundreds of times faster. Lightelligence has developed a new kind of chip, powered by light instead of electrons, to carry out the core mathematical computations of machine learning. This chip could have a big impact on the world of AI.
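The "core mathematical computations" such accelerators target are dominated by matrix multiplication, the workhorse of neural-network inference. A minimal pure-Python sketch of that operation (illustrative only, not Lightelligence's actual design):

```python
def matmul(a, b):
    """Multiply an (m x k) matrix by a (k x n) matrix, both given as
    lists of rows. This dense multiply is the operation photonic
    accelerators aim to speed up."""
    m, k, n = len(a), len(b), len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]


# A toy "layer": one input vector of 2 features times a 2x2 weight matrix
weights = [[0.5, -1.0],
           [2.0, 0.0]]
inputs = [[1.0, 3.0]]           # batch of one
print(matmul(inputs, weights))  # [[6.5, -1.0]]
```

On a photonic chip, the same multiply-accumulate pattern would be performed by interfering light signals rather than by switching transistors, which is where the claimed speed advantage comes from.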

Yichen Shen, CEO of Lightelligence, explained the key to this technology: photons travel faster than electrons, and their movement through the chip does not cause overheating, though their behavior is less predictable. The company recently sent the chip's design to a manufacturer.

This technology could offer huge opportunities for the world of AI. Yet such power could push development beyond our control, so both the machines and their developers need careful monitoring and regulation. Organizations such as the Michael Dukakis Institute (with the AIWS Initiative and the AIWS 7-Layer Model) are constantly researching and raising public awareness to safeguard the future of AI.

3 Misconceptions about AI, and What Should Be Done

AI has been thriving among us for the last decade and has greatly enhanced human efficiency. Even so, there is still widespread fear of AI rooted in common misconceptions. Here are three key confusions that need to be addressed.

AI is a robot

The entertainment industry is largely to blame for popular notions of AI. When we think of robots, we picture Bicentennial Man, Wall-E, or Sophia the humanoid. However, AI goes far beyond that: it can anticipate natural disasters and diseases, do chores, test the efficacy of drugs, and more. It can also increase work efficiency and productivity.

AI is going to replace humans

Due to the dominance of negative predictions in the news, people are alarmed by statistics and believe AI is going to take over job markets. In fact, AI is developed by humans to serve human needs. It excels at repetitive tasks, but the same is not true of empathy, judgment, and general life experience.

Another reason machines cannot replace humans is that more machines also means more jobs surrounding them, since AI systems require human intervention and supervision.

AI is dangerous in the wrong hands

As with any other technology, AI can be used for good or evil. The problem is never really the technology itself but the people who use it. As the 2016 election showed, it is the responsibility of those who operate platforms to ensure their users' safety.

To sum up, we are developing AI, and it is up to us to make sure AI makes the best decisions. Granting AI the ability to do all of these tasks requires monitoring, regulation, and ethical frameworks, which organizations such as the Michael Dukakis Institute are working on.