Introduction of the AIWS Ethics and Practices Index

On December 12, 2018, the Global Cybersecurity Day 2018 took place at Loeb House, Harvard University, MA, organized by the Boston Global Forum (BGF) and the Michael Dukakis Institute (MDI). One of the most important parts of the event was the introduction of the AIWS Ethics and Practices Index, delivered by Dr. Thomas Creely, a member of the AIWS Standards and Practice Committee.

Dr. Creely serves as an Associate Professor of Ethics and Director of the Ethics & Emerging Military Technology Graduate Program at the U.S. Naval War College.

On behalf of the authors group, Dr. Thomas Creely presented the AIWS Report on AI Ethics and published the Government AIWS Ethics Index. This index measures the extent to which a government, in its Artificial Intelligence (AI) activities, respects human values and contributes to the constructive use of AI at its current unprecedented pace of development. The report was conducted in an effort to reach a common accord of respect for norms, laws, and conventions in the AI world across the diversity of approaches and frameworks among countries.

There are four main categories in the Index:

  1. Transparency: Substantially promotes and applies openness and transparency in the use and development of AI, including data sets, algorithms, intended impacts, goals, and purposes.
  2. Regulation: Has laws and regulations that require government agencies to use AI responsibly; that are aimed at requiring private parties to use AI humanely and that restrict their ability to engage in harmful AI practices; and that prohibit the use of AI by government to disadvantage political opponents.
  3. Promotion: Invests substantially in AI initiatives that promote shared human values; refrains from investing in harmful uses of AI (e.g., autonomous weapons, propaganda creation and dissemination).
  4. Implementation: Seriously executes its AI laws and regulations toward beneficial ends; respects and commits to widely accepted principles and rules of international law.

Further discussion took place after Dr. Creely’s presentation, with questions on the future of AI. Though it is difficult to anticipate how AI will change humanity, the participants expressed confidence that great scientists and scholars are continuously working to prepare us for whatever is coming.

Watch the full speech of Dr. Thomas Creely at the Global Cybersecurity Day 2018

An innovative design of neural network can be a solution to many challenges in AI

David Duvenaud, an AI researcher at the University of Toronto, and his collaborators at the university and the Vector Institute have designed a new kind of neural network that outperforms previous models.

At first, his idea was to create a deep-learning algorithm that could predict a person’s health over time. However, medical-record data is complicated: each check-up produces a different record, for different reasons and with different measurements. Conventional machine-learning methods struggle to model continuous processes, especially ones that are not measured often. Because these methods find patterns in data by stacking layers of simple computational nodes, their discrete layers keep them from tracking a process that unfolds continuously. More specifically, a traditional model follows the common supervised-learning process: it digests many layers of data to work out a formula that can then be applied to new cases with similar traces. It might, for instance, mistake a cat for a dog because both have floppy ears; since dogs and cats come in many varieties with diverse features, such a model can produce inaccurate results.

In response to this difficulty, the researchers let the network find formulas matching the description of each stage of the process, with each stage represented by a layer of data. Taking the example of differentiating the two pets above, the first stage might take in all the pixels and use a formula to pick out which ones are most distinctive for cats versus dogs. A second stage might use another formula to build larger patterns from groups of pixels and tell whether the picture shows whiskers or ears. Each subsequent stage would identify another feature of the animal, and after enough layers, the network identifies the animal in the picture. This step-by-step breakdown of the process allows a neural net to build more sophisticated models and produce more accurate predictions.
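The stage-by-stage idea can be sketched in a few lines of Python. The weights below are random placeholders rather than learned values, so this illustrates only the structure: each layer applies a simple formula to the previous layer’s output, building more abstract features step by step.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w):
    # One computational stage: a linear formula followed by a ReLU nonlinearity.
    return np.maximum(0.0, x @ w)

pixels = rng.random(64)             # stage 0: raw pixel inputs
w1 = rng.standard_normal((64, 16))  # stage 1: low-level patterns
w2 = rng.standard_normal((16, 8))   # stage 2: larger patterns (whiskers, ears)
w3 = rng.standard_normal((8, 2))    # final stage: scores for "cat" vs "dog"

features1 = layer(pixels, w1)
features2 = layer(features1, w2)
scores = features2 @ w3
prediction = ["cat", "dog"][int(np.argmax(scores))]
```

In a real network the weight matrices would be learned from labeled examples; the point here is only how stacked discrete layers transform pixels into progressively higher-level features.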

Yet applying this to the medical field would require classifying health records over time into discrete steps, such as periods of years or months. The only way to model such records more exactly is to make the steps ever finer, which runs into the same problems as the traditional model. To make actual breakthroughs, the researchers still need to dig deeper into the method with more experiments and research.
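The continuous alternative can be pictured with a toy differential-equation model: instead of fixed discrete layers, a hidden state evolves continuously under a dynamics function and can be read out at any time, such as irregularly spaced check-up dates. The dynamics matrix and Euler integrator below are hypothetical stand-ins for illustration, not the team’s actual model.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -0.1]])  # hypothetical dynamics matrix

def f(h, t):
    # Rate of change of the hidden state at time t.
    return A @ h

def integrate(h0, t0, t1, steps=1000):
    # Simple Euler integration of the state from time t0 to t1.
    h, t = np.array(h0, dtype=float), t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * f(h, t)
        t += dt
    return h

h0 = np.array([1.0, 0.0])  # state at the first check-up
# The state can be evaluated at irregular times, as in real medical records:
states = {t: integrate(h0, 0.0, t) for t in (0.3, 1.7, 4.2)}
```

Because the model is defined in continuous time, there is no need to force records into fixed monthly or yearly steps; the solver simply evaluates the state wherever a measurement exists.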

“The paper will likely spur a whole range of follow-up work, particularly in time-series models, which are foundational in AI applications such as health care,” said Richard Zemel, the research director at the Vector Institute.

No matter how far the algorithm advances, there remains a risk that the rate at which AI advances will outpace the continuing development of ethical and regulatory frameworks. Layer 4 of the AIWS 7-Layer Model developed by MDI focuses on policies, laws, and legislation, national and international, that govern the creation and use of AI and that are necessary to ensure AI is never used for malicious purposes.

The first AI World Society House and AI World Society Innovation Program in Vietnam

On December 16, 2018, Nguyen Anh Tuan, Director of the Michael Dukakis Institute for Leadership and Innovation (MDI) and Co-founder and Chief Executive Officer of the Boston Global Forum (BGF), met with leaders and scholars from Dalat University (Vietnam) to discuss establishing the AI World Society House and AI World Society Innovation Program at Dalat University.

On November 22, 2017, the Artificial Intelligence World Society (AIWS) was established by the Michael Dukakis Institute for Leadership and Innovation (MDI) with the goal of advancing the peaceful development of AI to improve the quality of life for all humanity. Ever since, MDI has been working constantly to fulfill the mission of the AIWS.

Recently, the Michael Dukakis Institute for Leadership and Innovation (MDI) announced its partnership with Dalat University (DLU) to build the AIWS House and design the AIWS Innovation Program at DLU, as a “nucleus” for DLU to become a pioneer in the research, teaching, and application of AI in Vietnam. In this collaborative support mechanism, DLU will operate and manage the activities of the AIWS House, and MDI will advise and supervise to ensure quality, efficiency, and achievement of goals. Nguyen Anh Tuan, Director of MDI, represented MDI in working with DLU’s leaders and will be in charge of this project.

Leaders from Dalat University expressed their gratitude for the assistance of MDI. They hoped that the establishment of the AIWS House and the AIWS Innovation Program at Dalat University will attract leading AI professors and scientists to teach and share knowledge with the university’s lecturers and students in particular, and will develop AI application programs and initiatives for socio-economic development in Vietnam in general.

EU’s first draft on AI ethics guidelines

The European Commission (EC) has published its first draft of AI ethics guidelines and is seeking public feedback.

The draft, composed by a group of 52 experts from academia, business, and civil society, serves as a guideline for AI developers to follow.

“AI can bring major benefits to our societies, from helping diagnose and cure cancers to reducing energy consumption. But, for people to accept and use AI-based systems, they need to trust them, know that their privacy is respected, that decisions are not biased,” said EC vice-president and commissioner for the Digital Single Market, Andrus Ansip.

There are two key elements in the guidelines for creating trustworthy AI: one is respecting rights and regulation, ensuring an ‘ethical purpose’; the other is the robustness and reliability of the AI. The 37-page document addresses issues of bias and the importance of human values, and points out the potential benefits as well as the threats AI brings.

The draft guidelines are open for comment for one month, until 18 January 2019, and the final version will be presented in March.

Developing rules and standards for AI is also an aim the AIWS is working toward. The AIWS has continuously built the AIWS 7-Layer Model, a set of ethical standards for AI to guarantee that this technology is safe, humanistic, and beneficial to society.

Minister Taro Kono, Ministry of Foreign Affairs of Japan at the Global Cybersecurity Day 2018

On December 12, 2018, the Global Cybersecurity Day 2018 was held at Loeb House, Harvard University, by the Boston Global Forum (BGF) and the Michael Dukakis Institute (MDI). At the event, MDI had the honor of hosting Japanese Minister for Foreign Affairs Taro Kono as a guest speaker.

Taro Kono is a Japanese politician belonging to the Liberal Democratic Party. He is a member of the House of Representatives, and has served as Minister for Foreign Affairs since a Cabinet re-shuffle by Prime Minister Shinzo Abe on 3 August 2017.

Despite his absence from Loeb House, he delivered his speech virtually to the audience. Mr. Taro Kono congratulated the Boston Global Forum on its achievements and was excited about this year’s Global Cybersecurity Day. He also gave his view of the current situation, in which rapid change brings both the benefits and the threats of emerging technologies, especially in terms of cybersecurity.

In his speech, he emphasized the need for ethical standards in technology innovation. If ethics are not prioritized, the result could be unexpected losses for the economy and society as a whole, since the technology itself can be misused by bad actors for malicious purposes. Minister Taro Kono mentioned that Japan is making cybersecurity one of its top priorities to protect safe trade and transfers in cyberspace. He hopes to join a global effort to protect people’s safety in cyberspace.

Vaira Vike-Freiberga’s Statement on the Imperial Springs International Forum

On the occasion of the 40th anniversary of China’s opening up and reform process, the 2018 Imperial Springs International Forum (ISIF), under the theme “Advancing Reform and Opening-Up, Promoting Win-Win Cooperation,” took place in Guangdong on December 10-11 with the presence of the Vice-President of the People’s Republic of China, Wang Qishan, and around 30 prominent leaders and distinguished experts from around the world.

“I am proud to report that the 2018 Imperial Springs International Forum (ISIF) was a great success.” This is an official statement from Vaira Vike-Freiberga, President of the World Leadership Alliance – Club de Madrid and former President of Latvia, on the 2018 Imperial Springs International Forum.

The Imperial Springs International Forum has become an important platform for dialogue between China and the rest of the world, where leaders and experts constructively discuss ways to enhance global governance.

The Forum was jointly organized by the Chinese People’s Association for Friendship with Foreign Countries (CPAFFC), the People’s Government of Guangdong Province, the Australia China Friendship Association (ACFEA), and the World Leadership Alliance – Club de Madrid. The event left attendees impressed by the quality of its dialogue.

“I strongly believe that in our global system, it is important for China to understand more about the world and our partners, and also for the world to understand China,” said Dr. Chau Chak Wing, Chair of the Asia-Pacific Region World Leadership Alliance – Club de Madrid President’s Circle.

“As an Australian businessman doing business in China, I am proud to play a role in supporting the ‘opening up’ of China as it means more opportunities for Australia and the world,” he added.

Highlights from The Global Cybersecurity Day 2018

On December 12, 2018, The Global Cybersecurity Day 2018 was held at Loeb House, Harvard University, under the moderation of Governor Michael Dukakis – Chairman of the Boston Global Forum (BGF) and the Michael Dukakis Institute (MDI).

The goal of Global Cybersecurity Day is to inspire shared responsibility of the world’s citizens to protect the Internet’s safety and transparency. This year’s conference revolved around the theme “AI solutions solve disinformation.” During the discussion, experts explored the current state of cybersecurity and the threat posed by disinformation, anonymous sources, and fake news, as well as the role AI can play as an effective defense mechanism against these threats to truth and the principles of democracy.

Highlights of the event were the online speeches of prominent speakers: President of Finland Sauli Niinistö, Japanese Minister for Foreign Affairs Taro Kono, and Rt. Hon. Liam Byrne MP, Member of Parliament for Birmingham. In particular, Liam Byrne delivered the first AI World Society Distinguished Lecture, which made a strong impact on the audience. The speeches at the symposium described the current situation and called for public awareness of protecting safety on the Internet.

At this event, Dr. Thomas Creely, Associate Professor of Ethics and Director of the Ethics & Emerging Military Technology Graduate Program at the U.S. Naval War College, presented, on behalf of the authors group, the AIWS Report on AI Ethics and published the Government AIWS Ethics Index. This proposal is expected to build common standards for an AI society around the world, spanning technology, laws, conventions, and more, to guarantee interoperability among the different frameworks and approaches of countries.

The presentation by Cameron Hickey of the Information Disorder Lab at the Shorenstein Center on Media, Politics and Public Policy on fake news also received much of the audience’s attention. The report raised awareness of the definition and classification of information disorder and its challenging impact on our society.

Liam Byrne delivered the first AI World Society Distinguished Lecture

Rt. Hon. Liam Byrne MP, Member of Parliament for Birmingham, Hodge Hill, Shadow Digital Minister, Chair of the All-Party Parliamentary Group on Inclusive Growth, delivered the first AI World Society Distinguished Lecture at Global Cybersecurity Day 2018.

Liam’s lecture was held at Loeb House, Harvard University, one of the world’s most prestigious universities. Despite his absence from the symposium, he delivered his lecture virtually, and it was extremely inspiring to the audience, most of whom were scholars from Harvard University, MIT, Tufts University, and elsewhere. In the speech, he focused on the changes that have occurred in our society over the last decade, how they have affected society and the market, and the accompanying risks and challenges. In conclusion, he called on lawmakers and changemakers to join forces to reshape those constraints.

Watch the full AI World Society Distinguished Lecture of Rt. Hon. Liam Byrne MP

After his enlivening lecture, questions posed by scholars at the conference were answered directly by Liam Byrne MP online. The atmosphere became buoyant, with challenging issues arising around managing the slowness of regulatory mechanisms in adapting to change, the rewiring of tech companies to incorporate social benefits, the definition of digital rights, and more.

The AI World Society Distinguished Lecture was established to honor those who have made outstanding contributions associated with the Artificial Intelligence World Society (AIWS) 7-Layer Model. Each honored achievement is dedicated to one of the seven layers of the AIWS Initiative. At the same time, the honor helps introduce and raise awareness of these outstanding honorees’ dedication among members of the global elite community.

Learning from medical records without violating patient data confidentiality

AI researchers have created new techniques for training machine-learning algorithms while ensuring patients’ privacy.

An innovative solution to privacy issues has emerged from the absorption of DeepMind’s health division into Google.

In response to privacy concerns about the disclosure of patient data as Google moved its subsidiary DeepMind Health into the main company, the split neural network has been introduced as an advanced method for processing data confidentially.

Its basic operating mechanism is similar to data encryption. A split neural network divides data processing into separate stages that can be carried out by different parties: one party starts training the machine-learning model, and another finishes it. Under this mechanism, patients’ raw data is processed in hospitals and other medical institutions to train the first part of the model. These half-trained models are then sent to a centralized location (a cloud service, Google, or another company) to be completed in the final stages of training.

The key advantage is that, because the medical data is obfuscated in the first stage, the centralized location can see only the output of the half-processed model plus the model itself. The raw patient data therefore stays secure with the originating institutions, while the hospitals benefit from a final model trained on a combination of every participating institution’s data.
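The two-stage mechanism described above can be sketched as follows. The layer sizes, data, and training loop are illustrative placeholders, not the actual system: the hospital runs the first layer locally and shares only the intermediate activations, and the central server trains the remaining layer without ever seeing the raw records.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Hospital side: raw patient data never leaves this step ---
raw_records = rng.random((100, 32))          # 100 patients, 32 measurements each
w_hospital = rng.standard_normal((32, 16))   # first (locally held) layer
activations = np.maximum(0.0, raw_records @ w_hospital)  # "half-processed" output

# --- Central server side: sees only activations plus the partial model ---
labels = rng.integers(0, 2, size=100)        # outcomes to predict
w_server = np.zeros((16, 1))
for _ in range(200):
    # Simple gradient steps on the final layer only (logistic regression
    # on the shared activations); the server never touches raw_records.
    preds = 1 / (1 + np.exp(-activations @ w_server))
    grad = activations.T @ (preds - labels[:, None]) / 100
    w_server -= 0.1 * grad
```

In a full system the hospital layer would also be trained, with gradients flowing back across the split; the sketch keeps only the part that shows why the server never needs the raw data.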

This split-neural-network approach has been found to require considerably fewer computational resources for training while producing much more accurate models.

Privacy issues stemming from AI have raised worries among users. AI is just a tool, and it is essential for people to closely supervise and moderate its operation with transparency. The importance of AI applications in key sectors including healthcare is the focus of Layer 7 – Business Applications for All of Society of the AIWS 7-Layer Model developed by the Michael Dukakis Institute.