Human life will be improved by AI if it is controlled by standards – and humans need to prepare in advance

The new century has brought a stream of innovations capable of changing the course of human history, including some of the most brilliant inventions ever made. And when it comes to intelligence, AI stands out as the "hottest" technology trend of recent years.

AI, or artificial intelligence, is understood simply as the intelligence of machines created by humans. This intelligence can think and learn much as human intelligence does, processing data more broadly, more systematically, and faster than humans can. But can AI completely replace humans?

Mr. Nguyen Anh Tuan, Director of The Michael Dukakis Institute for Leadership and Innovation (MDI), Founder and Editor-in-Chief of VietNamNet Newspaper, confirmed on the "Coffee Morning" show of a popular Vietnamese television channel, VTV3, that AI and robots cannot replace humans.

Although AI is increasingly being used in many fields and activities in daily life, humans are still irreplaceable, especially in the field of social management.

The AI-Government, an initiative launched by the MDI in June 2018, will help manage AI to serve citizens more intelligently, more automatically and more responsibly. The AI-Government is a government in which AI is widely and thoroughly applied in the management, decision making and policy making process of governing bodies rather than in just public services (human contact, streamlined payroll system, etc.).

For example, given the US-China trade war, the governments of both countries need to make the smartest decisions. Given a full data system, AI's intelligent and optimized algorithms can recommend smart, convincing decisions. Furthermore, AI can help us make those decisions very quickly.

But it is important that AI remains a tool, an “effective assistant” offering suggestions to people while people are the ones who will consider and make final decisions. Therefore, when using AI, human intelligence needs to be one level higher. Many people think that when there are robots, they will have nothing to do anymore. On the contrary, new and more demanding jobs will open up.

It is worth mentioning that when we put AI into application, we recognize that people have many good traits but also morally ambiguous ones, while AI is very honest. In Vietnam, Luu Quang Vu's play Green Chrysanthemum on Marsh is a typical example: nearly 40 years ago, Luu Quang Vu imagined robots that could help people look back at themselves and adjust, becoming more honest and warm-hearted. Humanity will therefore need standards to manage and control AI in general.

That is why the MDI developed the AI World Society Initiative (AIWS Initiative), published on November 29, 2017. According to Mr. Nguyen Anh Tuan, the basic purpose of this initiative is to establish a society with the best and most effective AI application, bringing good to humans.

To illustrate the need for this, he also pointed to the fact that cybersecurity is a "headache" for the world today. Because we did not anticipate the development of the Internet and computers, we have left "holes" that are difficult to close. AI has not yet been officially declared to face the same problem, but we still need to prepare in advance; otherwise, as Prof. Stephen Hawking warned, such "holes" could become a threat to humanity in the future.

Mr. Nguyen Anh Tuan also affirmed that the MDI and its associates who are experts from Harvard University, Massachusetts Institute of Technology (MIT), etc. agree to contribute their research, ideas, or initiatives on AIWS and AI-Government to serve humanity, creating a good society where AI is not harmful to humans.

Google forbids the development of AI-based software that can be used in weapons

While critics argued that Google was stepping closer to the "business of war" due to a contract with the US Defense Department, the company has responded by banning the development of AI that could be used in weapons.

As AI becomes more and more powerful, Google's leaders have shown their concern by barring the creation of AI software that can be used in weapons. The move is seen as setting a new ethical guideline for technology companies around the world seeking superiority in self-driving cars, automated assistants, robotics and military AI.

According to the Independent, Google asserted that it will not develop AI that is harmful to international law or human rights. The Independent notes, however, that cybersecurity, training, veterans' health care, search and rescue, and military recruitment are areas in which Google will continue to cooperate with governments.

It is unclear how the company would seek to follow its own rules under the principle. Chief executive Sundar Pichai set out seven core tenets for AI applications, including being socially beneficial, being built and tested for safety, and avoiding the creation or reinforcement of unfair bias. The company will evaluate projects by examining how readily the technology developed could be adapted to harmful use.

In fact, Google's Web tools are largely developed on the basis of AI, such as image search and automatic translation. The tools themselves could conceivably violate these ethical principles: for example, Google Duplex could be used to mimic someone's voice over the phone to make dinner reservations.

However, the Pentagon's technological researchers and engineers say other contractors will still compete to help develop technology for the military and national defense. According to John Everett, Deputy Director of the Information Innovation Office at the Defense Advanced Research Projects Agency, organizations are free to choose whether to participate in AI exploration.

AI should be used for good purposes. Toward this aim, MDI has built the AIWS Initiative to establish a society with the best and most effective AI application, bringing the best to humans.

The Public Voice: AI, Ethics, and Fundamental Rights – The debate on human rights in the age of AI

On October 23, 2018, the event "AI, Ethics, and Fundamental Rights" will be held by The Public Voice Coalition in Brussels, Belgium. Mr. Nguyen Anh Tuan, Director of MDI and CEO of BGF, will attend and speak about the AIWS and AI-Government Initiatives.

In the modern world, AI is developing at an unprecedented speed, and the technology has a massive effect on our lives and on fields such as politics and science. To raise awareness of the risks, The Public Voice Coalition is hosting the event as an opportunity to review the impact of AI on the legal frameworks that protect consumers, competition and human rights, and to explore the connection between ethics and regulation for AI along with possible solutions to these issues.

At the event, the content of the conference will mainly be delivered by keynote speaker Anita Allen, Vice Provost for Faculty and Henry R. Silverman Professor of Law and Philosophy at the University of Pennsylvania Law School. In addition, other innovative leaders will be present, including Mr. Nguyen Anh Tuan, Director of MDI and CEO of BGF.

Google launches a text-to-speech feature in 28 languages for its search app in emerging markets

On August 29, 2018, Google launched a feature for Google Go that uses AI to read articles aloud in up to 28 different languages.

Google Go is a lightweight app for consumers in emerging markets; last December, Google expanded it beyond India and Indonesia to other regions, including sub-Saharan Africa. The new feature is extremely useful for developing these markets for two reasons:

  • It increases Google Go's value for users who rely on older networks, as the application is optimized to work on 2G and later networks. According to GSMA, 96% of mobile connections in sub-Saharan Africa were 2G or 3G.
  • It creates accessibility for consumers who would rather not type or read text.

This latest update is considered a major step in Google's effort to develop voice technology, giving Google momentum to reach the next billion online users. But we certainly need specific standards to control the development of AI-based technologies. That is why MDI developed ethical frameworks and AI standards for the AI World Society together with scholars and experts from Harvard University, MIT, Hitachi, Google, etc.

OECD confirms upcoming conference on the future of blockchain

On August 29, The Organization for Economic Co-operation and Development (OECD) published the schedule of an international conference on blockchain technology to be held the following month.

“Blockchain has the potential to transform how a wide range of industries function. Fulfilling its potential, however, depends on the integrity of the processes and requires adequate policies and measures while addressing the risks of misuse. Governments and the international community will play a significant role in shaping policy and regulatory frameworks that are aligned with the emerging challenges and foster transparent, fair and stable markets as a basis for the use of blockchain,” the news release says.

Hence, a discussion of blockchain's full potential is needed so that the international community can acknowledge and evaluate both the opportunities and the risks of using it. According to a CoinDesk article, the discussion will feature OECD Secretary-General Angel Gurria (named a "World Leader in AI World Society" in 2018 by MDI), the Prime Ministers of Serbia, Bermuda and the Republic of Mauritius, and the State Secretary of Slovenia.

The conference will take place at the OECD's headquarters in Paris on September 4th and 5th, with the attendance of 400 leaders, innovators and executives from both the private and public sectors. The event will explore blockchain's potential and its impact on the world economy and cybersecurity, along with ways to apply blockchain to green growth, sustainability, and enforcement practices.

Barack Obama's former chief economist appointed by the UK Government to lead a new panel on digital technology

The British Finance Minister said that the experience of the former US chief economist, Prof. Jason Furman, would be invaluable to the country's market-regulating institutions in the digital age.

Prof. Jason Furman, chief economist in former U.S. President Barack Obama's administration, has been hired by the UK Government to lead a new panel that will examine competition in the digital economy and how technological progress can be squared with the protection of privacy and society.

Prof. Furman served as chair of the US Council of Economic Advisers from 2013 to 2017 and is currently a professor of economic policy at Harvard Kennedy School in Cambridge, Massachusetts. He is also a member of the AIWS Standards and Practice Committee, which was established by MDI.

After Brexit, maintaining its commitments to the European Union on digital regulation is a matter of concern for the British government. Prof. Furman said the focus should be on ensuring that consumers continue to benefit from new technology while maximizing the economy's innovative potential.

The panel chaired by Prof. Furman will operate from September 2018 to early 2019 and will publish a report on its findings next year.

The official open letter: WLA-CdM members mark the completion of negotiations on the Global Compact for Safe, Orderly and Regular Migration

A document serving the Global Compact for Safe, Orderly and Regular Migration was produced through a democratic procedure and plays an important role in providing a framework for both regular and irregular migration.

The co-facilitators, H.E. Juan José Gómez Camacho, Permanent Representative of Mexico, and H.E. Jürg Lauber, Permanent Representative of Switzerland, together with Member States, have recently completed the Global Compact for Safe, Orderly and Regular Migration, which will guarantee the benefits of all interested parties if its twenty-three commitments are fulfilled.

The agreement addresses the following issues: (1) negative perceptions and misperceptions of migration, especially in receiving countries, and (2) the attitudes and relationships between migrants and other communities in the same area. The compact provides clarity and certainty to tackle potential problems that originate from ignorance of existing concerns.

According to a report by Club de Madrid, a close partner of MDI, the compact takes effect to achieve sustainable solutions, which have proven vital and feasible in practice around the world.

Announcing the AIWS Standards and Practice Committee's newest members

The AIWS Standards and Practice Committee welcomes the two newest members: Professor Jason Furman and President Marc Rotenberg.

The Committee is responsible for:

  • Updating and collecting information on threats and potential harm posed by AI.
  • Connecting companies, universities, and governments to find ways to prevent threats and potential harm.
  • Engaging in the audit of behaviors and decisions in the creation of AI.
  • Creating both an Index and Report about AI threats – and identifying the source of threats.
  • Creating a Report on respect for, and application of, ethics codes and standards of governments, companies, universities, individuals and all others…

The AIWS Standards and Practice Committee, founded by Michael Dukakis, has 21 members. Recently, the board welcomed two innovative leaders to the Committee.

The first is Prof. Jason Furman, Professor of the Practice of Economic Policy at Harvard Kennedy School (HKS) and a nonresident senior fellow at the Peterson Institute for International Economics. His appointment follows eight years as a top economic adviser to President Obama, including service as the 28th Chairman of the Council of Economic Advisers from August 2013 to January 2017, acting as both President Obama's chief economist and a member of the cabinet.

The second is Mr. Marc Rotenberg, President of the Electronic Privacy Information Center (EPIC), an independent public interest research organization in Washington, DC. Professor Rotenberg has served on advisory panels for the American Bar Association Section on Criminal Justice, the American Association for the Advancement of Science (AAAS), the Institute of Medicine (IOM), the International Telecommunications Union (ITU), the Internet Corporation for Assigned Names and Numbers (ICANN), the National Academy of Science (NAS), the Organization of American States (OAS), UNESCO, and the OECD. He is a former chair of the Public Interest Registry, which established and manages the .ORG domain.

Innovative machines need your assistance

On July 8th, 2018, EmTech, hosted by MIT Technology Review, invited Manuela Maria Veloso to discuss the capabilities of today's AI systems and to make predictions about the future of AI.

Manuela Maria Veloso is Head of the Machine Learning Department and Herbert A. Simon University Professor in the School of Computer Science at Carnegie Mellon University. In her talk she discussed the current state of robotics, including the fact that AI now appears in most of our devices (e.g., cellphones, computers with sensors, cognitive systems).

However, she pointed out, there are still not many actual mobile robots around us, and today's robots are typically equipped only with cameras as visual sensors. A challenge for mobile machines, then, is that they must process this sensory data and use it to make decisions. Furthermore, mobile machines usually face uncertainty about the actual state of the world regarding stocks, weather, direction… but they become more certain, and decide faster, as they collect more data.

One possible solution is to enable AI to ask humans for help, a practice demonstrated at Carnegie Mellon by a robot tasked with fetching a cup of coffee. While the robot could not perform all the physical activities itself, it asked humans or other machines for help in executing the task. Looking to the future of human-AI interaction, Prof. Manuela Maria Veloso emphasized the need for transparency, since robots are programmed in code, which is "cryptic" to humans. A current challenge, known as verbalization, is translating this code into natural language: a verbalization system enables a machine to describe its activities and data processing in words.

If feasible, this would be a major development for robots. According to the first layer in MDI's 7-layer model, a set of ethical standards for AI, it is vital that automation and intelligence be transparent so that humans can understand a robot's movements and actions.