Professor Marc Rotenberg: Algorithmic transparency will be key in the AI Age

Professor Marc Rotenberg, President of the Electronic Privacy Information Center (EPIC) and Member of the AIWS Standards and Practice Committee, emphasized the importance of algorithmic transparency for policy formulation in the AI Age at the AIWS Conference on September 20, 2018, at the Harvard University Faculty Club.

“Many of the debates around the employment of AI techniques have the same focus as the debates associated with the use of computing technology by government agencies back in the 1960s and 1970s,” said Prof. Marc Rotenberg, President of the Electronic Privacy Information Center (EPIC). In Prof. Rotenberg’s opinion, the core interest in the protection of privacy is not secrecy or confidentiality; it is the fairness of the processing of data about individuals. Part of the problem is that as these systems have become more sophisticated, they have also become more opaque. Because these systems are widespread and have an enormous impact on people’s lives, individuals have the right to understand the basis of the automated decisions that affect them. Together with EPIC, Prof. Rotenberg is sending the message to the United States Congress that algorithmic transparency will be key in the AI Age to foster public participation and policy formulation.

He mentioned that the OECD (Organisation for Economic Co-operation and Development) has already commenced work on AI guidelines, and that the Japanese government, one of its members, has put forward principles for AI R&D policy. Prof. Rotenberg therefore urges the deployment of such guidelines, as there is a rapidly growing gap between informed government decision-making and the reality of our technology-driven world, warning that “governments may ultimately lose control of these systems” if they do not take action.

A new AI program is being launched to enable adaptability in AI

The Pentagon plans to invest over two billion dollars in a program called “AI Next” to develop AI systems that can adapt to changing situations.

“Today, machines lack contextual reasoning capabilities, and their training must cover every eventuality – which is not only costly – but ultimately impossible,” said Steven Walker, Director of the US Defense Advanced Research Projects Agency (DARPA).

With this in mind, Dr. Walker wants to explore how machines can acquire the human ability to improvise in unexpected situations. DARPA is set to spend billions on the new program, “AI Next”, to enable machines to adapt to changing situations.

This next stage in the development of AI is described as “the Third Wave”. The first wave of AI, as explained by DARPA, allows machines to reason over narrowly defined problems, though with a low level of certainty; the second wave enables engineers to create statistical models and train them on big data, but with minimal reasoning. The third wave, by contrast, is intended to allow machines to adapt to changing situations. For example, adaptive reasoning could let an algorithm tell the difference between the words ‘principal’ and ‘principle’ by analyzing the surrounding words to determine the context.
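To make that example concrete, here is a minimal sketch in Python of context-based disambiguation. It is not DARPA's system; the context-cue vocabularies and the simple overlap-scoring rule are assumptions chosen purely for illustration.

```python
# A minimal, hypothetical sketch of context-based word disambiguation:
# choose between "principal" and "principle" by scoring the surrounding
# words against small, hand-picked context vocabularies.

CONTEXT_CUES = {
    "principal": {"school", "loan", "interest", "dancer", "actor", "office"},
    "principle": {"moral", "ethical", "guiding", "scientific", "first", "physics"},
}

def disambiguate(sentence: str) -> str:
    """Return the candidate word whose context cues best match the sentence."""
    words = {w.strip(".,;!?").lower() for w in sentence.split()}
    scores = {candidate: len(words & cues) for candidate, cues in CONTEXT_CUES.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    print(disambiguate("The school ___ discussed the loan interest rate."))   # principal
    print(disambiguate("The experiment rests on a guiding scientific ___."))  # principle
```

A real third-wave system would of course learn such contextual cues rather than rely on fixed word lists, but the sketch shows the basic idea of letting surrounding words determine the interpretation.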

A survey conducted at the Joint Multi-Conference on Human-Level Artificial Intelligence indicated that 37% of respondents believe that “the Third Wave” will be achievable within five to ten years.

Giving AI the capability to reason and adapt would be a breakthrough for the industry, but it also means giving AI significant control over itself and its actions, which, without thorough consideration, might lead to unexpected consequences. This problem calls for regulations to monitor AI. Currently, professors and researchers at MDI are working on building an ethical framework for AI to guarantee the safety of AI deployment.

AI World Conference and Expo 2018

From December 3rd to 5th, AI World Conference and Expo 2018 will take place in Boston. The three-day conference will cover AI strategy and applications across areas such as Implementing Enterprise AI, AI in Healthcare and Pharma, and Cognitive Computing.

As AI becomes an essential part of our daily lives, it is important to keep track of the changes and developments in this emerging technology. This year, AI World Conference and Expo will be held for the third time, expanding its collaboration with key international hosts and sponsors such as the Canadian government, the Michael Dukakis Institute, XPRIZE, MIT CSAIL, IDC, MIT Sloan Management Review, and many others, with the aim of helping businesses and leaders implement and develop AI.

The conference sets out to inform business executives about AI innovations and their implementation, enabling leaders to build strategies for their companies, optimize costs, and grasp new opportunities.

The three-day conference is expected to give attendees the opportunity to explore various angles of AI implementation in healthcare, pharma, medicine, and specific business strategies.

Governor Michael Dukakis, Chairman of the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Boston Global Forum, will attend this special event and give opening remarks at AI World Conference and Expo 2018. Currently, MDI is collaborating with AI World to publish reports and programs on AI-Government, including the AIWS Index and AIWS Products.

More information: https://aiworld.com/

The way cancer spreads can be anticipated using AI

A team from The Institute of Cancer Research, London (ICR) and the University of Edinburgh has developed a way to predict how cancers will evolve and spread, which is expected to be a great support for cancer treatment.

The team developed a new method known as Revolver (Repeated Evolution of Cancer). The technique involves identifying patterns of DNA mutation within tumours and using that information to forecast future genetic changes.

One of the biggest obstacles in curing cancer is that a tumour can evolve resistance to drugs. But if doctors can predict how a tumour will evolve, they could intervene earlier and increase the patient’s chances of survival. For example, when the researchers examined breast tumours with a sequence of errors in the genetic material that codes for the tumour-suppressing protein p53, followed by mutations in chromosome 8, they found that patients with these tumours survived for a shorter time than those whose tumours followed other, similar trajectories of genetic change.

The research examined 768 tumour samples from 178 patients, covering cancers including lung, breast, kidney, and bowel, to accurately detect and compare changes in each type of cancer.

If tumour development follows a certain pattern, this methodology could be a powerful tool for predicting the future trajectory of a tumour.
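As a rough illustration of the idea, the sketch below matches a tumour's observed mutation sequence against a small library of repeated trajectories and uses the best match to suggest the next likely change. This is not the actual Revolver implementation; the trajectories and gene names are hypothetical.

```python
# A minimal, hypothetical sketch of trajectory-based prediction: match a
# tumour's observed sequence of mutations against repeated trajectories
# seen in other patients, and use the longest matching prefix to suggest
# the next likely genetic change.

# Hypothetical repeated trajectories, each an ordered list of genetic changes.
KNOWN_TRAJECTORIES = [
    ["TP53", "chr8_mutation", "MYC_amplification"],
    ["TP53", "PIK3CA", "GATA3"],
    ["KRAS", "APC", "TP53"],
]

def predict_next_change(observed):
    """Return the next change from the trajectory with the longest matching prefix."""
    best_next, best_overlap = None, -1
    for trajectory in KNOWN_TRAJECTORIES:
        overlap = 0
        for seen, known in zip(observed, trajectory):
            if seen != known:
                break
            overlap += 1
        # Prefer the longest matching prefix that still has a "next" step to predict.
        if overlap > best_overlap and overlap < len(trajectory):
            best_overlap, best_next = overlap, trajectory[overlap]
    return best_next

if __name__ == "__main__":
    # A breast tumour with a p53 mutation followed by changes in chromosome 8.
    print(predict_next_change(["TP53", "chr8_mutation"]))  # -> MYC_amplification
```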

Rethink Robotics suddenly closes its business

After a deal with a Chinese company fell through, Rethink fell into a difficult financial situation.

Rethink Robotics was a pioneer in building robots that could safely work alongside humans on real tasks. However, amid its business troubles, the company closed down on Wednesday without issuing any statement.

Founded in 2008 by Professor Rodney Brooks, Rethink was once one of the world’s most prominent robotics businesses. Rethink led the way in developing “cobots”, or collaborative robots, which are designed to work safely alongside humans. The robots’ software was designed to be simple to program and use, making them suitable even for people with little or no training in robotics. The cobots are equipped with sensors and software that help prevent them from accidentally harming users.

Baxter and Sawyer, the company’s two flagship products, were designed to perform highly repetitive rote tasks. A big order from China turned into a cash crisis when the customer suddenly cancelled and withdrew. Scott Eckert, Rethink’s chief executive, declined to name the Chinese company. The Sawyer robots had been customized for the Chinese market, and when the deal fell through, Rethink was left with unsold robots and unpaid bills.

At the same time, Rethink had to confront strong rivals. One of them is Universal Robots, a Danish company owned by North Reading-based Teradyne Inc., which announced the sale of its 25,000th collaborative robot last month. “It’s tough to compete with Universal,” said Jeff Burnstein, president of the Association for Advancing Automation.

Rethink will begin to sell off its patent portfolio and other intellectual property, and the company’s 91 employees are expected to be in strong demand from other robotics firms.

Education for Shared Societies Policy Dialogue in Lisbon, Portugal

The Education for Shared Societies (E4SS) Policy Dialogue will take place on October 16-17th, 2018 in Lisbon, Portugal, with the participation of about 40 democratic former heads of state and government. The dialogue is organized by the World Leadership Alliance – Club de Madrid (WLA-CdM) in partnership with the Calouste Gulbenkian Foundation.

The WLA-CdM began its Shared Societies Project (SSP) a decade ago, aiming to build peace and democracy across political, social, economic, and environmental dimensions. The project works to foster a sense of belonging and shared responsibility among everyone in a shared society. This year, the organization is focusing on educational engagement for all.

The three main strands of the E4SS dialogue address some of today’s top global issues: Refugees, Migrants and Internally Displaced People (IDPs); Preventing Violent Extremism (PVE); and Digital Resilience. Policy changes for each area will be discussed by educators, policy-makers, WLA-CdM Members, and experts during the event.

The outcomes of this international dialogue will be presented to the E4SS Joint Steering Committee to produce an E4SS Agenda by 2019, ensuring the continuity of the project globally.

At the same time, WLA-CdM has been working closely with MDI to develop the AIWS 7-Layer Model to build Next Generation Democracy. The initiative is intended to provide a baseline for guiding AI development, ensuring positive outcomes and reducing the pervasive and realistic risks, and related harms, that AI could pose to humanity.

Professor Joseph Nye addressed the problem of norms for AI at AIWS Conference 2018

Professor Joseph Nye, a Member of the Boston Global Forum’s Board of Thinkers and Distinguished Service Professor at Harvard University, addressed the problem of norms for AI at the AIWS Conference on September 20, 2018, at the Harvard University Faculty Club.

Gov. Michael Dukakis, Prof. Joseph Nye, Nick Burns, and Nguyen Anh Tuan

Prof. Joseph Nye opened his speech by talking about the expansion of Chinese firms in the US market and their ambition to surpass the US in the field of AI. Prof. Nye believes that an AI arms race and geopolitical competition in AI can have profound effects on our society. However, he says the prediction that China will be ahead of the US in AI by 2030 is “uncertain” and “indeterminate”, since China’s only advantages are having more data and fewer concerns about privacy. Turning to norms for AI, Prof. Nye argued that as people unleash AI in warfare and autonomous offensive systems, we should have a treaty to control it. One of his suggestions is to create international institutions that would monitor the various AI programs in various countries.

A careful discussion of AI ethics is essential to ensure the future of AI and robotics

On September 20, the AIWS Conference, with the theme ‘AI-Government and AI Arms Races and Norms’, was held at the Harvard University Faculty Club by the Michael Dukakis Institute for Leadership and Innovation (MDI). The key message of the conference was the importance of moral standards for AI, for the sake of humanity.

As reported by AI Trends, the conference brought together scientists, researchers, and standard-setters at the Harvard University Faculty Club. It aimed to find solutions to the root of AI’s threat: its unconstrained machine learning mechanisms.

According to Matthias Scheutz, Director of the Human-Robot Interaction Lab at Tufts University, “We would like to ensure that AI and robotics will be used for the good of humanity. The greatest danger I see is from unconstrained machine learning, where the system can define goals not intended by the designer.”

“The best way to safeguard AI systems is to build ethical mechanisms into the algorithms themselves,” adds Dr. Scheutz. “We need to do ethical testing of the system without the system knowing it. That requires specialized hardware and virtual machine architecture.”
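As a loose illustration of what building ethical mechanisms into the algorithms themselves could mean, the sketch below wraps an agent's action selection in an explicit constraint check so that disallowed actions are filtered before execution. It is not Dr. Scheutz's actual architecture; the Action fields and the single rule are illustrative assumptions.

```python
# A minimal, hypothetical sketch of an ethical-constraint layer wrapped
# around an agent's action selection: candidate actions are filtered
# against explicit, designer-specified rules before the agent commits.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    name: str
    harms_human: bool
    expected_reward: float

def is_permitted(action: Action) -> bool:
    """A designer-specified constraint checked at run time."""
    return not action.harms_human

def choose_action(candidates: List[Action]) -> Optional[Action]:
    """Pick the highest-reward action that passes the ethical constraint."""
    permitted = [a for a in candidates if is_permitted(a)]
    if not permitted:
        return None  # refuse to act rather than violate a constraint
    return max(permitted, key=lambda a: a.expected_reward)

if __name__ == "__main__":
    options = [
        Action("shortcut_through_crowd", harms_human=True, expected_reward=0.9),
        Action("take_long_route", harms_human=False, expected_reward=0.6),
    ]
    chosen = choose_action(options)
    print(chosen.name if chosen else "no permitted action")  # take_long_route
```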

Marc Rotenberg, President of the Electronic Privacy Information Center (EPIC), takes the position that “Knowledge of AI algorithms is a fundamental right.”

Prof. Joseph Nye, Distinguished Service Professor at Harvard University, anticipated an AI arms race if things continue at this pace: AI is thriving like never before, yet AI ethics is still not a priority for researchers.

“It’s not part of the job description,” said Nazli Choucri. The effort to create standards needs to be international, similar to the restrictions on nuclear weapons.

“Ethics is essential to what we are doing,” said Tom Creely, a professor at the US Naval War College. “It’s an important topic in the military. And national security is no longer just the Defense Department’s problem. We all need to be part of the conversation.” AI, with all its potential, should be a valuable tool for making our lives better. It will not become destructive if we follow rules designed to protect ourselves.

At AIWS Conference 2018, MDI also introduced its partnership with the AI World Conference & Expo (including The AI Trends). The partnership has the aim of developing, measuring and tracking the progress of ethical AI policy-making and solution-adoption among governments and corporations.

A resolution to ban killer robots has been passed by the European Parliament

The European Parliament recently passed a resolution calling for a ban on killer robots and called on Member States to adopt it to secure humanity’s future. On September 12, 2018, 82% of votes were in favour of an international ban on lethal autonomous weapon systems (LAWS).

The resolution called for an urgent, legally binding instrument to prohibit autonomous weapons. The call for negotiations came after United Nations discussions in which nations could not reach a conclusion on whether or not to ban LAWS.

Many letters signed by AI researchers and scientists around the world support the prohibition of LAWS.

Two sections of the resolution stated:

“Having regard to the open letter of July 2015 signed by over 3,000 artificial intelligence and robotics researchers and that of 21 August 2017 signed by 116 founders of leading robotics and artificial intelligence companies warning about lethal autonomous weapon systems, and the letter by 240 tech organizations and 3,089 individuals pledging never to develop, produce or use lethal autonomous weapon systems,” and

“Whereas in August 2017, 116 founders of leading international robotics and artificial intelligence companies sent an open letter to the UN calling on governments to ‘prevent an arms race in these weapons’ and ‘to avoid the destabilising effects of these technologies.’”

This is remarkable progress for AI developers. It is notable that scientists, including members of MDI’s AIWS Standards and Practice Committee, are paying more attention to the ethics of AI, and this has been recognized by the European Parliament. The risk of an arms race will diminish as nations find a common voice.