Can a robot farm operate without human workers?

At an emerging autonomous farm, robots tend rows of leafy greens under the control of software named “The Brain”.

Iron Ox recently opened its production line in San Francisco. The line occupies an 8,000-square-foot hydroponic facility capable of producing 26,000 heads of leafy greens a year. The company hopes to run it without human labor, relying instead on robotic arms and movers.

Iron Ox developed software called “The Brain” to get its machines to collaborate; it watches over the farm, monitoring conditions and orchestrating robots and humans when needed.

However, human presence is still required for certain steps, such as seeding and processing the crops, though Brandon Alexander, the firm’s co-founder, looks forward to automating these steps as well. The company is pursuing automation to fill the gap left by the farming industry’s ongoing shortage of labor.

The automation of agricultural processes will also require regulatory oversight; the ethical framework for AI is something that MDI’s experts are actively researching and exploring.

BrainNet: A system that can connect three people’s thoughts

A group of researchers at the University of Washington in Seattle has successfully connected human brains in the first brain-to-brain network.

Thought-to-thought communication, once considered science fiction, has now become reality. In 2015, Andrea Stocco and his colleagues at the University of Washington used their equipment to connect people via a brain-to-brain interface. On September 29, 2018, he announced the success of the world’s first brain-to-brain network, called BrainNet. The system allows a small group to play a Tetris-like puzzle game.

The system is built on electroencephalography (EEG), which records electrical activity in the brain, and transcranial magnetic stimulation (TMS), which delivers signals into the brain. BrainNet reads a number of electrodes placed on the scalp and detects changes in the brain’s signal: watching a light flashing at 15 hertz, for instance, causes the brain to emit a signal at the same frequency, and when the light switches to 17 Hz, the brain’s signal changes accordingly.

Stocco and his team created a network that allows three people to send and receive information directly from their brains using EEG and TMS. In the experiment, the participants sat in separate rooms and could not communicate conventionally. Two of them, the senders, wore EEG caps and could see the full screen. The game is designed so that the descending block fits the row below either as-is or only after a 180-degree rotation; the senders must decide which, and broadcast that decision to the receiver. The senders control their brain signals by staring at LEDs on either side of the screen, one flashing at 15 Hz and the other at 17 Hz. The receiver, attached to both an EEG and a TMS device, can see only the upper half of the Tetris screen and the block, but not how it should be rotated; they can act only on the signals received via TMS, which say “rotate” or “do not rotate”. Because the senders can see both halves of the screen, they can determine whether to rotate and transmit the signal for the receiver to execute the action.
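The flicker-frequency encoding described above can be illustrated with a short sketch: detect which flicker frequency dominates a recorded signal and map it to a command. The sampling rate, recording length, and noise level here are illustrative assumptions, not parameters from the study.

```python
import numpy as np

def dominant_frequency(signal, sample_rate):
    """Return the frequency (in Hz) with the largest spectral power,
    ignoring the DC component."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[1 + np.argmax(spectrum[1:])]

# Simulate 2 seconds of noisy EEG-like data driven by a 15 Hz flicker.
rng = np.random.default_rng(0)
rate = 250  # Hz; a typical EEG sampling rate (assumed, not from the paper)
t = np.arange(0, 2, 1.0 / rate)
eeg = np.sin(2 * np.pi * 15 * t) + 0.5 * rng.standard_normal(t.size)

detected = dominant_frequency(eeg, rate)
# Map the detected flicker frequency to the sender's intended command.
command = "rotate" if abs(detected - 15) < abs(detected - 17) else "do not rotate"
```

With two seconds of data at 250 Hz, the frequency resolution is 0.5 Hz, so the 15 Hz and 17 Hz flickers fall into cleanly separated bins; this is the basic reason the two LED frequencies can encode a one-bit decision.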

As technology becomes more influential in our daily lives and functions, we need to avoid accidental failures, because our safety, prosperity, and more depend on it. Researchers should guarantee users’ safety by following ethical standards, which is what AIWS is doing; one of its works is the AIWS 7-layer model for technology developers.

The US is aiming for a national effort to protect cyberspace

Under the Trump administration, advisers are searching for a cybersecurity “moonshot”: a clear plan for securing the digital landscape over the next five years, a term inspired by the first US moon landing. So far, however, the effort has lacked a vision for harnessing the nation’s prowess toward that outcome.

Technology is developing at an unprecedented speed, while cybersecurity, as several recent violations have shown, is not keeping pace. “This current approach to cybersecurity isn’t working,” said Scott Charney, Vice Chairman of the President’s National Security Telecommunications Advisory Committee. “This is the beginning of a conversation.”

We can see many incidents, such as the repeated breaches of user information on Facebook and the manipulation of election systems in the US, and they show no sign of slowing down. The call for a Cyber Moonshot is therefore essential: we need to prepare for the worst that this continual threat can bring. By creating a systematic plan and cyber defenses, the moonshot can establish a baseline level of confidence and readiness for cyberattacks when they occur.

Through his leadership, Scott Charney is having a profound influence on current thinking in cybersecurity technology, policy, legal matters, and international relations. He was honored as Business Leader in Cybersecurity by BGF in December 2016.

Global Governance for Information Integrity Roundtable in Riga: Addressing the disruption of information on social media

On September 27, 2018, in Riga, a roundtable on Global Governance for Information Integrity, hosted by the WLA-CdM, took place at the Latvian Ministry of Foreign Affairs on the occasion of the 100th anniversary of Latvia. Mr. Nguyen Anh Tuan, CEO of BGF and Director of MDI, introduced the AIWS Initiative and AI-Government at the event.

According to Director Nguyen Anh Tuan, AI may be a good solution for preventing disinformation, a type of untrue communication that is purposefully spread and represented as truth to elicit a response that serves the perpetrator’s purpose. AIWS and AI-Government are initiatives of MDI, aiming to create a society in which humans and AI citizens can co-exist peacefully and AI is used for good purposes under strict control.

The Global Governance for Information Integrity Roundtable focused on the first pathway to global action: protecting the integrity of political information through global governance. A discussion between global political leaders and international experts is needed to address the issue of fake news in the information space. In an era of thriving communication, social media has had a huge influence on politics, bringing about many opportunities as well as challenges concerning the transparency and accountability of political information.

Professor Nazli Choucri on the subject of AI-Government at the AIWS Conference

On September 20, the AIWS Conference on the theme “AI-Government and AI Arms Races and Norms” took place. At the conference, Professor Nazli Choucri, Cyber Politics Director of MDI, Professor of Political Science at MIT, and member of MDI’s AIWS Standards and Practice Committee, shared her ideas on AI-Government and how to make it work.

In the context of AI’s emergence, the use of AI for government has great potential, as AI can bring a great deal of efficiency and consistency to monitoring and alignment. Aside from the benefits, however, the feasibility of AI governance poses many challenges to humans.

We have a long history of human governance; it is no longer a strange concept to people anywhere. However, cultures differ across countries, whereas AI works from the data and knowledge it has learned; hence it is extremely difficult to have one AI for every institution, as we would need a common concept for every system to make this work. Prof. Nazli Choucri emphasized two aspects of the AI world that are fundamental for the purposes of governance: data and algorithms, both of which require a huge amount of time and effort for AI to work properly and retain transparency.

From the perspective of government, several tasks need to be done well to achieve a functional government. A government needs to be regulative, extractive, distributive, responsive, and symbolic, while ensuring its people’s security. She also noted that a government must cope with the stress of the ratio between the load placed on it and its capability to perform its functions. “If we are applying AI to government, these are the generic functions. Consider the matter of rules: rules have to be made and communicated, and there have to be agencies to implement them at the operational level. The interface between AI and the government’s abilities becomes the first stage that keeps them both together,” said Prof. Choucri.

Because the internet connects everything, AI could be an essential tool for governance. However, there are limitations as well: AI is very good at analysis, targeting, and execution, but poor at interpretation and at considering consequences. Especially when it comes to malfunctions and accidental failures, people can be in great danger if the outcomes are not carefully considered. As a result, the ethics of AI is the primary focus of innovators and practitioners building AI-Government.

According to Prof. Choucri, there are some ethical imperatives of AI for government that need to be carefully considered:

  • Responsibility in Use
  • Accountability in Performance
  • Avoidance of oppression
  • Prevention of technological conflict – no AI race
  • Provision of human oversight for critical AI operations

Most important of all is improving responsiveness through constant feedback on policy, and checking excessive use of AI for control.

Experts anticipate that the threat of AI could result in a research lockdown

According to AI News, world leaders in innovation are warning of potential AI catastrophes that could lead to a lockdown of research.

Recently, the autonomous robotics industry has been developing at a remarkable speed and, at the same time, has caused considerable damage in multiple incidents. Autonomous vehicles account for a large share of these incidents, such as the casualty involving an Uber self-driving vehicle. Soon, as autonomous AI systems multiply, a great deal of responsibility for user safety will rest on researchers’ shoulders.

“If an AI disaster happens – and that would, for instance, be an autonomous car killing people due to serious bugs – then at some point AI systems are going to be locked down for development, at least in the US,” said Andrew Moore, the new head of AI at Google Cloud, speaking at the Artificial Intelligence and Global Security Initiative.

It is widely agreed that AI should not be used for military weapons; however, this seems inevitable, since “there will always be players willing to step in.” “[AI] comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world,” said Russian President Vladimir Putin.

Concerned about accidental and adversarial use of AI, Governor Michael Dukakis, Chairman of the Michael Dukakis Institute (MDI), believes a global accord is needed to ensure the rapidly growing technology is used responsibly by governments around the world. For that reason, he co-founded the Artificial Intelligence World Society, a project that aims to bring scientists, academics, government officials, and industry leaders together to keep AI a benign force serving humanity’s best interests. At the moment, MDI is developing the concept of AI-Government and the AIWS Index in Ethics as two components of AIWS.

Quantum Hegemony? China’s Ambitions and the Challenge to U.S. Innovation Leadership

“Within the past several years, Chinese researchers have achieved a track record of consistent advances in basic research and in the development of quantum technologies. The quantum ambition is intertwined with China’s national strategic objective to become a science and technology superpower. The United States must recognize the trajectory of China’s advances in these technologies and the promise of their potential military and commercial applications.” This is part of the introduction to the report “Quantum Hegemony? China’s Ambitions and the Challenge to U.S. Innovation Leadership,” published in September 2018.

In the newly published report, authors Elsa B. Kania and John Costello present the basics of quantum technology, China’s efforts in the field, and the measures the United States should pursue to preserve its technological leadership.

Having realized the strategic potential of quantum science for strengthening its economy and military, China has made the field a technological priority. Despite being a latecomer to the race, China now seems to be one step ahead in “the second quantum revolution.” In recent years, China has achieved breakthroughs in the development of quantum technologies, including quantum cryptography, communications, and computing, and has reported progress in quantum radar, sensing, imaging, metrology, and navigation.

According to the report, China’s advances in quantum science could affect the future military and strategic balance. By investing in areas such as quantum navigation, China is striving to become a true peer competitor of the U.S. on these military technological frontiers. In this context, the US needs to build upon and redouble existing efforts to protect its position. One recommendation is that the Department of Defense should consider further experimentation with these technologies to leverage its advantages in innovation.

Since technology is advancing at an unprecedented pace, people should not ignore the potential damage when it gets out of hand. A set of moral standards for developers is therefore needed to protect the safety and prosperity of humans, which is the purpose of the AIWS initiative’s establishment.

⇒ Read The Full Report Quantum Hegemony? China’s Ambitions and the Challenge to U.S. Innovation Leadership HERE

Authors: Elsa Kania, Adjunct Fellow, CNAS Technology & National Security Program; John K. Costello, Director, Office of Strategy, Policy, and Plans at the Department of Homeland Security’s (DHS) National Protection and Programs Directorate.

Facebook is making an effort to suppress false news

Since the incident surrounding the 2016 election, false news has been suppressed on Facebook, while Twitter has yet to act.

According to a recently released paper, Facebook engaged with 570 fake news sites, reaching approximately 200 million monthly engagements with these sites at the peak. Two years on, Facebook’s efforts have paid off, with more human and technological resources devoted to restricting unreliable news. With more content moderators, new offices, and AI software, fake news engagement dropped significantly, to 70 million engagements in July 2018. By contrast, Twitter’s engagement remained at 4 to 6 million from 2016 to 2018.

The study reveals a great amount of fake news, but it also indicates Facebook’s attempt to curb the trend. As MIT Technology Review notes in its review, Facebook seems to be moving the platform in the right direction.

On Global Cybersecurity Day, December 12, 2018, at Loeb House, Harvard University, BGF will discuss how the AIWS solution can address disinformation and fake news.

 

An instrument to deal with biases in AI algorithms

IBM recently released a tool called ‘Fairness 360’ that detects biases in algorithms so that the code can be adjusted.

For AI to work properly, a vast range of unbiased data is required. IBM is making a move on this bias problem, stepping in with an instrument called Fairness 360. According to an AI News article, the software is cloud-based and open source, and works with various common AI frameworks, including Watson, TensorFlow, SparkML, AWS SageMaker, and AzureML. The system searches for signs of bias in algorithms and recommends solutions to correct the problems.

Humans have natural biases, which means a developer’s bias can creep into his or her algorithm. The problem is that AI developers do not know exactly what decisions their systems will make. With this IBM tool, they can see which factors their AIs are actually using.
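The article does not detail Fairness 360’s internals, but one common check that bias-detection tools of this kind automate is the disparate-impact ratio: comparing how often a model grants a favorable outcome to an unprivileged group versus a privileged one. The sketch below uses entirely hypothetical loan-approval data to show the idea.

```python
import numpy as np

def disparate_impact(y_pred, protected):
    """Ratio of favorable-outcome rates: unprivileged group / privileged group.
    Values far below 1.0 (a common rule of thumb is < 0.8) suggest the model
    disadvantages the unprivileged group."""
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected)
    rate_unprivileged = y_pred[protected == 1].mean()
    rate_privileged = y_pred[protected == 0].mean()
    return rate_unprivileged / rate_privileged

# Hypothetical loan-approval predictions: 1 = approved, 0 = denied.
preds  = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
groups = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]  # 1 marks the unprivileged group

ratio = disparate_impact(preds, groups)
# Privileged approval rate 3/5 = 0.6, unprivileged 2/5 = 0.4, ratio ≈ 0.67,
# so this toy model would be flagged for review.
```

A tool like Fairness 360 computes many such metrics at once and ties them back to model inputs, which is what lets developers see which factors drive the disparity.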

This tool can play a vital role in helping developers ensure accountability and transparency in further technology development, which is also a long-term aim of the AIWS Initiative.