PRESS RELEASE: Dr. David Bray to Present the AI World Society Distinguished Lecture as the United Nations Academic Impact Charter Day Lecture on United Nations Charter Day

ALERT TO MEDIA:

Register in advance with: [email protected]

Executive Director of the People-Centered Internet coalition [www.peoplecentered.net], an Eisenhower Fellow and Marshall Memorial Fellow, a World Economic Forum Young Global Leader, and one of the top “24 Americans Who Are Changing the World under 40,” Dr. David Bray has been named to deliver a future-focused AI World Society Distinguished Lecture at the United Nations Headquarters on United Nations Charter Day, June 26, 2019.

This AI World Society Distinguished Lecture has been named the United Nations Academic Impact Charter Day Lecture.

The Lecture, organized by the Boston Global Forum and United Nations Academic Impact (UNAI), will be held at the United Nations Headquarters in New York City on United Nations Charter Day, June 26, 2019. Dr. David Bray will deliver the keynote address, “Artificial Intelligence, the Internet and the Future of Data: Where Will We Be in 2045?”, looking toward 2045: rapid technological change, global questions of governance, and the future of human co-existence.

The AI World Society is honored to have Dr. Bray deliver this Distinguished Lecture. Since 2017, Dr. Bray has served as Executive Director for the People-Centered Internet coalition co-founded by Vint Cerf and focused on providing support and expertise for community-focused projects that measurably improve people’s lives using the Internet. Dr. Bray is both a World Economic Forum Young Global Leader and a Faculty Member for Singularity University focused on Impact and Disruption. He also is Chief Strategy Officer for the advanced geospatial company MapLarge and serves on the advisory boards of companies espousing human-centric solutions in a rapidly changing world. He also is a member of the Social Data Science Advisory Board at the Oxford Internet Institute, University of Oxford.

Dr. Bray’s keynote address will explore how advances in the Internet, artificial intelligence, and data technologies transform communities and societies. By 2045, the United Nations will be 100 years old, and this distinguished lecture will consider what changes may have occurred in the world and human societies by then.

Business Insider named Dr. Bray one of the top “24 Americans Who Are Changing the World under 40,” and he is a Senior Fellow with the Institute for Human-Machine Cognition. He has served in a variety of leadership roles in turbulent environments, including bioterrorism preparedness and response from 2000 to 2005, humanitarian efforts in 2009, Executive Director for a bipartisan National Commission on research and development in 2013, and as a non-partisan Senior Executive, receiving the global CIO 100 Award in both 2015 and 2017. He was also named an Eisenhower Fellow and a Marshall Memorial Fellow to Europe, focused on Trans-Atlantic relations. Dr. Bray is a member of the AI World Society Standards and Practice Committee.

Maher Nasser, Director of the Outreach Division of the United Nations, will moderate.

Dr. Bray’s talk will be followed by reflections from discussants and a larger conversation with the audience. The invited discussants include:

  • Fabrizio Hochschild, United Nations Under Secretary-General and Special Adviser on the Preparations for the Commemoration of the Seventy-Fifth Anniversary of the United Nations
  • David Silbersweig, Stanley Cobb Professor of Psychiatry at Harvard Medical School
  • Nam Pham, Department of Business Development and International Trade, Commonwealth of Massachusetts.

The AI World Society Distinguished Lectures

The Boston Global Forum (BGF) and the Michael Dukakis Institute (MDI) for Leadership and Innovation organize the AI World Society (AIWS) Distinguished Lecture to honor people who have made outstanding contributions in AI that are associated with fostering a set of norms and best practices for the development, management, and uses of AI so that this technology is safe, humane, and beneficial to society.

The AIWS Distinguished Lectures focus on the ideas and visions that brought the honorees to their current positions of achievement and highlight actions needed to help shape a better world in the future. The lectures are retained as part of the historical records at AIWS House, published in an e-book, and featured in a special section of Shaping Futures Magazine. BGF and MDI promote work on a 7-layer model for AI and society [https://bostonglobalforum.org/wp-content/uploads/The-BGF-G7-Summit-Report.pdf].

About the Boston Global Forum

Based in Boston, Massachusetts, BGF was co-founded by Governor Michael Dukakis and professors and scholars from Harvard University to bring together the world’s thought leaders and experts in open public forums to discuss and illuminate the most critical issues impacting the world at large.

BGF’s principal mission is to provide an interactive and collaborative world forum for identifying and developing action-based solutions to our most profound problems. Its method is to host gatherings of thought leaders and experts to identify and explore the most pressing societal concerns and propose creative and practical solutions.

FAKE NEWS NO MORE?

Estonian President Toomas Hendrik Ilves delivers keynote speech on Global Cybersecurity Day December 12, 2017 at Loeb House, Harvard, organized by Boston Global Forum.

The goal of AI is to build machines that think, act, and do things like humans. The Turing Test holds that if we cannot tell, when talking to “someone” on the other side of a wall, whether that is a real person or a machine, we have achieved artificial intelligence.

Enter fake news. With today’s AI advances, fake news can be produced by an AI program that sounds so human-like and trustworthy that it is difficult for us to detect. We need to be ready for a future that brings both the good and the bad of AI.

Researchers at the Allen Institute and the University of Washington have created “Grover,” a program that both creates convincing fake articles and detects them. The essential idea is that “machines that generate fake text leave a trace or signature in the way they predict word combinations,” and “a neural network constructed in the same way as the network that makes the fake text automatically spots those idiosyncratic artifacts.”
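The intuition can be illustrated with a toy statistical model (a minimal sketch, assuming a hand-built bigram model stands in for Grover’s neural network): text generated by a model keeps landing on that model’s own most-probable next words, so scoring how often each word matches the model’s top prediction helps separate machine text from human text.

```python
from collections import defaultdict, Counter

# Toy corpus standing in for training data (illustrative only).
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ran to the dog .").split()

# Build a bigram model: next-word counts for each word.
bigrams = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

def generate(start, n):
    """Greedy generation: always pick the model's most likely next word."""
    out = [start]
    for _ in range(n):
        nxt = bigrams[out[-1]].most_common(1)
        if not nxt:
            break
        out.append(nxt[0][0])
    return out

def top1_rate(tokens):
    """Fraction of words that match the model's own top prediction.

    Text generated greedily by this same model scores near 1.0;
    human-written text tends to score lower.
    """
    hits = total = 0
    for a, b in zip(tokens, tokens[1:]):
        pred = bigrams[a].most_common(1)
        if pred:
            total += 1
            hits += (pred[0][0] == b)
    return hits / max(total, 1)

fake = generate("the", 8)
human = "the dog ran to the mat .".split()
print(top1_rate(fake), top1_rate(human))
```

In the toy run, text produced by the model matches its own top predictions on every step, while the human-written sentence does not. Grover applies the same intuition with a large neural language model in place of bigram counts.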

The authors have decided to release all their code to the public. “At first, it would seem like keeping models like Grover private would make us safer,” they observe. But, “If generators are kept private, then there will be little recourse against adversarial attacks.” More information about the Grover project can be found here.

The AIWS has also made efforts toward addressing fake news and its impact. In 2018, its parent organizations, the Boston Global Forum and the Michael Dukakis Institute, organized the 4th Annual Global Cybersecurity Day on December 12, 2018, with an event entitled ‘AI Solve Disinformation’ to explore the current state of cyber issues and the threat posed by disinformation and fake news, as well as effective defense mechanisms against them. In 2017, the BGF also wrote a policy proposal on fake news for consideration at the 2017 G-7 Summit in Taormina, Italy.

How AI can help humans, not replace them

If you’re using Artificial Intelligence (AI) just to do something faster, you’re not harnessing its true potential.

“The real power is unlocked when used hand-in-hand with human skills and uniquely human talents,” said Michelle Sipics, senior editor at Accenture Labs and the panel moderator of an AI-and-machine-learning-focused session at this year’s Introduced by Technical.ly conference during Philly Tech Week 2019 presented by Comcast.

But as AI penetrates more and more sensitive areas of our lives, there are real pitfalls to avoid. AI can seem like an impartial decision maker, but humans are the ones who choose the data that it will learn from. Biased data leads to biased AI.

“It gives people an opportunity to experiment with the technology, get comfortable with what it can do and establish the kind of trust that you need,” she said, “so that down the road, you’re ready for those even larger, transformative opportunities.”

According to the Michael Dukakis Institute for Leadership and Innovation (MDI) and the AI World Society (AIWS), AI technology can be an essential tool and a transformative solution to serve and strengthen human rights, as well as bring huge benefits to human well-being and happiness.

The AI Breakthrough Will Require Researchers to Bury Their Hatchets

The next Artificial Intelligence (AI) breakthrough might require ending a longtime rivalry.

For years, AI researchers have generally taken one of two approaches when creating problem-solving algorithms: symbolism, or rule-based AI, which is centered on manually encoding concepts, rules, and logic into computer software; and connectionism, which is based on artificial neural networks and digital representations of the brain that develop their behavior organically by comparing many examples over time.

A difficult challenge for AI is the task of visual question-answering (VQA), in which you show the AI an image and ask it questions about the relation between the different elements present. The MIT and IBM researchers used the Neuro-Symbolic Concept Learner (NSCL) to solve VQA problems. The NSCL uses neural networks to process the image in the VQA problem and then to transform it into a tabular representation of the objects it contains. Next, it uses another neural network to parse the question and transform it into a symbolic AI program that can run on the table of information produced in the previous step.
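The pipeline’s data flow can be sketched in a few lines (a minimal sketch: in the real NSCL both the scene table and the program are produced by neural networks, so the hand-written values below are illustrative stand-ins):

```python
# Toy sketch of the neuro-symbolic VQA pipeline.

# Step 1 stand-in: the "perception network" output -- a table of objects.
scene = [
    {"shape": "cube",   "color": "red",  "size": "large"},
    {"shape": "sphere", "color": "blue", "size": "small"},
    {"shape": "cube",   "color": "blue", "size": "small"},
]

# Step 2 stand-in: the "question parser" output -- a symbolic program,
# here a list of (operation, argument) steps for
# "How many blue objects are there?"
program = [("filter", ("color", "blue")), ("count", None)]

def execute(program, scene):
    """Run a symbolic program over the table of objects."""
    result = scene
    for op, arg in program:
        if op == "filter":      # keep objects matching attribute == value
            key, value = arg
            result = [o for o in result if o[key] == value]
        elif op == "count":     # reduce the object set to a number
            result = len(result)
        elif op == "query":     # read an attribute of a single object
            result = result[0][arg]
    return result

print(execute(program, scene))  # -> 2
```

Running the program filters the table down to the two blue objects and counts them, answering “2”; swapping in a `query` step instead of `count` would read off an attribute such as a shape or color.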

The new visual question-answering system is a great example of an AI application that can relieve humans of resource constraints and arbitrary, inflexible rules and processes, a direction initiated and promoted by the AI World Society (AIWS).

AIWS STANDARDS AND PRACTICE COMMITTEE MEMBERS TO SPEAK AT AI WORLD GOVERNMENT 2019

AIWS Standards and Practice Committee members, including Governor Michael Dukakis, Professor Thomas Patterson, Professor Nazli Choucri, and Professor Marc Rotenberg, will speak at the AI World Government forum, taking place at the Ronald Reagan Building, Washington, DC, from June 24 to June 26.


AI World Government provides a comprehensive three-day forum to educate and inform public sector agencies on the strategic and tactical benefits of deploying AI and cognitive technologies. With AI technology at the forefront of our everyday lives, data-driven government services are now possible from federal, state, and local agencies. AI World Government gathers leaders from across government, technology innovation, business, and research to present the state of the practice and state of the technology to assist the public sector in leveraging advanced intelligent technologies to enhance government services.

The rapid acceleration of AI technology has led to a global call for AI governance and ethics. One of the core issues facing the adoption of AI is how to ensure that these advanced technologies are deployed in a fair and unbiased way that serves the betterment of mankind. Currently, there are hundreds of efforts underway globally, working in silos, to standardize how businesses and governments can engage in ethical AI practices. AI Governance, Big Data and Ethics brings together a key group of global thought leaders who will present the challenges, discuss solutions, and lead networking roundtables to help global government and industry leaders better collaborate with each other.

Topics Include:

  • Conference Introduction: Michael J. Dukakis, JD, Chairman, The Michael Dukakis Institute for Leadership and Innovation
  • The State of Governance, Big Data, and Ethics
  • Model of AI Government
  • Panel: Big Data Shapes World Economics, Regulation, and Services
  • Establishing Ethics Standards for Data Use by AI and Possible Regulations
  • Panel: Current State of Ethics Standards Development
  • Panel: Setting the Stage for Responsible AI: Steps Organizations Should Take
  • Panel: What Actions Can We Take Today for a Better AI Future Tomorrow?

The Michael Dukakis Institute for Leadership and Innovation (MDI) has secured a discounted registration fee for you to attend AI World Government in Washington, DC. Register today and take $200 off the current rate. To receive your special discount, enter the discount keycode 1991MDI when registering online.

The Health Care Benefits of Combining Wearables and AI

In southeast England, patients discharged from a group of hospitals serving 500,000 people are being fitted with a Wi-Fi-enabled armband that remotely monitors vital signs such as respiratory rate, oxygen levels, pulse, blood pressure, and body temperature.

Under a National Health Service pilot program that now incorporates Artificial Intelligence (AI) to analyze all that patient data in real time, hospital readmission rates are down and emergency room visits have been reduced. What’s more, the need for costly home visits has dropped by 22%. Longer term, adherence to treatment plans has increased to 96%, compared with the industry average of 50%.

The AI pilot is targeting what Harvard Business School Professor and Innosight co-founder Clay Christensen calls “non-consumption.” These are opportunity areas where consumers have a job to be done that isn’t currently addressed by an affordable or convenient solution. AI solutions in healthcare bring a huge benefit to human well-being and happiness, as highlighted and promoted in the AI Ethics report by the AI World Society (AIWS) and the Michael Dukakis Institute for Leadership and Innovation (MDI).

WHAT WILL SOCIETY LOOK LIKE WHEN AI IS EVERYWHERE?

We have seen AI applications everywhere, but the ultimate goal is general AI: a self-teaching system that can outperform humans across a wide range of disciplines. Some scientists believe it is centuries away; others talk about 30 years.

“AIs will colonize and transform the entire cosmos,” says Juergen Schmidhuber, a pioneering computer scientist based at the Dalle Molle Institute for Artificial Intelligence in Switzerland, “and they will make it intelligent.”

But what about us? Bill Gates and Elon Musk have warned about AIs either destroying the planet in a frenzied pursuit of their own goals or doing away with humans, by accident or otherwise. A recent article in Smithsonian Magazine draws some scenarios for how our society might look when AI pulls even with human intelligence.

The article suggests five scenarios for the year 2065, ten years after general AI is assumed to arrive: Superhuman Rights, Ultramodern Romance, Live Long & Prosper, Resistance Is Costly, and “Bigger” Brother.

The full (interesting) article is here.

AI World Society (AIWS) is an initiative launched by the Michael Dukakis Institute for Leadership and Innovation, striving to minimize the harm and threats to humans caused by artificial intelligence and to use its benefits to reform and renovate social and political systems in each country and the world toward honesty, integrity, compassion, responsibility, and justice.

 

THE DANGER OF AI IN CYBERATTACKS AND NUCLEAR WEAPONS

This week, the AIWS Weekly Newsletter introduces a viewpoint from Russia about the potential danger of AI when used together with cyberattacks and nuclear weapons. Pavel Sharikov, a research fellow at the Institute for U.S. and Canada Studies at the Russian Academy of Sciences, recently offered his perspective on Stratfor, an American geopolitical intelligence platform.

He considers one of the most pressing problems to be the vulnerability of nuclear command-and-control systems to attack by sophisticated cyber weaponry enhanced by AI technology. This is of special concern for U.S.-Russia relations. “The arms control regime created during the Cold War can no longer guarantee strategic stability. The existence of new technologies, such as cyber capabilities paired with AI, will only amplify this destabilizing trend,” Sharikov wrote.

Looking for a way forward, he suggested that the Anti-Ballistic Missile (ABM) Treaty could serve as a model for a similar agreement on AI cyber weapons. New norms for negotiation are urgently needed, but for now, he noted, “the current state of U.S.-Russian relations leaves little room for any agreement,” and he therefore called for the involvement of expert communities from multiple countries.

The AI World Society (AIWS) welcomes Sharikov’s suggestions (more of his opinion is published here). AIWS can serve as an effective platform for brainstorming pathways of conflict escalation and de-escalation.


Samsung deepfake AI could fabricate a video of you from a single profile pic

Software for creating deepfakes – fabricated clips that make people appear to do or say things they never did – usually requires big data sets of images to create a realistic forgery. Now Samsung has developed a new Artificial Intelligence (AI) system that can generate a fake clip from as little as one photo.

The technology can be used for fun, like bringing a classic portrait to life. The Mona Lisa, which exists solely as a single still image, was animated in three different clips to demonstrate the new technology. A Samsung artificial intelligence lab in Russia developed the technology, which was detailed in a paper earlier this week.

However, these kinds of techniques and their rapid development also create risks of misinformation, election tampering, and fraud, according to Hany Farid, a Dartmouth researcher who specializes in media forensics to root out deepfakes. When even a crudely doctored video of US Speaker of the House Nancy Pelosi can go viral on social media, deepfakes raise worries that their sophistication will make mass deception easier, since they are harder to debunk. According to the Michael Dukakis Institute for Leadership and Innovation (MDI), AI applications should be built on ethical values and transparency to avoid bias and refrain from harmful uses, especially media misinformation from fake data.