The Explosion of Digital Uncertainty

Artificial Intelligence is the new threat the world is now contemplating, but this is only the beginning

Recent advances in Generative Artificial Intelligence (AI) have captured the imagination of the public, businesses and governments alike. The Government of India has also, very recently, released a comprehensive report on the opportunities afforded by this current wave of AI. Leaders of the IT industry in India are almost certain that this wave of AI will lead to fundamental changes in the skills landscape and, implicitly, in the underlying threats and dangers.

 

Scant understanding of the implications

Concurrently, there is an exponential explosion of digital uncertainty. Few are able to fully comprehend the nature of the new threat, the likes of which have not been witnessed in past decades, if not centuries. Few also realize the grave implications of what it means to have our lives and our economies run on what may be described as fertile digital topsoil. Even fewer realize the kind of intrinsic problems that result from this.

It is oft-repeated that digital infrastructure is built on layers upon layers of omniscient machine intelligence, human-coded software abstractions, and undependable hardware components. Each of these layers interconnects through complex and deeply embedded protocols. The narrow aperture of understanding of such aspects means that the vast majority of people are ignorant of the implications. Even less understood is that complexity of this kind begets vulnerabilities.

While cyber has, no doubt, attracted a measure of attention, there is little, or no true, understanding of the nature of today’s cognitive warfare. Cognitive warfare truly ranks alongside other elements of modern warfare such as the maritime, air and space domains. Cognitive warfare puts a premium on sophisticated techniques aimed at destabilizing institutions, especially governments, and at manipulating, among other things, the news media through powerful non-state actors. It entails the art of using technological tools to alter the cognition of human targets, who are often unaware of such attempts. The end result could be a loss of trust, apart from breaches of confidentiality and a loss of governance capabilities. Even more dangerous is that it could alter a population’s behavior using sophisticated psychological techniques of manipulation.

Given the maze of emerging technologies, both businesses and governments today confront an Armageddon of sorts. The methods employed are highly insidious. For example, with almost a third of companies in the more advanced countries of the world investing more in intangible assets than in physical ones, they are putting themselves directly at risk from AI. Another estimate is that with over 50% of the market value of the top 500 companies sitting in intangibles, they too are deeply vulnerable. As firms, large and small, spend billions of dollars to migrate to the Cloud, and more and more sensors constantly send out sensitive information, the risks rise in geometric progression. All this portends a dark, rather than a brave, new world order.

Hence, digital uncertainty is morphing into radical uncertainty, and rather rapidly. Today, governments and government agencies are spending significant resources to undo the impact of misinformation and disinformation, but this may not be enough. There is not enough understanding of how the very nature of information is being manipulated, and of the extent to which AI drives many of these drastic transformations. All this contributes to what can only be referred to as ‘truth decay’.

 

The emergence of AGI

If AI is the grave threat that the world is currently contemplating, we are only witnessing the tip of the iceberg. As growing numbers of people — cognitively and psychologically — become dependent on digital networks, AI is able to influence many critical aspects of their thinking and functioning. What is simultaneously exhilarating and terrifying is the fact that many advances in AI are now being birthed by the machine itself. Sooner rather than later, we will witness the emergence of Artificial General Intelligence (AGI) — Artificial Intelligence that is equal or superior to human intelligence — which will penetrate whole new sectors and replace human judgment, intuition and creativity.

The impending dawn of AGI is far more disruptive and dangerous than anything else that we have encountered thus far. There is real fear that it could alter the very fabric of nation-states, and tear apart real and imagined communities across the globe. Social and economic inequalities will rise exponentially. Social anarchy will rule the streets, as we see happening in some of the cities closest to the epicenter of technological innovation. It has an inherent capacity to flood a country with fake content masquerading as truth, and to imitate known voices with synthetic ones that sound eerily familiar. This could lead to a breakdown of trust — in what is said, read, or heard — and could overturn the trust pyramid, with catastrophic consequences.

AGI will enable highly autonomous systems that outperform humans in many areas, including economically valuable work, education, social welfare and the like. AGI systems will be able to make decisions that are unpredictable and uncontrollable, which could have unintended, often harmful, consequences. It is difficult to comprehend at this point its many manifestations, but job and economic displacements would be the initial symptoms of what could become a tsunami engulfing almost all human-related activity. Digital data could in turn be converted into digital intelligence, enlarging the scope for the disruption of entire sectors. It would exacerbate social and economic disparities.

Hence, AGI could prove to be as radical a game-changer in the world of the 21st century as the Industrial Revolution was in the 18th century. It is almost certain to lead to material shifts in the geo-political balance of power, and in a way never comprehended previously. The specter of digital colonization looms large with AGI-based power centers being based in a few specific locations. Consequently, AGI-driven disruption could precipitate the dawn of the age of digital colonialism. This would lead to a new form of exploitation, viz., data exploitation. In its most egregious form, it would entail export of raw data and import of value-added products that use this data. In short, AGI-based concentration of power would have eerie similarities to the old East India Company syndrome.

We could possibly be at the cusp of an ‘Oppenheimer Moment’, when the world is at a crossroads in the science of computing, communicating and engineering, and the ethics of a new technology whose power and potential we do not fully comprehend. Reining in, or even halting, the development of the most advanced forms of AGI, or disallowing unfettered experimentation with the technology may not be easy, but the alternative is that it has the potential to shape the nature of the world in a manner well beyond what can be anticipated. Today, AGI seems to imitate forms of reasoning with a power to approximate the way humans think. This is a new kind of arms race, but of a different kind, and it has just begun. It, perhaps, calls for more intimate collaboration between states and the technology sector, which is easier said than done.

 

The Hamas-Israel conflict

A final word. AI can be exploited and manipulated skillfully in certain situations, as was possibly the case in the current Hamas-Israel conflict, sometimes referred to as the Yom Kippur War of 2023. Israel’s massive intelligence failure is attributed by some experts to its over-reliance on AI, which was skillfully exploited by Hamas. AI depends essentially on data and algorithms, and Hamas appears to have used subterfuges to conceal its real intentions by distorting the information flowing into Israeli AI systems. Hamas, some experts claim, was thus able to blindside Israeli intelligence and the Israeli High Command. The lesson to be learnt is that over-dependence on AI, and a belief in its invincibility, could prove as catastrophic as ‘locking the gates after the horse has bolted’.

M.K. Narayanan is a former Director of the Intelligence Bureau, a former National Security Adviser, a former Governor of West Bengal, and formerly Executive Chairman of CyQureX Pvt. Ltd., a U.K.-U.S.A. cyber security joint venture.

 

 

US, Singapore to launch bilateral AI governance group

On Oct 12 in Washington, the White House said that the United States and Singapore intend to launch a bilateral artificial intelligence governance group to complement the United States’ voluntary AI commitments and a potential multilateral Code of Conduct.

It said the group will focus on advancing shared principles and deepening information exchanges for safe, trustworthy, and responsible AI innovation.

Read more at:

https://www.reuters.com/article/usa-singapore-artificialintelligence/us-singapore-to-launch-bilateral-ai-governance-group-idUSS0N3AC02S

https://www.deccanherald.com/world/us-singapore-to-launch-bilateral-ai-governance-group-2724863

 https://www.youtube.com/watch?v=dhyTwgo7NHg

 

Boston Global Forum contributed to the Riga Conference 2023 the BGF Special Report “How to Govern in the Age of Global Tension,” read and download here:

https://rigaconference.lv/publications/

Global Enlightenment Leaders Thomas Patterson, Nazli Choucri, Alex Pentland, David Silbersweig congratulate Amma

Global Enlightenment Leaders, including esteemed professors Thomas Patterson, Nazli Choucri, Alex Pentland, and David Silbersweig, extend their heartfelt congratulations to Amma for her remarkable achievement as the recipient of the 2023 World Leader for Peace and Security Award. Their collective admiration for Amma’s tireless efforts in fostering compassion, humanitarian values, and global unity is profound. Amma’s legacy of selfless service and her dedication to a more harmonious world resonate deeply with these distinguished leaders. Their recognition underscores the importance of Amma’s spiritual and humanitarian values in shaping a brighter future for humanity, where technology and enlightenment intersect to create a more compassionate and enlightened world.

 

Nazli Choucri:

https://www.youtube.com/watch?v=jm45zKm1XXQ&t=23s

Thomas Patterson:

https://www.youtube.com/watch?v=OIvUH-IOq0U&t=59s

Alex Pentland:

https://www.youtube.com/watch?v=fU8C-XRoflY&t=7s

David Silbersweig:

https://www.youtube.com/watch?v=rAFFaUF7ty8&t=25s

 

Mata Amritanandamayi Devi (Amma) Receives 2023 World Leader for Peace and Security Award from the Boston Global Forum and Michael Dukakis Institute for Leadership and Innovation

September 27, 2023

 

Mata Amritanandamayi Devi, also known as Amma, has been honoured with the 2023 World Leader for Peace and Security Award by the Boston Global Forum (BGF) and the Michael Dukakis Institute for Leadership and Innovation (MDI). This award has been bestowed upon Amma in recognition of her remarkable contributions to global peace, spirituality, and compassion.

According to BGF and MDI, Amma’s profound spirituality, commitment to core values, and influential global leadership have earned her this esteemed accolade. Her dedication to traditional wisdom and spiritual principles perfectly aligns with the ideals of global unity and compassion. Serving as the Chair of the Civil 20 Engagement Group, comprising G20 civil society leaders during the 2023 G20 Summit in India, Amma embodied the G20 motto, ‘You are the Light.’ Her daily efforts towards fostering global unity, compassion, well-being, and a more just and sustainable Earth are unprecedented. Amma’s selflessness and commitment to justice and sustainability are helping to shape a more enlightened and compassionate world.

Former Governor of Massachusetts, Michael Dukakis, Chairman of the Boston Global Forum, expressed his admiration, stating, “We are profoundly honoured to recognize Amma as a World Leader for Peace and Security. Her tireless efforts to promote love, compassion, and global unity are truly exemplary. Amma’s legacy will continue to inspire our collective journey towards a more harmonious world, perfectly aligning with the AIWS initiative.”

The announcement of Amma being bestowed with this prestigious award was made on July 31 at the C20 Summit in Jaipur, India. Two significant events are planned to honour Amma: India’s celebration of her 70th birthday on October 3rd and a special conference at Harvard University’s Loeb House on November 2nd, during which Amma will deliver a unique global knowledge discourse. Additionally, BGF will organize tributes to Amma in a Global Entertainment Symposium on October 3rd.

Amma joins a prestigious list of past recipients of the World Leader for Peace and Security Award, including notable figures such as President of the European Commission Ursula von der Leyen, Prime Minister Shinzo Abe of Japan, Chancellor Angela Merkel of Germany, and United Nations Secretary-General Ban Ki-moon, among others.

About the Boston Global Forum (BGF): The Boston Global Forum is an international think tank based in Massachusetts, dedicated to convening leaders, scholars, policymakers, and business leaders to collaborate on critical issues such as peace, security, and economic development.

 

https://amma70.org/news/2023/09/27/mata-amritanandamayi-devi-amma-receives-2023-world-leader-for-peace-and-security-award-from-the-boston-global-forum-bgf-and-michael-dukakis-institute-for-leadership-and-innovation-mdi/

 

Building a Global Culture in the Age of AI

Introduction

In the 21st century, the world is witnessing rapid advancements in Artificial Intelligence that are reshaping societies, economies, and cultures on a global scale. As AI becomes increasingly integrated into our daily lives, it poses both opportunities and challenges for the formation of a cohesive global culture.

Boston Global Forum and Michael Dukakis Institute contribute a research proposal that aims to investigate the impact of AI on global culture, explore strategies for building a more inclusive and interconnected global culture in the age of AI, and align the research with the principles of the AI World Society.

 

Research Objectives

The primary objectives of this research, in alignment with AIWS principles, are as follows:

  1. To Understand the Impact of AI on Global Culture: Analyze how AI technologies, within the framework of AIWS, are influencing cultural practices, values, and identities worldwide.
  2. To Identify Challenges and Barriers: Investigate the challenges and barriers that AI, as part of AIWS, may pose to the development of a harmonious global culture.
  3. To Explore Strategies for Building a Global Culture: Develop preliminary strategies and recommendations for fostering a global culture that embraces the positive aspects of AI while mitigating potential negative consequences, in accordance with AIWS values.

 

Methodology

To achieve these objectives within the shortened timeframe, a focused research approach will be employed:

  1. Literature Review: Conduct a rapid review of academic literature, reports, and case studies related to AI’s impact on culture, focusing on both global and regional perspectives and incorporating AIWS principles.
  2. Surveys and Interviews: Administer expedited surveys and conduct focused interviews with individuals from diverse cultural backgrounds to gather key insights into their perceptions and experiences regarding AI and culture, considering AIWS values.
  3. Content Analysis: Analyze select cultural content, such as films, literature, art, and social media, to provide snapshots of how AI is represented and interpreted within various cultural contexts, while aligning with AIWS guidelines.
  4. Comparative Cultural Studies: Conduct rapid comparative studies of AI adoption and cultural adaptation in different regions to identify initial commonalities and differences, with a view to harmonizing cultural diversity within AIWS.

 

Expected Outcomes

This research is expected to provide preliminary insights and recommendations, in accordance with AIWS principles:

  1. Insights into AI’s Impact on Culture: Initial understanding of how AI technologies, within the framework of AIWS, are reshaping cultural practices and identities on a global scale.
  2. Identification of Challenges: Identification of potential initial challenges and barriers that AI, guided by AIWS, may pose to the development of a cohesive global culture.
  3. Preliminary Strategies for Building Global Culture: Development of initial practical strategies and recommendations for building a more inclusive and interconnected global culture in the age of AI, in alignment with AIWS principles.

 

AI just beat a human test for creativity. What does that even mean?

Large language models are getting better at mimicking human creativity. That doesn’t mean they’re actually being creative, though.

AI is getting better at passing tests designed to measure human creativity. In a study published in Scientific Reports on September 14, AI chatbots achieved higher average scores than humans in the Alternate Uses Task, a test commonly used to assess this ability.

This study will add fuel to an ongoing debate among AI researchers about what it even means for a computer to pass tests devised for humans. The findings do not necessarily indicate that AIs are developing an ability to do something uniquely human. It could just be that AIs can pass creativity tests, not that they are actually creative in the way we understand it. However, research like this might give us a better understanding of how humans and machines approach creative tasks.

Researchers started by asking three AI chatbots—OpenAI’s ChatGPT and GPT-4 as well as Copy.Ai, which is built on GPT-3—to come up with as many uses for a rope, a box, a pencil, and a candle as possible within just 30 seconds.

Their prompts instructed the large language models to come up with original and creative uses for each of the items, explaining that the quality of the ideas was more important than the quantity. Each chatbot was tested 11 times for each of the four objects. The researchers also gave 256 human participants the same instructions.
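The test setup described above can be sketched in code. The following is a minimal, hypothetical Python sketch of the Alternate Uses Task harness: the exact prompt wording and the `ask_model` stub are illustrative assumptions, not the researchers’ actual setup.

```python
# Sketch of the Alternate Uses Task (AUT) protocol as described in the article:
# four objects, 11 trials per chatbot per object, with instructions that
# originality matters more than quantity. The prompt text and model stub are
# made up for illustration.

OBJECTS = ["rope", "box", "pencil", "candle"]
TRIALS_PER_OBJECT = 11  # each chatbot was tested 11 times for each object

def aut_prompt(obj: str) -> str:
    """Build an AUT prompt stressing quality of ideas over quantity."""
    return (
        f"Come up with as many original and creative uses for a {obj} as you can. "
        "The quality of the ideas is more important than the quantity."
    )

def run_session(ask_model) -> dict:
    """Run the full trial schedule through `ask_model`, a callable prompt -> response."""
    return {
        obj: [ask_model(aut_prompt(obj)) for _ in range(TRIALS_PER_OBJECT)]
        for obj in OBJECTS
    }

# Stub standing in for a real chatbot API call:
def fake_model(prompt: str) -> str:
    return "placeholder response"

responses = run_session(fake_model)
print(sum(len(trials) for trials in responses.values()))  # 44 trials in total
```

In the study itself, 256 human participants received the same instructions, so the same schedule can be reused for the human condition.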

https://www.technologyreview.com/2023/09/14/1079465/ai-just-beat-a-human-test-for-creativity-what-does-that-even-mean/?utm_source=engagement_email&utm_medium=email&utm_campaign=wklysun&utm_term=09.17.23.nonsubs_eng&utm_content=TR35-2023-ENG&mc_cid=de2d212dd4&mc_eid=be5202f3c7

IBM Advances watsonx AI and Data Platform with Tech Preview for watsonx.governance and Planned Release of New Models and Generative AI in watsonx.data

On Sept. 7, 2023, IBM announced plans for new generative AI foundation models and enhancements coming to watsonx — its AI and data platform with a set of AI capabilities designed to help enterprises scale and accelerate the impact of AI. These enhancements include a technical preview for watsonx.governance, new generative AI data services coming to watsonx.data, and the planned integration of watsonx.ai foundation models across select software and infrastructure products.

Developers will be able to get their hands on many of these new capabilities and models September 11-14 at TechXchange, IBM’s premier technical learning event in Las Vegas.

The new IBM and third-party generative AI models coming to watsonx.ai include:

  • Granite series models: IBM plans to introduce its Granite series models later this month. The Granite models use the “Decoder” architecture, which underpins the ability of today’s large language models (LLMs) to predict the next word in a sequence, and can support enterprise NLP tasks, such as summarization, content generation and insight extraction. IBM plans to provide a list of the sources of data as well as a description of the data processing and filtering steps that were performed to produce the training data for the Granite series of models. (Planned availability Q3 2023)
  • Third-party models: IBM is now offering Meta’s Llama 2-chat 70 billion parameter model and the StarCoder LLM for code generation in watsonx.ai on IBM Cloud. (Available now)
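The “Decoder” mechanic mentioned above — predicting the next word in a sequence — can be illustrated with a toy example. This sketch is purely illustrative: the hand-written bigram table stands in for the learned probabilities of a real large language model.

```python
# Toy illustration of decoder-style generation: repeatedly predict the most
# probable next word given the word so far. A real LLM learns these
# probabilities from vast data; this made-up table just stands in for that.

NEXT_WORD = {
    "the":      {"model": 0.6, "data": 0.4},
    "model":    {"predicts": 1.0},
    "predicts": {"the": 0.5, "words": 0.5},
}

def next_word(prev: str) -> str:
    """Greedy decoding: pick the highest-probability continuation."""
    candidates = NEXT_WORD.get(prev, {})
    return max(candidates, key=candidates.get) if candidates else "<eos>"

def generate(start: str, max_len: int = 4) -> str:
    """Generate a sequence one word at a time, decoder-style."""
    words = [start]
    while len(words) < max_len:
        nxt = next_word(words[-1])
        if nxt == "<eos>":
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # "the model predicts the"
```

Tasks like summarization or content generation are, at bottom, this same next-token loop run at scale over a learned model rather than a fixed table.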

Please read the full article at: https://newsroom.ibm.com/2023-09-07-IBM-Advances-watsonx-AI-and-Data-Platform-with-Tech-Preview-for-watsonx-governance-and-Planned-Release-of-New-Models-and-Generative-AI-in-watsonx-data

The Global Enlightenment Mountain Program (GEM), pioneered by the Boston Global Forum, brings together esteemed research centers, leading university laboratories, and cutting-edge technology innovation companies from around the world.

Just as Silicon Valley has been at the forefront of innovation, GEM endeavors to reinvent and redevelop the tech hub in alignment with the values and aspirations of the AI World Society. By combining the spirit of Silicon Valley’s constant reinvention with the ideals of the Global Enlightenment movement, GEM seeks to establish a virtual ecosystem that serves as a beacon of progress, fostering transformative ideas and advancements for the benefit of humanity in the AI and Digital era.

Governor Michael Dukakis speaks at the BGF Conference on Global Enlightenment Mountain at Harvard Faculty Club, April 26, 2023

Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them

The former Googler and current Signal president on why she thinks Geoffrey Hinton’s alarmism is a distraction from more pressing threats.

Fast Company: Let’s start with your reaction to Geoffrey Hinton’s big media tour around leaving Google to warn about AI. What are you making of it so far?

Meredith Whittaker: It’s disappointing to see this autumn-years redemption tour from someone who didn’t really show up when people like Timnit [Gebru] and Meg [Mitchell] and others were taking real risks at a much earlier stage of their careers to try and stop some of the most dangerous impulses of the corporations that control the technologies we’re calling artificial intelligence.

So, there’s a bit of have-your-cake-and-eat-it-too: You get the glow of your penitence, but I didn’t see any solidarity or any action when there were people really trying to organize and do something about the harms that are happening now.

Please read full here:

https://www.fastcompany.com/90892235/researcher-meredith-whittaker-says-ais-biggest-risk-isnt-consciousness-its-the-corporations-that-control-them

 

Meredith Whittaker [Photo: Patricia De Melo Moreira/AFP/Getty Images]

 

Boston Global Forum contributed the concept of AI-Government for G7-Summit 2018 as a part of AI World Society, and AIWS was recognized by the Civil 20-G20 Communique, India July, 2023.

AI World Society introduced AIWS Assistants and Framework for Global Governance of AI at BGF High-level Conference on Global Governance of AI at Harvard University Faculty Club on April 26, 2023.

 

https://dukakis.org/innovation/the-concept-of-ai-government/

 

If ChatGPT had a brain, this is what it would look like

BY REBECCA BARKER

A new data visualization tool from Kim Albrecht reveals the unexpected way ChatGPT organizes its information.

When ChatGPT launched in November 2022, Kim Albrecht, like millions of others, created an account. Albrecht, whose research focuses on data visualization, wanted a firmer grasp on the kind of information that the artificial intelligence contained—and also what it lacked.

Over the past 10 months, Albrecht has submitted more than 1,700 prompts to ChatGPT’s application programming interface, asking the platform what knowledge it possesses. His research culminated with the launch of Artificial Worldviews, an interactive map that resembles a galaxy and contains more than 32,000 stars symbolizing answers given by the app on various topics.

Photo of Kim Albrecht

Read the full article here: https://www.fastcompany.com/90940143/if-chatgpt-had-a-brain-this-is-what-it-would-look-like?utm_source=facebook&utm_medium=social&fbclid=IwAR3i5ZEnQi8F6VbbgMacxfNg1-f_ED0-Zv-r1b5V-DeSJB4xD_aIwE_SZN4_aem_AS15dwSV0Qa4iZOyXjXWhQi0brrdVYbnL_oFyFf35_W7sRv4VolZm5-iRMLPpP97Y3ftDsmctHnN7cUWPVwq80oc

 
