Building the Economic and Political Framework for AIWS – The Age of Global Enlightenment

From Boston, the catalyst of the American Revolution, the Boston Global Forum and its contributors have discussed and built the Economic and Political Framework for the AI Age.

They have held meetings at Harvard University and MIT to propose ideas for the framework:

  • An Innovative and Virtuous Economy Empowered by AI and Data
  • The pursuit of an advanced economic and political framework for the AIWS envisions a creative, innovative and virtuous economy, bolstered by the utilization of AI and data. This framework seeks to provide equal opportunities for all citizens to become creators and innovators, fostering a society where self-reliance takes precedence over dependence on social welfare. Even retirees are encouraged to keep contributing to the betterment of society.
  • Central to this vision is the establishment of a market that directly connects producers and creators with consumers, minimizing intermediaries and ensuring transparency. Such an arrangement prevents manipulation by brokerages, dealers, or lobbyists, guaranteeing a fair playing field for all participants, especially in finance.
  • An essential aspect of the proposed framework is the equilibrium of economic power, effectively curbing the influence of totalitarian and autocratic states. By preventing leaders and autocratic regimes from exploiting national resources to the detriment of humanity, the framework safeguards the common good.
  • Moreover, adherence to the cultural values of AIWS is crucial within this economy. Companies must uphold these principles without compromising or supporting totalitarian regimes or autocratic leaders. In doing so, the framework fosters an environment that rewards creators and contributors, promoting a compassionate, peaceful, humane, civilized, and benevolent society. Individuals who actively contribute to such a society are entitled to a fulfilling life, both materially and spiritually.
  • Robust cybersecurity measures, internet safety, data protection, and creativity promotion are integral to the framework, benefiting all organizations and citizens alike.
Mount McKinley, Denali National Park

From July 2023, BGF will continue to build the model for the AIWS Economy – the Global Enlightenment Economy – and encourage its adoption in practice.

 

Thomas Kehler for Towards Data Science: “A Framework for a Human-Centered AI Based on the Laws of Nature”

Read the paper in its full form at Towards Data Science.

A Framework for a Human-Centered AI Based on the Laws of Nature

Integrating natural and artificial intelligence

By Tom Kehler

Presented to the Boston Global Forum High-level Conference “AI Assistant Regulation Summit: Fostering a Tech Enlightenment Economy Alliance” at the Harvard Faculty Club. The paper presented here is an expansion of that talk.

We are at many crossroads. The one in sharp view in recent months is AI, which has produced a spectrum of responses from terror to glee. No doubt you have by now experienced the delight of playing with ChatGPT. Many have joined the rush to adoption. Others suggest this current expression of AI is yet another race to the bottom where we throw caution to the wind because we must; everyone else is doing it, so we must do so as well. Aggregate bad behavior that no one wants, but that exists because no one knows how to build trust, is a shadow that accompanies technological advances. Technology is not the enemy. Failure to collaborate and come together in trust leads to reckless adoption that could cause harm.

In this brief overview, I hope to provide you with a framework for an AI future that builds trust and reduces risk.

That framework was first unveiled by the founders of science and the scientific method, dating back to the Enlightenment. The scientific method formed the foundation for building trusted knowledge — a collaborative process totally dependent on collective human intelligence and trust in the emergent elegance found in nature.

We recommend employing the power of collective human intelligence and the intelligence built into the physics of living systems to guide us forward.

For nearly 70 years, the scientific pursuits of AI centered on building handcrafted models of the natural intelligence and cognitive skills of humans using the tools of symbolic representation and reasoning. These systems were capable of explaining how they solved a problem; trust was built by observing their reasoning.

For the past 20 years, statistical learning from the explosion of data offered by the Internet has yielded spectacular results — from self-driving cars to the Large Language Models that bring us together today. In particular, transformer deep learning architectures unlocked generative AI’s powerful potential and created the impressive results we see today.

The concern that brings us here today relates to three fundamental problems. For the first time in the history of information technology, we are not enforcing the concept of data provenance; thus, these tremendous generative powers can become persuasive purveyors of misinformation and undermine trust in knowledge. The second concern is explainability — the systems are black boxes. The third concern is that they lack a sense of context.

These three points of weakness run counter to the three pillars of the scientific method: citation, reproducibility, and contextualization of results. What do we do?

Judea Pearl says, ‘You are smarter than your data.’ We agree. The human capacity for counterfactual thinking is far more powerful than anything we can learn from correlative patterns in our past data.

Large Language Models, and deep learning architectures in general, develop models of intelligent behavior based on pattern recognition and correlation models learned from data. Generative output from LLMs employs humans in the loop to filter and train the results. The risk remains, however, that generated content containing misinformation may not be caught in the filtering process.

Figure 1 (Image by author)

Five years ago, in an MIT Technology Review interview, one of the fathers of deep learning, Yoshua Bengio, stated:

“I think we need to consider the hard challenges of AI and not be satisfied with short-term, incremental advances. I’m not saying I want to forget deep learning. On the contrary, I want to build on it. But we need to be able to extend it to do things like reasoning, learning causality, and exploring the world in order to learn and acquire information.”

It is extremely unlikely that current models based on correlating patterns in historical data capture the complexity of the human brain’s abilities. The imaginative power of the human brain and its ability to generate causal models from experience must be engaged as an integral part of future AI models. We propose an approach that incorporates human collective intelligence and a model of the human brain.

Larry Page, Sergey Brin, and Terry Winograd found that citation indexing could lead to a scalable way to order information on the web. The PageRank algorithm brought order to the web, and the mathematics of citation indexing brings order to understanding information sharing in human collaboration.
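To make the citation-indexing idea concrete, here is a minimal sketch of PageRank-style power iteration in Python. The link graph, damping factor, and iteration count are illustrative assumptions rather than details from any deployed system; the point is simply that a page’s rank is built from the ranks of the pages that cite it.

```python
# Minimal PageRank sketch: rank pages by the rank flowing in from the
# pages that cite (link to) them. The link graph below is hypothetical.
links = {
    "A": ["B", "C"],   # page A cites B and C
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # power iteration until the ranks stabilize
    new_rank = {}
    for p in pages:
        # sum the rank contributed by every page that cites p
        incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new_rank[p] = (1 - damping) / len(pages) + damping * incoming
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))  # highest-ranked pages first
```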

Figure 2 (Image by author)

A next generation of AI that integrates human collective reasoning, developed in the past eight years, uses a citation indexing approach as a knowledge discovery process. It allows knowledge discovery at scale, supporting citation, reproducibility, and contextualization. We propose this as part of a framework going forward.

Collective reasoning seeks to learn a community or group’s aggregated preferences and beliefs about a forecasted outcome. Will a product launch create the results we want? If we change our work-from-home policy, will we increase or decrease productivity? What policy for using ChatGPT and LLMs will be best for our organization? All of these questions require learning a group’s ‘collective mind’ about the predicted outcome. The collective reasoning process employs AI technology to learn a model of the collective mind. The process is single-blind, reducing bias. The system was tested for four years on groups of 20 to 30 experts and investors predicting startup success, and its predictions were more than 80% accurate. Those beliefs and predictions are mapped into collective knowledge models — Bayesian Belief Networks.
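As a rough illustration of how individual beliefs might be combined into a collective estimate, the Python sketch below uses simple equal-weight log-odds pooling. It is a stand-in for, not a description of, the Bayesian Belief Network models mentioned above; the expert estimates are hypothetical.

```python
import math

# Hypothetical probability estimates from a blinded panel of experts that a
# forecasted outcome (e.g., "the startup succeeds") will occur.
estimates = [0.70, 0.55, 0.80, 0.65, 0.60]

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

# Equal-weight log-odds pooling: average each expert's evidence in log-odds
# space, then map the average back to a probability.
collective = inv_logit(sum(logit(p) for p in estimates) / len(estimates))
print(f"collective belief that the outcome occurs: {collective:.2f}")
```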

We can embed the critical elements of the scientific knowledge discovery process in how we co-create or collaborate to solve complex problems. Rather than have AI undermine trust in knowledge, we propose using AI to learn collective knowledge models, causal models that retain provenance, explainability, and context. This is a critical component of a new enlightenment — bringing the scientific method to collaboration.

Collective reasoning allows learning the intentions of a group. An agent-based simulation is useful in forecasting the impact of a proposed solution. Synthetic models of populations based on public data allow scaling and forecasting the impact of co-created solutions, and we propose that as part of the framework. One of the partner companies in this initiative has built a significant capability to simulate impact at scale, applying it to the social implications of disease propagation.

What about the foundation of AI going forward? What have we learned in the nearly seven decades since the summer of 1956, when AI was born? The first few decades developed the components that form the current AI landscape. The mathematics of cooperative phenomena and the physics of magnetism play an exciting role in linking it all together. Hopfield, in 1982, demonstrated that the emergent collective computational capabilities of artificial neural networks map directly onto the mathematical physics of spin glasses. The same mathematics of cooperative phenomena describes the emergence of order out of chaos, as shown in the murmuration-of-starlings photo at the beginning of this article.
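Hopfield’s correspondence can be stated in one formula: the network’s energy function has the same form as an Ising spin-glass Hamiltonian, so stored patterns sit at energy minima. The statement below is the standard textbook form, added here for clarity rather than quoted from the talk.

```latex
% Hopfield network energy for binary states s_i in {-1, +1}:
E(\mathbf{s}) = -\tfrac{1}{2}\sum_{i \neq j} w_{ij}\, s_i s_j - \sum_i \theta_i s_i
% This is the Hamiltonian of an Ising spin glass with couplings w_{ij};
% asynchronous updates only ever lower E, so stored patterns are local minima.
```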

Recently, Lin, Tegmark, and Rolnick at MIT showed that the reason deep and cheap learning work so well is tied to the laws of physics. Bayesian learning can be reformulated in terms of a fundamental construct used in quantum and classical physics — the Hamiltonian. The roots of AI in natural laws should be an explicit focus of future AI development.
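The connection can be sketched explicitly: writing probabilities in Boltzmann form turns Bayesian inference into addition of Hamiltonians. The identities below are a generic paraphrase of that idea, not the exact formulation in the Lin, Tegmark, and Rolnick paper.

```latex
% Define a Hamiltonian as a negative log-probability:
H_y(\mathbf{x}) \equiv -\ln p(\mathbf{x} \mid y)
\quad\Longrightarrow\quad
p(\mathbf{x} \mid y) = \frac{e^{-H_y(\mathbf{x})}}{Z_y}
% Bayes' rule p(x|y) \propto p(y|x)\,p(x) then becomes a sum of energies:
H_{\text{posterior}}(\mathbf{x}) = H_{\text{likelihood}}(\mathbf{x}) + H_{\text{prior}}(\mathbf{x}) + \text{const.}
```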

Central to it all is learning order out of disorder. A new wave of brain studies takes learning at the order/disorder boundary toward a theory for creating living intelligent systems — the Free Energy Principle.

The FEP is a framework based on Bayesian learning. The brain is thought of as a Bayesian probability machine. If sensory inputs do not match expectations, a process of active inference seeks a way to minimize the uncertainty going forward. The difference between what we expect and what we sense is called surprisal and is represented as free energy (energy available for action). Finding a path with minimum free energy is equivalent to finding a path that reduces surprise (uncertainty).
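In symbols (standard free-energy-principle notation, added here for clarity rather than taken from the talk): surprisal is the negative log-probability of an observation, and variational free energy is an upper bound on surprisal that an agent can actually compute and minimize.

```latex
% Surprisal of an observation o under the generative model p:
\mathcal{S}(o) = -\ln p(o)
% Variational free energy with approximate posterior q(s) over hidden states s:
F[q, o] = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
        = D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] - \ln p(o) \;\ge\; -\ln p(o)
% Minimizing F therefore minimizes an upper bound on surprisal.
```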

An AI based on the FEP adapts locally and scales based on the variational free-energy minimization principles used throughout the physical and biological sciences. Bioform Labs is building out a Biotic AI that adapts and learns. Unlike second-generation AI, which requires massive training data sets and complicated cost functions, AI based on the physics of living systems is adaptive and lives within an ecosystem. It can be designed to respect the states that support the needs of living systems.

Technology to get started on this new framework is applicable today. We don’t need to halt the development of AI. Collective reasoning applies to questions we need to ask ourselves about the impact of AI in a wide variety of specific contexts. How will AI affect investments in technology? How will it change our hiring practices? What impact will it have on our community?

Figure 3 (Image by author)

In addition, it is possible to engage ChatGPT and LLMs in ideation while retaining a privacy boundary. Ideas streamed in from an LLM can be curated and employed in specific private contexts. Curated and contextualized contributions are managed in a patented private LLM environment.
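One way to picture the curation boundary described here is the small Python sketch below. All names (llm_propose, human_review, PrivateContext) are hypothetical placeholders, not the patented environment referenced above; the point is simply that nothing reaches the private store without passing human review.

```python
from dataclasses import dataclass, field

@dataclass
class PrivateContext:
    approved: list = field(default_factory=list)  # curated contributions only

    def add(self, idea: str, reviewer: str) -> None:
        self.approved.append({"idea": idea, "reviewer": reviewer})

def llm_propose(prompt: str) -> list:
    """Stand-in for a call to ChatGPT or another LLM; returns candidate ideas."""
    return [f"idea sketch for: {prompt}"]  # placeholder output

def human_review(idea: str) -> bool:
    """Stand-in for expert review; in practice a person accepts or rejects."""
    return True

context = PrivateContext()
for idea in llm_propose("policy for using LLMs in our organization"):
    if human_review(idea):             # the privacy/trust boundary: nothing
        context.add(idea, "expert-1")  # unreviewed reaches the private store
print(context.approved)
```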

Collective reasoning learns intentions and possible solutions. Agent-based simulations forecast impact. We no longer need to think of organizations as rigid. New types of organizational governance, based on active inference, support adaptively learning a survival path forward. We believe this framework is a vision for the future that will

Figure 4 (image by author)

We can then set out to build a new AI-empowered Enlightenment that reconnects with the collective human intelligence that led to the human progress we have enjoyed. Just as the Enlightenment freed science from the tyranny of religious authority, a new initiative, the AI-empowered Enlightenment, provides a path to collaborate and co-create solutions — freeing us from the unintended consequences of the current wave of AI frenzy.

In conclusion, Large Language Models provide highly useful capabilities that are unfolding at an impressive rate. Read the warning labels! ChatGPT itself warns users not to implicitly trust its results but to apply critical thinking, and not to expose private data. For private data, Figures 3 and 4 demonstrate a way to experiment by allowing ChatGPT or other ‘agents’ to provide inputs to a curated collaboration with human experts, with results kept in a privately managed LLM context. This approach allows exploring the generative power of LLMs while retaining control of private intellectual property.

Read the paper in its full form at Towards Data Science.

OpenAI CEO and President of South Korea call for global cooperation to regulate AI

Sam Altman, the CEO of ChatGPT maker OpenAI, used a high-profile trip to South Korea on Friday to call for coordinated international regulation of generative artificial intelligence, the technology that underpins his famous chatbot.

“As these systems get very, very powerful, that does require special concern, and it has global impact. So it also requires global cooperation,” Altman said at an event in Seoul, ahead of a meeting with South Korean President Yoon Suk Yeol.

In a Friday June 9, 2023 statement, President Yoon stressed the importance of international standards to prevent unwanted “side effects” related to platforms such as ChatGPT, saying there was a need to act “with a sense of speed.”

https://edition.cnn.com/2023/06/09/tech/korea-altman-chatgpt-ai-regulation-intl-hnk/index.html

Since 2017, the Boston Global Forum (BGF) has been a tireless pioneer in AI global governance through its AI World Society (AIWS) Initiative, working to shape the future of global security.

Now, it is urgent to take action to turn the concepts and principles of the Social Contract for the AI Age, AI-Government, AI-Citizen, the AI International Accord, the BGF Framework for AI Global Governance, and the model of AIWS into reality, especially as AI becomes more prominent in public discourse.

On May 29, 2023, Boston Global Forum announced the Statement of the Boston Global Forum in Actions to build AI Legal Framework, AI International Accord for Global Security.

Sam Altman, CEO of OpenAI, speaking at a fireside chat in Seoul on June 9, 2023

The book “Social Credit – the Warring States of China’s Emerging Data Empire” recommended by MIT Professor Nazli Choucri

“Social Credit – the Warring States of China’s Emerging Data Empire” by Vincent Brussee, Mercator Institute for China Studies, Berlin, Germany is recommended by MIT Professor Nazli Choucri, a Global Enlightenment Leader.

China’s Social Credit System has fundamentally reshaped discussions of surveillance worldwide, making its way into hundreds of media headlines and all the way into European Union legislation and the United Nations. Social Credit offers one of the first comprehensive assessments of this infamous system. It is aimed at the many experts and professionals – both scholarly and more broadly – who have to deal with its fallout on a regular basis. In a concise format, it covers the questions that have garnered the most attention worldwide: from social credit scoring and blacklists to the system’s history and theoretical foundation. Throughout, its core thesis is that, more often than not, even China’s government is at a loss as to what to do with this messy and complex initiative. The result has been fragmented and low-tech implementation in which insufficient legal safeguards can have far-reaching implications for the normal market order and for human rights.

Link: https://link.springer.com/book/10.1007/978-981-99-2189-8#about-this-book

Professor Nazli Choucri is one of the key Boston Global Forum leaders in building the AI International Accord. On May 29, 2023, BGF announced the Statement of Boston Global Forum in Actions to build AI Legal Framework, AI International Accord for Global Security.

The Global Enlightenment Mountain: GADG as Architect of the Databank

In order to unlock the full potential of data and AI economies for the prosperity of countries’ economies and societies, while upholding the standards and norms outlined in the Social Contract for the AI Age, it is crucial to have a strategic approach. Countries that embrace and abide by these principles can gain a competitive edge, driving advancements in data and AI that fuel technological innovation, economic growth and societal progress.

To realize this vision, the Global Alliance for Digital Governance (GADG) will be an architect for the Databank. With the guidance of esteemed leaders such as MIT Professor Alex Pentland and Thomas Kehler, GADG will design a robust and secure architecture for the Databank. This architecture will ensure the security, protection, and ethical utilization of data, while upholding the principles of the Social Contract for the AI Age.

GADG will monitor companies to safeguard the value of individual data, bolstering global security in AI and data. By implementing rigorous mechanisms and protocols, GADG will ensure that data is managed transparently and responsibly, fostering trust and confidence in data management practices.

Furthermore, GADG remains steadfast in its commitment to empowering every citizen to become an innovator and creator. By establishing data and AI ecosystems that prioritize transparency and responsibility, GADG will provide accessible platforms, tools, and resources. Through these means, individuals will be equipped to harness the potential of data and AI for innovation and creativity. GADG will foster an environment that encourages responsible and ethical practices in data management, innovation, applications, and participation in the marketplace.

By bringing together individuals, governments, organizations, and institutions, we can collectively work towards the establishment of a fair, inclusive, and secure AI ecosystem. We extend an invitation to all stakeholders to actively participate in shaping the AI Global Legal Framework and endorsing the principles enshrined in the AI International Accord. Through collaboration and shared responsibility, we can create an environment that harnesses the power of AI for the betterment of society, safeguarding individual rights and global security.

Let us unite our efforts in building a future where data and AI technologies are leveraged for the benefit of all, adhering to the fundamental principles of transparency, responsibility, and innovation. Global society can shape the Global Enlightenment Mountain, guided by GADG as its architect, and pave the way for the Age of Global Enlightenment.

MIT Professor Alex Pentland

Global Enlightenment Mountain coordinates with India Stack

On May 17, 2023, Sharad Sharma of the iSPIRT Foundation presented to MIT Media Lab’s Connection Science group and the Global Enlightenment Mountain (GEM) on the topic “India Stack: What is it? Its relevance to the Digital Economy.” MIT Professor Alex “Sandy” Pentland, co-founder of the Global Enlightenment Mountain (GEM), moderated.

 

Sharad highlighted:

Digital Public Infra (DPI) = India Stack + Open Networks

India Stack, Foundational Digital Public Infrastructure:

– Flow of information: low digital transaction cost, personal data & training data

– Flow of people

– Flow of money

Societal Benefits:

1) Creation of New Market Ecosystems, aka Open Networks:

– Open Credit Enablement Network (OCEN)

– Open Health Services Network (OHSN), aka UHI

– Open Network for Digital Commerce (ONDC)

2) Reform of Social Services: Direct Benefits Transfer, CoWIN

3) Delivers Inclusion

 

The Global Enlightenment Mountain (GEM) supports and coordinates with India Stack.

Mr. Sharad Sharma

 

AI Act: a step closer to the first rules on Artificial Intelligence

The Global Enlightenment Mountain (GEM) is very pleased to share this good news from the European Parliament.

AI policy is one of the four pillars of GEM. We will encourage the Global Enlightenment Community to discuss and contribute to creating rules for AI.

To ensure a human-centric and ethical development of Artificial Intelligence (AI) in Europe, MEPs endorsed new transparency and risk-management rules for AI systems.

On Thursday May 11, 2023, the Internal Market Committee and the Civil Liberties Committee adopted a draft negotiating mandate on the first ever rules for Artificial Intelligence with 84 votes in favour, 7 against and 12 abstentions. In their amendments to the Commission’s proposal, MEPs aim to ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly. They also want to have a uniform definition for AI designed to be technology-neutral, so that it can apply to the AI systems of today and tomorrow.

Risk based approach to AI – Prohibited AI practices

The rules follow a risk-based approach and establish obligations for providers and users depending on the level of risk the AI can generate. AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities or are used for social scoring (classifying people based on their social behaviour, socio-economic status, personal characteristics).

Link:

https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence

GEM Pioneering Programs

The Global Enlightenment Mountain (GEM) was officially launched on April 26, 2023, during the BGF High-level Conference, “AI Assistant Regulation Summit: Fostering a Tech Enlightenment Economy Alliance” at Harvard University Faculty Club. As part of the GEM pioneering programs, the following initiatives have been introduced:

  1. Design The Third Generation of AI and AI Assistants: The GEM is committed to advancing AI technologies that prioritize ethical and responsible use. The T-Lead, T-Kindness, and Enligh-T initiatives aim to design the third generation of AI and AI assistants that prioritize human values and ethical principles.
  2. Solve Misinformation and Disinformation Issues: The GEM recognizes the impact of misinformation and disinformation on society and aims to address these issues through technology and education. The program focuses on developing solutions to prevent the spread of false information and promote fact-based knowledge.
  3. Implement Frameworks for AI Global Governance: take concrete actions based on the BGF Framework for AI Global Governance.
  4. Building and developing the Global Enlightenment Community: The GEM recognizes the importance of collaboration and engagement among global leaders and thinkers to address the challenges and opportunities presented by AI. This program aims to build and develop the Global Enlightenment Community, an international network that fosters thought, creativity, and ethical behaviors among leaders, thinkers, scholars, innovators, artists, and business leaders worldwide.

GEM calls on and welcomes individuals, organizations, institutions, and companies to work with it on these Pioneering Programs.

Professor David Silbersweig and Nguyen Anh Tuan present the first programs of GEM

A framework for human-centered AI based on nature’s laws

Thomas Kehler, BGF High-level Conference April 26, 2023

 

AI Assistant Regulation Summit: Fostering a Tech Enlightenment Economy Alliance

 Second-Generation AI Diverges from Natural Intelligence

The first wave of AI studied how humans solved problems and built computer programs that mirrored human intelligence. Expert systems, for example, made the cover of Time Magazine in the 1980s, and many Global 2000 companies invested in AI to convert human expert knowledge into software systems. Many of the projects of that era led to techniques routinely used today; examples include complex tasks like fraud detection, configuration management of multi-platform systems, and massive-scale inventory management (airline ticket pricing). First-generation AI applications were rooted in modeling human knowledge and rules and were transparent – they could explain their thinking in forms understandable by human experts.

The second wave of AI, today’s AI, began with the advent of the global internet and the massive amounts of data it produced. Neural networks, learning patterns from data, performed massive categorization and opened the door to a new frontier of automation – including winning at chess and self-driving cars. However, learning automated methods from data without explanation led to an AI that cannot be trusted or understood, because its methods are hidden in complex mathematical models. Second-generation AI has a high potential for generating persuasive misinformation; without curation, it will lead to harm.

The Internet and AI exist primarily due to Defense Advanced Research Projects Agency (DARPA) funding. DARPA is now calling for a new wave of AI that embraces transparency through explainability. In addition, it is critical to include life-supporting ecosystems in the call for contextual adaptation. Probabilistic computing approaches, such as those proposed by the Free Energy Principle and probabilistic graphical networks, provide frameworks for incorporating mechanisms based on nature and for integrating human reasoning.

 

“You are smarter than your data. Data do not understand causes and effects; humans do.” ― Judea Pearl.

 

Guiding Principles of Third-Generation AI

  1. The guiding principle for AI must be to support a global ecosystem of life – one that enables human development and nature to thrive. Therefore, monetary and military motives must come under the values of a global system supporting human potential.
  2. Any initiative to create artificial intelligence must be co-created with human guidance and adherence to transparency, explainability, and principles guided by natural laws. The concept of a ‘transcendent’ intelligence where a non-living mechanism emerges as superior to humans is explicitly rejected.
  3. Our mission is to create an open platform that demonstrates its value through applications of co-created intelligence to support social impact initiatives that include radically inclusive forms of decentralized representative governance.

This technology allows groups of any size to collaborate on solutions to problems, creating a living, replayable model of collective knowledge. The focus is on ideas, and participants’ identities are masked, similar to the scientific review process. By prioritizing alignment on knowledge rather than identity, this approach enables group prediction of future outcomes to make decisions and create strategies for solving challenging problems (e.g., investment decisions, policy formation, breakthrough products). This technology can also filter and manage misinformation generated by Large Language Model AI resources such as ChatGPT.

 

A new AI based on biology and nature’s laws

Deep learning, the breakthrough in AI over the past decade, owes its success to its linkage to the laws of physics. It is now recognized in the AI community that deep learning works because its mathematics parallel the physics of magnetism and natural cooperative phenomena (e.g., birds flocking). Commonly used in the physics of magnetism, the concept of free energy (energy available to do work) models the natural process of determining order from disorder [1, 2]. Learning alignment (order vs. disorder) underpins the AI technology for creating knowledge models of collective human intelligence discussed in the prior section. The key point is that deep learning and current AI methods borrow heavily from the laws of statistical physics. Third-generation AI must recognize this foundation and fully embrace the laws of nature in systems design.

[1] Lin, H.W., Tegmark, M. & Rolnick, D. Why Does Deep and Cheap Learning Work So Well? J Stat Phys 168, 1223–1247 (2017). https://doi.org/10.1007/s10955-017-1836-5

[2] Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl Acad. Sci. USA 79, 2554–2558 (1982).

In parallel, a growing body of work over the past decade focuses on a new intelligence model rooted in (free) energy. The new model provides a living systems foundation for developing intelligent systems that embrace collective intelligence and life.

 

Simulating the impact on global populations

For global impact, the social implications of innovations must be modeled quickly. Technology is now available to create active AI models based on demographic, purchase, mobility, sentiment, geospatial, health, biometric, and economic data. These data can be aggregated into agents of synthetic populations.
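A minimal sketch of the synthetic-population idea is shown below, in Python. The attribute distributions and the adoption step are hypothetical placeholders; in practice the marginals would come from the public data sources listed above, and agents would carry much richer behavioral rules.

```python
import random

# Sample agent attributes from aggregate (public) marginal distributions.
# The distributions below are hypothetical placeholders, not real data.
age_bands = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
regions   = {"urban": 0.60, "rural": 0.40}

def sample(dist):
    return random.choices(list(dist), weights=list(dist.values()))[0]

def make_agent(agent_id):
    return {"id": agent_id, "age": sample(age_bands), "region": sample(regions)}

population = [make_agent(i) for i in range(1000)]

# Agents can then carry simple behavioral rules (e.g., probability of adopting
# an innovation or sharing a piece of misinformation) and be stepped forward
# in time to forecast impact at the population scale.
adopters = sum(random.random() < 0.1 for _ in population)  # toy adoption step
print(f"{adopters} of {len(population)} synthetic agents adopt in step 1")
```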

 

A global initiative for impact

The specific, immediate focus of our initiative is to provide technology solutions that contain and focus AI assistants (such as ChatGPT) on the principles of transparency, uncertainty, and information integrity. We will provide the following capabilities:

  1. The ability to curate the output of LLMs through a protective firewall curated by human expertise to minimize the impact of misinformation. This technology exists now and is deployable to any organization.
  2. The ability to simulate the social impact of innovations or misinformation campaigns with intelligent agents based on a comprehensive synthetic population model. This technology is available now.
  3. New types of reflective and intentional AI agents based on ecosystem principles that identify and maintain the integrity of generated information from autonomous systems. We intend to provide open tools and technologies based on a living physics model that offers an ecosystem for co-creating solutions to our most challenging global problems. For the first time in human history, we can build a new initiative that embraces and extends what has already been the foundation of human progress – collective human intelligence amplified by technology.

 

The mission of this new initiative:

  1. Engage with organizations and populations to collectively co-create candidate solutions for our most demanding challenges.
  2. Explore new forms of representative governance.
  3. Create organizations that adapt and learn based on AI technology rooted in natural laws.
  4. Create mini Artificial General Intelligence (agents) that observe and enforce boundaries in an emergent and generative manner.
  5. Demonstrate broad-scale impact.

 

In Support of Global Enlightenment Mountain (GEM)

Collective co-creation, and the creation of AI systems that function under constraints supporting life and the collective good of all humanity, align with the goals of GEM. The framework focuses on creating an ecosystem of shared, trusted knowledge. In this sense, we believe that AI technology should mirror the knowledge discovery procedures of the Enlightenment – the scientific process. GEM reasserts the importance of this at the current critical juncture.

Thomas Kehler