Microsoft bolsters quantum platform with gen AI, molecular simulation capabilities

The original article was published on CIO.

Microsoft has added generative artificial intelligence and other enhanced features to its quantum-computing platform as part of a larger strategy to deliver the game-changing technology to a broader range of users — in this case, the scientific community.

The company on Wednesday unveiled Generative Chemistry and Accelerated DFT, two features that expand how scientists in the chemical and materials science industries can use its Azure Quantum Elements platform to drastically shorten research timelines, it said in a blog post.

“Just as generative AI has unleashed new waves of creativity and improved productivity with collaborative tools like Copilot, we are now bringing AI and natural language processing capabilities to science,” according to the post, attributed to Jason Zander, EVP, Strategic Missions and Technologies.

Microsoft’s goal with Generative Chemistry in particular “is to integrate AI reasoning into every stage of the scientific method,” which requires the power of next-generation AI models to speed up the scientific process from hypothesis to results, he wrote.

Bolstering HPC capabilities

Microsoft launched Azure Quantum Elements late last year as a platform that combines AI and high-performance computing (HPC) to help speed up the scientific process. In January, it demonstrated with the US Department of Energy’s Pacific Northwest National Laboratory (PNNL) how the platform narrowed a data set of 32.6 million potential materials for replacing lithium in batteries down to just 18 in less than four days.

Generative Chemistry now expands the platform by allowing researchers to generate and explore novel molecules that are suited for industry-specific applications using AI models trained on hundreds of millions of compounds.

Researchers can ask Generative Chemistry for molecules with desired characteristics, or provide information about their targeted application and let the system help determine the relevant molecular properties, according to Microsoft. The feature will not only provide candidates matching their parameters, but also suggest previously unseen molecules with useful properties tuned for a specific application, whose synthesis is feasible in a reasonable number of steps.

Density Functional Theory (DFT) is a method used across a variety of molecular simulations that helps researchers to simulate and study the electronic structure of atoms, molecules and nanoparticles, as well as surfaces and interfaces. Such DFT simulations can be complex and compute-intensive to optimize and run, often requiring the use of supercomputers.

Microsoft has now added Accelerated DFT as a managed service in Azure Quantum Elements to run these simulations at what the company called “an unprecedented speed”: an order of magnitude faster than PySCF, a widely used open-source DFT code, according to the post.

Expansion of generative AI strategy

Adding generative AI to Azure Quantum Elements is a natural evolution for Microsoft’s overall AI strategy, which aims to integrate the technology into its entire product set across a broad range of users, notes Pareekh Jain, CEO of Pareekh Consulting.

While Microsoft’s Copilot was aimed at enterprise IT, “now these solutions are for the R&D community,” he observed, adding that there likely will be more user group-focused AI solutions from the company and its competitors in the future.

At the same time, Microsoft aims to keep pace with long-time competitors like Google and IBM in both quantum computing and generative AI, and its integration of the two in Azure Quantum Elements allows the company to do this.

Taken together, the new features are a boon for the scientific community and “will help in democratizing research and accelerate development of new solutions,” Jain added. “Today’s many pressing problems require new engineering R&D solutions, which Microsoft tools can accelerate,” he said.


AI and the 2024 Elections

This is an excerpt of the article published by the Allen Lab of the Ash Center (HKS).

From misinformation to AI panic, experts joined the Allen Lab’s GETTING-Plurality event to discuss the threats the burgeoning technology poses to democracy.

Balancing privacy and protection

Sandy Pentland, Professor of Media Arts and Sciences at MIT and a Boston Global Forum Board Member, focused on the foundational role of identity and reputation in mitigating online threats and establishing trust. Both Allen and Pentland referenced Taiwan as a model for balancing privacy with protection against disinformation and online crime: there, users are anonymous on digital media but verified as actual humans. Pentland noted that even crypto exchanges now require identification, which is then kept confidential.

“And so, what we have to do is we have to think about, ‘Can we do that in media?’” Pentland asked. “And the answer is pretty [much] yes. We have most of the infrastructure there already to do it.” He contended that the mechanisms used in Taiwan and in crypto exchanges offer a way to understand whom one interacts with without compromising privacy. “I would suggest that we have this sort of fairly radical principle: a complete anonymity in opposition to the ability to track down bad guys and have some sort of knowledge of who it is that we’re dealing with.”



Spiritual Values of Religions for “The Knowledge Platform for AI”

INTERRELIGIOUS CONFERENCE 2024, Rome, Italy May 31 – June 4

Center for Interreligious Dialogue – Focolare Movement

Nguyen Anh Tuan, Nguyen Phan Nguyet Minh


In the rapidly evolving landscape of artificial intelligence (AI), the need for a comprehensive knowledge platform has never been more pressing. As AI systems continue to permeate various aspects of our lives, from healthcare and finance to transportation and entertainment, a centralized repository of knowledge to inform decision-making processes is paramount. Out of this necessity, the Boston Global Forum has conceived the Knowledge Platform for AI. The Knowledge Platform can serve as an essential resource, providing a foundation that AI applications and systems can reference in order to reason critically and make informed decisions.

Rooted in ethics and standards, this platform can offer guidance and reference points for individuals navigating the complexities of artificial intelligence. Emphasizing humanity, compassion, and moral judgement, it embodies a commitment to fostering ethical practices and responsible use of AI technologies. By integrating intellectual rigor with ethical considerations, the platform empowers users to make informed decisions that align with ethical principles and societal values, promoting integrity, fairness, and accountability in AI applications across various domains.

At its core, the Knowledge Platform for AI seeks to aggregate and organize vast amounts of data, information, and expertise from diverse sources, both modern and historical. It encompasses the Social Contract for the AI Age, standard values of AI World Society (AIWS), historical data, norms, ethics, and background information from politics, science, and the economy. By consolidating this wealth of information into a single, accessible platform, AI systems can draw upon a wide range of insights to enhance their understanding and decision-making capabilities.


Integration of Spiritual Values from Various Religions into The Knowledge Platform for AI

A dimension of the Knowledge Platform is the incorporation of spiritual values from diverse religions into its development, enriching its ethical framework and enhancing its ability to guide decision-making processes. The principles and values of world religious traditions, such as Catholicism, Hinduism, Islam, Buddhism, and Judaism, can contribute to the platform:


Catholicism:

Human Dignity (Dignitatis Humanae): Catholic teaching on human dignity emphasizes the inherent dignity and worth of every human being, regardless of their background or circumstances. Everyone is made in God’s image and can grow in virtue through their own actions with grace. This value underscores the importance of respecting individual autonomy, privacy, and rights in AI decision-making processes, as well as the need to steer individuals toward a moral and ethical path.

Social Justice: Catholic social teaching, first articulated in Rerum Novarum, commits to the idea of a just society in which the needs of the marginalized and vulnerable are prioritized and all believers are equal. Integrating this value into the platform ensures that AI systems consider the broader societal implications of their actions and strive to promote equality and brotherhood among humanity.


Hinduism:

Ahimsa (Non-violence): Ahimsa is a central tenet of Hinduism, advocating non-violence and compassion towards all living beings, as all carry a spark of spiritual energy; Mahatma Gandhi was a prominent practitioner of this principle. This value encourages AI systems to prioritize peaceful and non-coercive methods in their interactions with humans, and to do no harm.

Dharma (Virtue, Duty and Righteousness): Dharma emphasizes the importance of fulfilling one’s duty and upholding righteousness in all actions. By integrating this virtue, the platform encourages AI systems to act ethically and responsibly, considering the long-term consequences of their decisions.


Islam:

Justice (Adl): Adl conceptualizes justice within the individual as having strong morals, integrity, and moderation. Incorporating this value into the platform ensures that AI systems uphold principles of fairness and impartiality, treating all individuals with dignity and respect.

Mercy/Beneficence (Rahmah): Rahmah, the root of a Name of God in Islam, is another key value, encouraging compassion and grace towards others. Divine mercy is extended to all of God’s creation. AI systems guided by this value can embody understanding and mercy, leading to more empathetic and humane interactions.


Buddhism:

Selflessness/Compassion (Karuna): Karuna is a quality to be honed on the path to enlightenment. By practicing this value, one becomes more willing to let go of worldly hostilities and sorrows. This value encourages AI systems to prioritize the well-being of individuals and communities, promoting awareness and altruism in their decision-making processes.

Wisdom (Prajna): Wisdom, or more specifically insight gained through meditation and reflection on the nature of being and phenomena, is key in Buddhism. Integrating this value into the platform ensures that AI systems make informed and judicious decisions that consider the broader implications and consequences.


Judaism:

Tikkun Olam (Repairing the World): This concept classically called for the maintenance of order and jurisdiction, but it has since been interpreted and expanded as the pursuit of social justice and righteousness: Jews have a duty both to their individual spiritual welfare and to the welfare of society at large. Incorporating this value into the platform encourages AI systems to contribute positively to societal well-being and address pressing issues facing humanity.

Laws/Principles (Halakha): Halakha refers to the body of religious laws, commandments, and traditions of Judaism. Through the lens of this tradition, which provides ethical systems and guidance for the lives of believers, AI systems can be guided by ethical considerations that prioritize integrity, honesty, and accountability.


By embracing the spiritual values of these diverse religious traditions, The Knowledge Platform for AI becomes a more holistic and moral resource, capable of guiding AI decision-making processes in a manner that upholds ethical standards and respects human values.

Superintelligence to reshape human-machine collaboration

This article was originally published in The Korea Herald by Jie Ye-eun.

As humans are increasingly associating with artificial intelligence at home and in the workplace, global tech experts gathered Thursday at this year’s EmTech Korea to discuss what these technologies might look like in the next decade.

Orchestrated by the MIT Technology Review, the annual conference, aimed at exploring the fusion of global technology trends and industries, took place at the Coex convention and exhibition center in Seoul for the first time.

In a panel discussion moderated by tech writer Karen Hao, the three panelists — Tong Zhang, a computer science professor at the University of Illinois Urbana-Champaign; Lee Moon-tae, a lab leader at LG AI Research; and Yoon Kim, a partner at Saehan Ventures — talked about the rise of superintelligent agents and the changing nature of human-machine collaboration.

A superintelligence refers to a hypothetical AI or agent that surpasses human intelligence across the most economically valuable or intellectually demanding tasks. Chatbots and AI tools such as Apple’s Siri, OpenAI’s ChatGPT and AlphaGo are often cited as precursors, though none yet meets that definition.

Tech professionals said that superintelligence agents will make humans’ lives much easier in general by doing things more accurately, efficiently, naturally and safely — but there are also limits to their capacities.

Before the panel discussion, in a session dubbed “Large Language Models and the Prospects for Generative AI,” Zhang briefly introduced types of large language models and artificial general intelligence, alongside his ideas on how they will likely change our lives in the next five to 10 years.

First, he suggested that generative AI for search will look different in the next decade, citing the example of the AI search engine Perplexity, which gives users summarized answers with attributed sources.

Artificial general intelligence refers to a type of AI that can understand, learn and apply knowledge across a broad range of tasks, much as humans can; within an AGI system, LLM performance will become increasingly important, according to the expert.

In his presentation, Zhang forecast that humans will see much more reliable AI agents in the next five years, as developers add feedback and cross-checking processes to reduce false results.

“We’ll see a lot of development in the next 10 years so that (agents) can actually understand physical work to enough extent that they can work with you. You will see the real intelligent robots and physical robots roaming around,” Zhang said.

Although human-machine collaboration could make life much more convenient, people cannot see themselves fully depending on the agents in the next decade, experts said.

“AI agents will require extra human effort to utilize data and feedback for continuous machine learning even after ten years, since large language models do not have a memory span like humans,” Zhang added.

Meanwhile, a participant from the floor asked the experts what types of efforts humans and AI should make to preserve humanity, as AI agents will likely outnumber humans in the future.

“Humans really have a challenge of trying to think about what really makes humans humans. Certainly, AI is not going to help teach us how that is. I think humans need to understand themselves in that new realm of knowledge and sphere of information. I am very confident that humans will come up with very new, very exciting, unbelievable things that AI will never be able to make,” Kim said.

(Jie Ye-eun/The Korea Herald) Karen Hao (far left), a contributing writer for The Atlantic, speaks during a discussion session, titled “The Rise of Super-Intelligent Agents,” at the EmTech Korea conference held in Seoul on Thursday. To her left are Yoon Kim, a partner at Saehan Ventures; Tong Zhang, a professor at the University of Illinois Urbana-Champaign; and Lee Moon-tae, a fundamental research leader at LG AI Research.

Governing in the Age of AI: A New Model to Transform the State

This is an excerpt of the executive summary of the report published by the Tony Blair Institute for Global Change.

With costs mounting, backlogs growing and outcomes worsening, it should be clear to every political leader that the way government runs no longer works.

Outside the public sector, a great change is underway. The combination of massive volumes of data, ubiquitous cloud and powerful processors has created a self-reinforcing feedback loop of innovation and growth.

The latest iterations of artificial-intelligence systems – generative AI such as large language models (LLMs) – are matching humans for quality and beating them on speed and cost. Knowledge workers using the GPT-4 model from OpenAI completed 12 per cent more tasks, 25 per cent quicker, with a bigger boost in productivity for less-skilled workers. Businesses using AI tools are 40 per cent more efficient and have 70 per cent higher customer and employee satisfaction than businesses that do not.

And unlike previous generations of AI systems, which had to be custom-built, generative AI is general-purpose, opening up a wide range of applications.

In the private sector, the transformation is accelerating. Leading tech companies are reportedly planning investments of more than $250 billion in chips, compute and data centres. Corporate investment in AI since 2020 is close to $1 trillion. Adoption is soaring: within nine months of launch, ChatGPT was in use in 80 per cent of Fortune 500 companies. Spending on generative AI systems by European businesses is expected to grow by 115 per cent in 2024. By 2025, Goldman Sachs expects AI investment to reach $200 billion a year globally. By 2028, analysts expect the global market for AI to exceed $1 trillion in size.

In the public sector, profound changes are now possible. Harnessing AI tools could repair the relationship between government and citizens, put public services on a new footing and unlock greater prosperity.

This prospect should be exciting in its own right, but in reality it is the only path forward. The public sector is on its knees, with large backlogs and lengthy waits for services, a demoralised, unproductive workforce and a lack of long-term thinking as policymakers go from crisis to crisis. Adopting AI in the public sector is a question of strategic prioritisation that supersedes everything else. The UK cannot be consumed by old debates when the real issue is AI.

AI could make countless tasks performed by public-sector workers every day better, faster and cheaper. It could help them to match service supply to demand, accelerate processing of planning applications or benefits claims, upgrade investigations and analysis, communicate with citizens better, collect and process information for transactional services, model and intervene in complex systems, expedite research and support tasks, manage diaries, draft notes and much, much more.

In fact, the UK government believes that up to a third of public-sector tasks could be improved with AI. Now, TBI analysis shows that, after accounting for upfront and ongoing costs, the UK stands to gain £40 billion per year in public-sector productivity improvements by embracing AI, amounting to £200 billion over a five-year forecast.

The public sector cannot afford to leave this on the table.


Elon Musk predicts AI will overtake humans to the point that ‘biological intelligence will be 1%’

This is an excerpt from the article originally published in Euronews.

Elon Musk predicts that artificial intelligence (AI) will soon surpass human intelligence, becoming so ubiquitous that “intelligence that is biological will be less than 1 per cent”.

The billionaire behind SpaceX, Neuralink, and Tesla, and the owner of the social media platform X, made the comments during an on-stage appearance on Thursday at the 27th annual Global Conference organised by the Milken Institute.

Answering questions from the audience on topics including AI, he responded that “AI might be the most important question of all”.

“The percentage of intelligence that is biological grows smaller with each passing month. Eventually, the percentage of intelligence that is biological will be less than 1 per cent,” Musk said.

Musk did not say how long this would take. For the moment, AI still has many shortcomings and requires human assistance, and vice versa.

“Biological intelligence can serve as a backstop, as a buffer of intelligence. But in percentage, almost all intelligence will be digital,” he added.


Collaboration to build “The Data Sovereignty – Knowledge Platform for AI” by David Hall, TAMP’s CEO and AKT Health’s Managing Director

David Hall presented the Data Sovereignty concept at the BGF Conference on April 30, 2024, at Harvard University’s Loeb House. His presentation introduced the Data Sovereignty – Knowledge Platform for AI, emphasizing the essential influence of AI and its impact on sectors such as healthcare and national security.

Introduction to Data Sovereignty: David Hall began by highlighting the significance of Data Sovereignty, emphasizing its role as a potential foundation of AI. He underscored the importance of understanding and asserting control over data in an era dominated by AI technologies.

The Knowledge Platform for AI: It is more than just a data repository; it is the wealth of human knowledge harnessed into a beacon for ethical AI, aligned with the Social Contract for the AI Age. The platform seeks to aggregate vast amounts of data, insights, and expertise from around the globe, supported by partnerships with esteemed institutions such as Harvard, MIT, and Stanford.

Ethical Framework and Global Collaboration: Hall outlined the platform’s high standards for AI development, ensuring alignment with ethical practices and human values. He emphasized global collaboration through seminars, conferences, and discussions to foster innovation and dialogue.

Memorandum of Understanding – Strategic Goals: Hall discussed the formalization of collaboration between BGF, AIWS, TAMP, and AKT Health, establishing a unified commitment to enhance and support the Knowledge Platform for AI.

Technological and Educational Advances: Hall highlighted technological advancements such as blockchain for data integrity and smart contracts. He also emphasized educational initiatives and upskilling programs to promote ethical AI usage across diverse sectors.

Vision for the Future: Hall concluded his talk by outlining the vision for the future, aiming to empower and enhance human capabilities without compromising values or autonomy. He called on the global community to participate actively in ethical AI development, emphasizing the importance of collective responsibility in shaping the future of AI.

AIWS: Pioneering AI Governance and New Democracy

Yasuhide Nakayama

Martin Nkafu Nkemnkia

David Lovejoy



Since its inception in November 2017, the Artificial Intelligence World Society (AIWS), founded by the Boston Global Forum (BGF), has been at the forefront of shaping the governance of artificial intelligence (AI) and fostering new models of democracy. Through collaborations with global leaders and top AI thinkers, AIWS has introduced innovative initiatives and frameworks aimed at harnessing AI for the betterment of society.

World Leader in AI World Society Award

Since 2018, the Boston Global Forum has annually recognized and honored distinguished leaders for their exemplary leadership and contributions in promoting AI for a better world with the World Leader in AIWS Award.

BGF organized the conference “Governing the Future: AI, Democracy, and Humanity” on April 30, 2024, at Harvard University’s Loeb House to honor Alondra Nelson, former Deputy Assistant to President Joe Biden and former Acting Director of the White House Office of Science and Technology Policy, with the 2024 World Leader in AIWS Award. The conference served as a platform to recognize Nelson’s exceptional contributions to AI governance and to explore the intersection of AI, democracy, and humanity.

The prestigious award recognizes Dr. Nelson’s outstanding contributions to shaping global public policy, the governance of artificial intelligence (AI), and our understanding of the societal dimensions of AI development and deployment.

During her tenure at the White House Office of Science and Technology Policy (OSTP), Dr. Nelson spearheaded the development of the “Blueprint for an AI Bill of Rights,” which informed President Biden’s historic executive order on artificial intelligence and was enacted into policy for the federal government. During her leadership at OSTP, she also provided guidance to expand taxpayer access to federally funded research, served as an inaugural member of the Biden Cancer Cabinet, strengthened evidence-based policymaking, and galvanized a multisector strategy to advance equity and excellence in STEM.

Prior recipients of the AIWS World Leader Award have included Ambassador Amandeep Singh Gill, the United Nations Envoy on Technology; Stavros Lambrinidis, European Union Ambassador to the United States; Internet pioneer Vint Cerf; and Sanae Takaichi, the Japanese Minister of State for Economic Security.

AIWS Contributes AI Governance to G7 Summit from 2018: Next Generation Democracy

At the BGF-G7 Summit in 2018, AIWS unveiled the groundbreaking AIWS 7-Layer Model to Build Next Generation Democracy. This model serves as a roadmap for a future where AI is leveraged to enhance creativity, innovation, tolerance, democracy, and individual rights. It emphasizes the role of AI in assisting government decision-making and empowering citizens to participate in governance.

The following year, at the AIWS-G7 Summit Initiative in 2019, held at Harvard University’s Loeb House, AIWS introduced the concept of The Next Generation Democracy – AI World Society. This initiative envisions a society where AI fosters transparency, inclusivity, and citizen engagement in governance. It comprises three key components: AI-Government, AI-Citizen, and AI Government Index, aimed at promoting accountability and efficiency in government operations.

Collaboration with Club de Madrid from 2019: the Social Contract for the AI Age and AIWS Model

In 2020, the collaboration between Club de Madrid and BGF resulted in the Policy Lab ‘Transatlantic Approaches on Digital Governance,’ where former heads of state and government, along with experts, discussed global policies for managing digital technologies and AI. The lab announced the Social Contract for the AI Age and supported AI World Society, urging world leaders to endorse and implement it. 

Continuing its efforts, Club de Madrid and BGF launched a five-year initiative in 2021 to develop a human-centered agenda for digital transformation and governance. This initiative aims to build global consensus around principles that prioritize human rights and ethical considerations in digital societies.

At the Policy Lab ‘Fundamental Rights in AI & Digital Societies’ in 2021, the Global Alliance for Digital Governance (GADG) was established, fostering collaboration between BGF and the World Leadership Alliance-Club de Madrid to address the ethical and legal challenges posed by AI.

In 2020, the Boston Global Forum and the Riga Conferences collaborated on platforms for discussing AI governance and its societal implications. Notably, the publication of the policy brief “Social Contract for the Artificial Intelligence Age” at the Riga Conference 2020 underscored the importance of safety, security, and sustainability in the AI era.

Collaboration with the United Nations in AI Governance from 2019: Remaking the World – Toward an Age of Global Enlightenment

In 2021, the Boston Global Forum and the Riga Conference 2021 collaborated to organize a special event: “Remaking the World – The Second Age of Enlightenment – The United Nations 2045”. This discussion was a notable highlight of the Rīga Conference 2021 and was co-organized by the Latvian Transatlantic Organisation and the Boston Global Forum.

The event aimed to commemorate the United Nations Centennial initiative, launched by the Boston Global Forum (BGF) and the United Nations Academic Impact (UNAI) in 2019. This initiative was established as the United Nations prepared to mark its 75th anniversary in the following year, with a vision to anticipate the world and the United Nations in 2045, the year of the world organization’s centennial.

Key messages from the event highlighted the core concepts outlined in the book “Remaking the World: Toward an Age of Global Enlightenment”. These concepts include the idea of a Social Contract for the Artificial Intelligence (AI) age, a framework for an AI international accord, an ecosystem for the “AI World Society” (AIWS), and a community innovation economy. The event brought together some of the finest minds of our times to envision a future where AI is harnessed to foster global enlightenment and societal progress.

The Riga Conference 2023 published the Boston Global Forum Special Report “How to Govern AI in an Age of Global Tension” as the Policy Brief of the Riga Conference 2023.

C20-G20 Summit 2023 in India: Recognize AI World Society and the Social Contract for the AI Age

At the C20-G20 Summit in India in 2023, the recognition of the Social Contract for the AI Age and AI World Society highlighted the global significance of AI governance and the need for international cooperation.

Through its pioneering initiatives and collaborations, AIWS continues to drive conversations and actions toward a future where AI is harnessed responsibly to benefit humanity and advance democratic principles on a global scale.      

President of European Commission Ursula von der Leyen highlighted in her speech at the Boston Global Forum Conference on December 12, 2020: “It is such an honour to be here with you today. At the Boston Global Forum and Michael Dukakis Institute for Leadership and Innovation, you are at the forefront of research and debate. And you definitely work on some of the world’s most pressing issues. You drive the discussion on digital policy and how a human-centric approach on AI could look like. This is an issue whose importance simply cannot be overestimated.”

Sovereign GPTs: Aligning Values in AI for Development

This is an excerpt from the full article published on the UNCTAD website.

Governments have taken note. In addition to new AI regulation and policy interventions (such as the EU AI Act and the US AI Executive Order), a series of sovereign AIs is arising, whose development and use are under a nation’s control. Countries across Europe and Asia are now allocating billions of US dollars to the creation of large-scale AI systems that are independent of Big Tech (Canada has recently weighed in, though it is unclear whether it will pursue a fully sovereign AI).

Generative AI is the general-purpose AI technology that underpins not only LLMs, but also the ability to create “deep fake” videos and other content that can potentially reshape economies or alter the course of public opinion and the numerous elections scheduled for 2024. A number of countries have begun to say, effectively, ‘we need to better manage these systemic risks’, as well as ‘we want to embed our local values, our national cultures into the GPTs that we use’.

Governments of multiple nations have expressed concern over unconscious bias, on the one hand, and unilateral design decisions, on the other, originating from Silicon Valley-based companies and influencing the performance and outputs of commercially available GPTs. The creation of sovereign GPTs is a trend that potentially represents a democratization of AI and a more nuanced understanding of the effects of digital technology on society.

Aldo Faisal, Professor of AI and Neuroscience, Imperial College London
Faculty of Engineering, Department of Bioengineering
Senior Lecturer in Neurotechnology
Dr Faisal is a Senior Lecturer in Neurotechnology (US equivalent: Associate Professor, tenured) jointly at the Dept. of Bioengineering and the Dept. of Computing at Imperial College London. He is also Associate Group Head at the MRC Clinical Sciences Center (Hammersmith Hospital) and is affiliated faculty at the Gatsby Computational Neuroscience Unit (University College London).
Note: It is strongly recommended to visit the Faisal Lab’s web pages (www.FaisalLab.com) as that site is most up to date in terms of news and publications.
Dr Faisal’s lab combines cross-disciplinary computational and experimental approaches to investigate how the brain and its neural circuits are designed to learn and control goal-directed movements. The neuroscientific findings enable the targeted development of novel technology for clinical and research applications (Neurotechnology) for a variety of neurological/motor disorders and amputees. Key techinques include on the computational side machine learning & stochastic modelling techniques and experimentally we use psychophysics, eyetracking & kinematics, non-invasive electrophysiolog, robotic (with Brain-Computer Interfaces) and funcational imaging. We have featured regularly across global media (such as BBC Today Show, CNN, WIRED, TED, TEDx, New Scientist, Guardian, Times of India, etc.). In the acemic year 2013/2014 the lab comprised 15 Post-Docs and PhD students (see www.FaisalLab.com for more).