IBM and AMD Join Forces on Quantum and Next-Generation Computing

A landmark partnership was announced on August 26, 2025, as IBM and AMD revealed plans to combine their strengths in AI accelerators, quantum computing, and high-performance computing (HPC) to tackle some of the world’s most complex challenges.

The collaboration aims to create integrated computing platforms capable of advancing climate modeling, biomedical research, supply chain resilience, and national security. By merging AMD’s expertise in high-performance processors and accelerators with IBM’s leadership in quantum systems and AI research, the two companies envision a future where computing power can drive transformative breakthroughs.

This initiative aligns closely with the AI World Society (AIWS) vision of leveraging frontier technologies for humanity’s benefit. It demonstrates how public–private partnerships in advanced computing can lay the foundation for AIWS Government 24/7, ensuring that innovation is guided by ethics, responsibility, and a focus on solving global problems.

📖 Full coverage:

https://newsroom.ibm.com/2025-08-26-ibm-and-amd-join-forces-to-build-the-future-of-computing

https://www.amd.com/en/newsroom/press-releases/2025-8-26-ibm-and-amd-join-forces-to-build-the-future-o.html

USAi: America’s AI Experimentation Platform

The U.S. General Services Administration (GSA) has launched USAi, a secure generative AI evaluation suite described as the “infrastructure for America’s AI future.” This platform allows federal agencies to test and evaluate emerging AI technologies collectively, reducing duplication and ensuring efficiency.

Key features of USAi include:

  • Adopting moderate FISMA security standards to balance trust and protection.
  • Empowering agencies to select and experiment with the tools best suited to their missions.
  • Establishing regulatory guardrails to guide responsible AI adoption.

USAi reflects the White House’s AI Action Plan and signals a commitment to winning the AI race by embedding responsibility, security, and innovation into government infrastructure.

AIWS Government 24/7 Perspective: USAi embodies elements of the AIWS Government 24/7 model, where technology is designed to continuously serve citizens with transparency, security, and inclusivity. By providing a shared, trusted platform for federal experimentation, USAi points toward the kind of always-on, ethical governance infrastructure that AIWS envisions—one that can respond in real-time to challenges, protect public trust, and make government smarter, more efficient, and more humane in the Age of AI.

📖 Full article here:

Photonic Computing in AI World Society

The newly launched Report on Emerging Computing Pathways in Quantum, Neuromorphic, and Photonic Computing by the Data Security Council of India (DSCI) highlights how photonic computing could revolutionize the future of information processing. By using light instead of electricity, photonic systems promise unprecedented speed, energy efficiency, and scalability — breakthroughs essential for advancing AI, secure communications, and global digital infrastructure. For the Boston Global Forum and AIWS, photonic computing represents not only a technological leap but also a pathway to shaping a future where innovation is aligned with ethics, resilience, and the values of an AI World Society.

Please see full here: https://www.apnnews.com/report-on-emerging-computing-pathways-in-quantum-neuromorphic-and-photonic-computing-launched-by-dsci/

The Trump-OpenAI AI Partnership: A Catalyst for Government Productivity and AI Infrastructure Investment

The Trump-OpenAI AI partnership, announced earlier this month, marks a seismic shift in how the U.S. government leverages artificial intelligence to modernize operations, reduce costs, and enhance national security. This groundbreaking collaboration, formalized in August 2025, secures access to OpenAI’s ChatGPT Enterprise for all federal agencies at a symbolic cost of $1 per agency annually. The initiative aligns seamlessly with the administration’s broader AI Action Plan, which prioritizes infrastructure investment, regulatory streamlining, and global AI leadership, positioning the U.S. as a frontrunner in the AI race.

Please see full here: https://www.ainvest.com/news/trump-openai-ai-partnership-catalyst-government-productivity-ai-infrastructure-investment-2508/

U.S. Regulation of Artificial Intelligence: Charting a Path Forward

As artificial intelligence becomes a central driver of economic growth, innovation, and national security, the United States is accelerating efforts to establish a regulatory framework that safeguards public interests while fostering technological leadership. A Bloomberg Professional Insights article, published on August 8, 2025, examines how policymakers, industry leaders, and regulators are navigating the complex challenges of AI governance—from ensuring transparency and accountability to addressing bias, safety, and misuse risks.

The piece outlines current legislative proposals, agency-led initiatives, and sector-specific guidelines emerging across the federal landscape, as well as the role of international cooperation in shaping global norms. It also highlights the delicate balance between enabling innovation and mitigating harm—an issue central to the Boston Global Forum’s AIWS 7-Layer Model.

For leaders and stakeholders in the AI ecosystem, these regulatory developments will influence not only compliance requirements but also strategic positioning in the Age of AI.

https://www.bloomberg.com/professional/insights/artificial-intelligence/us-regulation-of-artificial-intelligence/

“It’s a feature, not a bug” – How journalists can spot and mitigate AI bias

Consultant and executive coach Ramaa Sharma spoke to leading figures in the newsroom AI space to identify the risks and potential solutions of AI bias.

Mitigating bias as a process

Tackling bias in AI systems is not easy. Even well-intentioned efforts have backfired. This was illustrated spectacularly in 2024 when Google’s Gemini image generator produced images of Black Nazis and Native American Vikings in what appeared to be an attempt to diversify outputs. The subsequent backlash forced Google to temporarily suspend this feature.

Incidents like this highlight how complex the problem is even for well-resourced technology companies. Earlier this year, I attended a technical workshop at the Alan Turing Institute in London, which was part of a UK government-funded project that explored approaches to mitigating bias in machine learning systems. One method suggested was for teams to take a “proactive monitoring approach” to fairness when creating new AI systems. It involves encouraging engineers and their teams to add metadata at every stage of the AI production process, including information about the questions asked and the mitigations applied to track the decisions made.
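The "proactive monitoring" approach described above, in which teams attach metadata at every stage of the AI production process, could be sketched roughly as follows. This is an illustrative sketch only; the class and field names, stages, and example entries are hypothetical and not taken from the Turing Institute project itself.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class BiasAuditEntry:
    """One metadata record attached to a stage of the AI production process."""
    stage: str               # e.g. "data collection", "labelling", "model training"
    question_asked: str      # the deliberative prompt the team considered
    mitigation_applied: str  # the decision taken in response
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class BiasAuditLog:
    """Accumulates entries across the lifecycle so decisions stay traceable."""
    entries: List[BiasAuditEntry] = field(default_factory=list)

    def record(self, stage: str, question: str, mitigation: str) -> None:
        self.entries.append(BiasAuditEntry(stage, question, mitigation))

    def for_stage(self, stage: str) -> List[BiasAuditEntry]:
        return [e for e in self.entries if e.stage == stage]

log = BiasAuditLog()
log.record("data collection",
           "Are any demographic groups underrepresented in the sample?",
           "Added targeted sampling for underrepresented age bands")
log.record("model training",
           "Does accuracy differ across protected groups?",
           "Reported per-group accuracy alongside the aggregate metric")

for entry in log.for_stage("model training"):
    print(entry.stage, "->", entry.mitigation_applied)
```

The point of such a log is not the code itself but the habit it enforces: every fairness question asked, and every mitigation chosen, leaves an auditable trail.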

The researchers identified three categories of bias that can arise across the AI lifecycle (thirty-three specific biases in total), along with deliberative prompts to help mitigate them:

  1. Statistical biases arise from flaws in how data is collected, sampled, or processed, leading to systematic errors. A common type is missing data bias, where certain groups or variables are underrepresented or absent entirely from the dataset.

Consider a health dataset that primarily includes data from men and omits women's health indicators (e.g. pregnancy-related conditions or hormonal variations): AI models trained on this dataset may fail to recognise or respond appropriately to women's health needs.

  2. Cognitive biases refer to human thinking patterns that deviate from rational judgment. When these biases influence how data is selected or interpreted during model development, they can become embedded in AI systems. One common form is confirmation bias, the tendency to seek or favour information that aligns with one’s pre-existing beliefs or worldview.

For example, a news recommender system might be designed using data curated by editors with a specific political leaning. If the system reinforces content that matches this worldview while excluding alternative perspectives, it may amplify confirmation bias in users.

  3. Social biases stem from systemic inequalities or cultural assumptions embedded in data, often reflecting historic injustices or discriminatory practices. These biases are often encoded in training datasets and perpetuate inequalities unless addressed.

For example, an AI recruitment tool trained on historical hiring data may learn to prefer male candidates for leadership roles if past hiring decisions favoured men, thus reinforcing outdated gender norms and excluding qualified women.
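The first category, missing data bias, can be made concrete with a small numeric sketch. The numbers below are invented purely for illustration: a threshold calibrated only on men's data ends up flagging ordinary values for women as anomalous.

```python
# Toy illustration of missing data bias: a "normal range" learned from a
# men-only sample systematically mislabels women. All values are hypothetical.
male_resting_hr   = [68, 70, 72, 69, 71]   # training data (men only)
female_resting_hr = [74, 76, 78, 75, 77]   # absent from training data

def mean(xs):
    return sum(xs) / len(xs)

# Baseline learned exclusively from the men-only data
baseline = mean(male_resting_hr)

# Applied to women, the rule flags ordinary readings as anomalous
flagged = [hr for hr in female_resting_hr if abs(hr - baseline) > 4]
print(f"baseline={baseline}, flagged {len(flagged)} of {len(female_resting_hr)} readings")
```

The fix is not a cleverer threshold but representative data: the error originates in what the dataset omits, not in the arithmetic.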

The process generated lively debate amongst the group and took considerable time. This made me question how practical it would be to apply this methodology in an overwhelmed, time-pressed newsroom. I also couldn’t help but notice that the issue of time didn’t seem to trouble the engineers in the room.

Please read full here:

https://reutersinstitute.politics.ox.ac.uk/news/its-feature-not-bug-how-journalists-can-spot-and-mitigate-ai-bias


Could Metasurfaces Be The Next Quantum Information Processors?

Researchers blend theoretical insight and precision experiments to entangle photons on an ultra-thin chip

In the race toward practical quantum computers and networks, photons — fundamental particles of light — hold intriguing possibilities as fast carriers of information at room temperature. Photons are typically controlled and coaxed into quantum states via waveguides on extended microchips, or through bulky devices built from lenses, mirrors, and beam splitters. The photons become entangled — enabling them to encode and process quantum information in parallel — through complex networks of these optical components. But such systems are notoriously difficult to scale up, owing to the sheer number of components required for any meaningful computation or networking and the imperfections each one introduces.

Could all those optical components be collapsed into a single, flat, ultra-thin array of subwavelength elements that controls light in exactly the same way, but with far fewer fabricated parts?

Optics researchers in the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) did just that. The research team, led by Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering, created specially designed metasurfaces — flat devices etched with nanoscale light-manipulating patterns — to act as ultra-thin upgrades for quantum-optical chips and setups.

Please see full here: https://seas.harvard.edu/news/2025/07/could-metasurfaces-be-next-quantum-information-processors

Operationalizing AI Ethics: From Principles to Practice

In the Shaping Futures section of this week’s BGF Weekly, we spotlight the influential article “Operationalizing AI Ethics Principles” by Dr. Cansu Canca, published in the Communications of the ACM.

Dr. Canca addresses one of the most pressing challenges in AI governance today: how to translate ethical principles into actionable practices within organizations developing and deploying AI. As ethical declarations proliferate, real-world mechanisms to enforce, monitor, and assess AI ethics remain limited. This article outlines pathways to embed ethics directly into AI development lifecycles, ensuring that principles are not just symbolic but operational and measurable.

At the Boston Global Forum (BGF) and within the AI World Society (AIWS), this work resonates deeply with our efforts — from the AIWS 7-Layer Model of AI Ethics to the Boston Finance Accord for AI Governance 24/7 — to build frameworks where ethics guide innovation systematically and transparently.

Dr. Canca’s approach offers valuable insights for leaders, innovators, and policymakers seeking to ensure that AI technologies are developed with accountability, fairness, and societal benefit at their core.

📌 Read the full article:
https://cacm.acm.org/opinion/operationalizing-ai-ethics-principles/

Elon Musk vows to start a new political party after Trump feud. Here’s why that’s harder than it sounds

Elon Musk’s threat to start a third major political party has been met with widespread skepticism, as critics pointed to numerous failed bids over decades to disrupt America’s two-party system.

As billionaire Elon Musk feuds with President Trump over his signature tax and domestic policy legislation, Musk has reupped his calls to launch a new political party — a daunting task even for the wealthiest person on Earth.

Musk first floated launching a third party, dubbed the “America Party,” earlier this month, part of a nasty back-and-forth between the president and the Tesla CEO that marked the likely end of their political alliance. Musk raised the idea again this week as lawmakers raced to send the One Big Beautiful Bill Act to Mr. Trump’s desk — and this time, Musk put a time limit on the plan.

“Only the richest person in the world could make a serious effort at creating a new American political party,” Brett Kappel, a veteran election lawyer, told CBS News.

Navigating 50 different state laws — and federal rules

“Political parties are creatures of the states,” Kappel said.

Each state has different legal rules for recognizing which political parties can appear on the ballot, and those hurdles “range from high to extraordinarily difficult to overcome,” he noted. In some cases, a nascent state party may need to get candidates onto the ballot by submitting large numbers of signatures, and then win a certain percentage of the vote across election cycles.

Please see full here:

https://www.cbsnews.com/news/elon-musk-new-america-political-party-trump-feud-harder-than-it-sounds/

https://www.washingtonpost.com/technology/2025/07/02/elon-musk-third-party-trump/