Sam Altman Predicts Artificial General Intelligence by 2030, Says AI Could Take Over 40% of Tasks

OpenAI CEO Sam Altman has projected that Artificial General Intelligence (AGI) could become a reality by 2030, a breakthrough he believes will reshape economies, societies, and governance worldwide. Altman suggested that AI systems may soon be capable of performing up to 40% of current human tasks, in fields ranging from healthcare and education to finance and logistics.

While acknowledging the transformative opportunities, Altman emphasized the need for safety, governance, and ethical oversight. Without effective frameworks, he warned, AGI could amplify inequalities or destabilize institutions.

Altman also reflected on technology’s relationship with President Donald Trump, pointing to the complex interactions between Silicon Valley and Washington. He noted both tensions and opportunities in aligning technological progress with national and global policy.

His predictions underscore a defining challenge of our time — one that the Boston Global Forum (BGF) and the AI World Society (AIWS) have been addressing for nearly a decade. BGF has laid essential foundations through:

  • The Social Contract for the AI Age (2020): A democratic framework ensuring AI serves human dignity, freedom, and inclusiveness.
  • AIWS Government 24/7: A model for transparent, accountable, AI-assisted governance.
  • The Boston Finance Accord for AI Governance 24/7: Ethical standards for AI-driven finance.
  • The Global Alliance for Democratic AI and Digital Governance (2025): Uniting trusted democracies to shape responsible global AI frameworks.

As Altman points toward 2030 as the AGI horizon, these initiatives provide the roadmap for democracies to ensure AI strengthens peace, democracy, and human well-being rather than undermining them. The urgency of this work has never been clearer.

https://www.techspot.com/news/109644-sam-altman-predicts-artificial-general-intelligence-2030-ai.html

The UNGA Science Summit 2025: A Glimpse into the Future of AI

At the United Nations General Assembly Science Summit 2025, experts, policymakers, and innovators came together to explore how artificial intelligence is reshaping our world. The discussions emphasized both the transformative opportunities and the urgent ethical challenges posed by AI. From breakthroughs in healthcare and climate resilience to debates on governance, equity, and global cooperation, the summit offered a forward-looking roadmap for how AI can be harnessed for the common good.

For the Boston Global Forum (BGF) and AI World Society (AIWS), the Science Summit’s insights resonate with our mission to ensure AI serves humanity with ethics, responsibility, and democratic values. Events like this reinforce the importance of frameworks such as AIWS Government 24/7 and the Boston Finance Accord in guiding AI’s integration into society.

🔗 Forbes – The UNGA Science Summit 2025 Offered A Glimpse on the Future of AI

https://www.forbes.com/sites/corneliawalther/2025/09/20/the-unga-science-summit-2025-offered-a-glimpse-on-the-future-of-ai/


AI Is Changing the Structure of Consulting Firms

By David S. Duncan, Tyler Anderson, and Jeffrey Saviano – Harvard Business Review, September 2025

Artificial intelligence is not just a tool for consultants—it is transforming the very structure of consulting firms. According to Duncan, Anderson, and Saviano, AI is reshaping how firms organize, deliver value, and engage with clients.

Traditionally, consulting has relied on a pyramid model of partners, managers, and associates, with large teams performing research and analysis. AI now automates much of this work—data collection, benchmarking, and even strategy simulation—flattening the pyramid and reducing reliance on junior consultants. This shift is leading firms toward leaner, more networked structures, where senior experts and AI systems work together to deliver insights faster and more efficiently.

The authors argue that AI empowers firms to move from reactive problem-solving to continuous, predictive strategy, offering clients real-time solutions and governance models rather than episodic reports. It also enables new business models: subscription-based advisory services, AI-enhanced platforms, and partnerships that merge technology with human expertise.

For the BGF–AIWS Family, this evolution highlights a broader truth: AI is not only transforming industries, but also the culture of leadership and governance. As consulting firms adopt flatter, AI-integrated structures, they become a model for how institutions—including governments—can adapt to the AIWS Government 24/7 framework: continuous, ethical, and responsive service in the Age of AI.

📖 Full article available on Harvard Business Review: “AI Is Changing the Structure of Consulting Firms”

https://hbr.org/2025/09/ai-is-changing-the-structure-of-consulting-firms

IBM and AMD Join Forces on Quantum and Next-Generation Computing

A landmark partnership was announced on August 26, 2025, as IBM and AMD revealed plans to combine their strengths in AI accelerators, quantum computing, and high-performance computing (HPC) to tackle some of the world’s most complex challenges.

The collaboration aims to create integrated computing platforms capable of advancing climate modeling, biomedical research, supply chain resilience, and national security. By merging AMD’s expertise in high-performance processors and accelerators with IBM’s leadership in quantum systems and AI research, the two companies envision a future where computing power can drive transformative breakthroughs.

This initiative aligns closely with the AI World Society (AIWS) vision of leveraging frontier technologies for humanity’s benefit. It demonstrates how public–private partnerships in advanced computing can lay the foundation for AIWS Government 24/7, ensuring that innovation is guided by ethics, responsibility, and a focus on solving global problems.

📖 Full coverage:

https://newsroom.ibm.com/2025-08-26-ibm-and-amd-join-forces-to-build-the-future-of-computing

https://www.amd.com/en/newsroom/press-releases/2025-8-26-ibm-and-amd-join-forces-to-build-the-future-o.html

USAi: America’s AI Experimentation Platform

The U.S. General Services Administration (GSA) has launched USAi, a secure generative AI evaluation suite described as the “infrastructure for America’s AI future.” This platform allows federal agencies to test and evaluate emerging AI technologies collectively, reducing duplication and ensuring efficiency.

Key features of USAi include:

  • Adoption of FISMA Moderate security standards to balance trust and protection.
  • Empowering agencies to select and experiment with the tools best suited to their missions.
  • Establishing regulatory guardrails to guide responsible AI adoption.

USAi reflects the White House’s AI Action Plan and signals a commitment to winning the AI race by embedding responsibility, security, and innovation into government infrastructure.

AIWS Government 24/7 Perspective: USAi embodies elements of the AIWS Government 24/7 model, where technology is designed to continuously serve citizens with transparency, security, and inclusivity. By providing a shared, trusted platform for federal experimentation, USAi points toward the kind of always-on, ethical governance infrastructure that AIWS envisions—one that can respond in real-time to challenges, protect public trust, and make government smarter, more efficient, and more humane in the Age of AI.

📖 Full article here:

Photonic Computing in AI World Society

The newly launched Report on Emerging Computing Pathways in Quantum, Neuromorphic, and Photonic Computing by the Data Security Council of India (DSCI) highlights how photonic computing could revolutionize the future of information processing. By using light instead of electricity, photonic systems promise unprecedented speed, energy efficiency, and scalability — breakthroughs essential for advancing AI, secure communications, and global digital infrastructure. For the Boston Global Forum and AIWS, photonic computing represents not only a technological leap but also a pathway to shaping a future where innovation is aligned with ethics, resilience, and the values of an AI World Society.

Please see full here: https://www.apnnews.com/report-on-emerging-computing-pathways-in-quantum-neuromorphic-and-photonic-computing-launched-by-dsci/

The Trump-OpenAI AI Partnership: A Catalyst for Government Productivity and AI Infrastructure Investment

The Trump-OpenAI AI partnership, announced earlier this month, marks a seismic shift in how the U.S. government leverages artificial intelligence to modernize operations, reduce costs, and enhance national security. This groundbreaking collaboration, formalized in August 2025, secures access to OpenAI’s ChatGPT Enterprise for all federal agencies at a symbolic cost of $1 per agency annually. The initiative aligns seamlessly with the administration’s broader AI Action Plan, which prioritizes infrastructure investment, regulatory streamlining, and global AI leadership, positioning the U.S. as a frontrunner in the AI race.

Please see full here: https://www.ainvest.com/news/trump-openai-ai-partnership-catalyst-government-productivity-ai-infrastructure-investment-2508/

U.S. Regulation of Artificial Intelligence: Charting a Path Forward

As artificial intelligence becomes a central driver of economic growth, innovation, and national security, the United States is accelerating efforts to establish a regulatory framework that safeguards public interests while fostering technological leadership. A Bloomberg Professional Insights article, published on August 8, 2025, examines how policymakers, industry leaders, and regulators are navigating the complex challenges of AI governance—from ensuring transparency and accountability to addressing bias, safety, and misuse risks.

The piece outlines current legislative proposals, agency-led initiatives, and sector-specific guidelines emerging across the federal landscape, as well as the role of international cooperation in shaping global norms. It also highlights the delicate balance between enabling innovation and mitigating harm—an issue central to the Boston Global Forum’s AIWS 7-Layer Model.

For leaders and stakeholders in the AI ecosystem, these regulatory developments will influence not only compliance requirements but also strategic positioning in the Age of AI.

https://www.bloomberg.com/professional/insights/artificial-intelligence/us-regulation-of-artificial-intelligence/

“It’s a feature, not a bug” – How journalists can spot and mitigate AI bias

Consultant and executive coach Ramaa Sharma spoke to leading figures in the newsroom AI space to identify the risks and potential solutions of AI bias.

Mitigating bias as a process

Tackling bias in AI systems is not easy. Even well-intentioned efforts have backfired. This was illustrated spectacularly in 2024 when Google’s Gemini image generator produced images of Black Nazis and Native American Vikings in what appeared to be an attempt to diversify outputs. The subsequent backlash forced Google to temporarily suspend this feature.

Incidents like this highlight how complex the problem is, even for well-resourced technology companies. Earlier this year, I attended a technical workshop at the Alan Turing Institute in London, part of a UK government-funded project exploring approaches to mitigating bias in machine learning systems. One suggested method was for teams to take a “proactive monitoring approach” to fairness when creating new AI systems: engineers and their teams add metadata at every stage of the AI production process, including the questions asked and the mitigations applied, so that the decisions made can be tracked.
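A minimal sketch of what such stage-by-stage metadata logging might look like in practice. The field names and stage labels here are illustrative assumptions, not the actual schema used in the Turing Institute workshop:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class StageRecord:
    """One metadata entry for a stage of the AI production lifecycle.

    Field names are hypothetical, chosen to mirror the approach described
    above (questions asked, mitigations applied, decisions tracked).
    """
    stage: str                 # e.g. "data collection", "model training"
    question_asked: str        # the fairness question the team considered
    mitigation_applied: str    # what was done in response
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class FairnessLog:
    """Accumulates stage records so later reviewers can audit decisions."""

    def __init__(self):
        self.records = []

    def log(self, stage, question_asked, mitigation_applied):
        self.records.append(StageRecord(stage, question_asked, mitigation_applied))

    def audit_trail(self):
        # A compact view of the decisions made, in order
        return [(r.stage, r.question_asked, r.mitigation_applied)
                for r in self.records]

# Usage: record decisions as the pipeline is built
log = FairnessLog()
log.log("data collection",
        "Are all demographic groups represented in the sample?",
        "Added targeted sampling for under-represented groups")
log.log("model training",
        "Does the model err equally across groups?",
        "Evaluated per-group error rates before deployment")
print(len(log.audit_trail()))  # 2
```

The value of a log like this is less the code than the discipline: each stage leaves an auditable record that a reviewer, or a journalist, can interrogate later.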

The researchers identified thirty-three biases that can occur along the AI lifecycle, grouped into three categories, along with deliberative prompts to help mitigate them:

  1. Statistical biases arise from flaws in how data is collected, sampled, or processed, leading to systematic errors. A common type is missing data bias, where certain groups or variables are underrepresented or absent entirely from the dataset.

For example, in a health dataset that primarily includes data from men and omits women’s health indicators (e.g. pregnancy-related conditions or hormonal variations), AI models trained on it may fail to recognise or respond appropriately to women’s health needs.

  2. Cognitive biases refer to human thinking patterns that deviate from rational judgment. When these biases influence how data is selected or interpreted during model development, they can become embedded in AI systems. One common form is confirmation bias, the tendency to seek or favour information that aligns with one’s pre-existing beliefs or worldview.

For example, a news recommender system might be designed using data curated by editors with a specific political leaning. If the system reinforces content that matches this worldview while excluding alternative perspectives, it may amplify confirmation bias in users.

  3. Social biases stem from systemic inequalities or cultural assumptions embedded in data, often reflecting historic injustices or discriminatory practices. These biases are often encoded in training datasets and perpetuate inequalities unless addressed.

For example, an AI recruitment tool trained on historical hiring data may learn to prefer male candidates for leadership roles if past hiring decisions favoured men, thus reinforcing outdated gender norms and excluding qualified women.
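The first category above, statistical bias from missing or under-sampled data, is also the easiest to check for mechanically. The sketch below is a hypothetical illustration (the function name, threshold, and toy health dataset are my own assumptions), showing how a team might flag groups whose share of a dataset diverges from their share of the population:

```python
from collections import Counter

def representation_gaps(records, group_key, population_shares, tolerance=0.1):
    """Flag groups whose share in the dataset deviates from their share
    of the population by more than `tolerance` (absolute difference).

    Illustrative helper, not from the DSCI/Turing material.
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if abs(observed - expected) > tolerance:
            gaps[group] = {"expected": expected, "observed": round(observed, 2)}
    return gaps

# A toy health dataset where women are heavily under-sampled
records = [{"sex": "male"}] * 90 + [{"sex": "female"}] * 10
print(representation_gaps(records, "sex", {"male": 0.5, "female": 0.5}))
# {'male': {'expected': 0.5, 'observed': 0.9}, 'female': {'expected': 0.5, 'observed': 0.1}}
```

A check like this only catches representation gaps, not the cognitive or social biases in the other two categories, which is precisely why the researchers paired quantitative checks with deliberative prompts.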

The process generated lively debate amongst the group and took considerable time, which made me question how practical this methodology would be in an overwhelmed, time-pressed newsroom. I also couldn’t help but notice that the issue of time didn’t seem to trouble the engineers in the room.

Please read full here:

https://reutersinstitute.politics.ox.ac.uk/news/its-feature-not-bug-how-journalists-can-spot-and-mitigate-ai-bias