OpenAI disbands another safety team, as head advisor for ‘AGI Readiness’ resigns

OpenAI is disbanding its “AGI Readiness” team, which advised the company on its capacity to handle artificial intelligence that could equal or surpass human intellect, and on the world’s readiness to manage such technology.

Miles Brundage, senior advisor for AGI Readiness, announced his departure from the company and wrote that he believes his research will be more impactful externally.

In May, OpenAI decided to disband its Superalignment team, which focused on technology to control and steer superintelligent AI, just one year after it announced the group.

https://www.cnbc.com/2024/10/24/openai-miles-brundage-agi-readiness.html

The AI World Society (AIWS) Model seeks to build a safer, better world with AI. We encourage and call upon companies to uphold their responsibility to humanity in creating and conducting business with AI applications. AGI represents significant progress in AI, and it requires our commitment to duty and responsibility toward humanity and our world.

Silicon Valley Takes Artificial General Intelligence Seriously—Washington Must Too

First, Washington needs to start taking AGI seriously. The potential risks are too great to ignore. Even in a good scenario, AGI could upend economies and displace millions of jobs, requiring society to adapt. In a bad scenario, AGI could become uncontrollable.

Second, we must establish regulatory guardrails for powerful AI systems. Regulation should involve government transparency into what’s going on with the most powerful AI systems being created by tech companies. That transparency will reduce the chances that society is caught flat-footed by a tech company developing AGI before anyone expects it. And mandated security measures are needed to prevent U.S. adversaries and other bad actors from stealing AGI systems from U.S. companies. These light-touch measures would be sensible even if AGI weren’t a possibility, but the prospect of AGI heightens their importance.

In a particularly concerning part of Saunders’ testimony, he said that during his time at OpenAI there were long stretches when he or hundreds of other employees could “bypass access controls and steal the company’s most advanced AI systems, including GPT-4.” This lax attitude toward security is bad enough for U.S. competitiveness today, but it is an absolutely unacceptable way to treat systems on the path to AGI. The comments were another powerful reminder that tech companies cannot be trusted to self-regulate.

Finally, public engagement is essential. AGI isn’t just a technical issue; it’s a societal one. The public must be informed and involved in discussions about how AGI could impact all of our lives.

BGF has been pioneering the AI World Society (AIWS) since 2017.

Daniel Colson

https://time.com/7093792/ai-artificial-general-intelligence-risks/

Boston Global Forum Statement on AI Pioneers Winning the Nobel Prizes in Physics and Chemistry

The Boston Global Forum (BGF) extends its heartfelt congratulations to the pioneers in artificial intelligence who have been awarded the Nobel Prizes in Physics and Chemistry in 2024. This monumental achievement underscores the transformative impact of AI on advancing scientific discovery and addressing some of humanity’s most pressing challenges.

Recognition of AI’s Role in Nobel Prizes

  • Physics Nobel Prize: The integration of AI in physics has opened new horizons in understanding complex phenomena, leading to breakthroughs once thought unattainable. The laureates’ work exemplifies how AI can augment human ingenuity to unravel the mysteries of the universe.
  • Chemistry Nobel Prize: The award honors scientists who have harnessed AI to design proteins—the fundamental building blocks of life. This innovation accelerates drug discovery and has profound implications for medicine, biotechnology, and understanding biological processes.

BGF as a Pioneer of the AI World Society Model

As a pioneer of the AI World Society (AIWS) model, the Boston Global Forum has been at the forefront of advocating for the ethical governance and responsible development of AI technologies since 2017, notably with the Social Contract for the AI Age. This foundational framework sets forth principles to guide AI development in a manner that promotes transparency and accountability and aligns with human-centric values.

The AIWS model envisions a world where AI is harnessed to enhance human welfare, uphold democratic values, and contribute positively to global society. Through initiatives like conferences, policy development, and collaborative projects, BGF actively works to ensure that AI technologies benefit all of humanity.

Expert Warns UN’s Role in AI Regulation Could Lead to Safety Overreach

A recent article published on Fox News highlights concerns from experts regarding the United Nations’ involvement in artificial intelligence (AI) regulation. The expert warns that the UN’s approach could potentially lead to overregulation, stifling innovation and hindering the development of AI technologies.

The core of the concern revolves around the balance between ensuring safety and promoting innovation. Overly stringent regulations may impede technological advancements and limit the benefits that AI can bring to society. On the other hand, insufficient regulation could lead to unethical uses of AI or unintended consequences.

The Boston Global Forum (BGF) has been a pioneer in AI governance since 2017 through its AI World Society (AIWS) Initiative. The initiative advocates for a balanced approach to AI governance that promotes innovation while ensuring ethical standards and societal benefits.

The concerns raised about the UN’s role in AI regulation underscore the need for ongoing discussions about the future of AI governance. Balancing innovation with safety is a complex challenge that requires input from various stakeholders.

The US will hold a safety summit in November to better coordinate global regulation efforts.

We invite you to share your thoughts and insights on this critical issue. Please send your perspectives to [email protected].

Your contributions will help shape meaningful discussions and inform strategies that aim to harness the potential of AI for the betterment of society.

For more information on BGF’s efforts in AI governance and to explore the AI World Society Initiative, please visit our publication: AIWS: Pioneering AI Governance and New Democracy.

An AI script editor could help decide what films get made in Hollywood

Callaia provides analysis and feedback on scripts in seconds. But, as AI models are trained to be agreeable, it might be too nice to be truly useful.

On September 24, 2024, Cinelytic launched a new tool called Callaia, which amateur writers and professional script readers alike can use to analyze scripts for $79 each. Using AI, Callaia takes less than a minute to write its own coverage, which includes a synopsis, a list of comparable films, grades for areas like dialogue and originality, and actor recommendations. It also makes a recommendation on whether or not the film should be financed, giving it a rating of “pass,” “consider,” “recommend,” or “strongly recommend.” Though the foundation of the tool is built on ChatGPT’s API, the team had to coach the model on script-specific tasks like evaluating genres and writing a movie’s logline, which summarizes the story in a sentence.
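Cinelytic has not published how Callaia is implemented, so the sketch below is purely illustrative: it shows, under stated assumptions, how a coverage tool built on the ChatGPT API might prompt a model for a logline, grades, and a financing verdict. The model name, prompt wording, and output format here are assumptions for illustration, not Cinelytic’s actual design.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_coverage(script_text: str) -> str:
    """Ask the model for basic script coverage (hypothetical prompt, not Cinelytic's)."""
    prompt = (
        "You are a professional script reader. For the screenplay below, write:\n"
        "1. A one-sentence logline.\n"
        "2. A short synopsis.\n"
        "3. Three comparable films.\n"
        "4. Letter grades (A-F) for dialogue and originality.\n"
        "5. A financing verdict: pass, consider, recommend, or strongly recommend.\n\n"
        f"SCREENPLAY:\n{script_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the article only says the tool is built on ChatGPT's API
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    with open("my_script.txt") as f:  # hypothetical input file
        print(draft_coverage(f.read()))
```

In practice, such a tool would also need prompt tuning and evaluation against human-written coverage, which is the “coaching” work the article describes.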

“It helps people understand the script very quickly,” says Tobias Queisser, Cinelytic’s cofounder and CEO, who also had a career as a film producer. “You can look at more stories and more scripts, and not eliminate them based on factors that are detrimental to the business of finding great content.”

The idea is that Callaia will give studios a more analytical way to predict how a script may perform on the screen before spending on marketing or production. But, the company says, it’s also meant to ease the bottleneck that script readers create in the filmmaking process. With such a deluge to sort through, many scripts can make it to decision-makers only if they have a recognizable name attached. An AI-driven tool would democratize the script selection process and allow better scripts and writers to be discovered, Queisser says.

https://www.technologyreview.com/2024/09/24/1104356/an-ai-script-editor-could-help-decide-what-films-get-made-in-hollywood/


AI-powered art puts ‘digital environmentalism’ on display at UN Headquarters

A groundbreaking art installation at UN Headquarters by renowned media artist Refik Anadol leverages artificial intelligence to raise awareness of the beauty and fragility of the world’s coral reefs, and the urgent need to address the climate crisis.

Abstract shapes in green, orange and white flow into and out of each other in an endless, never-repeating pattern, combined with ambient music that induces a hypnotizing effect on those who stare at it a little too long (like this writer).

It’s pretty hard for delegates at High-Level Week and the Summit of the Future to miss Large Nature Model: Coral. The artwork covers a whole section of wall in the ground floor corridor of the UN Headquarters Conference building, facing the Japanese Peace Garden.

As well as drawing the eye, however, the artist behind the piece is subtly drawing attention to two of the major global issues under discussion at the UN during the busiest week of the year: the climate crisis and the impact of artificial intelligence.

AI was used to gather millions of photos of coral reefs, many of which are endangered by rising ocean temperatures. The effect on the viewer is both mesmerizing and, given the context, poignant: coral reefs are among the ecosystems most vulnerable to climate change on the planet.

These undersea cities, which support 25 per cent of marine life, could virtually disappear by the end of this century.

“I hope that Large Nature Model: Coral inspires people to see how technology can foster deeper connections with our planet and empower us to work together toward a more sustainable world,” said Mr. Anadol at the launch of the installation.

He was joined by Vilas Dhar, the President of the Patrick J McGovern Foundation – a philanthropic organization dedicated to advancing artificial intelligence and data science solutions for all – and Melissa Fleming, the UN Under Secretary-General for Global Communications, whose department jointly organized the exhibition.

https://news.un.org/en/story/2024/09/1154636

OpenAI releases o1 model with human-like reasoning

OpenAI is releasing a new artificial intelligence model known internally as “Strawberry” that can perform some human-like reasoning tasks, as it looks to stay at the top of a crowded market of rivals.

The new model, called o1, is designed to spend more time computing the answer before responding to user queries, the company said in a blog post Thursday. With the model, OpenAI’s tools should be able to solve multi-step problems, including complicated math and coding questions.

“As an early model, it doesn’t yet have many of the features that make ChatGPT useful, like browsing the web for information and uploading files and images,” the company said. “But for complex reasoning tasks this is a significant advancement and represents a new level of AI capability. Given this, we are resetting the counter back to 1 and naming this series OpenAI o1.”

A preview version of the model will be available through OpenAI’s popular chatbot, ChatGPT, to paid Plus and Team users on Thursday. Bloomberg previously reported the company could release the new model as soon as this week.
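For developers with API access, the preview model can be queried much like earlier chat models. The snippet below is a minimal sketch assuming the OpenAI Python SDK and the launch-time “o1-preview” model name; at release the preview accepted only user and assistant messages and did not support options such as a system prompt or custom temperature, and these details may change.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# o1-preview at launch: user/assistant messages only,
# no system prompt, no custom temperature or streaming.
response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        {
            "role": "user",
            "content": (
                "A projectile is launched at 20 m/s at 30 degrees above the "
                "horizontal. Ignoring air resistance, how far does it travel?"
            ),
        }
    ],
)

print(response.choices[0].message.content)
```

The extra “thinking time” the company describes happens server-side before the answer is returned, so the call itself looks the same as for earlier models, just with longer latency.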

The model’s release comes as San Francisco-based OpenAI is looking to raise billions in funding and faces heightened competition in the race to develop ever more sophisticated artificial intelligence systems. OpenAI isn’t the only company working on such capabilities; competitors Anthropic and Google have also touted “reasoning” skills with their advanced AI models.

In its blog post, OpenAI gave examples of the AI model’s responses to questions on topics including coding, English, and math, and asked it to solve a simple crossword puzzle. In a series of posts on X, Noam Brown, a research scientist at OpenAI, said the company is releasing the model in preview now in part to get a sense for how people use it, and where it needs to be improved.

https://fortune.com/2024/09/12/openai-new-ai-model-strawberry-o1-chatgpt/

Building AI World Society in Vietnam: A New Democracy with AI

On Vietnam National Day, September 2, 2024, Boston Global Forum (BGF) CEO Nguyen Anh Tuan authored an article for the special edition of Tuoi Tre Newspaper, highlighting how Vietnam could apply the Artificial Intelligence World Society (AIWS) model to shape its future.

AIWS is envisioned as a transformative system that integrates AI across politics, society, economics, business, culture, and education. The concept emphasizes a governance system that is efficient, equitable, and encourages honesty, responsibility, and kindness. This vision is complemented by the Boston Areti AI (BAI), a Virtuous AI assistant that aids leaders in making compassionate, optimal decisions for peace and security.

Nguyen Anh Tuan’s article calls for Vietnam to lead the way in creating a society where AI can help citizens maximize their potential and participate in a more just, sustainable world, fostering national development in alignment with global progress.

https://tuoitre.vn/y-tuong-xay-dung-xa-hoi-tri-tue-nhan-tao-o-viet-nam-20240829112251499.htm

From the Massachusetts Miracle to the Age of Global Enlightenment

Michael (Stanley) Dukakis (born November 3, 1933, in Brookline, Massachusetts, U.S.) is an esteemed American politician. He received a bachelor’s degree in political science from Swarthmore College in 1955 and a law degree from Harvard Law School in 1960. He served in the U.S. Army in Korea from 1955 to 1957. In 1963, he entered the Massachusetts House of Representatives and served eight consecutive years. He was also associated with the Boston firm of Hill and Barlow from 1960 to 1974.

From 1975 to 1979 and again from 1983 to 1991, Michael Dukakis served three terms as Governor of Massachusetts. During his tenure, he presided over a period that would later be recognized as the “Massachusetts Miracle,” an economic revival that not only restored the state’s economy but also set a standard for progress and innovation. Governor Dukakis’ adept technocratic skills, coupled with a pragmatic approach and a keen understanding of societal fundamentals, were instrumental in formulating policies and reforms that underpin today’s dynamic and forward-thinking Massachusetts. He was the Democratic Party’s nominee for president in 1988, running against the Republican nominee, incumbent Vice President George H. W. Bush.

After his political career, Governor Dukakis took on the role of distinguished professor at Northeastern University and UCLA, dedicating himself to public service through education. His academic roles provided a platform for inspiring and guiding the next generation of leaders. Through lectures rich with the wisdom gleaned from a lifetime of experience, he instilled in his students the values of integrity, responsibility, and civic engagement.

However, Governor Dukakis’ influence extended far beyond the lecture halls. In his later years, he co-founded and chaired the Boston Global Forum (December 12, 2012), the Michael Dukakis Institute for Leadership and Innovation, and the AI World Society Initiative.

Download a copy of From the Massachusetts Miracle to the Age of Global Enlightenment here