Professor Alex Pentland discusses tools that could improve AI accuracy and reduce bias

Researchers developed an easy-to-use tool that enables an AI practitioner to find data that suits the purpose of their model, which could improve accuracy and reduce bias.

In order to train more powerful large language models, researchers use vast dataset collections that blend diverse data from thousands of web sources.

But as these datasets are combined and recombined into multiple collections, important information about their origins and restrictions on how they can be used are often lost or confounded in the shuffle.

Not only does this raise legal and ethical concerns, it can also damage a model’s performance.
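
The failure mode is easy to picture in code. Below is a minimal, hypothetical sketch, not the researchers' actual tool, of how origin and license metadata might be carried forward when datasets are merged so that restrictions are not lost in the shuffle; every field name here is invented for illustration.

```python
# Minimal sketch: propagating provenance metadata when datasets are combined.
# Illustrative only -- field names and rules are hypothetical, not taken from
# the MIT tool described above.

def combine(datasets):
    """Merge datasets while carrying forward origin and license info."""
    combined = {
        "records": [],
        "sources": set(),   # every upstream origin, never dropped
        "licenses": set(),  # union of all license restrictions
    }
    for ds in datasets:
        combined["records"].extend(ds["records"])
        combined["sources"].update(ds.get("sources", {ds["name"]}))
        combined["licenses"].update(ds.get("licenses", {"unknown"}))
    return combined

web_text = {"name": "web_text", "records": ["..."], "licenses": {"CC-BY"}}
forum_qa = {"name": "forum_qa", "records": ["..."], "licenses": {"non-commercial"}}

merged = combine([web_text, forum_qa])
# A practitioner can now see that commercial use is restricted:
assert "non-commercial" in merged["licenses"]
```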

“These types of tools can help regulators and practitioners make informed decisions about AI deployment, and further the responsible development of AI,” says Alex “Sandy” Pentland, an MIT professor, leader of the Human Dynamics Group at the MIT Media Lab, board member of the Boston Global Forum and AI World Society, and co-author of a new open-access paper about the project.

Please read the full article on MIT News.

DCTs Monthly Series VI – Leadership in Action: Driving Change in Pharma, in partnership with AI World Society

DCTs Monthly Series VI, “Leadership in Action: Driving Change in Pharma,” was held on August 20th in partnership with the Boston Global Forum and AI World Society, right in the heart of the “most innovative square mile on earth” – Kendall Square at MIT in Cambridge, Massachusetts, known as the Silicon Valley of Life Sciences. This hub is where groundbreaking ideas in pharma and biotech come to life.

The DCTs Monthly Series VI provided an in-depth exploration of transformative leadership in pharmaceutical innovation, with a particular focus on early pharmaceutical research.

Keynote Address: Dr. Uli Stilz, Head of the Bio Innovation Hub at Novo Nordisk, delivered an insightful keynote titled “Cultivating Innovation: Leadership Strategies in Early Pharmaceutical Research.” Dr. Stilz highlighted three critical aspects of the Bio Innovation Hub:

  • Innovating in Cardiometabolic Space: Emphasized the long development timelines and the potential for increased investment.
  • Driving Partnerships: Discussed how partnerships can foster disruptive advancements by digging deeper into biology and overcoming translational challenges.
  • Unique Partner Models: Introduced adaptive frameworks and ecosystem creation strategies that facilitate co-creation and innovation.

Panel Insights: Redefining Pharma R&D and Leadership

Following Dr. Uli Stilz’s keynote, a rich panel discussion moderated by Dr. Usman Iqbal, and featuring Dr. Stilz, Dr. Roberto Araujo and Shaju Backer, tackled crucial questions shaping the future of pharmaceutical R&D:

  1. Innovation Redefined: Discussion focused on shifting from mechanistic to real-world innovation to better meet the needs of patients, physicians, and payers.
  2. Combatting “Me Too” Drugs: The panel addressed the prevalence of derivative drugs, advocating for original development and meaningful differentiation.
  3. Leadership Adaptation: Insights on integrating real-world evidence, digital technology, AI, and necessary leadership upskilling to meet modern R&D demands.
  4. Market Access and Innovation: Strategies to navigate global market access hurdles and innovative pricing models were explored.
  5. Collaborative Patient-Centricity: Examples from COVID-19 vaccine development highlighted the potential for collaborative efforts in addressing diseases like HIV and cancer.

This discussion underscored the need for strategic innovation and adaptive leadership in pharmaceuticals to overcome current and future challenges.

Interactive Q&A Session: The session concluded with a Q&A, urging participants to limit their questions to one minute to accommodate as many inquiries as possible, reflecting the high engagement level of the attendees.

Networking and Collaborative Opportunities: Participants joined a special WhatsApp group to continue discussions on decentralized clinical trials, emphasizing the event’s focus on ongoing collaboration and networking.

The DCTs Monthly Series VI not only provided valuable insights from leading experts but also fostered a collaborative environment for professionals to engage directly with pioneers in the field.

White House holds creator conference

White House officials are meeting with 100 digital creators and industry professionals on Wednesday for the first-ever White House Creator Economy Conference.

The conference will address “the most pressing challenges, and opportunities, facing the creator economy,” including artificial intelligence, mental health and pay equity, according to a White House official.

“This Biden-Harris Administration has taken historic steps to engage digital creators, and works hard to meet Americans where they are,” the official said.

“Officials at the highest level of this White House have engaged creators extensively, hosting regular virtual and in-person briefings with digital creators on policy issues, hosting State of the Union watch events for creators at the White House, and, last year, hosting the first ever White House Holiday Party for digital creators,” they added.

The digital media landscape has become increasingly important to American politics in recent years, as more people, especially young people, get their information from social media.

With the rise of TikTok, many campaigns have joined the platform to reach young voters, even amid bipartisan national security concerns about Americans’ data privacy and the app’s ties to China.

President Biden’s campaign joined TikTok in February, and former President Trump followed suit in June. Shortly after Biden dropped out of the race last month and endorsed Vice President Harris, she, too, created an account.

Both parties have also sought to tap into the large followings of content creators on platforms including TikTok, Instagram and YouTube.

The Republican National Convention issued credentials to more than 70 influencers last month, while the Democratic National Convention has issued credentials to more than 200 creators for next week’s event, according to The Washington Post.

New supercomputing network could lead to AGI, scientists hope, with 1st node coming online within weeks

This article was originally published on Live Science.

Researchers plan to accelerate the development of artificial general intelligence (AGI) with a worldwide network of extremely powerful computers — starting with a new supercomputer that will come online in September.

Artificial intelligence (AI) spans technologies including machine learning and generative AI systems like GPT-4. The latter offer predictive reasoning based on training from a large data set — and they can often surpass human capabilities in one particular area, based on their training data. They are sub-par at cognitive or reasoning tasks, however, and cannot be applied across disciplines.

AGI, by contrast, is a hypothetical future system that surpasses human intelligence across multiple disciplines — and can learn from itself and improve its decision-making based on access to more data.

The supercomputers, built by SingularityNET, will form a “multi-level cognitive computing network” that will be used to host and train the architectures required for AGI, company representatives said in a statement.

These include elements of advanced AI systems such as deep neural networks, which mimic the functions of the human brain; large language models (LLMs), which are trained on vast sets of text data; and multimodal systems that connect human inputs such as speech and movement with multimedia outputs, similar to what you see from AI-generated video.

Building a new AI supercomputer network

The first of the supercomputers will start to come online in September, and work will be completed by the end of 2024 or early 2025, depending on supplier delivery timelines, company representatives told Live Science.

The modular supercomputer will feature advanced components and hardware infrastructure, including Nvidia L40S graphics processing units (GPUs), AMD Instinct and Genoa processors, Tenstorrent Wormhole server racks featuring Nvidia H200 GPUs, and Nvidia’s GB200 Blackwell systems. Altogether, these form some of the most powerful AI hardware available.

“This supercomputer in itself will be a breakthrough in the transition to AGI. While the novel neural-symbolic AI approaches developed by the SingularityNET AI team decrease the need for data, processing and energy somewhat relative to standard deep neural nets, we still need significant supercomputing facilities,” SingularityNET CEO Ben Goertzel told Live Science in a written statement.

“The mission of the computing machine we are creating is to ensure a phase transition from learning on big data and subsequent reproduction of contexts from the semantic memory of the neural network to non-imitative machine thinking based on multi-step reasoning algorithms and dynamic world modeling based on cross-domain pattern matching and iterative knowledge distillation. Before our eyes, a paradigmatic shift is taking place towards continuous learning, seamless generalisation and reflexive AI self-modification.”

The road to an AI “superintelligence”

SingularityNET’s goal is to provide access to data for the growth of AI, AGI and a future artificial super intelligence — a hypothetical future system that is far more cognitively advanced than any human. To do this, Goertzel and his team also needed unique software to manage the federated (distributed) compute cluster.

Federated compute clusters abstract away user data, exposing only the summary data needed for large-scale yet protected computation over datasets containing highly sensitive elements such as personally identifiable information (PII).
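
As a rough sketch of the idea, assuming a simple setup where each node holds private records and shares only aggregates (illustrative, not SingularityNET's actual software):

```python
# Sketch of the federated idea: raw records stay on each node; only
# summary statistics cross the network. Hypothetical code, not
# SingularityNET's implementation.

def local_summary(records):
    """Computed on-node; raw records (possibly containing PII) never leave."""
    return {"count": len(records), "total": sum(records)}

def federated_mean(nodes):
    """The coordinator sees only the summaries, never the underlying data."""
    summaries = [local_summary(records) for records in nodes]
    count = sum(s["count"] for s in summaries)
    total = sum(s["total"] for s in summaries)
    return total / count

# Three nodes, each holding private data the others never see:
print(federated_mean([[1.0, 2.0], [3.0], [4.0, 5.0, 6.0]]))  # 3.5
```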

“OpenCog Hyperon is an open-source software framework designed specifically for the architecture of AI systems,” Goertzel added. “This new hardware infrastructure is purpose-built to implement OpenCog Hyperon and its AGI ecosystem environment.”

To grant users access to the supercomputer, Goertzel and his team are using a tokenized system common in AI. Users gain access through their tokens, and can both use and contribute data to the existing datasets that other users rely on to test and deploy AGI concepts.

In their simplest form, AI tokens work like the tokens in free-standing arcade video games: players bought tokens and inserted them into a machine for a set number of plays. In this instance, the data collected from playing the game is accessible to everyone else playing, not only at one arcade but wherever that instance of the game runs in other arcades around the world.

“GPT-3 was trained on 300 billion tokens (typically words, parts of words, or punctuation marks), and GPT-4 on 13 trillion,” wrote Mercatus scholar and software engineer Nabeel S. Qureshi. “Self-driving cars are trained on thousands of hours of video footage; GitHub Copilot, for programming, is trained on millions of lines of human code from the website GitHub.”
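
Note that the “tokens” in that quote are units of text, a different sense of the word from the access tokens described above. A toy illustration of the text sense (real LLMs use learned subword tokenizers such as byte-pair encoding, not this naive split):

```python
# Toy illustration of training "tokens". Real LLM tokenizers are learned
# subword schemes (e.g., byte-pair encoding); this naive version just
# splits on words and punctuation to show the unit being counted.
import re

def toy_tokenize(text):
    return re.findall(r"\w+|[^\w\s]", text)

tokens = toy_tokenize("Self-driving cars are trained on video footage.")
print(tokens)
# ['Self', '-', 'driving', 'cars', 'are', 'trained', 'on', 'video', 'footage', '.']
print(len(tokens))  # 10 -- GPT-3's training set had ~300 billion of these
```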

Leaders in AI, notably DeepMind cofounder Shane Legg, have said systems could meet or surpass human intelligence by 2028. Goertzel has previously estimated systems will reach that point by 2027, while Mark Zuckerberg is actively pursuing AGI, having announced in January a $10 billion investment in infrastructure to train advanced AI models.

SingularityNET, which is part of the Artificial Super Intelligence Alliance (ASI) — a collective of companies dedicated to open source AI research and development — plans to expand the network and the computing power available in the future. Other ASI members include Fetch.ai, which recently invested $100 million in a decentralized computing platform for developers.


What the EU AI Act is already changing for businesses

This article was originally published on the IBM blog.

The European Union’s AI Act entered into force on August 1. The EU AI Act is one of the first regulations of artificial intelligence to be adopted in one of the world’s largest markets. What is it going to change for businesses?

What is the EU AI Act?

The European Union (EU) is the first major market to define new rules around AI.

“The aim is to turn the EU into a global hub for trustworthy AI,” according to EU officials.

The AI Act takes a risk-based approach, meaning that it categorizes applications according to their potential risk to fundamental rights and safety. Some of the most important provisions include: a prohibition on certain AI practices deemed to pose unacceptable risk, standards for developing and deploying certain high-risk AI systems, and rules for general-purpose AI (GPAI) models. AI systems that do not fall within one of the risk categories in the EU AI Act (often dubbed the ‘minimal risk’ category) are not subject to requirements under the act, although some may need to meet transparency obligations, and all must comply with other existing laws.
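
In pseudocode terms, the act's structure resembles a tiered classifier. The sketch below is purely illustrative; the actual legal categorization depends on detailed criteria in the act's annexes, and the example use cases are simplified from those named in this article.

```python
# Illustrative sketch of the EU AI Act's risk-based structure. The real
# classification depends on detailed legal criteria, not string matching;
# example use cases are simplified from the article's text.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict development and deployment standards"
    MINIMAL = "no requirements under the act"

PROHIBITED = {"biometric categorization on sensitive traits",
              "untargeted facial-image scraping",
              "emotion recognition at work or school",
              "predictive policing"}
HIGH_RISK = {"CV screening"}  # the hiring example Dalen gives below

def classify(use_case: str) -> RiskTier:
    if use_case in PROHIBITED:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK:
        return RiskTier.HIGH
    return RiskTier.MINIMAL  # simplified; transparency duties may still apply

print(classify("CV screening"))  # RiskTier.HIGH
```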

“It is a form of regulatory pragmatism that aims to adapt constraints to the level of risk,” says Bruno Massot, Vice President, Assistant General Counsel and Head of Legal, IBM Europe.

Under these new rules, certain AI applications that threaten citizens’ rights will be banned. These include biometric categorization systems based on sensitive characteristics, untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases, emotion recognition in the workplace and schools, and predictive policing.

The AI Act was approved by the EU Parliament on April 22 and by the EU Member States on May 21. It was published in the Official Journal of the European Union on July 12, and entered into force on August 1, with different provisions of the law going into effect in stages.

The EU AI Act will follow a very fast-paced schedule.

“It is coming very rapidly, but this is not a surprise. The drafts have been long known. It is also important to go fast as the technology is advancing very quickly,” notes Massot.

The milestones are staggered from the date of entry into force:

  • Month 6: bans on prohibited AI practices come into force.
  • Month 9: codes of practice become applicable.
  • Month 12: general-purpose AI rules, including governance, come into force.
  • Month 24: the rules for high-risk AI systems take effect.
  • Month 36: the rules for AI systems that are products or safety components of products regulated under specific EU laws apply.
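
Since the act entered into force on August 1, 2024, these offsets translate into concrete calendar dates. A quick sketch (approximate; the act's own transitional provisions govern the exact effective days):

```python
# Rough translation of the AI Act's month offsets into calendar dates,
# counting from entry into force on August 1, 2024. Approximate only;
# the act's transitional provisions set the exact days.
from datetime import date

def add_months(d: date, months: int) -> date:
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, d.day)

entry = date(2024, 8, 1)
milestones = {
    6: "bans on prohibited AI practices",
    9: "codes of practice",
    12: "general-purpose AI rules, including governance",
    24: "rules for high-risk AI systems",
    36: "rules for AI in products regulated under specific EU laws",
}
for months, rule in milestones.items():
    print(f"{add_months(entry, months)}: {rule}")
# e.g. 2025-02-01: bans on prohibited AI practices
```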

Non-compliance with the law can result in hefty fines of up to 35 million euros or 7% of a company’s global annual turnover, whichever is higher.

What steps should businesses be taking?

“There is a sense of urgency,” observes IBM’s Dasha Simons, Managing Consultant, Trustworthy AI. “Two or three years might seem long for some of the things to get ready. But if you’re an international or multinational organization with a lot of AI systems across the globe, it’s not that long.”

So, where to start? The first step, Simons says, is to determine which rules will apply to your business.

“The AI Act defines different rules and definitions for deployers, providers, importers. Depending on which role you have as a company, you will need to comply with different requirements,” Simons explains. “Are you buying models and just kind of rebranding them and not changing the model itself? Or are you actually fine-tuning the model quite a bit? This is really the first step.”

The second step would be to determine which AI systems are used within your organization, and what the risk levels are associated with each one of them.

“The important thing is to know where you want to focus at first,” says Simons. “A lot of companies don’t even know what type of AI systems and models they have in production or in development.”

Doing this assessment will help businesses determine their priorities by assessing first the prohibited systems, then the high-risk ones.

“This will help determine which processes you need to put in place when new AI systems are launched, and make sure they are already compliant with the AI Act proactively, and not as an afterthought.”

Simons believes that the regulations outline the need for businesses to be strategic with AI and for the C-suite to be highly involved in this conversation.

“You need technical expertise, perhaps also the expertise from a Chief Privacy Officer that has implemented GDPR. You need that knowledge on the table, but the responsibility and the direction should be set at the C-level as well,” says Simons.

What will the EU AI Act change for businesses?

The European Union’s ambition is to establish a standard for AI development and act as a blueprint for the rest of the world. The EU already enacted a comprehensive data privacy and security law in 2018 with the GDPR.

The AI Act is “the first regulation in the world that is putting a clear path on a safe and human-centric development of AI,” said Dragoș Tudorache, a member of the European Parliament and lead negotiator for the AI Act, during a press conference.

Around the world, many regulations are being adopted regarding AI use.

“Consistency is important because it would be complicated [for businesses] to develop different systems with diverging restraints,” believes Massot.

Hans-Petter Dalen, IBM’s EMEA Business Leader for watsonx and embeddable AI, notes that an interesting change will be the need for organizations and businesses operating in the EU to educate their users or employees about AI.

“Interestingly, in the EU AI Act, there is an article about AI literacy. So every company or organization in the EU single market that’s using AI will have a requirement to educate their users and employees to a certain level. We don’t know what the level is yet, but it is a very good thing to increase the base level of what AI is,” explains Dalen.

“An example in that area is HR systems that already today come with the ability to screen CVs and recommend candidates. That is a high-risk use case and comes with seven essential requirements that you have to comply with. Are you, as a purchaser of that software, educated enough in AI to understand the questions you need to ask about the algorithms?” asks Dalen.

There are still many unknowns around technical standards that will be required by the AI Act.

“There are seven essential requirements to comply with high-risk use cases that are formulated quite loosely in the law itself. And those seven requirements have resulted in ten requests for technical standards which two of the European standard decision organizations are now developing,” explains Dalen. “The technical standards will give us clarity on what the actual requirements from the regulation are and we clearly expect that implementing the technical standards will be the fastest and the cheapest way to achieve conformity.”

 

Google’s DeepMind announces “first AI” that can solve International Mathematical Olympiad problems

This article was originally published in the Times of India.

Google’s AI unit Google DeepMind recently unveiled a pair of artificial intelligence (AI) systems that achieved silver-medal standard in solving International Mathematical Olympiad problems. The AI models demonstrated advances in solving complex mathematical problems, one of the key frontiers of generative AI development.

“We’re presenting the first AI to solve International Mathematical Olympiad problems at a silver medalist level. It combines AlphaProof, a new breakthrough model for formal reasoning, and AlphaGeometry 2, an improved version of our previous system,” Google DeepMind said.

Artificial general intelligence (AGI) with advanced mathematical reasoning is seen as the future because it has the potential to unlock new frontiers in science and technology. However, current AI systems still struggle with solving general math problems because of limitations in reasoning skills and training data.

Google’s AI unit said that, together, the AlphaProof and AlphaGeometry 2 systems solved four of the six problems from this year’s International Mathematical Olympiad (IMO), achieving the same score as a silver medalist in the competition for the first time.

Each year, elite pre-college mathematicians train, sometimes for thousands of hours, to solve six exceptionally difficult problems in algebra, combinatorics, geometry and number theory.

How Google DeepMind achieved IMO success

AI models have struggled with abstract math, which demands the kind of general reasoning capabilities associated with human intelligence. Google DeepMind added that one question was solved within minutes, but others took up to three days, longer than the competition’s time limit.

“First, the problems were manually translated into formal mathematical language for our systems to understand. In the official competition, students submit answers in two sessions of 4.5 hours each. Our systems solved one problem within minutes and took up to three days to solve the others,” Google DeepMind said.
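
To give a sense of what “formal mathematical language” means here: AlphaProof reportedly works in the Lean proof assistant, where both a statement and its proof are machine-checkable. A trivial example, not an actual IMO problem:

```lean
import Mathlib

-- A toy formal statement and its machine-checkable proof in Lean 4 with
-- Mathlib, illustrating the kind of formal language IMO problems were
-- translated into. (Not an actual IMO problem.)
theorem sum_of_squares_nonneg (a b : ℝ) : 0 ≤ a ^ 2 + b ^ 2 := by
  positivity
```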

The company said it created AlphaProof, a system focused on reasoning, by combining a version of Gemini, the language model behind its chatbot of the same name, with AlphaZero, another AI system which previously bested humans in board games such as chess and Go.

“AlphaProof solved two algebra problems and one number theory problem by determining the answer and proving it was correct. This included the hardest problem in the competition, solved by only five contestants at this year’s IMO. AlphaGeometry 2 proved the geometry problem, while the two combinatorics problems remained unsolved,” it added.

How Biden’s AI policies pushed some Silicon Valley bigwigs toward Trump

This article was originally published in Fast Company.

Trump has promised VCs and AI companies a hands-off approach to AI regulation.

 

Trump and his allies woo Silicon Valley with hands-off AI policy  

The tech media has for months been reporting on a supposed shift to the right by Silicon Valley founders and investors. The narrative seems driven by recent pledges of support for Trump from the likes of Marc Andreessen, Joe Lonsdale, and Elon Musk. The Trump camp has been wooing tech companies and investors, including those within the AI sector.

The Washington Post reported Tuesday that a group of Trump allies and ex-cabinet members has written a draft executive order that would mark a radical shift away from the Biden administration’s current approach to AI regulation. The draft comes from the right-wing think tank America First Policy Institute, which is led by Trump’s former chief economic adviser Larry Kudlow. The document proposes an AI regulatory regime that relies heavily on the AI industry to regulate itself when it comes to the safety and security of its models, and would establish “industry-led” agencies to “evaluate AI models and secure systems from foreign adversaries,” the Post reports. It would also create “Manhattan Projects” to develop cutting-edge AI for the military.

By contrast, the Biden administration’s current executive order (EO) on AI is chiefly concerned with the security risks to U.S. people and interests that the very largest AI models might pose. The administration seems particularly worried that such models, delivered as a service via an API, could be used to wage some kind of cyberwar on the U.S. The Biden order, signed last October, directs makers of such models to report regularly to the Commerce Department on the development, safety testing, and distribution of their products. The EO’s reporting requirements apply only to the very largest AI models hosted in very big data centers. Right now, only a few well-monied AI companies have built such models.

But many in the AI sector fear that model sizes and computing power will rapidly increase, which would subject even smaller AI companies to onerous reporting requirements. “Things that today look very hard and expensive are going to get very cheap,” said Andreessen, founding partner of Andreessen Horowitz, on a recent podcast with fellow cofounder Ben Horowitz. The way Andreessen sees it, Biden’s regulations would stop the industry’s rapid movement forward and give the current market leaders a monopoly on big foundation models. That may be why one of those market leaders, OpenAI (an a16z portfolio company), has called for more stringent AI regulation.

However, the Biden EO’s current “big model” definitions are flexible, by design—the current thresholds are merely placeholders. The EO says that the Commerce Department will “determine the set of technical conditions for a large AI model to have potential capabilities that could be used in malicious cyber-enabled activity, and revise that determination as necessary and appropriate.”
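
To make the placeholder point concrete: the widely reported initial reporting trigger is training compute above 10^26 floating-point operations, a figure the Commerce Department can revise. A hypothetical sketch of the size-based check and the concern it raises:

```python
# Sketch of a size-based reporting trigger. The 1e26 figure is the widely
# reported initial training-compute threshold; the EO treats it as a
# placeholder the Commerce Department can revise.

REPORTING_THRESHOLD_FLOP = 1e26  # subject to revision, by design

def must_report(training_flop: float,
                threshold: float = REPORTING_THRESHOLD_FLOP) -> bool:
    return training_flop >= threshold

# Today only frontier-scale training runs cross the line...
print(must_report(5e25))  # False
print(must_report(2e26))  # True
# ...but if the threshold stays fixed while compute gets cheaper, smaller
# labs' runs eventually cross it -- the concern Andreessen voices above.
```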

The rightward shift of Silicon Valley is mostly a social media and podcast phenomenon. Silicon Valley has long been a bastion of liberals and libertarians, and there is no evidence that it has changed much over the past decade. In the ’24 election cycle, the top 20 venture capital firms and their employees have given twice as much money to Democratic candidates and causes as to Republican ones, a Wired review of Federal Election Commission reports shows.

YouTube is a victim of AI’s original sin: web scraping

AI models are trained largely on vast corpora of text scraped from the internet. These huge training datasets were assembled while online publishers and creators had no idea it was happening. That’s how GPT-2 first began showing hints of real language savvy and some kind of intelligence. Now, of course, publishers are wise to the situation, and many have found new revenue sources by licensing their data to AI companies for training.

Google, whose AI researchers opened the door to LLMs, was also a victim of the web data harvesting practiced by AI developers. A new investigation by the nonprofit news organization Proof finds that Anthropic, Nvidia, Apple, and Salesforce used the subtitles and transcripts of thousands of YouTube videos to train their language models. These included videos by popular creators such as MrBeast and Marques Brownlee, and from the channels of MIT, Harvard, NPR, Stephen Colbert, John Oliver, and others. The Proof investigators found that, overall, the training dataset included text from 173,536 YouTube videos across more than 48,000 channels.

The dataset containing the content wasn’t scraped by employees of the big tech companies that used it to train models. Rather, the YouTube text is part of a publicly available dataset called the Pile, a compilation of various text datasets created by the nonprofit AI research group EleutherAI, the report says. A research paper published by EleutherAI says that YouTube subtitles are especially valuable because they are often available in a variety of languages. YouTube’s terms of service prohibit scraping its content without permission.
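
Tallies like “173,536 videos across more than 48,000 channels” can be computed directly from dataset records. A hypothetical sketch, with invented field names that are not the actual schema of the Pile's YouTube subtitles component:

```python
# Hypothetical sketch of tallying a subtitle dataset's footprint.
# Field names are invented for illustration; they are not the actual
# schema of the Pile's YouTube subtitles component.

def tally(records):
    videos, channels = set(), set()
    for rec in records:
        videos.add(rec["video_id"])
        channels.add(rec["channel"])
    return len(videos), len(channels)

sample = [
    {"video_id": "a1", "channel": "MrBeast"},
    {"video_id": "b2", "channel": "MIT"},
    {"video_id": "c3", "channel": "MIT"},
]
print(tally(sample))  # (3, 2) -- Proof's full tally: 173,536 videos, 48,000+ channels
```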

 

CoreWeave CEO Mike Intrator on generative AI’s effect on the power grid

Recent studies have shown that the advance of generative AI models may significantly increase demand on the power grid. A new study released Wednesday by Columbia University shows that by 2027, the GPUs that run generative AI models will account for about 1.7% of total electricity use in the U.S., or 4% of total projected electricity sales. “While this might seem minimal, it constitutes a considerable growth rate over the next six years and a significant amount of energy that will need to be supplied to data centers,” the report says.
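
For scale, assuming total U.S. electricity consumption of roughly 4,000 TWh per year (an assumption for illustration, not a figure from the study), 1.7% works out to tens of terawatt-hours:

```python
# Back-of-envelope scale check. The 4,000 TWh/year figure for total U.S.
# electricity use is an assumption for illustration, not from the study.
us_total_twh = 4_000   # approx. annual U.S. electricity use (assumed)
gpu_share = 0.017      # the study's 2027 projection for GenAI GPUs

gpu_twh = us_total_twh * gpu_share
print(f"~{gpu_twh:.0f} TWh/year")  # ~68 TWh -- on the order of a mid-size
                                   # country's annual electricity consumption
```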

People within the AI infrastructure business have been thinking about the problem for a while now. “I think that the U.S. is in a position where the amount of power that’s going to be required and the scale of the power that’s required for these data centers is going to put increasing pressure on the grid,” says Mike Intrator, CEO of CoreWeave, which offers cloud computing designed for AI training and inference. “It’s going to become a bottleneck and a limiting factor for the development of the AI infrastructure that is required.”

Over the past year, CoreWeave increased its data center count from 3 to 14, and expects to have 28 data centers worldwide. Intrator believes that significant investment in the grid will be needed to both increase its power and improve the way it moves power around to where it’s needed. “I know that that’s a challenge because of how those projects are regulated at the state level,” he says, “and I know that that’s going to require some real thought.”


The University of AI

This is an episode of the podcast AI for the Rest of Us.

Art Markman and K.P. Procko consider how artificial intelligence is already changing the college experience, its promise and pitfalls, and future directions.

Artificial intelligence tools might transform education, for example, by giving every student 24/7 access to an affordable tutor that’s an expert in any subject and infinitely patient and supportive. But what if these AI tools give bad information or relieve students of the kind of critical thinking that leads to actual learning? And what’s the point of paying the big bucks to go to college if you can learn everything from AI chatbots?

Today on the show we have Art Markman—Vice Provost for Academic Affairs and a professor of psychology and marketing at the University of Texas at Austin. He’s also co-host of the public radio program and podcast “Two Guys on Your Head.” And we also have K.P. Procko—an associate professor of instruction in biochemistry who uses AI in the classroom and who also manages a grant program in UT Austin’s College of Natural Sciences to help faculty integrate AI tools into the classroom.

Finance leaders see GenAI to have most immediate impact on forecast, budget

The original article was published on FutureCFO.

Finance leaders expect generative artificial intelligence (GenAI) to have its most immediate impact on explaining forecast and budget variances, according to Gartner, Inc.

A recent Gartner survey revealed that 66% of finance leaders think the technology will affect these organisational functions. It also showed high levels of uncertainty among finance executives about the challenges that implementing GenAI will bring.

Finance leaders say GenAI’s most impactful use case will be forecast/budget variance explanations, with respondents anticipating revenue/spend data classification and management reporting as the next most impactful use cases for GenAI in finance.

The survey of 100 finance leaders also revealed the GenAI use cases that corporate finance leaders anticipate will have the most impact on their function in 2024.

Most Impactful Anticipated Use Cases for GenAI in Finance in 2024

“Forecast and budget variance explanation as the top choice reflects the availability of embedded GenAI interfaces within business intelligence tools,” says Clement Christensen, senior director analyst for Research in the Gartner Finance practice. “This enables users to perform natural language queries to quickly assess known common causes of variance.”

Gartner says recent GenAI advancements have refined models, so they are more capable of supporting tasks related to forecasts and variances, industry information and other factors that generate hypotheses around business performance, which can be tested through statistical models.
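
As a toy example of the kind of variance explanation being described, assuming a simple actual-versus-budget table (illustrative only, not a Gartner or vendor implementation):

```python
# Toy sketch of forecast/budget variance explanation -- the GenAI use case
# described above would layer a natural-language interface over numbers
# like these. Illustrative only; not a Gartner or vendor implementation.

budget = {"travel": 120_000, "cloud": 250_000, "salaries": 900_000}
actual = {"travel": 95_000, "cloud": 310_000, "salaries": 905_000}

def explain_variances(budget, actual, threshold_pct=5.0):
    """Flag line items whose variance exceeds the threshold, in plain words."""
    lines = []
    for item, planned in budget.items():
        variance = actual[item] - planned
        pct = 100 * variance / planned
        if abs(pct) >= threshold_pct:
            direction = "over" if variance > 0 else "under"
            lines.append(f"{item}: {direction} budget by {abs(pct):.1f}% "
                         f"(${abs(variance):,})")
    return lines

for line in explain_variances(budget, actual):
    print(line)
# travel: under budget by 20.8% ($25,000)
# cloud: over budget by 24.0% ($60,000)
```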

Challenges ahead

When it comes to potential challenges around implementing GenAI, Gartner notes that finance leaders expect to contend with issues around talent, data accuracy and governance, technical compatibility, budgeting and change management.

Data accuracy and talent limitations cause slightly more concern, although the fairly even distribution of other potential barriers reiterates financial leaders’ relatively limited experience with GenAI.

“GenAI is all about large language models, but the core of finance’s work isn’t in natural language, it’s in numbers, so many finance leaders are still waiting to see a GenAI application that can reliably handle complex calculations,” Christensen says. “For most finance teams, GenAI will likely be an interface to interact with other AI models based on machine learning, or other non-generative models for the next few years.”

Finance leaders seeking to adopt GenAI in their function should keep an open mind and involve key stakeholders, including finance leadership and IT teams, to discuss priorities and expectations.

Further, finance executives should also identify when to approach vendors to help determine which GenAI offerings are worth acquiring for the organisation’s needs.

Finally, CFOs should audit critical data with respective owners before implementation, to decide what modifications must be implemented for use by a GenAI model.

“Finance leaders see potential in the accessibility of GenAI in finance, but valid questions on reliability, accuracy, auditability, and cost, as well as data privacy and security still remain,” says Christensen.
