IAPP Report: AI Governance Success Tied to Existing Privacy Program Maturity, Technical Tools Gap an Early Obstacle

As ChatGPT rockets AI governance to the forefront of discussion, IAPP and FTI Consulting’s annual privacy management report finds that over half of organizations are building their approaches on top of existing, mature privacy programs. But while the commitment is often there, the tools and skills may not be, as the AI governance workforce is only beginning to develop.

The main obstacle organizations appear to be running into is a lack of available tools and talent. The AI governance workforce is only beginning to take shape, and it overlaps with existing shortages of qualified privacy and security professionals of all types. The development of the necessary technical tools is also at an early stage, with some organizations finding that products meeting their needs are simply not available yet or are difficult to track down.

“Harmful bias” leads the risk categories of AI governance that organizations are focused on, with most reporting it as a “high probability.” One of the leading concerns in this area is models that fail to be representative or that incorporate some kind of unconscious bias, producing results that lack validity or are even unethical. Organizations are also concerned about making promises based on expected AI capability that they then fail to deliver. Organizations most frequently believe that risk and privacy management in this area require consistent definitions of harm, established risk indicators for detecting bias, clear guidelines on fairness requirements, and common tools and standards for bias detection.

“Bad governance” is almost as common a risk concern as the potential for bias. In addition to opening up a variety of risks, poor AI governance could bloat administrative and legal budgets. Individuals currently involved with privacy management programs say they have doubts about how principles such as data minimization and purpose specification will translate to algorithmic AI systems. Respondents are looking for clear AI governance strategies tailored to the risks inherent in processing personal data in AI systems, and would like to see AI assessments and assignment of responsibilities embedded in workforce training programs.

The original article was posted at CPO Magazine.