This article was originally published by EY Consulting.
In today’s rapidly advancing era of Generative Artificial Intelligence (AI), businesses are eager to embrace the technology to spearhead innovation and unlock unprecedented opportunities across industries and applications. At this crucial juncture, it is imperative for enterprises to establish trust by developing a robust governance strategy and operational framework to navigate the risks associated with Generative AI, while also adhering to global regulatory requirements and guidance.
Exploring Generative AI and the evolving risk landscape
Generative AI is fundamentally reshaping the way information is accessed, processed, created, and utilized to drive business innovation. Despite the remarkable outcomes it produces, Generative AI can give the misleading impression of being a plug-and-play system, and its outputs raise concerns about the spread of misinformation, hallucinated responses, deepfakes, and manipulated data that can compromise trust. The models are also vulnerable to prompt injection, disclosure of sensitive information, data poisoning, inappropriate data provenance, unintended intellectual property infringement, and the generation of biased and unfair responses inherited from their training datasets. These security and privacy concerns, along with sustainability issues such as high computational demands and carbon emissions, make enterprises in regulated environments skeptical about adopting Generative AI.
Navigating governance challenges
The increased adoption of Generative AI has prompted widespread regulatory responses globally. Notable examples include the National Institute of Standards and Technology (NIST), which has introduced an AI Risk Management Framework. The European Parliament has proposed the EU Artificial Intelligence Act, while the European Union Agency for Cybersecurity (ENISA) has been at the forefront of discussions on cybersecurity for AI. Additionally, HITRUST has released the latest version of its Common Security Framework (CSF v11.2.0), which now encompasses areas specifically addressing AI risk management.
These guidelines and frameworks provide valuable assistance to enterprises, but they have limitations in fully addressing the ethical and legal implications of Generative AI, as well as regulatory compliance for its use, even as they foster innovation.
The US has taken a notable step by issuing a comprehensive executive order on AI, aiming to promote the “safe, secure and trustworthy development and use of artificial intelligence.” A White House fact sheet outlining the order has been released. This Executive Order stands as a significant contribution to the discussion of accountability in the development and deployment of AI across organizations.
Further, at the international AI Safety Summit, it was announced that like-minded governments and AI companies had reached an agreement to test new AI models prior to their release and adoption. The UK will also establish the world’s first AI safety institute, responsible for testing new AI models for a range of risks.
These developments underscore the seriousness and commitment of governments and regulators worldwide in managing risks and governance in Generative AI.
Understanding the trust quotient in generative AI
To capitalize on the competitive advantage and business value offered by Generative AI, enterprises must build trust in Generative AI models and solutions. This can be achieved by making AI transparency, accountability, and ethical considerations integral components of Generative AI design itself.
Organizations need to establish clear lines of responsibility for the actions and outputs generated, develop auditable and monitorable mechanisms for traceability, and facilitate the identification of errors, biases, or misconduct within Generative AI. Further, AI ethics must be intentionally integrated into the fabric of Generative AI design through technological processes that manage key ethical aspects such as bias mitigation, privacy protection, consent and autonomy, social impact, responsible data usage, and human oversight.
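As a minimal sketch of what an auditable, monitorable traceability mechanism might look like in practice, the example below records each Generative AI interaction together with the metadata needed to trace an output back to a model version and an accountable owner. All names here (AuditRecord, log_interaction) are hypothetical illustrations, not drawn from any specific framework.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical audit record for a single Generative AI interaction.
# Fields are chosen to support traceability: who is accountable,
# which model version produced the output, and when.
@dataclass
class AuditRecord:
    timestamp: float
    model_version: str
    accountable_owner: str   # team or role responsible for this output
    prompt_hash: str         # hash rather than raw text, to limit exposure of sensitive data
    output_hash: str

def log_interaction(prompt: str, output: str, model_version: str,
                    owner: str, log_path: str = "genai_audit.jsonl") -> AuditRecord:
    """Append an audit record for one AI interaction to an append-only log."""
    record = AuditRecord(
        timestamp=time.time(),
        model_version=model_version,
        accountable_owner=owner,
        prompt_hash=hashlib.sha256(prompt.encode()).hexdigest(),
        output_hash=hashlib.sha256(output.encode()).hexdigest(),
    )
    with open(log_path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record
```

Hashing prompts and outputs keeps the trail auditable without storing sensitive content verbatim; a production system would add signing, access controls, and retention policies on top of such a log.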
Evaluating the efficacy of governance mechanisms in trust-building
To build and sustain trust in AI, organizations need to establish a Generative AI governance framework that is unbiased, resilient, explainable, transparent, and performance-based. Enterprises must ensure that inherent biases are identified and managed, and that the data used by the Generative AI system, its components, and the algorithm itself are secured against unauthorized access, corruption, and adversarial attacks. The training methods and decision criteria of Generative AI should be understood, documented, and readily available for human operators to challenge and validate where necessary.
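To make “biases are identified and managed” concrete, one common starting point is to measure how model outcomes differ across groups. The sketch below computes a simple demographic parity gap; it is illustrative only, and a real bias audit would use richer metrics and statistical testing.

```python
from collections import defaultdict

def demographic_parity_gap(outcomes: list[tuple[str, bool]]) -> float:
    """Largest difference in favorable-outcome rates between any two groups.

    `outcomes` pairs a group label with whether the model produced a
    favorable result, e.g. [("group_a", True), ("group_b", False), ...].
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
    for group, favorable in outcomes:
        counts[group][0] += int(favorable)
        counts[group][1] += 1
    rates = [favorable / total for favorable, total in counts.values()]
    return max(rates) - min(rates)

# A gap near 0 suggests parity across groups; a large gap flags the
# model for human review before it reaches production.
```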
Privacy-related initiatives, such as providing end users with appropriate notification during AI interaction, offering them an opportunity to select their level of interaction, and obtaining user consent for related data processing, need to be clearly defined. It is crucial that conformance with these initiatives is built into the implementation process itself.
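These privacy initiatives can be enforced programmatically rather than left as policy statements. The sketch below, using hypothetical names, gates every AI interaction on a recorded consent decision and discloses the user’s chosen interaction level up front.

```python
from enum import Enum

class InteractionLevel(Enum):
    NONE = "none"            # user has opted out of AI interaction
    ASSISTED = "assisted"    # AI suggestions reviewed by a human
    AUTOMATED = "automated"  # fully AI-generated responses

# Hypothetical consent store: user id -> (consented, chosen level)
consent_store: dict[str, tuple[bool, InteractionLevel]] = {}

def record_consent(user_id: str, consented: bool, level: InteractionLevel) -> None:
    """Persist the user's consent decision and selected interaction level."""
    consent_store[user_id] = (consented, level)

def handle_request(user_id: str, prompt: str) -> str:
    """Serve a request only if consent exists; always disclose AI involvement."""
    consented, level = consent_store.get(user_id, (False, InteractionLevel.NONE))
    if not consented or level is InteractionLevel.NONE:
        return "AI processing declined; routing to a human agent."
    # Notify the user that they are interacting with AI, per the chosen level.
    notice = f"[You are interacting with an AI system ({level.value} mode).]"
    return f"{notice} (model response would be generated here)"
```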
From a business standpoint, it is essential to ensure that the outcomes of Generative AI align with stakeholders’ expectations and that performance is monitored against the desired levels of precision and consistency.
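As a simple illustration of monitoring against agreed expectations, the sketch below compares periodic evaluation scores with stakeholder-defined thresholds and flags degradation; the thresholds and metric names are assumptions for illustration, not prescribed values.

```python
def check_performance(scores: list[float], precision_floor: float = 0.90,
                      max_variation: float = 0.05) -> list[str]:
    """Flag breaches of agreed precision and consistency thresholds.

    `scores` are periodic evaluation results, e.g. weekly accuracy on a
    human-reviewed sample. Threshold values here are illustrative.
    """
    alerts = []
    if min(scores) < precision_floor:
        alerts.append(f"precision below floor: {min(scores):.2f} < {precision_floor}")
    if max(scores) - min(scores) > max_variation:
        alerts.append(f"inconsistent performance: spread {max(scores) - min(scores):.2f}")
    return alerts

# Example: check_performance([0.93, 0.91, 0.87]) flags both the
# precision dip and the widening spread between evaluations.
```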
Steering regulatory dynamics forward
While commendable efforts are being made across the global landscape to rapidly develop and update Generative AI regulations and guidance, the pace of Generative AI innovation is outstripping the ability of current global initiatives to adequately rein in the technology and its usage. There is an urgent need for enterprises to actively track these Generative AI regulations and ensure compliance with the applicable requirements.