
Responsible Artificial Intelligence (AI) governance using a relational governance framework

Artificial Intelligence (AI) was hailed as a revolutionary technology in the early 21st century, yet its uptake was slow and encumbered. After cycles of enthusiasm and disappointment, its current rapid and pervasive adoption has been called the second coming of AI. It is now employed across a variety of sectors, driven by efforts to build practical applications that improve daily life and society. Healthcare is a highly promising, but also challenging, domain for AI. Its two main uses there are to support health professionals in decision-making and to automate repetitive tasks so that professionals' time is freed up for patient care. Although still at an early stage, AI applications are evolving rapidly. ChatGPT, for instance, is a large language model (LLM) built with deep learning techniques and trained on vast amounts of text data. Such models have been used for language translation, text summarisation, conversation generation, and other text-to-text generation tasks.
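
To make the idea of text summarisation concrete, the minimal sketch below shows how a general-purpose language model can be asked to condense a short clinical note. It relies on the open-source Hugging Face transformers library and the facebook/bart-large-cnn checkpoint as stand-ins, since ChatGPT itself is reached through a proprietary API; the model choice and the note text are assumptions made purely for illustration.

    # Illustrative sketch only: a text-summarisation call using the open-source
    # Hugging Face "transformers" library. The model checkpoint and the sample
    # clinical note are assumptions for this example, not part of the article.
    from transformers import pipeline

    summariser = pipeline("summarization", model="facebook/bart-large-cnn")

    clinical_note = (
        "The patient presented with intermittent chest pain over two weeks. "
        "An ECG and troponin test were ordered; both returned normal results. "
        "The pain is most likely musculoskeletal, and follow-up in one month "
        "was recommended."
    )

    # The model returns a list of dictionaries, each with a "summary_text" field.
    result = summariser(clinical_note, max_length=40, min_length=10, do_sample=False)
    print(result[0]["summary_text"])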

However, the use of AI in medical and research settings has raised concerns about its potential to undermine the accuracy and integrity of the information it produces. One of the main concerns in the medical field is misinformation: because a model is trained on a large volume of data, it may inadvertently reproduce errors in its responses, and patients could receive incorrect or harmful medical advice with serious health consequences. Another issue is bias. A model trained on data that reflects existing biases and stereotypes may perpetuate them, leading to inaccurate or unfair conclusions in research studies as well as in routine care. In addition, the ability of AI tools to generate human-like text raises ethical concerns in research, education, journalism, law, and other sectors. For example, such models can be used to produce fake scientific papers and articles that deceive researchers and mislead the scientific community.

These concerns do not mean AI tools should be abandoned; like any other tools, they should be used with caution and with attention to context. One way to achieve this is to put a governance framework in place. Such a framework can help manage potential risks and harms by setting standards, monitoring and enforcing policies and regulations, providing feedback and reports on system performance, and ensuring that development and deployment respect ethical principles, human rights, and safety considerations. Governance frameworks can also promote accountability and transparency by making researchers and practitioners aware of the possible negative consequences of deploying these tools and encouraging them to use them responsibly.
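
To suggest how such a framework can translate into day-to-day practice, the sketch below shows one hypothetical way its requirements, standards, monitoring, and audit reporting, might be enforced around a model call. Every name in it (check_policy, governed_response, the disclaimer rule) is invented for illustration and does not come from the article or from any particular framework or library.

    # Hypothetical illustration of governance requirements wired into an AI
    # deployment: a policy check on each response plus an audit trail.
    # All rules and function names here are invented for this sketch.
    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("ai_governance_audit")

    def check_policy(response: str) -> list[str]:
        """Return a list of policy violations found in a model response."""
        violations = []
        # Example standard: medical answers must point the user to a clinician.
        if "consult a qualified clinician" not in response.lower():
            violations.append("missing human-review disclaimer")
        return violations

    def governed_response(prompt: str, model_fn) -> str:
        """Call the model, record the exchange for audit, and enforce the policy."""
        response = model_fn(prompt)
        violations = check_policy(response)
        # Monitoring and reporting: every exchange is written to an audit trail.
        audit_log.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prompt": prompt,
            "response": response,
            "violations": violations,
        }))
        if violations:
            return "Response withheld pending human review: " + ", ".join(violations)
        return response

    # Usage with a stand-in model function:
    print(governed_response(
        "What should I do about mild chest pain?",
        lambda p: "Rest and monitor symptoms.",
    ))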

The original article was published by the Observer Research Foundation.

The Boston Global Forum (BGF), in collaboration with the United Nations Centennial Initiative, released a major work entitled Remaking the World – Toward an Age of Global Enlightenment. More than twenty distinguished leaders, scholars, analysts, and thinkers put forth unprecedented approaches to the challenges before us. These include President of the European Commission Ursula von der Leyen, Governor Michael Dukakis, Father of the Internet Vint Cerf, Former Secretary of Defense Ash Carter, Harvard University Professors Joseph Nye and Thomas Patterson, and MIT Professors Nazli Choucri and Alex ‘Sandy’ Pentland. The BGF introduced core concepts shaping pathbreaking international initiatives, notably the Social Contract for the AI Age, an AI International Accord, the Global Alliance for Digital Governance, the AI World Society (AIWS) Ecosystem, and AIWS City.