Humans have many kinds of biases. To name just a few, we suffer from confirmation bias, which means that we tend to focus on information that confirms our preconceptions about a topic; from anchoring bias, where we make decisions by relying mostly on the first piece of information we receive on a subject; and from gender bias, where we tend to associate women with certain traits, activities, or professions, and men with others. When we make decisions, these biases often creep in unconsciously, resulting in decisions that are ultimately unfair and subjective.
These same types of bias can show up in artificial intelligence (AI), especially when machine learning techniques are used to build an AI system. A commonly used technique called "supervised machine learning" requires that AI systems be trained on a large number of examples of problems and their solutions. For example, if we want to build an AI system that can decide whether to accept or reject a loan application, we would train it with many examples of loan applications, and for each application, we would give it the correct decision (either accept or reject).
The AI system would find useful correlations in these examples and use them to make (hopefully correct) decisions on new loan applications. After the training phase, a test phase on a separate set of examples checks that the system is accurate enough to be deployed. However, if the training dataset is not balanced, inclusive, or representative enough of the dimensions of the problem we want to solve, the AI system may become biased. For example, if all accepted loan applications in the training dataset come from men and all rejected ones come from women, then the system will pick up the spurious correlation between gender and acceptance and will apply this bias when deciding on new applications going forward.
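The effect described above can be sketched in a few lines of code. The toy dataset and the decision-stump "learner" below are entirely hypothetical, invented for illustration: the training set is skewed so that every accepted application belongs to a man and every rejected one to a woman, and a learner that simply picks the single feature best predicting the label latches onto gender rather than any financially relevant attribute.

```python
# Hypothetical illustration of bias learned from an unbalanced dataset.
# Features (made up for this sketch): gender (1 = male, 0 = female) and
# high_income (1 = high income band, 0 = low income band).
train = [
    ({"gender": 1, "high_income": 1}, "accept"),
    ({"gender": 1, "high_income": 0}, "accept"),
    ({"gender": 1, "high_income": 1}, "accept"),
    ({"gender": 0, "high_income": 1}, "reject"),
    ({"gender": 0, "high_income": 0}, "reject"),
    ({"gender": 0, "high_income": 1}, "reject"),
]

def best_stump(data):
    """Return the feature whose value best separates accept from reject."""
    scores = {}
    for feat in data[0][0]:
        correct = sum(
            1 for x, y in data
            if (y == "accept") == (x[feat] == 1)
        )
        scores[feat] = correct / len(data)
    return max(scores, key=scores.get)

feature = best_stump(train)
print(feature)  # the learner picks "gender" (it predicts the label perfectly here)

# A new applicant with a high income is rejected purely because of gender:
new_app = {"gender": 0, "high_income": 1}
decision = "accept" if new_app[feature] == 1 else "reject"
print(decision)  # "reject"
```

In this sketch, income predicts only half of the training labels while gender predicts all of them, so any learner that maximizes training accuracy will prefer gender. Real systems use far more sophisticated models, but the underlying failure mode is the same: the model faithfully reproduces whatever correlations the training data contains.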
The original article can be found here.
In support of AI ethics, the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Artificial Intelligence World Society (AIWS.net) have developed the AIWS Ethics and Practice Index to measure ethical values, help people achieve well-being and happiness, and address important issues such as the SDGs. On AI ethics, AIWS.net initiated and promoted the design of the AIWS Ethics framework, built on four components (transparency, regulation, promotion, and implementation) for the constructive use of AI. In this effort, MDI invites participation and collaboration from think tanks, universities, non-profits, firms, and other entities that share its commitment to the constructive development of full-scale AI for world society.