
A tool to deal with bias in AI algorithms

IBM recently released a tool called ‘AI Fairness 360’ that detects bias in algorithms and recommends adjustments to the code.

For AI to work properly, it needs a vast range of unbiased data. IBM is addressing this bias problem with a toolkit called AI Fairness 360. According to AI News, the software is cloud-based and open source, and it works with common AI frameworks and services including Watson, TensorFlow, SparkML, AWS SageMaker, and AzureML. The system searches algorithms for signs of bias and recommends solutions to correct the problems.
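As a concrete illustration, the sketch below checks a toy dataset for group bias using aif360, the open-source Python package behind AI Fairness 360. The column names, values, and privileged/unprivileged groups are invented for the example, not taken from the article.

```python
# pip install aif360 pandas
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy loan data: 'sex' is the protected attribute (1 = privileged group),
# 'approved' is the label (1 = favorable outcome). Values are illustrative.
df = pd.DataFrame({
    "sex":      [1, 1, 1, 1, 0, 0, 0, 0],
    "income":   [60, 55, 40, 70, 45, 50, 38, 62],
    "approved": [1, 1, 0, 1, 0, 1, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact: ratio of favorable-outcome rates between groups
# (1.0 means parity). Statistical parity difference: the gap in
# favorable-outcome rates (0 means parity).
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

On this toy data the privileged group is approved at a rate of 0.75 and the unprivileged group at 0.25, so the disparate impact is about 0.33, a clear warning sign.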

Humans have natural biases, which means a developer’s biases can creep into their algorithms. The problem is that developers often cannot predict exactly what decisions their AI systems will make. With this IBM tool, they can see which factors those systems are actually using, and then act on them, as in the sketch below.
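Once a bias is detected, the toolkit also includes mitigation algorithms. Continuing the sketch above, one of aif360’s preprocessing algorithms, Reweighing, rebalances example weights so favorable outcomes are distributed evenly across groups before a model is retrained; the groups here are the same invented ones as before.

```python
from aif360.algorithms.preprocessing import Reweighing

# Reweigh training examples so that the weighted favorable-outcome rate
# is equal across the privileged and unprivileged groups.
rw = Reweighing(
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}],
)
dataset_fair = rw.fit_transform(dataset)

# The metrics honor instance weights, so after reweighing the
# disparate impact should move toward 1.0 (parity).
metric_fair = BinaryLabelDatasetMetric(
    dataset_fair,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)
print("Disparate impact after reweighing:", metric_fair.disparate_impact())
```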

This tool can play a vital role in helping developers ensure accountability and transparency as the technology develops further, which is also the long-term aim of the AIWS Initiative.