
“It’s a feature, not a bug” – How journalists can spot and mitigate AI bias

Consultant and executive coach Ramaa Sharma spoke to leading figures in the newsroom AI space to identify the risks of AI bias and potential ways to mitigate it.

Mitigating bias as a process

Tackling bias in AI systems is not easy. Even well-intentioned efforts have backfired. This was illustrated spectacularly in 2024 when Google’s Gemini image generator produced images of Black Nazis and Native American Vikings in what appeared to be an attempt to diversify outputs. The subsequent backlash forced Google to temporarily suspend this feature.

Incidents like this highlight how complex the problem is, even for well-resourced technology companies. Earlier this year, I attended a technical workshop at the Alan Turing Institute in London, part of a UK government-funded project exploring approaches to mitigating bias in machine learning systems. One method suggested was for teams to take a “proactive monitoring approach” to fairness when creating new AI systems: engineers and their teams are encouraged to add metadata at every stage of the AI production process, recording the questions asked and the mitigations applied, so that the decisions made can be tracked.
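
As a rough illustration of what such proactive monitoring could look like in code, the sketch below records an audit entry for each stage of a pipeline. This is only a sketch of the idea: the BiasAuditEntry class, its fields, the stage names and the example values are my own assumptions, not a tool or schema taken from the workshop.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class BiasAuditEntry:
    """Metadata recorded at one stage of the AI production process (hypothetical schema)."""
    stage: str                      # e.g. "data collection", "model training"
    questions_asked: list[str]      # deliberative prompts the team considered
    biases_considered: list[str]    # e.g. "missing data bias", "confirmation bias"
    mitigations_applied: list[str]  # decisions and fixes made at this stage
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class BiasAuditLog:
    """Collects entries across the lifecycle so decisions can be traced later."""
    def __init__(self) -> None:
        self.entries: list[BiasAuditEntry] = []

    def record(self, entry: BiasAuditEntry) -> None:
        self.entries.append(entry)

    def to_json(self) -> str:
        return json.dumps([asdict(e) for e in self.entries], indent=2)

# Example: logging a decision made during data collection.
log = BiasAuditLog()
log.record(BiasAuditEntry(
    stage="data collection",
    questions_asked=["Which groups are under-represented in this dataset?"],
    biases_considered=["missing data bias"],
    mitigations_applied=["sourced additional records covering women's health indicators"],
))
print(log.to_json())
```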

The researchers shared three categories of bias that can occur across the AI lifecycle, covering thirty-three specific biases in total, along with deliberative prompts to help mitigate them:

  1. Statistical biases arise from flaws in how data is collected, sampled, or processed, leading to systematic errors. A common type is missing data bias, where certain groups or variables are underrepresented or absent entirely from the dataset.

Take a health dataset that primarily includes data from men and omits women’s health indicators (e.g. pregnancy-related conditions or hormonal variations): AI models trained on it may fail to recognise or respond appropriately to women’s health needs. A simple representation check for this kind of gap is sketched after this list.

  2. Cognitive biases refer to human thinking patterns that deviate from rational judgment. When these biases influence how data is selected or interpreted during model development, they can become embedded in AI systems. One common form is confirmation bias, the tendency to seek or favour information that aligns with one’s pre-existing beliefs or worldview.

For example, a news recommender system might be designed using data curated by editors with a specific political leaning. If the system reinforces content that matches this worldview while excluding alternative perspectives, it may amplify confirmation bias in users. A toy version of this feedback loop is also sketched after this list.

  3. Social biases stem from systemic inequalities or cultural assumptions embedded in data, often reflecting historic injustices or discriminatory practices. These biases are often encoded in training datasets and perpetuate inequalities unless addressed.

For example, an AI recruitment tool trained on historical hiring data may learn to prefer male candidates for leadership roles if past hiring decisions favoured men, thus reinforcing outdated gender norms and excluding qualified women.
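
To make the first category concrete, here is a minimal sketch of the representation check mentioned in the statistical bias example above. The toy records, the "sex" column and the 30% threshold are illustrative assumptions, not a standard prescribed at the workshop.

```python
import pandas as pd

# Illustrative records only; a real health dataset would be far larger.
records = pd.DataFrame({
    "patient_id": range(1, 11),
    "sex": ["male"] * 8 + ["female"] * 2,   # women under-represented
    "outcome": [0, 1, 0, 0, 1, 0, 1, 0, 1, 0],
})

def flag_underrepresented(df: pd.DataFrame, column: str, threshold: float = 0.3) -> list[str]:
    """Return the values of `column` whose share of rows falls below `threshold`."""
    shares = df[column].value_counts(normalize=True)
    return shares[shares < threshold].index.tolist()

underrepresented = flag_underrepresented(records, "sex")
if underrepresented:
    print(f"Warning: under-represented groups in 'sex': {underrepresented}")
    # A team following the proactive monitoring approach would record this
    # finding, and any mitigation, as metadata at the data collection stage.
```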
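
Similarly, a toy recommender can show how the confirmation-bias loop described in the second category works: when articles are scored purely by agreement with a user’s reading history, alternative perspectives never surface. The articles, the leaning labels and the frequency-based scoring rule below are simplified assumptions for illustration.

```python
from collections import Counter

# Hypothetical articles, each tagged with a political leaning by editors.
articles = [
    {"title": "Budget praised as bold reform", "leaning": "right"},
    {"title": "Budget criticised for deepening inequality", "leaning": "left"},
    {"title": "Economists split on budget impact", "leaning": "centre"},
    {"title": "Tax cuts hailed by business groups", "leaning": "right"},
]

def recommend(reading_history: list[str], top_n: int = 2) -> list[dict]:
    """Rank articles by how often their leaning appears in the user's history.

    Because the score rewards agreement with past reading, dissenting
    perspectives are pushed to the bottom: the system reinforces the
    user's existing worldview rather than challenging it.
    """
    history_counts = Counter(reading_history)
    ranked = sorted(articles, key=lambda a: history_counts[a["leaning"]], reverse=True)
    return ranked[:top_n]

# A user who has only read right-leaning coverage keeps getting more of it.
print(recommend(["right", "right", "right"]))
```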

The process generated lively debate amongst the group and took considerable time, which made me question how practical the methodology would be in an overstretched, time-poor newsroom. I also couldn’t help but notice that the issue of time didn’t seem to trouble the engineers in the room.

Read the full article here:

https://reutersinstitute.politics.ox.ac.uk/news/its-feature-not-bug-how-journalists-can-spot-and-mitigate-ai-bias