Beyond Code: Addressing AI Bias with Inclusive Governance

October 18, 2024

By: Charlotte Swain & Bethan Odey (DLA Piper)

As artificial intelligence (AI) becomes increasingly integrated into our daily lives, the urgency of addressing AI bias and its implications has never been greater. While businesses rush to harness AI for data-driven decision-making, many overlook a crucial issue: the algorithms designed to enhance efficiency can also perpetuate societal biases. Recent high-profile cases of AI bias and hallucinations, along with reports on the tech sector’s lack of diversity, have underscored the risks involved, highlighting the need for robust governance to ensure the integrity of these systems. This article delves into the complexities of AI bias, its origins, the impact on businesses and society, and the essential role of diversity and governance in creating fair and accountable AI solutions.

What is AI Bias?

By now, many are familiar with the concept of AI bias and the related phenomenon of “hallucinations.” AI bias typically refers to biased or prejudiced outcomes produced by an AI algorithm, often stemming from flawed assumptions embedded during the machine learning process. The training data used to develop these algorithms often reflects the biases of society, leading to systems that reinforce existing prejudices—or even create new biases when users place undue trust in distorted datasets.

This can also lead to AI hallucinations—instances in which an AI fabricates false or contradictory information and presents it as credible fact. Hallucinations can have significant consequences, distorting business decisions and causing reputational damage, especially if certain groups are unfairly targeted or if businesses rely on entirely fabricated data. Many may recall the recent case of a New York lawyer who faced disciplinary action after citing nonexistent legal cases in court. The lawyer had relied on ChatGPT to assist with legal drafting, which produced citations to court cases that seemed legitimate but were entirely fictitious. Similarly, a high-profile AI designed to aid in scientific research was shut down after only three days due to frequent hallucinations, generating content as absurd as “the history of bears in space” alongside summaries of scientific concepts such as the speed of light. While some hallucinations are easy to spot, others are so subtly wrong that they are far harder to identify.

According to our latest Tech Index Report, 70% of businesses are planning AI-driven developments in the next five years. So, what should we consider when addressing AI bias?
