Whitepaper

The complexity of fairness in AI: tackling bias in AI systems

As AI systems become increasingly prevalent in organisations and society, their limitations and risks are receiving more attention. Many newspaper headlines emphasise the dangers of biased AI decisions that can harm individuals. For example, in image generation tools like MidJourney, the prompt “an Indian person” often produces images of an old man with a beard, misrepresenting the diversity of the Indian population. In another context, predictive policing tools have been shown to exhibit significant racial bias, leading to disproportionately higher surveillance and policing of minority communities. This can perpetuate systemic discrimination and result in unfair treatment based on race.
