As AI systems become increasingly prevalent in organisations and society, their limitations and risks are receiving more attention. Many newspaper headlines emphasise the dangers of biased AI decisions that can harm individuals. For example, in image generation tools such as Midjourney, the prompt “an Indian person” often yields images of an old man with a beard, misrepresenting the diversity of the Indian population. In another context, predictive policing tools have been shown to exhibit significant racial bias, leading to disproportionately higher surveillance and policing of minority communities. This can perpetuate systemic discrimination and result in unfair treatment based on race.