AI Bias – The Future is Accidentally Biased?

Every now and then a run-of-the-mill activity makes you sit up and take notice of something bigger than the task you’re working on, a sort of out-of-body experience where you see the macro instead of the micro.

Yesterday was one such day. It had been a pretty normal one: keeping across the usual priorities and Teams calls, figuring out our editorial calendar and upcoming webinars, all the while refreshing some buyer and user personas for our Self-Service Data Quality platform.

Buyer personas are hardly a new thing, and they’re typically represented by an icon or avatar of the buyer or user in question. This time, rather than pile all our hopes, dreams and expectations into a bunch of cartoons, I figured I’d experiment a little. Back in January I’d been to an AI conference run by AIBE, where I’d heard about Generative Adversarial Networks (GANs) and the ability to use AI to create images of pretty much anything.

Being someone who likes to use tech first and ask questions later, I headed over to the always entertaining thispersondoesnotexist.com, where GANs do a pretty stellar job of creating highly plausible-looking people who don’t exist (with some amusing, if mildly perturbing, glitches at the limits of their capability!). I clicked away, refreshing the page and copying people into my persona template, assigning our typical roles of Chief Data Officer, Data Steward, Chief Risk Officer and so on. It wasn’t until I found myself pasting them in that I realised how hard it was to generate images of people who were not white, or indeed how impossible it was to generate anyone with a disability or a degenerative condition.
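Out of curiosity, it wouldn’t take much to turn that anecdotal impression into something measurable. Below is a minimal sketch, in Python, of how one might collect a sample of generated faces for a manual tally. It assumes the site still serves a freshly generated face as a JPEG on each request to its root URL, and the filenames and sample size are mine, purely for illustration; the labelling itself would still be human judgement, which is rather the point.

    # A hedged sketch: sample faces from thispersondoesnotexist.com so their
    # demographics can be tallied by hand. Assumes the site returns a new
    # generated face as a JPEG for each request to its root URL.
    import time
    import requests

    URL = "https://thispersondoesnotexist.com"

    def collect_faces(n: int, delay: float = 1.0) -> None:
        """Download n generated faces to the working directory."""
        for i in range(n):
            resp = requests.get(URL, headers={"User-Agent": "bias-audit-sketch"})
            resp.raise_for_status()  # fail loudly if the site changes behaviour
            with open(f"face_{i:03d}.jpg", "wb") as f:
                f.write(resp.content)
            time.sleep(delay)  # be polite to the server

    if __name__ == "__main__":
        collect_faces(100)  # even 100 faces makes a skew hard to miss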

Buyer personas are supposed to reflect all aspects of the technology’s likely users, yet this example of AI would unintentionally bias our product and market research activities towards overlooking anyone who didn’t conform to the AI’s model. My colleague Raghad Al-Shabandar wrote about this recently (published today, incidentally), and the most impactful part of it, for me, was the following quote:

“The question, then, is developing models for the society we wish to inhabit, not merely replicating the society we have.”

In the website’s case, it’s even worse: it obliterates the society we currently have by creating images that don’t reflect the diversity of reality, instead layering on an expected or predicted society that is over 50% white and 0% otherwise-abled.

I should make it clear that I’m a big fan of this tech, not least for my kids’ bafflement at the non-existence of a person who looks very much like a person! But at the same time, I think it exposes a risk every AI project carries: did we really consider every angle of what society looks like today, and did we consider how society ought to look?

These are subjective points that vary wildly from culture to culture and country to country, but we must ensure that every minority and every element of diversity is in the room when we’re making such decisions, or we risk baking in bias before we’ve even begun.

Click here for the latest news from Datactics, or find us on LinkedIn, Twitter or Facebook.
