AI bias: what’s representative data got to do with it?*

Quite a bit, as it turns out… Here is an explainer. Skip to the end for the cheat sheet version of events.

It is a truth universally acknowledged that when speaking about AI, the conversation often turns to high-profile instances where AI has been used to make serious decisions that are indisputably racist, sexist and/or otherwise biased. Given the sheer volume of these cases**, it seems nigh impossible for companies to get it right.

So can anything actually be done about this? Let’s take a look from the beginning.

Why is AI biased?

While there is a temptation to believe that AI is inherently racist/sexist/biased, we can’t blame the AI itself.

In simple terms, AI works by:

  1. Consuming loads of sample data as input (i.e. the data that powers the AI).

  2. Getting “trained” on, and/or learning from, this sample data about what sorts of outputs and decisions have been made using it (i.e. using the AI algorithm as instruction).

  3. Producing outputs and decisions based on its training and the data (i.e. AI algorithm + data = AI outputs).

So, at the heart of it, biases in AI are largely a function of biases in the data that powers the AI. While these data biases may then be amplified by the AI algorithms, or by the inappropriate use of certain data points, the main contributing factor to bias in AI is the data.

In plain English - if you’re training a parrot by speaking to it exclusively in an Australian accent, should you then be surprised if it speaks back to you in an Australian accent? Obviously not, because that is what you trained it to do. In fact, the parrot may end up sounding like an (alarmingly) ocker version of you. This is because the parrot took your accent, and presented back a stronger version of it. (Cue YouTube rabbit hole).

Similarly, if we train an AI product on data that contains biases, then the results that AI product gives us will also contain these biases. In fact, in some cases, the AI product displays biases beyond what we trained it on. (Think: your children being extreme versions of you, even the parts of yourself you’re not particularly proud of).
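
For the more technically curious, here is a tiny, purely hypothetical sketch of that effect (the data, the features and the “hiring” scenario are all invented for illustration): a model trained on historically skewed decisions dutifully learns the skew, and then treats two equally skilled candidates differently.

```python
# Hypothetical sketch: an AI trained on biased historical decisions learns the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)                    # what we actually care about
group = rng.integers(0, 2, size=n)            # demographic group (0 or 1)
# Historical decisions: skill mattered, but group 1 was systematically favoured.
hired = (skill + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill but different groups -> different treatment.
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])
```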

So what can we do about it?

The “simple” solution is to provide representative data to the AI and then validate that the results are equivalent across all the user types who will be using the AI.

In an AI context, representative data means data that represents the full set of communities who will be using the AI. This ensures that the data is not skewed or biased for certain communities. For example, instead of sourcing data only from Asian women over 40, training the AI on that data, and then releasing it to the larger Asian community and expecting it to work well for all users, we would look to source representative data from Asians of all ages and gender identities. Like training the parrot with multiple accents, this would ensure that the AI would then work across the spectrum of users in its target audience.
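
A very rough first check along these lines is simply to compare the make-up of the training data against the make-up of the expected user base. The groups and numbers in this sketch are entirely made up for illustration; a real project would use its actual demographic segments and set its own thresholds.

```python
# Rough, illustrative sanity check (all numbers are made up): compare who is
# in the training data against who is expected to use the product.
from collections import Counter

training_data_groups = (["women 40+"] * 900 + ["women under 40"] * 60
                        + ["men 40+"] * 25 + ["men under 40"] * 15)
expected_user_share = {"women 40+": 0.25, "women under 40": 0.25,
                       "men 40+": 0.25, "men under 40": 0.25}

counts = Counter(training_data_groups)
total = sum(counts.values())
for group, expected in expected_user_share.items():
    actual = counts[group] / total
    note = "  <-- under-represented" if actual < 0.5 * expected else ""
    print(f"{group}: {actual:.0%} of training data vs {expected:.0%} of expected users{note}")
```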

While it may seem difficult to source representative data, especially from minority groups, this problem can also be addressed by engaging with these user communities***. That is, training and testing the AI solution with people who are actually from those communities. In the near future, it may also be possible to address these issues by creating or sourcing synthetic datasets. Synthetic data has a lot of potential because, instead of collecting loads of data from real users, it can be generated to supplement smaller datasets.
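
As a flavour of what that can look like, here is a simplified, hypothetical sketch that creates new samples for an under-represented group by interpolating between its real samples (roughly what techniques such as SMOTE do); a real project would use an established library or generative model and validate the synthetic data carefully.

```python
# Illustrative sketch of "synthetic" data: create new rows for an
# under-represented group by interpolating between its real rows.
import numpy as np

rng = np.random.default_rng(0)
minority_samples = rng.normal(loc=5.0, size=(20, 3))    # only 20 real rows

def make_synthetic(real, n_new, rng):
    i = rng.integers(0, len(real), size=n_new)
    j = rng.integers(0, len(real), size=n_new)
    t = rng.random((n_new, 1))
    return real[i] + t * (real[j] - real[i])             # points between real pairs

synthetic = make_synthetic(minority_samples, n_new=200, rng=rng)
print(len(minority_samples), "real rows ->", len(synthetic), "synthetic rows")
```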

Additionally, the AI should be validated by disparity testing it on users who represent its target audience. That is, it should be tested to ensure that disparities in results are not being introduced for different segments of its target audience (including race, nationality, gender, religion****). If the AI doesn’t produce equivalent results across its intended audience, then that AI should not be made available to users.
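
In principle, a disparity test can be as simple as comparing the AI’s accuracy for each segment of its audience, rather than looking at a single overall number. The segments, results and tolerance below are hypothetical and purely illustrative.

```python
# Hypothetical sketch of disparity testing: compare accuracy per user segment
# instead of relying on one overall accuracy number.
from collections import defaultdict

results = [  # (segment, did the AI get it right?)
    ("white men", True), ("white men", True), ("white men", True), ("white men", False),
    ("Black women", True), ("Black women", False), ("Black women", False), ("Black women", False),
]

correct, total = defaultdict(int), defaultdict(int)
for segment, was_correct in results:
    total[segment] += 1
    correct[segment] += was_correct

accuracy = {segment: correct[segment] / total[segment] for segment in total}
print(accuracy)                      # e.g. {'white men': 0.75, 'Black women': 0.25}

if max(accuracy.values()) - min(accuracy.values()) > 0.05:   # illustrative tolerance
    print("Disparity detected - this AI is not ready to be released.")
```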

For example, a slew of high-profile facial recognition AIs have been proven to work poorly on people who are not white men, with these flaws only discovered after the AI had been put to commercial use. Had there been disparity testing on these AIs, the companies producing them could potentially have addressed these issues (by sourcing data beyond white males and training the AI on it) before releasing them to the public.

It should also be the default that untested AI, or AI that does not work for all user groups, is not used to make high-impact/final decisions. It can still be used to supplement human decision making (with caveats), but that should really be the extent of it.

For example, if you know that your AI does not work as well for you because you are not white, one assumes you would not use it to make key judgements such as identifying suspicious moles on your body. (Think: this would be as smart as trusting your twelve-year-old to make decisions about household finances). Most likely, you’d either abandon the AI altogether, or use it to supplement human medical advice.

So if we have the answers, why can’t we fix this now?

Sadly, I can only speculate as to why these issues are not being actively addressed, though it potentially boils down to a combination of factors.

Firstly, there are currently no laws mandating disparity testing and representative data. While the AI Bill of Rights and the EU AI Act recommend both, neither has been enacted into law.

Secondly, there are also no laws (or even best practices) relating to disclosure about the use of AI, what data it is being trained on and what testing has been performed on it. This lack of disclosure means that some of the AI systems in use are failing us in secret. Even the famous case of the sexist Amazon hiring AI only came to light through a leak - applicants had no idea (or choice) that an AI was making crucial hiring decisions about them.

Thirdly, there is a lack of AI literacy, and of human-focused design and engagement, for AI solutions*****. Both of these issues mean that not only do users lack basic AI knowledge (not you, because you’re reading this article), but the AI lifecycle is such that users are often not brought in to provide feedback on the system until it is too late (if they are brought in at all).

Given that we are still getting biased solutions released into the wild, it would appear that many companies are prioritising commercial factors (cost savings, time to market, funding priorities) over accuracy - either by not performing disparity testing on representative data, or by releasing their solutions regardless of the results. (Think: serving an undercooked chicken instead of putting it back into the oven until it is fully cooked).

In conclusion

The reality is that many companies are not incentivised to address issues of bias in AI, even though the solutions are actually quite simple. Nor are they obligated to disclose key facts about their AI’s use and accuracy, or to introduce greater human involvement in the AI lifecycle. While it may take some time for the laws to be enacted and for companies to take responsibility for the AI that they provide, as users - and better yet, as decision makers about AI products for our respective organisations - we can do our part to hold AI providers to account. This includes becoming more informed/literate about AI, finding out more about the AI solutions that are sometimes pressed upon us by our banks, insurance agencies and the like, and opting out of those AI solutions until they can be proved to work on people like us******.

Things you can say to sound smart

  • Isn’t it funny how some people (not me) think that the AI is biased, when it’s actually the data that they’re feeding it that is biased?

  • Solving the AI bias problem mostly boils down to feeding the AI representative data and then performing disparity testing - it’s quite outrageous that companies are not bothering to do this.

  • It’s shocking that we don’t have any actual laws requiring companies to make sure that their AI products work across all user communities.

  • AI is definitely useful, but until it’s proven to work for everyone, we should definitely not be giving it power to make serious decisions about us!


Footnotes

* if you get this reference, then congrats on your good taste in music! If not, have a listen.

** there are just so many examples that the best link I can give you is this - https://www.google.com/search?q=ai+bias+examples&oq=ai+bias+examples

*** user engagement (especially with minorities) is something that is woefully inadequate - but that is an article for another time!

**** the full list of protected classes in the AI Bill of Rights covers “race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.”

***** it is absolutely no coincidence that AI Shophouse provides services to address AI literacy and inclusive collaboration. We are motivated to do this so that we can address these exact same types of issues with AI bias (and other forms of exclusion).

Elaine Ng

Founder of AI Shophouse.
