Artificial Intelligence and Bias

Artificial Intelligence has limitless potential.

However, these systems can be built with unintended bias. This could be racism, sexism, ageism, or any other form of bias.

Artificial intelligence systems are products of the algorithms used to create them, the data provided to them, and the links they make using that data.

When a developer creates an algorithm or inference engine to process the data stored in a knowledge base or database, they can easily imprint their conscious or unconscious biases onto the system.

Imagine you are building a system to simulate car crashes. There is plenty of data from existing crash test dummies, so this is imported into the system. You then scale these models down to represent a child, e.g. treating a child as 60% of an adult. This ignores the vast range of children's ages and sizes, and the fact that children are simply not just smaller adults.
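That naive scaling step can be sketched as follows (the measurements and the `scale_dummy` helper are invented for illustration, not real crash test data): every adult measurement is shrunk by the same factor, which is exactly the flaw described above, because a child's proportions differ from an adult's, not just their overall size.

```python
# Naive "child = scaled-down adult" model (illustrative only).
# All measurements are hypothetical, in centimetres.

ADULT_DUMMY = {
    "height": 175.0,
    "head_circumference": 57.0,
    "torso_length": 60.0,
}

def scale_dummy(adult: dict, factor: float) -> dict:
    """Shrink every measurement uniformly -- the flawed assumption."""
    return {part: round(size * factor, 1) for part, size in adult.items()}

child_model = scale_dummy(ADULT_DUMMY, 0.6)
print(child_model)
# A real child's head is proportionally larger relative to the body than an
# adult's, so a uniform 60% scale just reproduces adult proportions in miniature.
```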

Importing the crash test dummy data introduces another bias: crash test dummies are based on male bodies, so they do not represent females in the crash data. Already, any predictions are going to discount roughly half the population.

Another way bias can enter a system is by training it on the people who happen to be around the developers. Close your eyes and imagine the stereotypical software developer.

Wait a bit

Just a bit more

Keep waiting.

OK, now describe the person that you imagined. Chances are you thought of someone like you, or of a white male, since white males represent a large portion of software developers.

As a result, when facial recognition systems are built, they are often trained on predominantly white faces and are therefore able to identify those faces more accurately.
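One way this disparity surfaces is by breaking test accuracy down per demographic group rather than reporting a single aggregate number. A minimal sketch, using invented results rather than any real benchmark:

```python
from collections import defaultdict

# Hypothetical (group, predicted_correctly) results from a face-recognition
# test set -- illustrative numbers only.
results = [
    ("white", True), ("white", True), ("white", True), ("white", True),
    ("white", False),
    ("black", True), ("black", False), ("black", False), ("black", False),
]

def accuracy_by_group(results):
    """Break overall accuracy down per demographic group."""
    totals = defaultdict(lambda: [0, 0])  # group -> [correct, total]
    for group, correct in results:
        totals[group][0] += int(correct)
        totals[group][1] += 1
    return {g: correct / total for g, (correct, total) in totals.items()}

print(accuracy_by_group(results))
# The aggregate accuracy (here 5/9, about 0.56) hides that one group
# fares far worse than the other.
```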

Giving existing data sets to an AI might seem like a good idea, but without appropriate checks it can prove problematic. Give an AI a list of company executives (who are predominantly white and male) and the system could infer that being white and male are two characteristics of a successful executive, then automatically remove all candidates who do not fit this rule. Far from helping, such an AI would actually widen the gap for women and minorities on company executive boards.
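A toy version of that failure mode might look like this (the records, threshold, and helper name are all assumptions for illustration): from a skewed list of past executives, the system extracts the traits most of them share and then filters candidates on exactly those traits.

```python
from collections import Counter

# Hypothetical past-executive records -- deliberately skewed, as in the text.
past_executives = [
    {"ethnicity": "white", "gender": "male"},
    {"ethnicity": "white", "gender": "male"},
    {"ethnicity": "white", "gender": "male"},
    {"ethnicity": "white", "gender": "female"},
]

def inferred_rule(records, threshold=0.7):
    """Naively treat any trait value shared by > threshold of records as required."""
    rule = {}
    for trait in records[0]:
        value, count = Counter(r[trait] for r in records).most_common(1)[0]
        if count / len(records) > threshold:
            rule[trait] = value
    return rule

rule = inferred_rule(past_executives)
# rule is now {'ethnicity': 'white', 'gender': 'male'}

candidates = [
    {"name": "A", "ethnicity": "white", "gender": "male"},
    {"name": "B", "ethnicity": "black", "gender": "female"},
]
shortlist = [c for c in candidates if all(c[t] == v for t, v in rule.items())]
# Candidate B is discarded purely on demographic traits the system "learned".
```

Nothing in the data says these traits caused anyone's success; the system simply mistakes historical skew for a requirement.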

To mitigate this, checks and balances need to be included throughout development and while the AI (or any system, really) is running. This could mean sampling the input and output data and checking that the trends match expectations.
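One such sampling check could be sketched as below (the field names, sample data, and tolerance are illustrative assumptions): periodically compare the demographic mix of the system's inputs against its outputs, and flag any group whose share drops sharply between the two.

```python
from collections import Counter

def distribution(items, key):
    """Share of each value of `key` among the sampled items."""
    counts = Counter(item[key] for item in items)
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items()}

def check_drift(inputs, outputs, key, tolerance=0.15):
    """Return groups whose output share falls well below their input share."""
    before, after = distribution(inputs, key), distribution(outputs, key)
    return [g for g, share in before.items()
            if after.get(g, 0.0) < share - tolerance]

# Hypothetical sample: applicants going in vs. candidates shortlisted.
applicants = [{"gender": "male"}] * 50 + [{"gender": "female"}] * 50
shortlisted = [{"gender": "male"}] * 18 + [{"gender": "female"}] * 2
print(check_drift(applicants, shortlisted, "gender"))  # ['female']
```

Any flagged group would then prompt a human review of the system's decisions rather than an automatic fix.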