Algorithms can be biased if trained on biased data – but don’t hide sensitive information from AI

An algorithm can only do what we train it to do. This means that preventing social biases in algorithmic decision making requires human intervention.

A photo annotation algorithm may confuse black people with gorillas and a Google search could recommend jobs with lower salaries for women than it does for men. Both of these cases really happened, and they have become classic examples of algorithmic bias, which is now considered to be one of the greatest challenges associated with the development of artificial intelligence.

How can an algorithm become biased? This may happen if data that records socially biased human decisions is used to train a machine learning algorithm, which will then reproduce those biased decisions, says Assistant Professor Indrė Žliobaitė, who has worked on discrimination-aware machine learning.

Making a distinction between individuals is not discriminatory in itself. Predictive algorithms use descriptive characteristics to differentiate individuals from one another. If no differentiation were allowed, everyone would have the same credit rating at the bank, or the same movie and music recommendations in online streaming services.

“A judge or a doctor wouldn’t give the same verdict or diagnosis for all. The outcome will vary depending on the factors considered,” Žliobaitė points out.

Algorithms can take a hint

Human decision makers may decide that ethnicity has no direct impact on anyone’s ability to meet their loan payments, but if the past data for any reason records an indirect link between ethnicity and loan repayments, an algorithm will capture this signal unless instructed otherwise.

For example, if you live in a lower-income neighbourhood where a particular ethnicity is common, an algorithm predicting your credit score may pick up that connection. Unless it is trained otherwise, it will learn that people of that ethnic background tend to have lower incomes and will be more likely to refuse them a loan.
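To make the mechanism concrete, here is a minimal sketch with entirely synthetic data and made-up variable names: the model is never given ethnicity, but because neighbourhood income correlates with it, the approval rates it produces still differ between the two groups.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
ethnicity = rng.integers(0, 2, size=n)                      # sensitive attribute, never shown to the model
neighbourhood_income = 30 - 8 * ethnicity + rng.normal(scale=5, size=n)
savings = rng.normal(loc=10, scale=3, size=n)
repaid = (0.1 * neighbourhood_income + 0.2 * savings
          + rng.normal(scale=1.0, size=n) > 5).astype(int)  # historical loan outcomes

X = np.column_stack([neighbourhood_income, savings])        # ethnicity is excluded from the features
model = LogisticRegression().fit(X, repaid)
approved = model.predict(X)

for g in (0, 1):
    # Approval rates differ by group even though ethnicity was never an input
    print(f"group {g}: approval rate = {approved[ethnicity == g].mean():.2f}")
```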

“Data-based algorithms learn from the data they are given. If we want to prevent bias, we need to formulate our society’s moral and ethical rules as mathematical constraints and use those constraints as performance criteria when training algorithms that are used for making decisions about people,” says Žliobaitė. 
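As an illustration of what such a mathematical constraint might look like – this is a generic sketch of a demographic-parity penalty on synthetic data, not Žliobaitė’s specific method – a model can be trained to trade prediction accuracy against the gap in approval rates between two groups:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 3))                 # applicant features (synthetic)
group = rng.integers(0, 2, size=n)          # sensitive attribute, 0 or 1
# Historical labels that partly depend on group membership, i.e. biased data
y = (X[:, 0] + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.4).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(lam, steps=3000, lr=0.5):
    """Logistic regression plus a penalty on the gap in mean predicted
    approval rate between the two groups (a demographic-parity constraint)."""
    w = np.zeros(X.shape[1])
    n1, n0 = (group == 1).sum(), (group == 0).sum()
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_loss = X.T @ (p - y) / n                      # ordinary log-loss gradient
        gap = p[group == 1].mean() - p[group == 0].mean()  # fairness gap
        s = p * (1 - p)
        grad_gap = ((X[group == 1].T @ s[group == 1]) / n1
                    - (X[group == 0].T @ s[group == 0]) / n0)
        w -= lr * (grad_loss + lam * 2 * gap * grad_gap)   # penalised update
    return w

for lam in (0.0, 20.0):
    p = sigmoid(X @ train(lam))
    print(f"penalty weight {lam}: group gap in predicted approval = "
          f"{p[group == 1].mean() - p[group == 0].mean():.3f}")
```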

Recognising bias retroactively

The field of computer science became aware of fairness issues in the data used to train algorithms approximately ten years ago, when algorithmic decision making, along with the big data hype, started to become widespread. At the time, Žliobaitė was a postdoctoral researcher at the Eindhoven University of Technology in the Netherlands, where she joined a research team working on discrimination-aware classification methods.

Žliobaitė found it a new and interesting research direction. At that time, the fairness of algorithms attracted little public attention.

“The general public thought that decision making by algorithms was automatically fair and objective, because it was repetitive. We even had difficulties convincing experts in machine learning and data mining that this was not the case.”

The potential bias of algorithmic decision making only became a topic of broader public interest a few years ago. Now it is studied from a social justice perspective as well as a technological one.

“The problem is that there’s no such thing as perfect data. They always reflect society,” says Žliobaitė.

Statistics help reveal bias

So how can we tell that we might have become victims of algorithmic bias?

According to Žliobaitė, the approach is the same as for assessing human decision making. It is easy to tell whether algorithmic decision making leads to direct discrimination – at least in theory. For example, if two people with similar backgrounds apply for the same position and the algorithm sorting applicants discards one of them, we can test whether ignoring a particular feature, such as gender, would have changed the result. If so, the decision is biased with respect to gender.
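In code, such a check can be as simple as scoring the same applicant twice and seeing whether the decision flips. The toy model, feature names and values below are invented purely for illustration:

```python
class ToyScreeningModel:
    """A stand-in model that (wrongly) penalises one gender, for demonstration."""
    def predict(self, applicant: dict) -> bool:
        score = applicant["experience"] - (2 if applicant["gender"] == "F" else 0)
        return score >= 5

def decision_flips(model, applicant: dict, sensitive_key: str, alt_value) -> bool:
    """Return True if changing only the sensitive feature changes the decision."""
    counterfactual = {**applicant, sensitive_key: alt_value}
    return model.predict(counterfactual) != model.predict(applicant)

applicant = {"experience": 6, "gender": "F"}
print(decision_flips(ToyScreeningModel(), applicant, "gender", "M"))  # True: biased with respect to gender
```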

However, things aren’t usually this simple, as algorithmic decision making can also result in indirect discrimination.

“For example, we can examine statistics of all promotions granted in a company and compare them to the personal data of all employees. If the career trajectories of people with a particular background are significantly different from the average, there may be indirect discrimination at play.”
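A back-of-the-envelope version of such a check is sketched below: compare one group’s promotion rate with everyone else’s and ask whether the difference is larger than chance alone would explain. The head counts are made up, and a real audit would also control for legitimate factors such as seniority and role:

```python
from math import sqrt
from statistics import NormalDist

def promotion_gap(promoted_a, total_a, promoted_b, total_b):
    """Compare group A's promotion rate to group B's with a two-proportion z-test."""
    p_a, p_b = promoted_a / total_a, promoted_b / total_b
    pooled = (promoted_a + promoted_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))     # two-sided p-value
    return p_a - p_b, p_value

# Made-up head counts: 12 of 80 promoted in group A versus 45 of 160 elsewhere
gap, p = promotion_gap(12, 80, 45, 160)
print(f"rate difference = {gap:+.1%}, p-value = {p:.3f}")
```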

According to Žliobaitė, it ultimately makes no difference whether decisions are made by an artificial or human intelligence. It is down to us to agree which characteristics can be used in decision making, and to what extent.

“The constraints do not come from computers, but from the notions we as a society consider to be right. We must instruct artificial intelligence with our value judgement, just like we do with all human decisions.” 

Transparency in the beginning helps down the line

When people became aware of algorithmic bias, many thought of it in terms of privacy: the more identifying information they could hide, the better. However, this mode of thinking cannot be applied to machine learning, as hiding factors such as gender or ethnicity from the data can actually make the situation worse.

According to Žliobaitė, removing characteristics that are considered discriminatory would only be beneficial if there were no other characteristics that could correlate with them. For example, an algorithm might begin to recognise the ethnic background of a person based on word use or another small hint embedded in the data, which would make removing ethnicity pointless.

This means it is better to give the algorithm all available information during training, so that the part of the signal we consider discriminatory can be explicitly accounted for and removed.

“In order to be able to remove sensitive characteristics from predictive models, we must allow the algorithm to access this information during training. This is the only way to explicitly include the value judgements of society in the algorithm. After an algorithm is built, it should not use sensitive characteristics as inputs for decision making.”
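One simple way to realise this idea is sketched below on synthetic data (an illustration of the principle, not necessarily the exact method Žliobaitė refers to): the sensitive attribute is included during training so that correlated features do not silently stand in for it, and is then fixed to the same neutral value for everyone when decisions are made.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000
sensitive = rng.integers(0, 2, size=n)               # e.g. group membership, 0 or 1
proxy = sensitive + rng.normal(scale=0.7, size=n)    # feature correlated with it
skill = rng.normal(size=n)
y = (skill + 0.5 * sensitive > 0.3).astype(int)      # biased historical outcomes

# Training: the sensitive attribute is included, so the proxy column does not
# have to stand in for it and absorbs less of the discriminatory signal.
X_train = np.column_stack([skill, proxy, sensitive])
model = LogisticRegression().fit(X_train, y)

# Decision time: every applicant gets the same neutral value in the sensitive
# column, so it adds a constant offset and no longer differentiates people.
X_score = np.column_stack([skill, proxy, np.full(n, sensitive.mean())])
scores = model.predict_proba(X_score)[:, 1]
print(f"score gap between groups: "
      f"{scores[sensitive == 1].mean() - scores[sensitive == 0].mean():.3f}")
```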

Future directions

Žliobaitė believes that the next step in the development of fairness-aware artificial intelligence and machine learning could be developing ways to audit the process of building algorithms.

“Right now we’re talking about algorithms as these complex black boxes. People say they should be more transparent. But we don’t expect our doctors to be transparent, we trust their education and experience.”

GDPR protects and exposes

Under the EU’s General Data Protection Regulation (GDPR), anyone can request that a company erase all the information it holds about them. This may lead to challenges in using data for predictive modelling in the future, believes Indrė Žliobaitė, assistant professor of computer science.

For example, banks may have been using artificial intelligence to condense the information they have on their clients into models which they then use to evaluate the clients’ credit rating. A model is a summary of data. When a model has been built, the bank may in principle discard the client data that was used to create it.

But if people can demand that banks remove their information from these models, the models need to be retrained. According to Žliobaitė, removing the information of a specific individual from a model, which is a summary, may be impossible without access to the data of all the people used to train it. This means that everyone’s data has to be kept, rather than discarded, so that a single person can be removed from the model if it becomes necessary.
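A toy illustration of the problem, with synthetic data and a made-up client index, is sketched below: the deployed model is only a fitted summary, and honouring one erasure request means refitting it from the raw records of everyone who remains.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))                 # client features (synthetic)
y = (X[:, 0] - X[:, 2] > 0).astype(int)       # repaid / defaulted

model = LogisticRegression().fit(X, y)        # the deployed "summary" of all clients

# Client number 17 invokes the right to be forgotten. There is no general way
# to subtract one person from the fitted coefficients, so the bank must still
# hold everyone else's raw records and refit the model without that row.
keep = np.arange(len(X)) != 17
model = LogisticRegression().fit(X[keep], y[keep])
```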

“So it is a double-edged sword. If we want to give people the opportunity to have their data removed, we should also retain everyone’s identifying information for much longer than we would otherwise need to. This is to say that the ‘right to be forgotten’ in the GDPR comes at a cost to the privacy protection of others. There has yet to be a follow-up discussion on this,” says Žliobaitė.

 

Indrė Žliobaitė

University of Helsinki developing methods for artificial intelligence

The University of Helsinki is developing new methods for artificial intelligence, machine learning and data mining, which several research groups are applying to a wide range of purposes. In this series, we highlight individual researchers to explain the research in the field and the ways in which it is impacting our lives.

The other instalments have discussed algorithms in drug discovery and applications of artificial intelligence in industrial processes.