
Pursuing Differences through Responsible Machine Learning: 3 Tips For Building Ethical Algorithms

Blog Post
The AppNexus Team

For those who work in the technology industry, there’s sometimes a tendency to see data science and machine learning as inherently neutral tools. When they work correctly, we’re able to plug in data and get back something approximating a truthful analysis that enables us to predict the future and make smarter decisions.

However, algorithms are still constructed by humans, who frequently bring with them the same pernicious biases that infect the rest of our society. And these biases can have real, harmful impacts on women, people of color and other often-marginalized groups.

One recent example: Amazon shut down a machine learning tool it had built to identify qualified job applicants after the tool proved to be biased. Most of Amazon’s software development applicants were men and, as a result, the tool had learned to penalize resumes that included the word “women’s” and the names of two all-women’s colleges.

At our 2018 Women’s Leadership Forum, several industry leaders took to the stage to discuss these dangers, as well as the steps companies can take to build ethical, responsible machine learning tools.

Here are three ways companies can work to avoid perpetuating systemic biases in their machine learning algorithms.


Make ethics a priority from the C-suite on down.

Because tech companies can sometimes adopt a “move fast and break things” mentality, Xandr’s Chief Data Scientist Catherine Williams says it’s important for executive teams to communicate that their data scientists need to take time to review whether their creations are ethically sound.

While executives don’t necessarily need a technical background, they do need to ask their data science team leads how they’re collecting data and constructing their algorithms. That way, data practitioners know they’ll be held accountable for flagging potential bias in their underlying data, or any biased feedback loop their models might create.
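To make that concrete, here’s a minimal sketch of one kind of data check a practitioner might raise: comparing historical outcome rates across groups in a training set before a model learns from it. This is our own illustration rather than anything presented at the forum, and the field names, sample data, and 0.8 threshold (the so-called “four-fifths rule” of thumb) are all hypothetical.

```python
# Minimal, hypothetical sketch: auditing a labeled dataset for skewed outcome
# rates across groups before it is used to train a model.
from collections import defaultdict

def outcome_rates_by_group(records, group_key, outcome_key):
    """Return the share of positive outcomes for each group in the data."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in records:
        group = row[group_key]
        totals[group] += 1
        positives[group] += 1 if row[outcome_key] else 0
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical hiring-style history: 'hired' is the outcome a model would
# learn to reproduce, so a large gap here is a red flag worth raising early.
history = [
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": False},
    {"gender": "female", "hired": True},
    {"gender": "female", "hired": False},
    {"gender": "female", "hired": False},
]

rates = outcome_rates_by_group(history, "gender", "hired")
print({g: round(r, 2) for g, r in rates.items()})  # e.g. {'male': 0.67, 'female': 0.33}

# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8.
worst, best = min(rates.values()), max(rates.values())
if best > 0 and worst / best < 0.8:
    print("Potential disparate impact in the training data; review before modeling.")
```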

Haile Owusu, SVP of analytics, decisions and data science at Turner, adds that he would like to see practitioners take more time to talk to each other about how they can prevent their tools from causing harm.


Hire an empathetic, diverse team of data practitioners.

Williams and Owusu both stressed the need for companies to hire demographically diverse teams to build their algorithms, in order to bring awareness to biases that might affect their communities.

Owusu also noted that data practitioners need to have a certain amount of empathy for the people who are impacted by their creations. For instance, when Google’s computer vision algorithm falsely labeled an image of two Black people as depicting two gorillas, several of Owusu’s colleagues insisted that the issue was nothing more than a technical error. Owusu correctly identified the problem as one of systemic bias: Google ultimately concluded that its algorithm had not been fed enough images of Black people when it was being taught what a human looked like.

In retrospect, Owusu said the judgment of a particularly insistent colleague had been clouded by the fact that they had not given any real thought to the impact the algorithm had on the Black people who encountered it.
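To illustrate the under-representation problem Owusu described, here’s a minimal sketch of how a team might audit how each group is represented in a labeled training set before training begins. It’s a hypothetical example, not Google’s actual pipeline; the group labels, counts, and 10% threshold are assumptions made purely for illustration.

```python
# Minimal, hypothetical sketch: checking how well each group is represented
# among the images a model will learn a "person" concept from.
from collections import Counter

def representation_report(labeled_images, min_share=0.10):
    """labeled_images: list of dicts with a 'group' annotation per image."""
    counts = Counter(img["group"] for img in labeled_images)
    total = sum(counts.values())
    for group, n in counts.items():
        share = n / total
        flag = "  <-- under-represented" if share < min_share else ""
        print(f"{group}: {n} images ({share:.0%}){flag}")

# Hypothetical annotations for a "person" training set.
dataset = (
    [{"group": "lighter-skinned"}] * 900 +
    [{"group": "darker-skinned"}] * 60
)
representation_report(dataset)
```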


Create products where inclusion is the default, not the exception.

When it comes to search algorithms, members of historically marginalized groups frequently find that their default queries return results that are either unhelpful or actively offensive to them. In the past, a Google search for “professional hairstyles for work” would bring back images of White women.

Candice Morgan, head of diversity and inclusion at Pinterest, says that tech companies should strive to develop algorithms that are designed to serve everyone who uses them. One of the projects she’s most proud to have worked on is an addition to Pinterest’s beauty search feature that asks users to select their skin tone so that the algorithm can surface products that match their complexion. Morgan was particularly glad to read the Twitter praise posted by a South Asian woman, who said she’d previously had to search for “beauty dark skin” any time she wanted to find beauty products.

“She was saying just how meaningful it was to have a feature where she was included as part of the default, not the exception,” Morgan said.
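For a rough sense of what “inclusion as the default” can mean in practice, here’s a small sketch of a search filter where making no selection returns every result, so no one has to search their way toward being included. It’s our own simplified illustration, not Pinterest’s implementation; the Pin type and skin-tone annotations are hypothetical.

```python
# Minimal, hypothetical sketch: a skin-tone filter where inclusion is the default.
from dataclasses import dataclass

@dataclass
class Pin:
    title: str
    skin_tone_range: str  # hypothetical annotation on each result

def beauty_search(results, selected_tone=None):
    """Return results matching the selected skin tone.

    If the user hasn't selected a tone, every result is returned, so inclusion
    is the default rather than the exception.
    """
    if selected_tone is None:
        return results
    return [pin for pin in results if pin.skin_tone_range == selected_tone]

catalog = [
    Pin("Everyday foundation look", "light"),
    Pin("Evening glam tutorial", "deep"),
    Pin("Natural makeup routine", "medium-deep"),
]

for pin in beauty_search(catalog, selected_tone="deep"):
    print(pin.title)
```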

For more actionable insights from this year’s Women’s Leadership Forum, visit our YouTube channel.