Facebook is taking a hard look at racial bias in its algorithms

Facebook has announced it will study racial bias in the algorithms used on both its platform and Instagram, which it owns. | Richard James Mendoza/NurPhoto via Getty Images

Following a rocky civil rights audit, Facebook is creating teams to make its platforms work better for everyone.

Facebook has announced it’s building teams that will study racial bias baked into the algorithms used on its platform and on Instagram, which it owns. The move is a significant acknowledgment that the algorithms driving two of the most influential social media platforms can be discriminatory.

Instagram will create an “equity team” charged with tasks like analyzing the enforcement of its harassment policies and studying its algorithms for racial bias, the Wall Street Journal reports. Facebook spokesperson Stephanie Otway told Recode that the team will continue to work with Facebook’s Responsible AI team to study bias, and added that Facebook will also create a similar equity team.

“The racial justice movement is a moment of real significance for our company,” Vishal Shah, a vice president of product at Instagram, said in a statement. “Any bias in our systems and policies runs counter to providing a platform for everyone to express themselves.”

Algorithmic bias can be pervasive, shaping how a platform treats users by influencing the content and ads they see as well as how their own posts get filtered. Bias is also hard for users to spot on their own since, for example, most have no way to compare their News Feed with those of other users. Researchers, civil rights groups, and politicians have sounded alarm bells about algorithmic bias on Facebook’s platforms, and the company is now devoting more resources to addressing the problem.

Notably, the Facebook news comes amid an advertising boycott of the platform organized by major civil rights groups, including the NAACP and the Anti-Defamation League, and just two weeks after the company shared the results of its civil rights audit, which panned Facebook for failing to address racism and misinformation on its site.

Ahead of the boycott, Instagram had already acknowledged algorithmic bias on its platform and pledged to deal with it more directly. As demonstrations against police brutality and racism swept across the United States in mid-June, Instagram head Adam Mosseri announced that the company would look into racial bias on Instagram, including in its account verification policies and its approach to content filtering and distribution.

“While we do a lot of work to help prevent subconscious bias in our products, we need to take a harder look at the underlying systems we’ve built, and where we need to do more to keep bias out of these decisions,” Mosseri wrote at the time.

We don’t know much about the new efforts yet. Facebook’s Otway emphasized that these initiatives are still in the early stages and said the new team will be charged with reviewing a wide variety of issues that marginalized groups may encounter on Instagram. As an example, she suggested the company will build tools that support minority-owned businesses.

The company seems especially willing to invest in efforts to analyze the role of bias in its systems after its recently concluded civil rights audit highlighted two pilot programs: a Facebook-built tool called Fairness Flow and a fairness consultation process launched in December. The auditors also called for Facebook to establish “processes and guidance designed to prompt issue-spotting and help resolve fairness concerns” that employees company-wide must follow.

“The company certainly has the resources to be more proactive and aggressive in its actions, to be a leader,” Kelley Cotter, a graduate student who studies public understanding of algorithms at Michigan State University, told Recode at the time of the audit. “That Facebook still appears to be in an ‘exploratory’ phase after years and years of civil rights complaints evidences its reluctance to prioritize public values like equity and justice over its private interests.”

Automated tools can discriminate in myriad ways. Bias can be baked into algorithms and artificial intelligence based on who builds these technologies, which assumptions are programmed into them, how they’re trained, and how they’re ultimately deployed. One notable source of this bias can come from data: If an algorithm is trained using a database that isn’t representative of a particular demographic group, it’s very possible the algorithm will be inaccurate when applied to people who are part of that group.
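
To make that concrete, here is a minimal, hypothetical sketch in Python of how this can play out. Everything here — the two groups, their feature distributions, the sample sizes, and the model — is invented for illustration, not a description of any Facebook system:

```python
# A toy illustration of how unrepresentative training data can produce
# disparate accuracy. All groups, distributions, and sizes are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group's features, and the relationship between features and
    # label, differ slightly (the "shift").
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > shift).astype(int)
    return X, y

# Training set: 95% group A, 5% group B.
Xa, ya = make_group(1900, shift=0.0)   # well represented
Xb, yb = make_group(100, shift=2.0)    # underrepresented
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluate on fresh, balanced held-out data from each group.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    Xt, yt = make_group(5000, shift)
    print(name, "accuracy:", round(model.score(Xt, yt), 3))
# Typically prints noticeably lower accuracy for group B, the group the
# model rarely saw during training.
```

The model fits the majority group’s patterns well and quietly fails on the group it barely saw, which is exactly why a single overall accuracy number can hide the problem.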

Algorithmic bias can have life-changing and dangerous consequences. Résumé-screening algorithms can learn to discriminate against women, for example. Facial recognition systems used by police can also have racial and gender biases, and they often perform worst when identifying women with darker skin. In June, we learned of the first known false arrest caused by a facial recognition system, of a Black man living in Michigan.
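
The way researchers surface such disparities is straightforward in principle: measure error rates separately for each demographic group rather than reporting one aggregate number. A small, hypothetical Python sketch of that kind of disaggregated evaluation (the records below are invented for illustration):

```python
# Disaggregated evaluation: compute error rates per demographic group
# instead of one overall number. All records here are invented.
from collections import defaultdict

# (group, true_label, predicted_label) records from some classifier.
records = [
    ("lighter-skinned men",  1, 1), ("lighter-skinned men",  0, 0),
    ("darker-skinned women", 1, 0), ("darker-skinned women", 1, 1),
    ("darker-skinned women", 0, 1), ("lighter-skinned men",  1, 1),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    errors[group] += int(truth != pred)

for group in totals:
    print(f"{group}: error rate {errors[group] / totals[group]:.2f}")
# An aggregate error rate across all records would hide that one group
# fares far worse than the other.
```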

On social media platforms built by Facebook, there’s concern that bias could show up anywhere an automated system makes decisions, including in how Instagram filters content and whose posts get flagged by Facebook’s content moderation bots.

There’s also concern that the lack of racial diversity among Facebook’s employees could hinder its efforts to make its product more equitable. Just under 4 percent of roles at Facebook are held by Black employees, and just over 6 percent are held by Hispanic employees, according to the company’s diversity report. Facebook would not share statistics on the racial diversity of the teams that work on its algorithms and artificial intelligence. According to Nicol Turner Lee, the director of the Brookings Institution’s Center for Technology Innovation, “Without representative input, the company may find itself generating tradeoffs that further the differential treatment or disparate impacts for communities of color.”

Meanwhile, the capacity of these systems to discriminate is why some say the algorithms themselves need to be externally audited, something Facebook has so far declined to do.

Facebook “seems to plan to keep the results of its research in-house,” Nicolas Kayser-Bril of AlgorithmWatch told Recode after the announcement of the new teams. “It is unlikely that, were the new ‘equity and inclusion team’ to make claims regarding discrimination or the remediation thereof, independent researchers will be able to verify them.”

After all, Facebook can say it’s improving its algorithms again and again, but it’s not clear how those outside the company, including Facebook and Instagram users, would ever know whether those changes were actually making a difference.

Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.

