The Journal Gazette

Friday, August 11, 2017 1:00 am

Diversity memo: Vile or visionary?

Sexism evident in ideas, products

Sara Wachter-Boettcher

Sara Wachter-Boettcher is a web consultant and author of the forthcoming book “Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech.” She wrote this for the Washington Post.

It was a rough weekend at Google. On Friday, a 10-page memo, “Google's Ideological Echo Chamber,” started circulating on internal networks, arguing that disparities between men and women in tech and leadership roles were rooted in biology, not bias. By Saturday afternoon, Gizmodo had obtained and published the entire thing. The story blew up.

The author, a male software engineer, argued that women were more neurotic and less stress-tolerant than men, that they were less likely to pursue status, that they were less interested in the “systematizing” work of programming. “We need to stop assuming that gender gaps imply sexism,” he concluded before offering recommendations. Those included demanding that Google “de-emphasize empathy,” that it stop training people on sensitivity, and that it cancel any program aimed specifically at advancing or recruiting women or people of color.

The memo was reductive, hurtful and laced with assumption. It was also unsurprising.

We've heard lots about Silicon Valley's toxic culture this summer. Those stories have focused on how that culture harms those in the industry – the women and people of color who've been patronized, passed over, pushed out and, in this latest case, told they're biologically less capable of doing the work in the first place. But what happens in Silicon Valley comes into our homes and onto our screens, affecting all of us who use technology, not just those who make it.

Take Apple Health, which promised to monitor “your whole health picture” when it launched in 2014. The app could track your exercise habits, your blood alcohol content and even your chromium intake. But for a full year after launch, it couldn't track one of the most common human health concerns: menstruation.

And consider smartphone assistants such as Cortana and Siri. In 2016, researchers writing in JAMA Internal Medicine noted that these services couldn't understand phrases such as “I was raped” or “I was beaten up by my husband” – and, even worse, would often respond to queries they didn't understand with jokes.

It's bad enough for apps to showcase sexist or racially tone-deaf jokes or biases. But in many cases, those same biases are also embedded somewhere much more sinister – in the powerful (yet invisible) algorithms behind much of today's software.

For a simple example, look at FaceApp, which came under fire this spring for its “hotness” photo filter. The filter smoothed wrinkles, slimmed cheeks – and dramatically whitened skin. The company behind the app acknowledged that the filter's algorithm had been trained using a biased data set – meaning the algorithm had learned what beauty was from faces that were predominantly white.

Then there's Word2vec, a neural network Google researchers created in 2013 to assist with natural language processing – that is, computers' ability to understand human speech. The researchers built Word2vec by training a program to comb through Google News articles and learn about the relationships between words. Millions of words later, the program can complete analogies such as “Paris is to France as Tokyo is to ----------.” But Word2vec also returns other kinds of relationships, such as “Man is to woman as computer programmer is to homemaker,” or “Man is to architect as woman is to interior designer.”
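The analogy completion described above works by simple vector arithmetic on learned word embeddings: the model subtracts one word's vector from another's, adds a third, and returns the nearest remaining word. The sketch below illustrates the mechanism with tiny hand-made toy vectors (real Word2vec embeddings have hundreds of dimensions learned from text; the words, values, and `analogy` helper here are invented purely for illustration).

```python
import numpy as np

# Toy 3-dimensional "embeddings" invented for illustration only --
# real Word2vec vectors are learned from millions of words of text.
vectors = {
    "paris":  np.array([0.9, 0.1, 0.2]),
    "france": np.array([0.8, 0.1, 0.9]),
    "tokyo":  np.array([0.1, 0.9, 0.2]),
    "japan":  np.array([0.0, 0.9, 0.9]),
}

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' via the vector b - a + c."""
    target = vectors[b] - vectors[a] + vectors[c]

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Return the word (other than the three inputs) whose vector
    # lies closest to the target by cosine similarity.
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("paris", "france", "tokyo"))  # -> "japan" with these toy vectors
```

The same arithmetic is what surfaces the biased analogies the column describes: if the training text pairs “woman” with “homemaker” far more often than with “computer programmer,” the geometry of the learned vectors encodes that association, and downstream systems inherit it.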

In an industry where white men are the norm and “disruption” trumps all else, technology such as Word2vec is often assumed to be objective and then embedded into all sorts of other software, whether it's recommendation engines or job-search systems. The effects are far-reaching. Study after study has shown that biased machine-learning systems result in everything from job-search ads that show women lower-paying positions than men to predictive-policing software that perpetuates disparities in communities of color.

Some of these flaws might seem small. But together, they paint a picture of an industry out of touch with the people who use its products. And without a fundamental overhaul to the way Silicon Valley works – to who gets funded, who gets hired, who gets promoted and who is believed when abuses happen – it's going to stay that way. That's why calls to get rid of programs targeted at attracting and supporting diverse tech workers are so misguided. The sooner we stop letting tech get away with being insular, inequitable and hostile to diversity, the sooner we'll start building technology that works for all.