The Long-Tail Danger of the Lack of Diversity in Tech: Monochromatic Algorithms

Written by alohajha | Published 2017/10/14
Tech Story Tags: women-in-tech | tech | ai | iot | grace-hopper-conference

It’s been a few weeks since 18,000 people gathered at the Grace Hopper Celebration of Women in Computing.

I attended with a focus on learning more about what technologists are doing in the areas of Artificial Intelligence (A.I.) and the Internet of Things (IoT).

Personally, I’m interested in A.I. and how it applies to my work as an Information Architect and how chatbots and content recommendation engines can help my users more easily and naturally find what they are looking for. I’m interested in IoT as it applies to my newbie-level work on e-textiles.

With so many impressions and ideas tucked into my head, I left the conference inspired and yet tongue-tied. I knew that something profound needed to be teased out, but what?

As I prepared for and traveled to the conference, I monitored the #ghc17 hashtag on Twitter to get a preview of the scene.

The top tweet was a supportive rallying cry from a tireless male ally, and I was inspired. As the day went on, this tweet remained the top tweet, and it made me wonder.

As I prepared to write this post, I checked Medium itself to see impressions from Grace Hopper.

The top result was a thoughtful post from a male attendee of Grace Hopper 2016. This also made me wonder.

How can it be that representative voices of this conference about women in computing — one rising to the top for the first 24 hours and one rising to the top a year later — are not the voices of women?

As a woman of color, I’m not shocked; I see an endless queue of folks who don’t know my experience put into positions of being the dominant voices speaking about and around my experience, whether they like it or not.

And, I’m not asking in outrage. Here’s a :-) to prove it. I’m asking because it’s an interesting question.

I see three factors at work that produced these results:

  • Content
  • User behavior
  • Algorithms

The content emitted an information scent and evoked user behavior that made algorithms identify it as highly relevant to users looking for information about Grace Hopper.

This is a loaded potion.
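Neither Twitter nor Medium publishes its ranking internals, so take this as a purely hypothetical sketch of the kind of engagement-weighted loop those three factors can form; the post names, weights, and numbers are all invented:

```python
# Hypothetical engagement-weighted ranker, NOT Twitter's or Medium's
# actual algorithm. It shows how early engagement compounds: the item
# shown at the top collects the most clicks, which raises its score,
# which keeps it at the top.

def score(post):
    # "Relevance" here is raw engagement; nothing in the formula asks
    # whose voice the content represents.
    return post["likes"] + 2 * post["shares"] + 0.5 * post["clicks"]

posts = [
    {"id": "ally-tweet", "likes": 120, "shares": 60, "clicks": 900},
    {"id": "attendee-tweet", "likes": 110, "shares": 55, "clicks": 200},
]

for _ in range(24):  # simulate 24 hourly ranking passes
    posts.sort(key=score, reverse=True)
    posts[0]["clicks"] += 100  # the top slot absorbs most new clicks
    posts[1]["clicks"] += 10   # everything below it starves

print([p["id"] for p in posts])  # the early leader never loses the top slot
```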

One could make guesses about why a man’s tweet supporting women would rise to the top of Twitter, besting tweets from women supporting women.

One could make guesses about why more women’s voices speaking on Grace Hopper are not surfaced in Medium’s top search results, leaving the top story to be a man’s experience at the conference a year ago.

But I want to talk about the algorithm.

al·go·rithm

a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.

Why the algorithm?

The algorithm is the god from the machine powering them all, for good or ill. (from How Algorithms Rule the World)

These Medium and Twitter examples illustrate symptoms of the issue that rose to the top of everything I took away from Grace Hopper: monochromatic algorithms.

The lack of diversity in technology we see today is kindling a humanitarian crisis in which ubiquitous monochromatic algorithms will marginalize any people who were not represented in their design and creation.

At this point in the flow, we’re hearing the issue surfaced at this level:

  • Women are paid less than men doing the same job.
  • Discrimination in the hiring process keeps women and minorities out of the tech workforce.
  • We don’t have enough women and minorities in the tech workforce candidate pipeline.

But to be clear, what I’m talking about is the long-tail problem that will manifest as a result of these initial factors.

If we don’t course-correct, our lives in the future will be shaped by algorithms produced by a monochromatic tech workforce that does not understand, reflect, or empathize with its user population.

Here are just a couple of examples of these monochromatic algorithms at work:

Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks.

Machine Bias (ProPublica): “On a spring afternoon in 2014, Brisha Borden was running late to pick up her god-sister from school when she spotted an…” (www.propublica.org)

As you hear the following buzzwords in your daily life — A.I., machine learning, neural networks, big data, data science, IoT, and “smart” anything — consider how each of these technologies involves algorithms that dictate rules for how decisions are made.

Consider how these algorithms and decision trees are currently being designed and coded by a workforce that may not be able to account for even a tiny sliver of the wide range of human experiences in this world. And that in effect, this tunnel-visioned workforce is determining outcomes in our healthcare, education, housing, environment, communication, and entertainment.

Remember that it was not until 1992 that wheelchair access became legally required for public spaces.

Also, remember that corporations are legally people, that some of these corporations are essentially algorithms, and that, given these rights, these algorithms may be given great leeway over how they control and shape our lives.

If the risk-scoring algorithm cited in the ProPublica story above were designed by a team that included even a single person of color, would its scoring system use race as a factor?

Yet something odd happened when Borden and Prater were booked into jail: A computer program spat out a score predicting the likelihood of each committing a future crime. Borden — who is black — was rated a high risk. Prater — who is white — was rated a low risk.

Two years later, we know the computer algorithm got it exactly backward. Borden has not been charged with any new crimes. Prater is serving an eight-year prison term for subsequently breaking into a warehouse and stealing thousands of dollars’ worth of electronics.
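The COMPAS model behind those scores is proprietary, so what follows is a purely hypothetical sketch, not the real system. It shows how a score can encode race without ever naming it, by leaning on a proxy feature, like prior arrests, that reflects where policing is concentrated rather than how people actually behave:

```python
# Hypothetical risk-score sketch, NOT the proprietary COMPAS model.
# "Race" never appears as an input, yet the output still tracks it,
# because recorded arrests reflect policing patterns, not behavior.

def risk_score(prior_arrests, age, employed):
    score = 2 * prior_arrests          # heavily weighted proxy feature
    score += 1 if age < 25 else 0
    score += 0 if employed else 1
    return score

# Two people with identical behavior; one lives in a heavily policed
# neighborhood, so the same conduct produced more recorded arrests.
print(risk_score(prior_arrests=1, age=22, employed=True))  # 3: "low risk"
print(risk_score(prior_arrests=4, age=22, employed=True))  # 9: "high risk"
```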

Sure, we’ve been using flawed data for centuries, but the difference now is that these buzzwords are mesmerizing in their promise of a better world, when in reality, they could just be a false veneer over the bad data, convincing us that these algorithms know better.

How can something called Big Data be wrong?

How can an Internet of Things be bad? It’s the internet (love the internet), but with things (love things).

I’m going to overgeneralize by using my mom and dad as examples.

My mom believes that A.I. will destroy humanity by becoming smarter than us and making us its bitch.

My dad believes that A.I. will be benevolent and can’t possibly do harm because it will be like Data on Star Trek and just know how to do the right thing.

As A.I. moves closer and closer to the heart of our lives, we must consider the words of Dr. Fei-Fei Li (@drfeifei) at Grace Hopper 2017:

“There is nothing artificial about A.I.”

A.I. does not spring to life on its own as pure, objective, god-like technology. Human beings with biases, conscious and otherwise, are the ones putting the “intelligence” into A.I. by designing and coding algorithms and by contributing data to crowdsourced models, as with the translations users suggest to Google Translate.

In other words, A.I. can only be as intelligent, or dumb, as we are.
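Here’s a toy illustration of that point, with invented data: a model trained on skewed labels has no mechanism for being smarter than those labels.

```python
from collections import Counter

# Toy "model": it learns whatever its training labels say, nothing more.
# If the labeled data encodes a human bias, the model reproduces that
# bias faithfully; there is no external standard of truth to appeal to.

training_data = [
    ("engineer", "he"), ("engineer", "he"), ("engineer", "he"),
    ("engineer", "she"),
    ("nurse", "she"), ("nurse", "she"), ("nurse", "she"),
    ("nurse", "he"),
]

def train(pairs):
    counts = {}
    for word, pronoun in pairs:
        counts.setdefault(word, Counter())[pronoun] += 1
    # Predict the majority pronoun seen for each word in training.
    return {word: c.most_common(1)[0][0] for word, c in counts.items()}

model = train(training_data)
print(model)  # {'engineer': 'he', 'nurse': 'she'} -- the bias, learned perfectly
```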

My technologist hero Jaron Lanier speaks of the myth of A.I. and how a dangerous echo chamber is created when the following factors exist (a sketch of the loop follows the link below):

  • The root data is bad.
  • We blindly trust A.I. to do the right thing.
  • The user interfaces surfacing the data don’t allow us to question or point out flaws in the data.

The Myth Of AI (Edge.org): “To go yet another rung deeper, I’ll revive an argument I’ve made previously, which is that it turns into an economic…” (www.edge.org)
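To make Lanier’s loop concrete, here’s a hypothetical sketch (the numbers are invented) of how those three factors compound when a model’s outputs are recycled as its next round of training data and no interface exists to contest them:

```python
# Hypothetical sketch of Lanier's echo chamber: predictions are fed back
# in as new "data", so an initial skew compounds each generation, and no
# step in the loop lets a human flag the data as wrong.

bias = 0.55  # flawed root data: a 55/45 skew toward one group

for generation in range(5):
    # The model leans toward the majority it sees, and its outputs
    # become the next generation's training data.
    bias = min(bias + 0.5 * (bias - 0.5), 1.0)
    print(f"generation {generation}: skew = {bias:.2f}")

# The skew climbs each pass (roughly 0.58 -> 0.88 over five generations):
# a modest imbalance hardens into "ground truth" no one can question.
```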

The lack of diversity in tech is not just about fair and equal treatment in the workplace; it is also about a much larger crisis: fair and equal treatment in the digital and virtual worlds being built all around us.

As the analog world continues to fade and the virtual world comes more and more to the forefront of our lives, are we going to correct the errors and biases built into the analog world or are we going to bake these travesties in all over again?

I was blown away by the work Chieko Asakawa shared at Grace Hopper around her use of technology to help blind people be more independent and navigate the world.

And at the same time, it highlighted how sight-centric our world is and what an uphill battle it is when you need to retrofit the world to meet your needs when your experience was not considered in its design.

We must evolve our tech workforce to reflect the diversity of the real world we live in and include diverse voices in positions that can influence the technology that will shape our lives.

If we allow these algorithms and the virtual worlds we inhabit to be built by only a narrow range of people, these systems will inevitably perpetuate biases against those not represented.

We will miss out on one of the biggest opportunities in the history of civilization to fix the most fundamental and profound wrongs in how we inhabit this world.

Imagine the end of gender- and race-based profiling and discrimination, and where humanity can go without these limitations perpetuated by flawed stereotypes and data.

Imagine a world in which we are truly seen and valued for who we are and are treated according to our actual behaviors, not according to what we are artificially predicted to do or want.

Imagine a world in which technology promises to make the world a better place and actually succeeds in doing so, not just for a narrow demographic, but for everyone.

AI Principles (Future of Life Institute): “These principles were developed in conjunction with the 2017 Asilomar conference (videos here), through the process…” (futureoflife.org)


Written by alohajha | Poet and information architect. Docs at XMTP Labs.
Published by HackerNoon on 2017/10/14