[Image: White, Black, and Hispanic women gathered with a tablet, phone, and laptop, enjoying AI tools]

The Importance of Diversity in AI and the Consequences of Ignoring It

Published By: The 3rd Eye
Apr 12, 2023

From the Agricultural Revolution to the Industrial Revolution, humanity has evolved by hopping from revolution to revolution. Now we’re in the midst of an Artificial Intelligence revolution.

It’s projected that AI could add $15.7 trillion to the global economy by 2030—with the biggest gains in retail, financial services, and healthcare (PwC).

At first, AI was feared—but every day we embrace it more. Rather than replacing human intelligence, AI serves to complement it. Many forms of augmentative AI, for instance, are aiding healthcare providers.

Smart-sensor technology can give healthcare providers an “extra pair of eyes to augment the attention of human caretakers and to add information and to alert when something needs to be alerted.”

Source: stanford.edu

Even apart from these lesser-known innovations, Artificial Intelligence (AI) has quickly transformed the modern world. With Machine Learning (ML) models powering your Netflix recommendations and Natural Language Processing (NLP) models powering your Alexa, AI is the new normal. (Key Differences)

So we must ask ourselves:

Is AI inclusive of all people, cultures, and perspectives? Or have we passed on our biases as its creators?

TABLE OF CONTENTS

The Importance of Diversity in AI
Avoiding Bias
Diverse Development Teams
Diverse Data Sets
Built-in Accountability
Consequences of Ignoring Diversity in AI
Biased Systems
Limiting Innovation
So, what can we do?

The Importance of Diversity in AI

New systems, old biases

AI might be speeding up society’s processes, but it’s also amplifying our biases.

While we bask in ChatGPT’s ability to help us write an email to our landlord, we may not be worrying about how it told this Twitter user that an equation for a good scientist included the criteria “white” and “male.”

Or how this ProPublica report revealed that a private contractor’s algorithm was more likely to rate Black defendants as higher risk.

The development and deployment of AI systems can exacerbate existing inequalities and biases. 

That’s why it’s crucial to prioritize diversity and inclusivity in AI research, development, and implementation. 

Avoiding Bias

It’s a common myth that technology is morally neutral, like money or evolution, and that it can’t be inherently good, evil, or discriminatory—it might only appear so due to the intent of its user.

Source: redshift.autodesk.com

Bias in AI is not a new problem. It’s been around since the field’s inception.

In 2021, “a new AI tool turned a pixelated photo of this column’s coauthor, Charles Isbell, from an image of a Black man to an image of a white man.” This sparked a debate about diversity in AI, revealing that many AI researchers haven’t embraced diversity concerns. 

Diverse Development Teams

The tech space isn’t well known for its diversity, and AI is unfortunately no exception.

A report by the AI Now Institute at NYU found that men make up 80% of AI professors and that AI researchers at Google and Facebook are only 10% and 15% women, respectively (forbes.com). Minorities are sorely lacking as well—with Black workers representing only 2.5% of Google’s workforce and 4% of Facebook’s and Microsoft’s (mit.edu).

Diversity in leadership matters. When leaders are diverse, the approaches to problem-solving diversify alongside them. The way talent is developed diversifies as well. With women and people of color in mentorship positions, the budding talent of young women of color in tech can find a space to be nurtured.

Diverse Data Sets

Numbers, text, photos. It all starts with data. Machine learning models are constantly fed data—and AI models are only as good as the data on which they’re trained.

If the data isn’t diverse and equitable, then its output won’t be diverse or equitable.

Diverse development teams increase the chances that algorithms are fed diverse data sets.

We also know, however, that feeding in diverse data sets is not all that needs to be done—since the algorithm itself can often be the source of the problem.

In machine learning, it’s not just sufficient to feed diverse data into a learning system. Rather, an AI system also needs to be designed so that it does not disregard data just because it appears to be [atypical] based on the small number of data points.

Source: sloanreview.mit.edu
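To make that concrete, here’s a minimal sketch, assuming scikit-learn and a small, entirely hypothetical loan data set, of one way to keep a small group’s data from being washed out: weight each training example inversely to how common its group is.

```python
# A minimal sketch (hypothetical data, features, and "group" column) of one way
# to keep a model from effectively disregarding data from an underrepresented
# group: weight each training example inversely to how common its group is.

import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "income":       [40, 85, 60, 30, 95, 55, 70, 45],
    "credit_score": [620, 710, 680, 590, 740, 660, 700, 630],
    "group":        ["A", "A", "A", "A", "A", "A", "B", "B"],  # group B is underrepresented
    "approved":     [0, 1, 1, 0, 1, 1, 1, 0],
})

# Inverse-frequency weights: the rarer a group, the larger its weight,
# so its few data points still influence the fit instead of being washed out.
group_counts = df["group"].value_counts()
weights = df["group"].map(lambda g: len(df) / group_counts[g])

model = LogisticRegression(max_iter=1000)
model.fit(df[["income", "credit_score"]], df["approved"], sample_weight=weights)
print(model.predict(df[["income", "credit_score"]]))
```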

Built-in Accountability

If it’s about more than just data sets, how can we avoid an algorithm with intrinsic biases?

There are a few things we can do. Implementing the options below is a start.

Ensemble methods, for instance, perform far better when it comes to diversity than homogeneous methods. These are “learning systems that combine different kinds of functions, each with its own different biases,” and are considered “diverse by design.” (Source)
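As a rough illustration of “diverse by design,” here’s a minimal sketch, assuming scikit-learn and a synthetic classification task, of an ensemble that combines three model families, each with its own inductive biases, and lets them vote.

```python
# A minimal sketch of an ensemble that is "diverse by design": three different
# model families, each with different biases, voting on the final prediction.
# The synthetic data stands in for whatever task a team is actually solving.

from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("linear", LogisticRegression(max_iter=1000)),  # assumes a linear boundary
        ("tree", DecisionTreeClassifier(max_depth=5)),  # assumes axis-aligned splits
        ("knn", KNeighborsClassifier(n_neighbors=15)),  # assumes local similarity
    ],
    voting="hard",  # majority vote across the three different biases
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```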

Also, we can build in accountability by incorporating a loss function, which evaluates how well the algorithm models the data. 

A loss function punishes the learning system if/when predictions deviate too much from actual results. Without this objective and clear incentive that allows a system to know how it is performing, there is no way to know how it is performing.

Source: sloanreview.mit.edu
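To picture what that looks like, here’s a minimal sketch of a mean squared error loss with made-up numbers: the further the predictions drift from the actual results, the larger the loss the system is pushed to reduce.

```python
# A minimal sketch of what a loss function does: mean squared error grows as
# predictions drift from actual results, and that growing number is the
# "punishment" a learning system is trained to minimize. The values are made up.

def mean_squared_error(predictions, actuals):
    """Average of the squared differences between predictions and actuals."""
    return sum((p - a) ** 2 for p, a in zip(predictions, actuals)) / len(actuals)

actuals = [1.0, 0.0, 1.0, 1.0]
print(mean_squared_error([0.9, 0.1, 0.8, 0.9], actuals))  # close to actuals: small loss
print(mean_squared_error([0.1, 0.9, 0.2, 0.1], actuals))  # far from actuals: large loss
```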

Consequences of Ignoring Diversity in AI

AI systems are everywhere—and they make decisions that have a real impact on people’s lives.

Biased Systems

If those decisions are biased, they negatively impact the lives of already marginalized people.

Let’s say an AI system decides who is going to receive a loan. If that system is biased against people of color, it will deny them loans they’d otherwise qualify for. 

Or consider an AI system that plans bus routes, built by a team of only white men in their 30s. The needs of single mothers of color in neighboring communities aren’t very likely to be considered.
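Before deployment, a team could catch this kind of skew with a simple audit of the model’s decisions by group. Here’s a minimal sketch using made-up approval decisions and a rough “four-fifths rule” style threshold.

```python
# A minimal sketch, with made-up decisions, of a pre-deployment audit: compare a
# loan model's approval rates across groups and flag large gaps for review.
# The 80% threshold loosely mirrors the "four-fifths rule"; the data is hypothetical.

import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

approval_rates = decisions.groupby("group")["approved"].mean()
print(approval_rates)  # A: 0.75, B: 0.25

if approval_rates.min() < 0.8 * approval_rates.max():
    print("Warning: approval rates differ substantially across groups; review the model and its data.")
```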

Limiting Innovation

Ignoring the diversity of the people creating these systems does more than just limit opportunities for young women and people of color entering tech.

It also stifles innovation in a big way. Having different perspectives, cultures, and life experiences informing the process leads to new and improved ideas. 

Frankly, ignoring diversity is a missed opportunity. Diversity helps you avoid an echo chamber of ideas.

So, what can we do?

Humans may be at fault for the bias within AI, but we’re also the only ones who can combat it. 

Tech companies developing and implementing AI systems can ensure diverse teams, diverse data sets, and built-in accountability.

Marketers don’t build these tools—so we don’t have as direct an impact. In the past, marketers have been responsible for exacerbating biases and stereotypes—and often still are. Being self-aware is the first step to combatting this.

When working with AI tools, marketers bring the human element and the multicultural awareness. We can hire culturally sensitive teams who are also technologically savvy.

As a women-owned, primarily Hispanic agency, our multicultural marketing work already calls us to do this. In this new AI-dominated world, it’s just a matter of ensuring our teams understand the complexities and possible biases of the tools they now rely on daily.
