The Importance of Diversity in AI and the Consequences of Ignoring It

by | Apr 12, 2023 | Insights

5 MIN READ

From the Agricultural Revolution to the Industrial Revolution, humanity evolves by hopping from revolution to revolution. Now we’re in the midst of an Artificial Intelligence revolution.

It’s projected that AI could add $15.7 trillion to the global economy by 2030, with the biggest gains in retail, financial services, and healthcare (PwC).

At first, AI was feared—but every day we embrace it more. Rather than AI replacing human intelligence, it serves to complement us. Many forms of augmentative AI, for instance, are aiding healthcare providers.

Smart-sensor technology can give healthcare providers an “extra pair of eyes to augment the attention of human caretakers and to add information and to alert when something needs to be alerted.”

Source: stanford.edu

Even apart from these lesser known innovations, Artificial Intelligence (AI) has quickly transformed the modern world. With Machine Learning (ML) models powering your Netflix recommendations and Natural Language Processing (NLP) models powering your Alexa, AI is the new normal. (Key Differences)

So we must ask ourselves:

Is AI inclusive of all people, cultures, and perspectives? Or have we passed on our biases as its creators?

TABLE OF CONTENTS

The Importance of Diversity in AI
Avoiding Bias
Diverse Development Teams
Diverse Data Sets
Built-in Accountability
Consequences of Ignoring Diversity in AI
Biased Systems
Limiting Innovation
So, what can we do?

The Importance of Diversity in AI

New systems, old biases

Artificial Intelligence might be speeding up society’s processes, but it’s also amplifying our biases.

AI ethics literature pays due attention to issues of justice and fairness. For instance, do AI-aided sentencing systems inflict harsher sentences on members of minorities? Do facial recognition algorithms used in recruitment screen out a disproportionate number of Black applicants or other applicants of color? Do AI-powered voice recognition systems consider all English accents on an equal basis?

Source: Applied Artificial Intelligence, An International Journal

While we bask in ChatGPT’s ability to help us write an email to our landlord, we may not be worrying about how it told this Twitter user that an equation for a good scientist included the criteria “white” and “male.”

Or how this ProPublica report revealed that a private contractor’s algorithm was more likely to rate black parole candidates as higher risk. 

The development and deployment of AI systems can exacerbate existing inequalities and biases—creating a digital divide.

For business ethics and just human well-being, it’s crucial to prioritize diversity and inclusivity in AI research, development, and implementation.

Avoiding Bias

It’s a common myth that technology is morally neutral, like money or evolution, and that it can’t be inherently good, evil, or discriminatory—it might only appear so due to the intent of its user.

Source: redshift.autodesk.com

Bias in AI is not a new problem. It’s been around since the field’s inception.

In 2021, “a new AI tool turned a pixelated photo of this column’s coauthor, Charles Isbell, from an image of a Black man to an image of a white man.” This sparked a debate about diversity in AI, revealing that many AI researchers haven’t embraced diversity concerns. 

Diverse Development Teams

The tech industry isn’t well known for its diversity, and AI is unfortunately no exception.

A report by the AI Now Institute at NYU found that men make up 80% of AI professors, and that AI researchers at Google and Facebook are only 10% and 15% women, respectively (forbes.com). Minorities are sorely lacking as well: Black workers represent only 2.5% of Google’s workforce and 4% of Facebook’s and Microsoft’s (mit.edu).

We combat this by hiring diverse developers. More than just hiring diverse employees, consider the concept of diversity climate: do your diverse employees feel comfortable and empowered to speak up?

This is why diversity in leadership matters. When leaders are diverse, the approaches to problem-solving diversify alongside them. The way talent is developed diversifies as well. With women and people of color in mentorship positions, the budding talent of young women of color in tech can find a space to be nurtured.

Diverse Data Sets

Numbers, text, photos. It all starts with data. Machine learning systems are consistently fed data—and AI models are only as good as the data on which they’re trained.

If the data isn’t diverse and equitable, then its output won’t be diverse or equitable.

Diverse development teams increase the chances that algorithms are fed diverse data sets.

If a generative AI model is fed photos of mostly light-skinned people to learn what a face looks like, then dark-skinned faces will be difficult to generate, if they are generated at all.

If it’s taught what parents or couples look like with only examples of straight people, then that’s the only sexual orientation that will be generated.

These intelligent systems are only as intelligent as the humans who pitch in throughout the development process.

We also know, however, that feeding in diverse data sets is not all that needs to be done, since often the algorithm itself can be the source of the problem.

In machine learning, it’s not just sufficient to feed diverse data into a learning system. Rather, an AI system also needs to be designed so that it does not disregard data just because it appears to be [atypical] based on the small number of data points.

Source: sloanreview.mit.edu
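
To make the data-side check concrete, here is a minimal sketch in Python. The records and the `skin_tone` field are made up for illustration; the point is simply that you can (and should) measure how a training set is distributed across groups before a model ever sees it:

```python
from collections import Counter

def audit_balance(records, attribute):
    """Report each group's share of the data for a given attribute,
    so underrepresented groups are visible before training."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical training records; 'skin_tone' is an illustrative field name.
records = [
    {"skin_tone": "light"}, {"skin_tone": "light"},
    {"skin_tone": "light"}, {"skin_tone": "dark"},
]

shares = audit_balance(records, "skin_tone")
print(shares)  # {'light': 0.75, 'dark': 0.25} -> a 3:1 skew worth fixing
```

A real audit would run the same kind of tally over every sensitive attribute in the data, not just one.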

Built-in Accountability

If it’s about more than just data sets, how can we avoid an algorithm with intrinsic biases?

There are a few things we can do to avoid discrimination in machine learning. Implementing these options is a start.

Ensemble methods, for instance, perform far better when it comes to diversity than homogeneous methods. These are “learning systems that combine different kinds of functions, each with its own different biases,” and are considered “diverse by design.” (Source)
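
As a toy sketch of the idea, here is a majority vote across three deliberately different decision rules in plain Python. The rules and field names are made up; in a real ensemble these would be trained models (say, a linear model, a tree, and a nearest-neighbor learner), each with its own inductive bias:

```python
def vote(predictors, x):
    """Majority vote across predictors with different inductive biases."""
    votes = [p(x) for p in predictors]
    return max(set(votes), key=votes.count)

# Three deliberately different decision rules (toy stand-ins for
# distinct model families; thresholds are invented for illustration).
rule_a = lambda x: 1 if x["income"] > 50 else 0
rule_b = lambda x: 1 if x["years_employed"] >= 2 else 0
rule_c = lambda x: 1 if x["income"] + 10 * x["years_employed"] > 60 else 0

applicant = {"income": 40, "years_employed": 3}
print(vote([rule_a, rule_b, rule_c], applicant))  # -> 1 (two of three say yes)
```

Because each rule weighs the input differently, no single bias dominates the final decision.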

Also, we can build in accountability by incorporating a loss function, which evaluates how well the algorithm models the data. 

A loss function punishes the learning system if/when predictions deviate too much from actual results. Without this objective and clear incentive, there is no way for a system to know how it is performing.

Source: sloanreview.mit.edu
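
To illustrate, here is one common loss function, mean squared error, as a minimal Python sketch (the prediction values are made up):

```python
def mse_loss(predictions, actuals):
    """Mean squared error: grows quickly as predictions drift from
    actual results, 'punishing' large deviations."""
    return sum((p - a) ** 2 for p, a in zip(predictions, actuals)) / len(actuals)

actual = [1.0, 0.0, 1.0, 1.0]
good = [0.9, 0.1, 0.8, 1.0]   # predictions that track reality
bad = [0.2, 0.9, 0.3, 0.4]    # predictions that drift far from it

print(mse_loss(good, actual))  # small (about 0.015)
print(mse_loss(bad, actual))   # large (about 0.575)
```

The gap between the two scores is exactly the signal a learning system uses to tell a good model from a bad one.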

Consequences of Ignoring Diversity in AI

AI systems are everywhere—and they make decisions that have a real impact on people’s lives.

Biased Systems

If those decisions are biased, they negatively impact the lives of already marginalized people.

Let’s say an AI system decides who is going to receive a loan. If that system is biased against people of color, it will deny them loans they’d otherwise qualify for. 
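
One lightweight diagnostic for a system like this, sketched here with made-up group labels and decisions, is to compare approval rates across groups; a large gap (a check related to what the fairness literature calls demographic parity) is a red flag worth investigating:

```python
def approval_rates(decisions):
    """Approval rate per group; decisions are (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical decision log from a loan model; labels are illustrative.
log = [("group_a", True), ("group_a", True), ("group_a", False),
       ("group_b", True), ("group_b", False), ("group_b", False)]

rates = approval_rates(log)
gap = abs(rates["group_a"] - rates["group_b"])
print(rates, gap)  # group_a approves about 0.67, group_b about 0.33
```

A gap this size doesn’t prove discrimination on its own, but it tells you exactly where to start looking.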

Or consider a bus-routing AI built by a team of only white men in their 30s. The needs of single mothers of color in neighboring communities aren’t very likely to be considered.

Limiting Innovation

Not considering the diversity of the people creating these systems does more than just limit opportunities for young women and people of color entering tech.

It also stifles innovation in a big way. Having different perspectives, cultures, and life experiences informing the process leads to new and improved ideas. 

Frankly, ignoring diversity is a missed opportunity. Diversity helps you avoid an echo chamber of ideas.

So, what can we do?

Humans may be at fault for the bias within Artificial Intelligence, but we’re also the only ones who can combat it.

How can we ensure fairness in machine learning?

Tech companies developing and implementing AI systems can implement diverse teams, diverse data sets, and built-in accountability.

Marketers don’t build these tools—so we don’t have as direct of an impact. In the past, marketers have been responsible for perpetuating biases and stereotypes—and often still are. Being self-aware is the first step to combatting this.

When working with AI tools, marketers bring the human element, the multicultural awareness. We can hire culturally sensitive teams who are also technologically savvy. 

As a women-owned, primarily Hispanic agency, we’re already called to do this by multicultural marketing. In this new AI-dominated world, it’s just a matter of ensuring our teams understand the complexities and possible biases of the tools they now rely on daily.


Curious about how Artificial Intelligence can help supercharge your creative process? Here are 4 AI tools to revolutionize your team’s creative process.

Need help incorporating AI into your campaign strategy? Reach out to us here.

AUTHOR

The 3rd Eye

Content Writer

RELATED ARTICLES

How To Market To Hispanic Consumers: Tapping into tech-savvy audiences

Watch our recent work showreel: THE 3RD EYE’s 2023 Wins