Artificial Intelligence in marketing has the power to help automate routine tasks by applying math to massive amounts of data. Deploying AI tools has become important for marketers, not only to save time and money but to increase the success of marketing initiatives. However, marketers need to be careful, because those same AI programs also have the potential to magnify the biases that you unwittingly introduce in your marketing or that already exist in the applications you use.
“In short, we are creating the perfect storm against persons of color and other underrepresented populations,” says Miriam Vogel, President and CEO of EqualAI, a nonprofit organization and movement focused on illuminating and reducing unconscious bias in the development and use of artificial intelligence.
AI can help marketers in many ways, such as analyzing which blog or email newsletter topics have the greatest chance of getting seen. I wrote about this last month in my post Artificial Intelligence and Machine Learning for Marketing and Public Relations.
But wait, you might say, I don’t use AI in my marketing!
You likely make use of marketing AI tools and don’t know it.
If you’re running ads on platforms like Google, YouTube, or Facebook, you’re using the AI built into the ad platforms. And if you’re using a marketing automation system, there’s likely AI built into the system. You can learn about some of the ways in my post from a few weeks ago: Your Marketing May Already Be AI-Powered (And You Didn’t Even Know).
Racial and other biases can be introduced into your marketing in a number of ways that I will touch upon in this article.
An understanding of the limitations of AI in marketing is essential for your company to be seen as antiracist.
Bias in the images used in your marketing
When marketers write copy and choose images to go along with that copy, certain words have the potential to signal "male" or "female" bias. Similarly, particular images might mean different things to different types of buyers.
For example, a University of Washington study looked at the images surfaced during searches in online image catalogs. The study found that only 11 percent of the top image results for "CEO" showed women, even though women were 27 percent of U.S. CEOs at the time. Similarly, only 25 percent of the people depicted in image search results for "author" were women, compared with 56 percent of actual U.S. authors.
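To make that skew concrete, here is a minimal sketch of how you might quantify the gap between who appears in a set of images and who is actually in the market. The function and the counts are hypothetical, chosen only to mirror the study's percentages:

```python
# Minimal sketch: quantify representation skew in a set of images.
# The counts below are hypothetical, chosen to mirror the study's percentages.

def representation_gap(depicted: int, total_images: int, real_world_share: float) -> float:
    """Return the gap between a group's share of images
    and its real-world share (negative = underrepresented)."""
    depicted_share = depicted / total_images
    return depicted_share - real_world_share

# Example: 11 of 100 top "CEO" image results show women,
# while women were roughly 27% of U.S. CEOs at the time.
gap = representation_gap(depicted=11, total_images=100, real_world_share=0.27)
print(f"Depicted share: 11%, real-world share: 27%, gap: {gap:+.0%}")
# Output: Depicted share: 11%, real-world share: 27%, gap: -16%
```

Running this kind of check on your own image library is a quick way to spot whether your visuals reflect the market you are actually selling to.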
The same problems can emerge with images depicting people of different races.
I sat down with my daughter Reiko for a short video discussion about diversity in marketing. We discussed true diversity vs a fictional depiction of diversity.
Reiko is a half-Japanese Millennial and my co-author of our book Fanocracy: How to Turn Fans into Customers and Customers into Fans.
“I want to see somebody who looks like me and acts like me and has a background like me,” Reiko says. “I want to know that the companies I do business with understand me.”
Reiko and I discuss tokenism, the common marketing practice of tossing one or two people of a different race into images rather than truly understanding the needs and backgrounds of a diverse audience.
In this video, Reiko has some specific suggestions for marketers.
True Diversity vs a Fictional Version of Diversity in Marketing from David Meerman Scott on Vimeo.
Of course, understanding a diverse audience is what all marketers are supposed to do, but when it comes to communicating with people who are unlike us, it can be difficult.
Understanding bias built into ad platforms
If you are using the ad platforms from companies like Google, YouTube, and Facebook, your ads are likely being served with the biases built into those platforms' AI.
In an article in MIT Technology Review, "Facebook's ad-serving algorithm discriminates by gender and race," author Karen Hao writes, "Even if an advertiser is well-intentioned, the algorithm still prefers certain groups of people over others."
Hao found that postings for preschool teachers and secretaries were shown to a higher proportion of women, while postings for janitors and taxi drivers were shown to a higher proportion of minorities. Ads for homes for sale were shown to more white users, while ads for rentals were shown to more minorities.
The problem stems in part from limitations in the underlying word embedding technology, the AI technique that helps computers interpret human language.
A paper published in Science, "Semantics derived automatically from language corpora contain human-like biases," shows that the implicit biases in the text fed into AI programs are acquired by those programs. As summarized in The Guardian, the words "female" and "woman" were more closely associated with arts and humanities occupations and with the home, while "male" and "man" were more closely associated with math and engineering professions.
The research also showed AI systems were more likely to associate European American names with pleasant words such as “gift” or “happy”, while African American names were more commonly associated with unpleasant words.
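This kind of bias is measurable. The researchers tested for it by comparing how close word vectors sit to one another, and the sketch below shows a simplified version of that idea. The four-dimensional vectors are made up purely to illustrate the arithmetic; the real study (the Word-Embedding Association Test) ran against embeddings trained on large web corpora:

```python
import numpy as np

# Simplified sketch of how bias is detected in word embeddings.
# These tiny vectors are invented to encode a stereotyped association;
# real studies use vectors learned from billions of words of text.

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 = same direction, near 0.0 = unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings that encode a stereotyped association.
vectors = {
    "woman":    np.array([0.9, 0.1, 0.2, 0.1]),
    "man":      np.array([0.1, 0.9, 0.1, 0.2]),
    "arts":     np.array([0.8, 0.2, 0.3, 0.1]),
    "engineer": np.array([0.2, 0.8, 0.2, 0.3]),
}

for word in ("woman", "man"):
    for occupation in ("arts", "engineer"):
        sim = cosine(vectors[word], vectors[occupation])
        print(f"{word!r} vs {occupation!r}: {sim:.2f}")

# In biased embeddings, "woman" sits closer to "arts" and "man" closer
# to "engineer" -- and any ad or content system built on those vectors
# inherits the skew.
```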
Remaining human in a world of AI
We need to remember that AI programs are only as good as the people who build them. If we’re biased, the machines we build will be biased.
As we increasingly use AI in marketing, it’s essential that we remain vigilant about keeping the human aspects of our marketing intact.
Algorithms trained on data generated or interpreted by homogeneous groups have failed when confronted with more diverse data. Without humans paying attention, AI algorithms may continually skew your marketing to the point where you miss entire groups of buyers or, worse, become a company that's seen by part of the market as prejudiced or insensitive to diversity.
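One practical safeguard is a simple, recurring human-in-the-loop audit. The sketch below, in which every segment name and number is hypothetical, compares who a campaign actually reached against the market you intended to reach and flags segments that are drifting out of view:

```python
# Minimal human-in-the-loop audit sketch: compare the audience a campaign
# actually reached against the intended market. All segment names and
# numbers here are hypothetical.

intended_market = {   # share of your target market, by segment
    "segment_a": 0.40,
    "segment_b": 0.35,
    "segment_c": 0.25,
}

campaign_reach = {    # share of impressions actually delivered
    "segment_a": 0.62,
    "segment_b": 0.30,
    "segment_c": 0.08,
}

ALERT_THRESHOLD = 0.10  # flag any segment off by more than 10 points

for segment, target_share in intended_market.items():
    actual = campaign_reach.get(segment, 0.0)
    drift = actual - target_share
    flag = "  <-- review targeting" if abs(drift) > ALERT_THRESHOLD else ""
    print(f"{segment}: intended {target_share:.0%}, reached {actual:.0%}, "
          f"drift {drift:+.0%}{flag}")
```

The specific threshold matters less than the habit: a person, not the algorithm, decides whether the drift is acceptable.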
Always remember that the humanity of what you and your people bring to marketing is essential in a world of AI.
Machines are not taking over marketing. Rather, an effective “collaboration” between machines and people will be the key to success in the years to come.
Image via EqualAI