There are many ways that bias can get woven into artificial intelligence (AI) and machine learning (ML) algorithms and decisions despite the best efforts to identify and root it out. It can be buried in the data used to generate the algorithms or in the training process itself, or it can arise in how the algorithms are used to make decisions.
In 2018, IBM launched AI Fairness 360, an open-source toolkit to check for and mitigate bias in datasets and ML models, and later added support for measuring uncertainty. The tool has improved the fairness of housing loans, insurance and medical decisions.
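One of the simplest checks a toolkit of this kind performs is a group fairness metric such as disparate impact: the ratio of favorable-outcome rates between an unprivileged and a privileged group. The sketch below is illustrative only; the data, group labels, and 0.8 threshold are hypothetical and not drawn from IBM's implementation.

```python
# Minimal sketch of a disparate-impact check, the kind of group fairness
# metric that toolkits like AI Fairness 360 compute. Data is hypothetical.

def disparate_impact(outcomes, groups, favorable=1,
                     unprivileged="B", privileged="A"):
    """Ratio of favorable-outcome rates: unprivileged / privileged.
    A common rule of thumb flags ratios below 0.8 for investigation."""
    def rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(1 for o in selected if o == favorable) / len(selected)
    return rate(unprivileged) / rate(privileged)

# Hypothetical loan decisions (1 = approved) across two groups.
outcomes = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

ratio = disparate_impact(outcomes, groups)
print(f"disparate impact: {ratio:.2f}")  # → 0.50, well below the 0.8 rule of thumb
```

A ratio of 1.0 means both groups receive favorable outcomes at the same rate; the mitigation half of such a toolkit then adjusts the data, the model, or the predictions to push the ratio back toward parity.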
IBM’s new open-source Advertising Toolkit for AI Fairness aims to do the same for the advertising industry. Consumers may not clamor for fairer ads the way they would for a fairer mortgage rate or medical decision, but the stakes of biased targeting are still real.
“The real meat of all this is about integrating bias detection and mitigation tools in core marketing and advertising technologies,” Bob Lord, IBM senior vice president of the Weather Company and Alliances, told VentureBeat.
Statista estimates that companies spent $764 billion on advertising in 2021 and expects that figure to exceed $1 trillion by 2026. Better bias detection and mitigation could help companies, nonprofits and governments get more value from their ad spend across different groups. It may also help improve social determinants of health.
Ad meets tech
“The bias that exists in advertising has historically been ingrained in how we do marketing,” said Lord. It starts with how data scientists segment data and model consumers. Now the ad industry is going through a convergence of marketing and technology. “We have gotten really good in the advertising industry at targeting people,” Lord said. But in the process of targeting people with new ML algorithms, advertisers have also suboptimized the results for certain groups.
For example, IBM worked with the Ad Council on a project to understand the impact of bias in an algorithmically driven COVID-19 vaccine education campaign. The system dynamically served up over 10 million ad impressions consisting of 108 different creative variations selected by the algorithms. Over time, the system optimized the ads for women aged 45–65, who ended up clicking through 32 times more than average.
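An after-the-fact audit of that kind boils down to comparing each segment's click-through rate (CTR) against the campaign average and flagging segments the optimizer has heavily over- or under-served. The sketch below illustrates the idea; the segment names, impression counts, and 3x flag threshold are all hypothetical, not the Ad Council's actual data.

```python
# Illustrative sketch of a per-segment campaign audit: compute each
# segment's CTR, compare it with the campaign-wide average, and flag
# large skews. All numbers here are made up for demonstration.

campaign = [
    # (segment, impressions, clicks)
    ("women 45-65", 1_000_000, 6_400),
    ("men 45-65",   1_000_000, 300),
    ("women 18-29", 1_000_000, 250),
    ("men 18-29",   1_000_000, 250),
]

total_impressions = sum(imp for _, imp, _ in campaign)
total_clicks = sum(clicks for _, _, clicks in campaign)
avg_ctr = total_clicks / total_impressions  # campaign-wide baseline

for segment, imp, clicks in campaign:
    lift = (clicks / imp) / avg_ctr  # segment CTR relative to average
    flag = "  <- heavily over-served" if lift > 3 else ""
    print(f"{segment}: {clicks / imp:.4%} CTR, {lift:.1f}x average{flag}")
```

In a real campaign the optimizer would chase the flagged segment's clicks, starving the others of impressions; building a check like this into the serving loop, rather than running it after the fact, is exactly the shift Lord describes below.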
This may have been a great result for a new handbag accessory, but was suboptimal for improving COVID-19 awareness for other demographics. “The bias is not intentional,” Lord explained. “It is hidden in the technology, and we don’t see it because we don’t have bias-detecting technology built into the machines yet.”
Lord’s team has already integrated this technology into AI and ML development workflows for mortgage applications and insurance underwriting. Today they are working with a few quick-service companies to analyze marketing campaigns after the fact. “My hope is that a year from now, we could build this technology in from the beginning,” Lord said.