By MATT O’BRIEN, AP Technology Writer
CAMBRIDGE, Mass. (AP) — After retreating from their workplace diversity, equity and inclusion programs, tech companies could now face a second reckoning over their DEI work in AI products.
In the White House and the Republican-led Congress, “woke AI” has replaced harmful algorithmic discrimination as a problem that needs fixing. Past efforts to “advance equity” in AI development and curb the production of “harmful and biased outputs” are a target of investigation, according to subpoenas sent to Amazon, Google, Meta, Microsoft, OpenAI and 10 other tech companies last month by the House Judiciary Committee.
And the standard-setting branch of the U.S. Commerce Department has deleted mentions of AI fairness, safety and “responsible AI” in its appeal for collaboration with outside researchers. It is instead instructing scientists to focus on “reducing ideological bias” in a way that will “enable human flourishing and economic competitiveness,” according to a copy of the document obtained by The Associated Press.
In some ways, tech workers are used to a whiplash of Washington-driven priorities affecting their work.
But the latest shift has raised concerns among experts in the field, including Harvard University sociologist Ellis Monk, who several years ago was approached by Google to help make its AI products more inclusive.
Back then, the tech industry already knew it had a problem with the branch of AI that trains machines to “see” and understand images. Computer vision held great commercial promise but echoed the historical biases found in earlier camera technologies that portrayed Black and brown people in an unflattering light.
“Black people or darker skinned people would come in the picture and we’d look ridiculous sometimes,” said Monk, a scholar of colorism, a form of discrimination based on people’s skin tones and other features.
Google adopted a color scale invented by Monk that improved how its AI image tools portray the diversity of human skin tones, replacing a decades-old standard originally designed for doctors treating white dermatology patients.
“Consumers definitely had a huge positive response to the changes,” he said.
Now Monk wonders whether such efforts will continue in the future. While he doesn’t believe that his Monk Skin Tone Scale is threatened because it’s already baked into dozens of products at Google and elsewhere — including camera phones, video games and AI image generators — he and other researchers worry that the new mood is chilling future initiatives and funding to make technology work better for everyone.
“Google wants their products to work for everybody, in India, China, Africa, et cetera. That part is kind of DEI-immune,” Monk said. “But could future funding for those kinds of projects be lowered? Absolutely, when the political mood shifts and when there’s a lot of pressure to get to market very quickly.”
Trump has cut hundreds of science, technology and health funding grants touching on DEI themes, but the influence of those cuts on the commercial development of chatbots and other AI products is more indirect. In investigating AI companies, Republican Rep. Jim Jordan, chair of the House Judiciary Committee, said he wants to find out whether former President Joe Biden’s administration “coerced or colluded with” them to censor lawful speech.
Michael Kratsios, director of the White House’s Office of Science and Technology Policy, said at a Texas event this month that Biden’s AI policies were “promoting social divisions and redistribution in the name of equity.”
The Trump administration declined to make Kratsios available for an interview but cited several examples of what he meant. One was a line from a Biden-era AI research strategy that said: “Without proper controls, AI systems can amplify, perpetuate, or exacerbate inequitable or undesirable outcomes for individuals and communities.”
Even before Biden took office, a growing body of research and personal anecdotes was attracting attention to the harms of AI bias.
One study showed self-driving car technology has a hard time detecting darker-skinned pedestrians, putting them in greater danger of being run over. Another study that asked popular AI text-to-image generators to make a picture of a surgeon found they produced a white man about 98% of the time, far higher than the real proportion even in a heavily male-dominated field.