TechieTricks.com
Algorithmic Justice League protests bias in voice AI and media coverage


A group of six influential women studying algorithmic bias, AI, and technology released a spoken word piece titled “Voicing Erasure” to highlight racial bias in the speech recognition systems made by tech giants. The creators also made Voicing Erasure to call attention to the overlooked and excluded contributions of women scholars and researchers.

“Racial disparities in automated speech recognition” was published roughly a week ago, and the authors found that automatic speech recognition systems from Apple, Amazon, Google, IBM, and Microsoft achieve word error rates of 35% for African-American voices and 19% for white voices. Automatic speech recognition systems from these tech giants transcribe speech to text and power AI assistants like Alexa, Cortana, and Siri.
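For context on the figures above, word error rate (WER) is the standard metric for this kind of audit: the word-level edit distance between a reference transcript and the system's output, divided by the number of words in the reference. The sketch below is a generic textbook implementation for illustration only, not the study's actual evaluation code.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)

# One dropped word out of four reference words -> WER of 0.25.
print(word_error_rate("she said hello there", "she said hello"))  # 0.25
```

A WER of 0.35 thus means roughly one in three words is transcribed wrong, which is what the study reports on average for African-American speakers versus roughly one in five for white speakers.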

The Voicing Erasure project is the product of the Algorithmic Justice League, a group created by Joy Buolamwini. Others who participated in the computer science art piece include former White House CTO Megan Smith, Race After Technology author Ruha Benjamin, Design Justice author Sasha Costanza-Chock, and Kimberlé Crenshaw, a professor of law at Columbia Law School and UCLA.

“We cannot let the promise of AI overshadow real and present harms,” Benjamin said in the piece.

Buolamwini and collaborators carried out audits in 2018 and 2019, which lawmakers and activists frequently cite as central to understanding race and gender disparities in the performance of facial recognition systems from tech giants like Amazon and Microsoft. Buolamwini was also part of the Coded Bias documentary, which premiered at the Sundance Film Festival earlier this year, and “AI, Ain’t I A Woman?,” a play on an 1851 Sojourner Truth speech with a similar name.

Additional audits are in the works, Buolamwini told VentureBeat, but the poetry was made to underscore “Racial disparities in automated speech recognition.” The Voicing Erasure project also recognizes that voice assistants can reinforce gender stereotypes. Most major assistants today offer both masculine and feminine voice options, with the exception of Amazon’s Alexa.

The poetic protest also recognizes that women researchers can encounter sexism, pointing to a New York Times article about the report that cites multiple male authors but fails to recognize lead author Allison Koenecke, who appears in Voicing Erasure. Algorithms of Oppression author Dr. Safiya Noble, who has likewise been critical of tech journalists, also participated in the spoken word project.

“Racial disparities in automated speech recognition” was published in the Proceedings of the National Academy of Sciences by a team of 10 researchers from Stanford University and Georgetown University. They found that Microsoft’s automatic speech recognition tech performed the best, while Apple’s and Google’s performed the worst.

Each conversational AI system transcribed a total of 42 white speakers and 73 African-American speakers from data sets containing nearly 20 hours of voice recordings. The researchers focused on voice data from Humboldt County, California and Sacramento, California, drawn from data sets with African-American Vernacular English (AAVE) like Voices of California and the Corpus of Regional African American Language (CORAAL).

The authors said their findings are likely the result of insufficient audio data from African-American speakers being used to train speech recognition systems, and they highlight the need for speech recognition system makers, academics, and governments sponsoring research to invest in inclusivity.

“Such an effort, we believe, should entail not only better collection of data on AAVE speech but also better collection of data on other nonstandard varieties of English, whose speakers may similarly be burdened by poor ASR performance—including those with regional and nonnative-English accents,” the report reads. “We also believe developers of speech recognition tools in industry and academia should regularly assess and publicly report their progress along this dimension.”

In statements following the release of the study, Google and IBM Watson pledged to make progress on the disparities.
