AI's listening gap is fueling bias in jobs, schools and health care

Artificial intelligence is struggling to understand accented English and non-standard dialects, creating problems that can cascade into biased hiring, grading or clinical records.

Why it matters: AI is deciding who gets a job interview, how students are graded, and what doctors record in a patient's chart. But major speech-to-text systems make far more errors for Black speakers than for white speakers.

How it works: AI-powered speech recognition systems convert spoken words into text through automatic speech recognition (ASR), which uses acoustic models trained on millions of audio samples.

- Some companies use AI to transcribe and analyze interview responses, scoring job candidates on clarity, keywords or sentiment.
- Schools use voice AI for oral reading tests, class captions and language learning.
- "Ambient" AI tools listen during doctor visits and convert conversations into medical notes.
- U.S. courtrooms are also using similar systems to transcribe proceedings.

Friction point: Various studies show that AI systems misinterpret speech from some Black speakers and others who don't use "standard English." Sarah Myers West, co-executive director of the AI Now Institute, tells Axios that this can lead to misdiagnoses or false information in criminal cases.

- "We're already seeing AI replicate patterns of inequality," she said. "If these systems decide who gets a job interview or access to care, they risk amplifying those same divides."
- West said these AI systems still mishear people because they're being deployed without proper testing or oversight.

Zoom out: Allison Koenecke, an assistant professor of Information Science at Cornell Tech, tells Axios there's insufficient awareness of how AI speech models are being applied in "high-stakes domains" such as health care and criminal justice.

- "At face value, it seems fair because you're using the same speech model for everyone. But if that model is inherently biased, it leads to different outcomes for different people."

The intrigue: Koenecke said many Fortune 100 companies use HireVue, an AI-based interviewing tool that automatically transcribes and scores applicants' recorded answers.

- That data can be used to determine whether the applicant gets another round of interviews or gets hired, Koenecke said.
- HireVue's chief science officer, Mike Hudy, told Axios in a statement that HireVue Assessments help ensure that "every candidate is evaluated only based on job-related competencies and skills, not on appearance, race, age, or background."

The other side: Developers say they're expanding datasets and testing for "accent robustness." Companies like OpenAI, Amazon and Google have launched projects to collect more diverse speech samples and say their systems are improving. Some hospitals also use human reviewers to double-check transcripts from "ambient" AI scribes.

Case in point: Whisper, OpenAI's speech-recognition model, was trained on 680,000 hours of multilingual and multitask data to improve "recognition of unique accents, background noise and technical jargon."

Koenecke said models need new architectures, continual testing across dialects and accents, and diverse teams that understand the risks. "Just collecting more data won't solve all problems. It needs to be a continued, longitudinal effort across many speech types, not a one-time dataset fix."
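The kind of continual, dialect-by-dialect testing Koenecke describes can be illustrated with a rough sketch: compare an ASR system's output against human reference transcripts and report word error rate (WER) separately for each speaker group. This is a minimal, hypothetical Python example, not any vendor's actual audit process; the group labels, transcripts and error rates below are invented, and in a real audit the hypotheses would come from an ASR system such as Whisper rather than being hard-coded.

```python
# Minimal sketch of a per-group ASR audit. All data here is invented for
# illustration; in practice the hypotheses would be generated by a real model,
# e.g. whisper.load_model("base").transcribe("clip.wav")["text"].

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance / number of reference words."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dp[i][j] = edit distance between first i reference words and first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical audit set: (speaker group, human reference transcript, ASR output)
samples = [
    ("group_a", "she was finna go to the store", "she was gonna go to the store"),
    ("group_a", "he been working there for years", "he working there for years"),
    ("group_b", "she was about to go to the store", "she was about to go to the store"),
    ("group_b", "he has worked there for years", "he has worked there for years"),
]

# Aggregate WER per group; a persistent gap between groups is the bias signal
# the studies cited above describe.
per_group = {}
for group, ref, hyp in samples:
    per_group.setdefault(group, []).append(word_error_rate(ref, hyp))

for group, errors in per_group.items():
    print(f"{group}: mean WER = {sum(errors) / len(errors):.2f}")
```

Running the sketch on this toy data prints a higher mean WER for the first group than the second, which is the shape of the disparity the research flags; a longitudinal audit would repeat the same measurement across many dialects, accents and model versions.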
West said the public should demand a "Zero Trust AI" policy that shifts the burden of proof onto companies to show compliance with existing laws and requires continuous evaluation of AI systems.

The bottom line: AI's "listening gap" risks making speech itself a new frontier of discrimination. Without auditing, accommodation and accent-inclusive data, the tools meant to broaden opportunity could instead lock out the very voices they claim to empower.

Go deeper: AI's language gap
