Telus Digital deployed artificial intelligence this week to alter call center agents’ accents in real time, without telling customers that the voices they hear have been AI-processed. When The Globe and Mail broke the story on May 5, labour unions called the practice “deceptive,” competitors Rogers and Bell publicly refused to adopt the technology, and Parliament is being urged to mandate disclosure.
This isn’t about customer service efficiency. It’s about AI fundamentally altering human identity without transparency. Workers are forced to hide cultural markers while customers unknowingly interact with AI-mediated personas.
Cultural Ventriloquism Without Disclosure
Customers have no idea they’re hearing AI-altered voices. Telus doesn’t disclose the technology is in use, meaning every affected customer call involves an AI-mediated interaction presented as authentic human speech.
“The use of AI technology to deceive Canadians in any way should be prohibited,” Roch Leblanc, Unifor’s telecommunications director, testified before Canadian Parliament on April 30. “Using AI to mask accents of offshore agents could mislead Canadians into believing they were speaking with Canada-based employees.”
A worker in the Philippines demonstrated the technology by toggling it on and off during a call, showing the stark difference before and after modification. Meanwhile, Rogers and Bell both told The Globe and Mail they will not use accent-altering AI, drawing a clear ethical line their competitor refused to draw.
This crosses a fundamental boundary. If companies can secretly alter worker voices, what else can they modify? Speech patterns, emotional tone, even words themselves? Without disclosure requirements, customers interact with AI-mediated reality while believing they’re hearing authentic human communication.
How the Technology Works
The technology from Tomato.ai uses speech-to-speech models to transform audio directly in real time. Unlike cascaded systems that transcribe speech to text and re-synthesize it, this approach converts speech to mel spectrograms, modifies pronunciation-related acoustic features using deep learning, then decodes back to audio, all while the conversation is happening.
The system claims to preserve pitch and tone while altering pronunciation patterns. It’s cloud-based, meaning all audio is routed through Tomato.ai’s servers for processing. Moreover, the “zero-shot” capability works instantly for any speaker without prior training, allowing companies to deploy it to any worker without their consent or preparation.
Understanding the technology reveals its invasiveness. This isn’t “auto-correct for speech”—it’s real-time identity manipulation. Because it’s cloud-based, third-party servers process every word of every customer conversation, raising privacy concerns that extend far beyond accent modification.
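The three-stage pipeline described above (encode speech to a mel spectrogram, modify it, decode back to audio) can be sketched in outline. Tomato.ai’s actual model is proprietary and not public, so the following is an illustrative toy in NumPy: the “modify” stage is a simple temporal smoothing filter standing in for the trained neural network, and the decode stage (a neural vocoder in real systems) is omitted. All function names and parameter values here are assumptions for illustration, not the vendor’s API.

```python
import numpy as np

SR = 16000      # sample rate (Hz)
N_FFT = 512     # STFT window size
HOP = 128       # hop length between frames
N_MELS = 40     # number of mel bands

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters spaced evenly on the perceptual mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def encode(audio):
    """Stage 1: speech -> mel spectrogram (frames x mel bands)."""
    window = np.hanning(N_FFT)
    n_frames = 1 + (len(audio) - N_FFT) // HOP
    spec = np.empty((n_frames, N_FFT // 2 + 1))
    for t in range(n_frames):
        seg = audio[t * HOP : t * HOP + N_FFT] * window
        spec[t] = np.abs(np.fft.rfft(seg)) ** 2   # power spectrum
    return spec @ mel_filterbank(SR, N_FFT, N_MELS).T

def modify(mel):
    """Stage 2: stand-in for the learned accent-conversion model.
    A real system runs a trained neural network here; this toy just
    smooths each mel band over time."""
    kernel = np.ones(5) / 5.0
    return np.apply_along_axis(lambda x: np.convolve(x, kernel, "same"), 0, mel)

# Stage 3 (decode) omitted: real-time systems use a neural vocoder to
# turn the modified mel spectrogram back into a waveform.

audio = np.sin(2 * np.pi * 220 * np.arange(SR) / SR)  # 1 s placeholder tone
mel = encode(audio)
out = modify(mel)
print(mel.shape, out.shape)
```

The key property the article describes, real-time operation, comes from processing short hops of audio (here 8 ms at 16 kHz) as they arrive rather than waiting for the full utterance.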
Workers Forced to Hide Cultural Identity
This technology disproportionately affects offshore workers from the Philippines and India in the BPO (Business Process Outsourcing) industry. While Telus claims it “protects workers from harassment,” critics argue it forces workers to present false identities for corporate convenience.
“We’re literally taking away their voices and further devaluing them,” one developer wrote in the active Hacker News discussion. “This is cultural genocide—workers must alter their identity to appease customers.”
The data on accent bias is real: a single negative interaction tied to accent bias drops the likelihood of customer retention by 33%, and the industry loses $15 billion annually to communication barriers. However, the root problem isn’t worker accents—it’s customer bias and corporate outsourcing decisions.
Instead of addressing bias or relocating centers onshore, Telus chose to mask workers’ authentic identities. Workers in the Philippines have long trained for “neutral North American accents” and adopted Western names. This AI takes it further, treating the symptom while reinforcing the underlying message: that workers’ real selves aren’t “good enough” for customers.
The Debate: Clarity vs. Dehumanization
The developer community is divided. Supporters argue it improves communication clarity and protects workers from accent-based harassment. “I don’t always catch everything they’re saying due to unfamiliar accents,” one Hacker News commenter noted. “This helps with clarity.”
Critics see it differently. “The real issue is outsourcing,” another commenter argued. “Rather than mask accents, companies should hire Canadians or provide proper training.” Furthermore, some warn it enables companies to offshore more aggressively by removing the reputational friction of accent complaints. Others worry it could enable scam calls to sound more legitimate.
Both sides have legitimate points. Accent bias does exist. Worker dignity does matter. Nevertheless, the lack of disclosure tips the balance decisively toward deception. Clarity and authenticity don’t have to be mutually exclusive if approached with transparency.
Mandatory Disclosure Is the Bare Minimum
Current AI voice laws don’t cover real-time accent alteration. Regulations focus on deepfakes and voice cloning, not live modification. YouTube, TikTok, and Meta require disclosure of synthetic media, but phone calls are unregulated.
At bare minimum, customers deserve to know when they’re hearing AI-altered voices. Better yet: address the root causes—customer bias and offshore labor practices—rather than masking them with technology.
The technology itself isn’t inherently wrong. It could be used ethically for accessibility or language learning. However, deploying it secretly in customer service crosses ethical lines. Disclosure should be mandatory, and workers should be able to opt in rather than being forced to participate.
Telus chose efficiency over ethics. Rogers and Bell drew a line. Consequently, regulators should do the same.