OpenAI launches three new voice AI models with bespoke accent and emotion features

OpenAI is expanding its voice AI capabilities with three new proprietary models designed to enhance transcription and text-to-speech functionality. The launch follows the company’s earlier voice controversy involving Scarlett Johansson and reflects OpenAI’s strategic push into audio AI, with user-driven customization intended to head off concerns about voice imitation.

The big picture: OpenAI has launched three new voice models—gpt-4o-transcribe, gpt-4o-mini-transcribe, and gpt-4o-mini-tts—initially available through its API for developers and on a limited-access demo site called OpenAI.fm.

  • The models are variants of GPT-4o specifically post-trained with additional data for transcription and speech capabilities.
  • These offerings are positioned to replace OpenAI’s two-year-old, open-source Whisper speech-to-text model with improved accuracy and performance.
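
For developers, the transcription models plug into the existing audio API. The sketch below is a minimal, hypothetical example using the OpenAI Python SDK’s transcriptions endpoint; the file name and client setup are illustrative assumptions rather than details from the announcement.

```python
# Minimal sketch: transcribing a local audio file with gpt-4o-transcribe
# via the OpenAI Python SDK. File name and setup are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("meeting_recording.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="gpt-4o-transcribe",  # or "gpt-4o-mini-transcribe" for the cheaper tier
        file=audio_file,
    )

print(transcript.text)
```

Swapping the model name is the only change needed to move between the two transcription tiers.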

Key features: The new voice models offer enhanced customization, allowing users to modify accents, pitch, tone, and emotional qualities through text prompts.

  • In a VentureBeat demo, OpenAI’s Jeff Harris showed how the same voice could be transformed from “a cackling mad scientist” to “a zen, calm yoga teacher” using only text instructions (a code sketch of this prompt-based steering follows the list).
  • The models support over 100 languages and show improved performance in noisy environments, with lower word error rates across industry benchmarks.
  • They handle diverse accents and varying speech speeds more effectively than previous offerings.
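
The prompt-based steering from the demo maps onto the text-to-speech endpoint’s instructions field. Below is a minimal sketch assuming the OpenAI Python SDK; the voice preset, sample text, and output file name are illustrative assumptions.

```python
# Hypothetical sketch: steering gpt-4o-mini-tts with a style instruction.
# Voice preset, input text, and output file name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

speech = client.audio.speech.create(
    model="gpt-4o-mini-tts",
    voice="alloy",  # one of the built-in voice presets
    input="Welcome back. Let's begin today's session with a slow, deep breath.",
    instructions="Speak like a zen, calm yoga teacher: slow, warm, and soothing.",
)

# Write the returned audio bytes to disk.
with open("yoga_teacher_demo.mp3", "wb") as f:
    f.write(speech.read())
```

Changing only the instructions string, for example to the “cackling mad scientist” wording from the demo, swaps the delivery style without touching the input text.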

The price tag: OpenAI has established a tiered, token-based pricing structure for its new voice AI models (a rough cost example follows the list below).

  • The gpt-4o-transcribe model costs $6.00 per million audio input tokens (approximately $0.006 per minute).
  • The gpt-4o-mini-transcribe offers a more economical option at $3.00 per million audio input tokens (about $0.003 per minute).
  • Text-to-speech functionality through gpt-4o-mini-tts is priced at $0.60 per million text input tokens and $12.00 per million audio output tokens (roughly $0.015 per minute).
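
To put those rates in concrete terms, here is a rough cost sketch using the approximate per-minute figures quoted above; the 50-hour monthly volume is a hypothetical assumption for illustration.

```python
# Rough cost estimate from the approximate per-minute rates quoted above.
# The monthly audio volume is a hypothetical assumption.
PER_MINUTE_USD = {
    "gpt-4o-transcribe": 0.006,
    "gpt-4o-mini-transcribe": 0.003,
    "gpt-4o-mini-tts": 0.015,
}

hours_of_audio = 50  # hypothetical monthly volume
minutes = hours_of_audio * 60

for model, rate in PER_MINUTE_USD.items():
    print(f"{model}: ~${rate * minutes:.2f} for {hours_of_audio} hours of audio")
```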

Between the lines: The customization features appear designed to address concerns about voice imitation following the Scarlett Johansson controversy.

  • OpenAI previously denied deliberately imitating Johansson’s voice but removed the contentious voice option anyway.
  • The new approach shifts responsibility to users, who can now design voice characteristics themselves rather than selecting from potentially problematic presets.
Source: OpenAI’s new voice AI model gpt-4o-transcribe lets you add speech to your existing text apps in seconds
