ChatGPT unexpectedly began speaking in a user’s cloned voice during testing


Voice Imitation Incident

OpenAI's GPT-4o system card reveals rare instances where the model unintentionally imitated users' voices during testing, prompting concerns about AI voice synthesis.

Safeguards Implemented

OpenAI implemented safeguards, including an output classifier, to prevent unauthorized voice generation and ensure the model uses only pre-selected, authorized voices.


Potential Risks

The system card highlights the risk of audio prompt injections, where noisy inputs could unintentionally trigger voice imitation, raising concerns about AI security.


Future Implications

Despite current restrictions, similar AI voice synthesis technology is likely to emerge soon from other sources, leading to potential widespread use and new challenges.
