Introducing Whisper


We’ve trained and are open-sourcing a neural net called Whisper that approaches human-level robustness and accuracy on English speech recognition.

Read Paper


View Code


View Model Card

Whisper is an automatic speech recognition (ASR) system trained on 680,000 hours of multilingual and multitask supervised data collected from the web. We show that using such a large and diverse dataset leads to improved robustness to accents, background noise, and technical language. Moreover, it enables transcription in multiple languages, as well as translation from those languages into English. We are open-sourcing models and inference code to serve as a foundation for building useful applications and for further research on robust speech processing.
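
As a rough illustration of what the released inference code looks like in practice, here is a minimal sketch using the open-source `whisper` Python package; the audio file name is a placeholder.

```python
# Minimal sketch: transcribe a file with the open-source whisper package.
# "audio.mp3" is a placeholder path; model sizes range from "tiny" to "large".
import whisper

model = whisper.load_model("base")       # download and load a pretrained checkpoint
result = model.transcribe("audio.mp3")   # chunk the audio, decode, and stitch the transcript
print(result["text"])
```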

The Whisper architecture is a simple end-to-end approach, implemented as an encoder-decoder Transformer. Input audio is split into 30-second chunks, converted into a log-Mel spectrogram, and then passed into an encoder. A decoder is trained to predict the corresponding text caption, intermixed with special tokens that direct the single model to perform tasks such as language identification, phrase-level timestamps, multilingual speech transcription, and to-English speech translation.
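
To make that pipeline concrete, the sketch below follows the lower-level API exposed by the open-source package: pad or trim the audio to a 30-second window, compute the log-Mel spectrogram, then let the decoder identify the language and emit the caption. The file path is again a placeholder.

```python
import whisper

model = whisper.load_model("base")

# Load the waveform and pad/trim it to the 30-second window the encoder expects.
audio = whisper.load_audio("audio.mp3")
audio = whisper.pad_or_trim(audio)

# Convert the waveform into a log-Mel spectrogram on the model's device.
mel = whisper.log_mel_spectrogram(audio).to(model.device)

# The decoder's special tokens handle language identification...
_, probs = model.detect_language(mel)
print(f"Detected language: {max(probs, key=probs.get)}")

# ...and transcription of the chunk.
options = whisper.DecodingOptions()
result = whisper.decode(model, mel, options)
print(result.text)
```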

Other existing approaches frequently use smaller, more closely paired audio-text training datasets, or use broad but unsupervised audio pretraining. Because Whisper was trained on a large and diverse dataset and was not fine-tuned to any specific one, it does not beat models that specialize in LibriSpeech performance, a famously competitive benchmark in speech recognition. However, when we measure Whisper’s zero-shot performance across many diverse datasets, we find it is much more robust and makes 50% fewer errors than those models.

About a third of Whisper’s audio dataset is non-English, and it is alternately given the task of transcribing in the original language or translating to English. We find this approach is particularly effective at learning speech-to-text translation, and it outperforms the supervised SOTA on CoVoST2 to-English translation zero-shot.
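
As a small sketch of how the translation task is selected at inference time, the same model can be asked to translate rather than transcribe by switching the task option; the non-English input file name here is a placeholder.

```python
import whisper

model = whisper.load_model("medium")

# task="translate" asks the decoder to emit English text for non-English speech.
# "speech_fr.mp3" stands in for any non-English audio file.
result = model.transcribe("speech_fr.mp3", task="translate")
print(result["text"])
```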

We hope Whisper’s high accuracy and ease of use will allow developers to add voice interfaces to a much wider set of applications. Check out the paper, model card, and code to learn more details and to try out Whisper.
