Whisper Small

Maintained by: openai

  • Parameter Count: 244M
  • License: Apache 2.0
  • Paper: Robust Speech Recognition via Large-Scale Weak Supervision
  • Supported Languages: 99

What is whisper-small?

Whisper-small is a transformer-based automatic speech recognition (ASR) model developed by OpenAI. It's part of the Whisper model family, trained on 680,000 hours of multilingual speech data. This particular variant offers an excellent balance between model size and performance, featuring 244M parameters while maintaining robust transcription capabilities.

Implementation Details

The model implements a sequence-to-sequence architecture specifically designed for speech recognition and translation tasks. It processes audio input by converting it to log-Mel spectrograms and can handle both transcription and translation tasks through specialized decoder prompts.

  • Encoder-decoder transformer architecture
  • Supports both transcription and translation tasks
  • Handles audio chunks up to 30 seconds
  • Includes timestamp prediction capabilities
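As a sketch of the input format: audio is resampled to 16 kHz and padded or trimmed to a 30-second window before being converted to an 80-bin log-Mel spectrogram. The shape of that representation can be inspected with the `transformers` feature extractor (assuming `transformers` and `numpy` are installed; the silent audio below is just a placeholder):

```python
import numpy as np
from transformers import WhisperFeatureExtractor

# Default settings match Whisper: 80 Mel bins, 16 kHz sampling, 30 s windows.
feature_extractor = WhisperFeatureExtractor()

# Placeholder input: 5 seconds of silence at 16 kHz.
audio = np.zeros(16_000 * 5, dtype=np.float32)

# The extractor pads to the full 30 s window (480,000 samples) and computes a
# log-Mel spectrogram with hop length 160, giving 3000 frames.
features = feature_extractor(audio, sampling_rate=16_000, return_tensors="np")
print(features.input_features.shape)  # (1, 80, 3000)
```

Whatever the clip length, the encoder always sees this fixed (80, 3000) input; shorter audio is zero-padded, longer audio must be chunked.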

Core Capabilities

  • Multilingual speech recognition across 99 languages
  • Speech translation to English
  • Zero-shot generalization to multiple domains
  • Robust performance against background noise and accents
  • Long-form transcription through chunking
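The long-form scheme can be illustrated with simple window arithmetic: audio longer than 30 seconds is cut into overlapping 30-second windows, each window is transcribed independently, and the overlap is used to reconcile text at the seams. A minimal sketch (the window and stride values are illustrative, not necessarily the library's defaults):

```python
def chunk_bounds(duration_s, chunk_s=30.0, stride_s=5.0):
    """Return (start, end) windows covering the audio, overlapping by
    stride_s seconds on each side so chunk boundaries can be merged."""
    step = chunk_s - 2 * stride_s  # effective advance per window
    bounds, start = [], 0.0
    while True:
        bounds.append((start, min(start + chunk_s, duration_s)))
        if start + chunk_s >= duration_s:
            break
        start += step
    return bounds

# A 65 s recording becomes three overlapping 30 s windows:
print(chunk_bounds(65.0))  # [(0.0, 30.0), (20.0, 50.0), (40.0, 65.0)]
```

Audio at or under 30 seconds yields a single window and needs no merging.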

Frequently Asked Questions

Q: What makes this model unique?

Whisper-small stands out for its ability to perform robust speech recognition without the need for fine-tuning, thanks to its extensive pre-training on 680k hours of labeled data. It offers a sweet spot between model size and performance, making it practical for production deployment.

Q: What are the recommended use cases?

The model is well-suited for general-purpose speech recognition, content accessibility tools, and multilingual transcription services. It is particularly effective for English ASR, with a reported WER of 3.43% on the LibriSpeech test-clean set.
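A minimal sketch of both transcription and translation with the Hugging Face `transformers` pipeline (assumes `transformers`, `torch`, and ffmpeg are installed; the audio filename is hypothetical, and the import is deferred so the helpers can be defined without the dependency):

```python
def decoder_options(task="transcribe", language=None):
    """Build Whisper's decoder prompt options: 'transcribe' keeps the source
    language, 'translate' always produces English output."""
    if task not in ("transcribe", "translate"):
        raise ValueError(f"unknown task: {task!r}")
    options = {"task": task}
    if language is not None:
        options["language"] = language
    return options

def load_whisper(task="transcribe", language=None, chunk_length_s=None):
    # Deferred import: only needed when a pipeline is actually built.
    from transformers import pipeline
    return pipeline(
        "automatic-speech-recognition",
        model="openai/whisper-small",
        chunk_length_s=chunk_length_s,  # set (e.g. 30) for long-form audio
        generate_kwargs=decoder_options(task, language),
    )

# Usage (hypothetical file; downloads the model weights on first run):
# asr = load_whisper(task="translate")  # any supported language -> English
# print(asr("interview.wav")["text"])
```

Leaving `language` unset lets the model detect the spoken language from the audio itself.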
