Decoder Layers
Components in sequence-to-sequence models (e.g., transformers) responsible for generating the output sequence from encoded inputs. Each layer typically contains a masked (causal) self-attention sub-layer, a cross-attention sub-layer over the encoder's outputs, and a feed-forward sub-layer; during inference, tokens are generated incrementally, each conditioned on the tokens produced so far.
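The sub-layer structure above can be sketched in a few lines of NumPy. This is a minimal illustration with hypothetical names and random weights; it omits cross-attention over encoder outputs, layer normalization, and multiple attention heads, keeping only the causal self-attention and feed-forward sub-layers with residual connections:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_self_attention(x, w_q, w_k, w_v):
    # x: (seq_len, d_model). The causal mask stops each position from
    # attending to future tokens, which is what allows incremental decoding.
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(q.shape[-1])
    future = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[future] = -1e9  # mask out future positions
    return softmax(scores) @ v

def feed_forward(x, w1, w2):
    # Position-wise feed-forward sub-layer with a ReLU nonlinearity.
    return np.maximum(0, x @ w1) @ w2

def decoder_layer(x, p):
    # Residual connection around each sub-layer (layer norm omitted here).
    x = x + causal_self_attention(x, p["w_q"], p["w_k"], p["w_v"])
    x = x + feed_forward(x, p["w1"], p["w2"])
    return x

rng = np.random.default_rng(0)
d_model, d_ff, seq_len = 8, 16, 4
p = {name: rng.normal(scale=0.1, size=shape) for name, shape in
     [("w_q", (d_model, d_model)), ("w_k", (d_model, d_model)),
      ("w_v", (d_model, d_model)), ("w1", (d_model, d_ff)),
      ("w2", (d_ff, d_model))]}
out = decoder_layer(rng.normal(size=(seq_len, d_model)), p)
print(out.shape)  # (4, 8): one d_model-sized vector per position
```

In a full transformer decoder, several such layers are stacked, and a cross-attention sub-layer between the two shown here lets each position attend to the encoder's output.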
Related Concepts
- Transformer
- Encoder-Decoder Architecture
- Automatic Speech Recognition
- Whisper
Applications
- Used in OpenAI's whisper-large-v3-turbo model for real-time ASR, as demonstrated in "Fahd Mirza - getting Whisper working on Google Colab".
Backlink: 2026 04 14 Fahd Mirza getting Whisper working on Google Colab