GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints
Date: 2023-05-22
Abstract
Multi-query attention (MQA), which only uses a single key-value head, drastically speeds up decoder inference. However, MQA can lead to quality degradation, and moreover it may not be desirable to train a separate model just for faster inference. We (1) propose a recipe for uptraining existing multi-head language model checkpoints into models with MQA using 5% of original pre-training compute, and (2) introduce grouped-query attention (GQA), a generalization of multi-query attention which uses an intermediate (more than one, less than number of query heads) number of key-value heads. We show that uptrained GQA achieves quality close to multi-head attention with comparable speed to MQA.
Research paper available here
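To make the idea concrete, below is a minimal sketch of grouped-query attention and of the head-pooling step used when converting a multi-head checkpoint, written in PyTorch. The function names, head counts, and the exact grouping scheme are illustrative assumptions for this note, not code from the paper; the paper itself should be consulted for the actual uptraining recipe.

```python
# Illustrative GQA sketch (assumptions: PyTorch, single layer, no masking or caching).
import torch
import torch.nn.functional as F


def grouped_query_attention(x, wq, wk, wv, wo, n_query_heads, n_kv_heads):
    """Grouped-query attention forward pass.

    x:      (batch, seq, d_model) input activations
    wq:     (d_model, n_query_heads * head_dim) query projection
    wk, wv: (d_model, n_kv_heads * head_dim) shared key/value projections
    wo:     (n_query_heads * head_dim, d_model) output projection

    Each group of n_query_heads // n_kv_heads query heads shares one
    key-value head: n_kv_heads = 1 recovers multi-query attention,
    n_kv_heads = n_query_heads recovers standard multi-head attention.
    """
    b, s, _ = x.shape
    head_dim = wq.shape[1] // n_query_heads
    group = n_query_heads // n_kv_heads

    q = (x @ wq).view(b, s, n_query_heads, head_dim).transpose(1, 2)  # (b, hq, s, d)
    k = (x @ wk).view(b, s, n_kv_heads, head_dim).transpose(1, 2)     # (b, hkv, s, d)
    v = (x @ wv).view(b, s, n_kv_heads, head_dim).transpose(1, 2)

    # Broadcast each key-value head to its group of query heads.
    k = k.repeat_interleave(group, dim=1)                             # (b, hq, s, d)
    v = v.repeat_interleave(group, dim=1)

    scores = (q @ k.transpose(-2, -1)) / head_dim ** 0.5              # (b, hq, s, s)
    out = F.softmax(scores, dim=-1) @ v                               # (b, hq, s, d)
    out = out.transpose(1, 2).reshape(b, s, n_query_heads * head_dim)
    return out @ wo


def pool_kv_heads(w_per_head, n_kv_heads):
    """Checkpoint-conversion sketch: mean-pool consecutive per-head key (or
    value) projection matrices into n_kv_heads shared heads. The consecutive
    grouping here is an assumption made for illustration.

    w_per_head: (n_query_heads, d_model, head_dim)
    returns:    (n_kv_heads, d_model, head_dim)
    """
    hq = w_per_head.shape[0]
    grouped = w_per_head.view(n_kv_heads, hq // n_kv_heads, *w_per_head.shape[1:])
    return grouped.mean(dim=1)


if __name__ == "__main__":
    d_model, hq, hkv, head_dim = 64, 8, 2, 8
    x = torch.randn(1, 5, d_model)
    wq = torch.randn(d_model, hq * head_dim)
    wk = torch.randn(d_model, hkv * head_dim)
    wv = torch.randn(d_model, hkv * head_dim)
    wo = torch.randn(hq * head_dim, d_model)
    print(grouped_query_attention(x, wq, wk, wv, wo, hq, hkv).shape)  # (1, 5, 64)
```

The speed benefit comes from the smaller key-value projections: at decode time the KV cache holds n_kv_heads heads instead of n_query_heads, which shrinks memory bandwidth per step while keeping most of the model's capacity.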