SlimTrainer and Adalite
Date : 2023-10-02
Description
SlimTrainer and Adalite enable full-parameter 16-bit finetuning of language models up to 7B parameters on a single 24GB GPU. The trainer uses the backpropagation-fusing technique from LOMO, but with a custom optimizer in place of plain SGD. The small batch size and tight memory budget required extensive exploration of potential optimizer variants, resulting in a custom optimizer, Adalite, based on Adafactor and LAMB.
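The memory saving from LOMO-style fusion comes from applying the optimizer update inside the backward pass, so at most one parameter's gradient is live at a time instead of a full model-sized gradient buffer. A minimal torch-free sketch of the idea (function and variable names are illustrative, not SlimTrainer's actual API; plain SGD stands in for the real optimizer):

```python
import numpy as np

def fused_sgd_hook(param, grad, lr=0.1):
    """Apply the update as soon as the gradient for `param` is available.

    In the fused scheme nothing is accumulated: the gradient buffer can be
    freed immediately after this call, instead of being held until a
    separate optimizer.step() at the end of the backward pass.
    """
    param -= lr * grad  # in-place update on this parameter only

# Two toy "layers"; pretend backprop produces their gradients one by one.
params = [np.ones(4), np.ones(4)]
grads = [np.full(4, 0.5), np.full(4, 0.5)]

for p, g in zip(params, grads):
    fused_sgd_hook(p, g, lr=0.1)  # grad for this layer is dropped right after

print(params[0])  # each weight moved from 1.0 to 0.95
```

In an actual PyTorch implementation the same effect is obtained by registering per-parameter gradient hooks, so the update runs while autograd is still walking the graph.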
GitHub repository below
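Adalite's ingredients can be sketched from what its two parents are known for: Adafactor keeps a factored (row/column) second-moment estimate instead of a full one, and LAMB rescales each layer's step by a trust ratio of weight norm to update norm. The combination below is a hedged illustration of those two mechanics, not Adalite's actual update rule:

```python
import numpy as np

def adalite_like_step(W, G, R, C, lr=1e-3, beta=0.999, eps=1e-30):
    """One optimizer step for a weight matrix W with gradient G.

    R and C are persistent row/column second-moment accumulators
    (Adafactor-style factored statistics), updated in place.
    """
    # Factored second moment: O(n + m) state instead of O(n * m)
    R[:] = beta * R + (1 - beta) * (G * G).mean(axis=1)
    C[:] = beta * C + (1 - beta) * (G * G).mean(axis=0)
    V = np.outer(R, C) / max(R.mean(), eps)   # rank-1 reconstruction of the moment
    U = G / (np.sqrt(V) + eps)                # preconditioned update direction
    # Layer-wise trust ratio (LAMB): scale the step to the weight norm
    trust = np.linalg.norm(W) / max(np.linalg.norm(U), eps)
    W -= lr * trust * U

# Toy usage: a 2x3 weight matrix with a uniform gradient.
W = np.ones((2, 3))
G = np.ones((2, 3))
R = np.zeros(2)
C = np.zeros(3)
adalite_like_step(W, G, R, C, lr=1e-3)
print(W)  # every weight moved from 1.0 to 0.999
```

The trust ratio is what makes the step size scale-invariant per layer: here the normalization cancels exactly, so the effective step equals the learning rate regardless of the gradient's magnitude.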