Visual schemas showing how LLMs are trained and how they work
Training: Data Collection → Tokenization → Embeddings → Pretraining → Fine-tuning → Reinforcement Learning → Trained Model
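The pretraining stage in this pipeline boils down to next-token prediction over the collected data. Below is a minimal, hypothetical sketch (PyTorch, a toy character-level vocabulary, and a trivial embedding-plus-linear model rather than a real transformer) of how raw text becomes tokens and how one training step nudges the model toward predicting each next token:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy "data collection": a tiny corpus (real pretraining uses terabytes of text).
corpus = "hello world hello model"

# Toy tokenization: character-level vocabulary (real LLMs use subword tokenizers such as BPE).
vocab = sorted(set(corpus))
stoi = {ch: i for i, ch in enumerate(vocab)}
tokens = torch.tensor([stoi[ch] for ch in corpus])

# Next-token prediction pairs: the model sees tokens[:-1] and must predict tokens[1:].
inputs, targets = tokens[:-1], tokens[1:]

# Toy model: embeddings followed by a linear head (real LLMs stack many transformer blocks in between).
embed = nn.Embedding(len(vocab), 16)
head = nn.Linear(16, len(vocab))
optimizer = torch.optim.Adam(list(embed.parameters()) + list(head.parameters()), lr=1e-2)

# One pretraining step: embed the tokens, score the vocabulary, minimize cross-entropy on the true next token.
logits = head(embed(inputs))
loss = F.cross_entropy(logits, targets)
loss.backward()
optimizer.step()
print(f"next-token loss: {loss.item():.3f}")
```

Fine-tuning and reinforcement learning reuse the same mechanics; they change what data the loss is computed on and how the reward signal is derived, not the basic predict-the-next-token machinery.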
Inference: User Input → Tokenization → Embeddings → Attention → Feed Forward → Layer Processing → Output Probabilities → Token Selection → Detokenization → Response
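A single inference step follows this second pipeline. The sketch below is illustrative only (NumPy, random weights, one attention head, one layer; none of the names correspond to a real model) and walks a prompt through tokenization, embeddings, attention, feed-forward, output probabilities, token selection, and detokenization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tokenization: map characters of the user input to integer ids (real LLMs use subword tokens).
prompt = "hi"
vocab = sorted(set("abcdefghijklmnopqrstuvwxyz "))
stoi = {ch: i for i, ch in enumerate(vocab)}
itos = {i: ch for ch, i in stoi.items()}
ids = np.array([stoi[ch] for ch in prompt])

d = 8                                                      # embedding size
E = rng.normal(size=(len(vocab), d))                       # embedding matrix
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))   # attention projections
W1, W2 = rng.normal(size=(d, 4 * d)), rng.normal(size=(4 * d, d))  # feed-forward weights
Wout = rng.normal(size=(d, len(vocab)))                    # output (unembedding) head

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Embeddings: look up a vector for each token id.
x = E[ids]

# Attention: each position mixes information from itself and earlier positions (causal mask).
q, k, v = x @ Wq, x @ Wk, x @ Wv
scores = q @ k.T / np.sqrt(d)
pos = np.arange(len(ids))
mask = np.where(pos[None, :] > pos[:, None], -np.inf, 0.0)
attn = softmax(scores + mask) @ v

# Feed forward: a small per-position MLP; real models repeat attention + feed-forward across many layers.
h = np.maximum(attn @ W1, 0) @ W2

# Output probabilities and token selection: score the vocabulary for the last position and sample one id.
probs = softmax(h[-1] @ Wout)
next_id = rng.choice(len(vocab), p=probs)

# Detokenization: turn the sampled id back into text; repeating this loop produces the response.
print("next token:", repr(itos[int(next_id)]))
```

With random weights the sampled token is meaningless, but the data flow mirrors the diagram: generation is this loop repeated, appending each selected token to the input until the response is complete.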
Ignacio López Luna
- GitHub: ilopezluna
- LinkedIn: ilopezluna
- Substack: ilopezluna.substack.com
- Medium: ignasi.lopez.luna