Mixtral 8x7B
by Mistral AI

A Mixture of Experts model that activates only 2 of its 8 experts per token, giving you near GPT-3.5 performance while using only ~13B parameters' worth of compute per token.
Tags: chat, code, reasoning
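The routing described above — a gate that picks 2 of the 8 feed-forward experts for each token and mixes their outputs — can be sketched roughly as below. This is a minimal illustration with made-up dimensions, not Mixtral's actual layer sizes, gating details, or weights:

```python
# Minimal sketch of top-2 expert routing in a Mixture-of-Experts layer.
# Dimensions and expert definitions are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoE(nn.Module):
    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts, bias=False)  # router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                      # x: (tokens, d_model)
        logits = self.gate(x)                  # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # renormalize over the 2 chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e          # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * expert(x[mask])
        return out

tokens = torch.randn(5, 64)
print(Top2MoE()(tokens).shape)  # torch.Size([5, 64]); only 2 of 8 experts ran per token
```

Because only the two selected experts run for each token, the per-token compute stays close to that of a ~13B dense model even though all 8 experts' parameters are held in memory.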
Choose a size:
Download 8x7B in Saga
Don't have Saga? Download it first
Or use the CLI:
ollama pull mixtral:8x7b

Details
- License: Apache 2.0
- Released: December 2023
- Context: 32K tokens
- Downloads: 3.1M