
Mixtral 8x7B

by Mistral AI

A sparse Mixture of Experts model: each layer holds 8 expert feed-forward blocks, but a router activates only 2 of them per token, so inference costs roughly 13B parameters' worth of compute while approaching GPT-3.5 performance.
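As a rough, illustrative sketch of this top-2 routing (not Mistral AI's actual implementation; all names, dimensions, and weights below are made up), the following Python shows a router scoring 8 experts for a token, running only the 2 highest-scoring ones, and mixing their outputs with softmax-normalized gate weights:

import numpy as np

def top2_moe_layer(x, router_w, experts):
    """Toy top-2 mixture-of-experts layer for a single token.

    x        : (d,) token hidden state
    router_w : (d, n_experts) router projection
    experts  : list of n_experts callables, each mapping (d,) -> (d,)
    """
    logits = x @ router_w                      # score every expert
    top2 = np.argsort(logits)[-2:]             # keep the 2 best experts
    gates = np.exp(logits[top2])
    gates /= gates.sum()                       # softmax over the chosen 2
    # Only the selected experts run, so compute scales with 2 experts,
    # not 8, even though all 8 sets of weights are stored.
    return sum(g * experts[i](x) for g, i in zip(gates, top2))

# Illustrative usage with random weights
d, n_experts = 16, 8
rng = np.random.default_rng(0)
experts = [lambda v, W=rng.normal(size=(d, d)): v @ W for _ in range(n_experts)]
router_w = rng.normal(size=(d, n_experts))
token = rng.normal(size=d)
print(top2_moe_layer(token, router_w, experts))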

chat, code, reasoning

Pull the model with the CLI:

ollama pull mixtral:8x7b
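
Once the download completes, the same tag starts an interactive session; the prompt text here is only an example:

ollama run mixtral:8x7b "Explain mixture-of-experts routing in two sentences."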

Details

License: Apache 2.0
Released: December 2023
Context window: 32K tokens
Downloads: 3.1M