Mixtral 8x7B Environmental Impact

Most popular open-source Mixture-of-Experts model

Architecture: Sparse Mixture-of-Experts (8 experts, 2 active)
Parameters: 46.7B
Context: 32,000 tokens
Provider: Mistral AI
Energy per query: 0.60 Wh
CO₂ per query: 0.38 g
Water per query: 3 mL
vs Google search: twice the energy

Energy per query

0.60 Wh

Twice the energy of a Google search (0.3 Wh)

CO₂ per query

0.38 g

Global average grid (475 g CO₂/kWh)

Water per query

3 mL

~333 queries to fill 1 litre

Processing location

Self-hosted (varies)

Provider

Mistral AI

Category

Text / Chat

Grid carbon intensity

475 g CO₂/kWh (27% renewable)

How does Mixtral 8x7B compare?

Ranked #39 of 152 models by energy per query

[Chart: energy per query, 0 to 0.6 Wh, comparing LLaMA 3.2 1B, Gemini 1.5 Pro, GPT-4.1 Nano and Mixtral 8x7B, with Google search (0.3 Wh) marked for reference]

Detailed Breakdown

Energy Consumption

Mixtral 8x7B has 46.7B total parameters but only activates ~12.9B per token through its MoE architecture, giving near-GPT-3.5 quality at ~0.6 Wh per query. The MoE design means it uses far less energy than a dense 47B model would, making it an excellent efficiency/capability trade-off.
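To make the routing concrete, here is a minimal PyTorch sketch of Mixtral-style top-2 expert selection. The toy dimensions and helper names are our assumptions for illustration; this is not Mistral AI's implementation.

```python
# Minimal sketch of Mixtral-style top-2 expert routing (illustrative only;
# names and toy sizes are ours, not Mistral AI's implementation).
import torch
import torch.nn.functional as F

n_experts, top_k = 8, 2    # each token is routed to 2 of 8 experts
d_model, d_ff = 64, 128    # toy sizes; Mixtral 8x7B uses 4096 / 14336

router = torch.nn.Linear(d_model, n_experts, bias=False)
experts = torch.nn.ModuleList(
    torch.nn.Sequential(
        torch.nn.Linear(d_model, d_ff),
        torch.nn.SiLU(),
        torch.nn.Linear(d_ff, d_model),
    )
    for _ in range(n_experts)
)

def moe_forward(x: torch.Tensor) -> torch.Tensor:
    """x: (tokens, d_model). Only the 2 selected experts per token do any work."""
    weights, idx = router(x).topk(top_k, dim=-1)  # score and keep the best 2 experts
    weights = F.softmax(weights, dim=-1)          # renormalise the 2 gate weights
    out = torch.zeros_like(x)
    for e in range(n_experts):                    # unselected experts stay idle,
        for k in range(top_k):                    # hence ~12.9B of 46.7B params active
            mask = idx[:, k] == e
            if mask.any():
                out[mask] += weights[mask, k].unsqueeze(-1) * experts[e](x[mask])
    return out

print(moe_forward(torch.randn(4, d_model)).shape)  # torch.Size([4, 64])
```

The energy saving comes from the inner loop: per token only two expert MLPs execute, so compute scales with the ~12.9B active parameters rather than all 46.7B.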

Power Source & Carbon

Fully open-source (Apache 2.0). Widely deployed on cloud and on-premises. Mistral AI's own inference runs from European data centres on relatively clean grids.

Water Usage

At ~3 mL per query, Mixtral 8x7B has a moderate water footprint. The MoE architecture keeps this lower than a comparably capable dense model.
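As a rough cross-check (our arithmetic, not a published methodology), the per-query water figure can be read as energy multiplied by an effective water intensity covering both on-site cooling and off-site electricity generation:

```python
# Back-of-envelope check on the water figure. The ~5 L/kWh effective
# intensity is implied by this page's numbers, not published by Mistral AI.
energy_kwh = 0.60 / 1000                      # 0.60 Wh per query
water_ml = 3.0                                # headline water figure per query

implied_l_per_kwh = (water_ml / 1000) / energy_kwh
print(f"Implied water intensity: {implied_l_per_kwh:.0f} L/kWh")  # ~5 L/kWh
print(f"Queries to fill 1 litre: {1000 / water_ml:.0f}")          # ~333
```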

About Mixtral 8x7B

Mixtral 8x7B is an open-source text and chat model from Mistral AI, released on December 11, 2023, that runs well below the category average for energy consumption at 0.60 Wh per query. Because its weights are publicly available, it can be self-hosted on any infrastructure, meaning its carbon footprint depends entirely on where and how you choose to run it. At 46.7B parameters, it is the most popular open-source mixture-of-experts model.

These figures are estimates derived from hardware specifications and API benchmarks — Mistral AI has not published official energy data for Mixtral 8x7B. Actual consumption may vary significantly depending on batching, quantisation, and infrastructure optimisations that we cannot observe from outside.

Key Insights

Uses less than a third of the average energy for text and chat models
Open-source weights, so it can be self-hosted on infrastructure you control (see the sketch below)
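For readers who want to self-host, here is a minimal sketch using the Hugging Face Transformers library and the public Instruct checkpoint (one common route; dedicated serving stacks such as vLLM are alternatives). The fp16 weights alone are roughly 90+ GB, so multi-GPU hardware or quantisation is typically required.

```python
# Minimal self-hosting sketch with Hugging Face Transformers.
# Assumes `transformers`, `accelerate` and sufficient GPU memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # public Apache-2.0 weights

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # ~94 GB of weights in fp16
    device_map="auto",          # shards layers across available GPUs
)

prompt = "Explain what a sparse mixture-of-experts model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Choices made here, such as precision and batching, are exactly the unobservable factors the estimate caveat above refers to: a 4-bit quantised deployment on a clean grid can have a materially smaller footprint than these averages suggest.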

What does your Mixtral 8x7B usage cost the planet?

Use our calculator to estimate your personal environmental footprint based on how often you use Mixtral 8x7B.

Calculate My Compute
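If you prefer to run the numbers yourself, the arithmetic behind such a calculator is straightforward. A minimal sketch using this page's per-query estimates (the function and constant names are ours, for illustration only):

```python
# Footprint arithmetic behind the calculator, using this page's
# per-query estimates (estimates, not measurements).
PER_QUERY = {"energy_wh": 0.60, "co2_g": 0.38, "water_ml": 3.0}

def yearly_footprint(queries_per_day: float) -> dict:
    """Scale the per-query estimates to a year of usage."""
    n = queries_per_day * 365
    return {
        "energy_kwh": round(n * PER_QUERY["energy_wh"] / 1000, 2),
        "co2_kg": round(n * PER_QUERY["co2_g"] / 1000, 2),
        "water_l": round(n * PER_QUERY["water_ml"] / 1000, 1),
    }

# 20 queries a day ≈ 4.4 kWh, 2.8 kg of CO₂ and ~22 L of water per year
print(yearly_footprint(20))
```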

Frequently Asked Questions

How much energy does Mixtral 8x7B use per query?

Each Mixtral 8x7B query consumes approximately 0.60 Wh of energy. This is roughly twice the energy of a traditional Google search (~0.3 Wh).

What is Mixtral 8x7B's carbon footprint?

Because the processing location is self-hosted (varies), these figures assume a global-average grid: at a carbon intensity of 475 g CO₂/kWh with 27% renewable energy, each query produces approximately 0.38 g of CO₂. Running the model on a cleaner grid lowers this figure proportionally.
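The conversion itself is simply energy multiplied by grid intensity. A quick check (our reading, not a published methodology) shows the headline 0.60 Wh alone yields ~0.29 g, so we assume the published 0.38 g also counts facility overhead such as cooling:

```python
# CO₂ per query = energy (kWh) × grid carbon intensity (g CO₂/kWh).
grid_intensity = 475              # g CO₂/kWh, global average (27% renewable)
server_wh = 0.60                  # headline per-query energy

print(server_wh / 1000 * grid_intensity)   # ≈ 0.285 g from the server alone
# The published 0.38 g corresponds to ~0.8 Wh of effective draw; we assume
# (not confirmed) this reflects data-centre overhead (PUE) on top of 0.60 Wh.
```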

How much water does Mixtral 8x7B use?

Each query consumes approximately 3 mL of water, primarily used for cooling the data centres that process the request.

How does Mixtral 8x7B compare to a Google search?

A Mixtral 8x7B query uses roughly twice as much energy as a Google search: a Google search uses approximately 0.3 Wh, while Mixtral 8x7B uses 0.60 Wh.

Technical Details

Architecture

Sparse Mixture-of-Experts (8 experts, 2 active)

Parameters

46.7B

Context window

32,000 tokens

Release date

2023-12-11

Open source

Yes

Training data cutoff

2023-12