DeepSeek-R1-0528 Environmental Impact

Impact: Very heavy (estimated)

Major upgrade to R1 reasoning — significantly improved capabilities

Architecture
Mixture-of-Experts with chain-of-thought reasoning
Parameters
671B
Context
128,000 tokens
Provider
DeepSeek
Energy per query: 32.0 Wh
CO₂ per query: 17.6 g
Water per query: 175 mL
107x more energy than a Google search

Energy per query

32.0 Wh

107x more than a Google search (0.3 Wh)

CO2 per query

17.6 g

China grid (550 g CO₂/kWh)

Water per query

175 mL

~6 queries to fill 1 litre

Processing location

DeepSeek Cloud (China)

Provider

DeepSeek

Category

Text / Chat

Grid carbon intensity

550 g CO2/kWh (30% renewable)
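The headline figures above follow from simple arithmetic on the per-query energy estimate and the grid's carbon intensity. A minimal sketch (the per-query values are this page's estimates, not figures published by DeepSeek):

```python
ENERGY_WH = 32.0       # estimated energy per query, Wh
GRID_G_PER_KWH = 550   # China grid carbon intensity, g CO2/kWh
WATER_ML = 175         # estimated water per query, mL
GOOGLE_WH = 0.3        # approximate energy of one Google search, Wh

co2_g = ENERGY_WH / 1000 * GRID_G_PER_KWH   # Wh -> kWh, then grams of CO2
vs_google = ENERGY_WH / GOOGLE_WH           # energy multiple of a search
queries_per_litre = 1000 / WATER_ML         # queries that consume 1 L of water

print(f"{co2_g:.1f} g CO2 per query")            # 17.6 g
print(f"{vs_google:.0f}x a Google search")       # 107x
print(f"~{queries_per_litre:.0f} queries per litre")  # ~6
```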

How does DeepSeek-R1-0528 compare?

Ranked #131 of 152 models by energy per query

[Bar chart: energy per query (0 to 32 Wh) for LLaMA 3.2 1B, Gemini 1.5 Pro, GPT-4.1 Nano, and DeepSeek-R1-0528, with Google search (0.3 Wh) as the baseline]

Detailed Breakdown

Energy Consumption

DeepSeek-R1-0528 is a significant upgrade to the original R1, with improved reasoning quality and reduced refusal rates. At ~32 Wh per reasoning query, it remains one of the most energy-intensive models per query, a consequence of its extended chain-of-thought generation. Because the weights are open source, the model is also widely self-hosted, where energy use depends on the operator's infrastructure.

Power Source & Carbon

DeepSeek's hosted API runs on Chinese infrastructure (550 g CO2/kWh). Self-hosted deployments can use any hardware. The model's open-source nature (MIT licence) enables deployment on cleaner grids.

Water Usage

Water use is approximately 175 mL per reasoning query on DeepSeek's hosted API, primarily for data-centre cooling. Self-hosted deployments vary by location.

About DeepSeek-R1-0528

DeepSeek-R1-0528 is a 671B-parameter text and chat model from DeepSeek, released May 28, 2025, as a major upgrade to the original R1 with significantly improved reasoning capabilities. At 32.0 Wh per query, it uses about 107x the energy of a Google search. It uses a Mixture-of-Experts architecture with chain-of-thought reasoning.

These figures are estimates derived from hardware specifications and API benchmarks — DeepSeek has not published official energy data for DeepSeek-R1-0528. Actual consumption may vary significantly depending on batching, quantisation, and infrastructure optimisations that we cannot observe from outside.

DeepSeek-R1-0528 in Context

~99.97%
potential savings

The efficiency alternative

Gemini Nano performs the same type of task using just 0.01 Wh per query, roughly 99.97% less energy than DeepSeek-R1-0528. For a user sending 25 queries per day, switching would save 291.9 kWh per year.
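The savings estimate is just the difference in per-query energy, scaled to 25 queries per day over a year. A sketch under those assumptions (both per-query figures are this page's estimates):

```python
R1_WH = 32.0        # DeepSeek-R1-0528, estimated Wh per query
NANO_WH = 0.01      # Gemini Nano, estimated Wh per query
QUERIES_PER_DAY = 25

annual_queries = QUERIES_PER_DAY * 365
savings_kwh = (R1_WH - NANO_WH) * annual_queries / 1000  # Wh -> kWh
savings_pct = (1 - NANO_WH / R1_WH) * 100

print(f"{savings_kwh:.1f} kWh saved per year")       # 291.9 kWh
print(f"{savings_pct:.2f}% less energy per query")   # 99.97%
```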

292.0 kWh
per year

Your yearly DeepSeek-R1-0528 footprint

At 25 queries per day, your annual DeepSeek-R1-0528 usage consumes 292.0 kWh, roughly 3% of an average US household's annual electricity use. That produces 160.6 kg of CO₂.
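The annual figures can be reproduced from the per-query estimate and the grid's carbon intensity:

```python
WH_PER_QUERY = 32.0    # estimated energy per query, Wh
QUERIES_PER_DAY = 25
GRID_G_PER_KWH = 550   # China grid carbon intensity, g CO2/kWh

annual_kwh = WH_PER_QUERY * QUERIES_PER_DAY * 365 / 1000
annual_co2_kg = annual_kwh * GRID_G_PER_KWH / 1000  # g -> kg

print(f"{annual_kwh:.1f} kWh per year")   # 292.0 kWh
print(f"{annual_co2_kg:.1f} kg CO2")      # 160.6 kg
```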

Key Insights

Uses 10x more energy than the category average — reasoning models are inherently compute-intensive
Open-source weights — can be self-hosted on infrastructure you control

What does your DeepSeek-R1-0528 usage cost the planet?

Use our calculator to estimate your personal environmental footprint based on how often you use DeepSeek-R1-0528.

Calculate My Compute

Frequently Asked Questions

How much energy does DeepSeek-R1-0528 use per query?

Each DeepSeek-R1-0528 query consumes approximately 32.0 Wh of energy. This is 107x more than a traditional Google search (~0.3 Wh).

What is DeepSeek-R1-0528's carbon footprint?

Based on the carbon intensity of DeepSeek Cloud (China), each query produces approximately 17.6 g of CO2. The grid in this region has a carbon intensity of 550 g CO2/kWh with 30% renewable energy.

How much water does DeepSeek-R1-0528 use?

Each query consumes approximately 175 mL of water, primarily used for cooling the data centers that process the request.

How does DeepSeek-R1-0528 compare to a Google search?

A DeepSeek-R1-0528 query uses about 107x as much energy as a Google search: roughly 0.3 Wh for a search versus 32.0 Wh for a query.

Technical Details

Architecture

Mixture-of-Experts with chain-of-thought reasoning

Parameters

671B

Context window

128,000 tokens

Release date

2025-05-28

Open source

Yes

Training data cutoff

2025-05