Gemini 2.5 Flash Environmental Impact
Google's fast reasoning model — best price-performance
Per query = Median text prompt (~300 tokens)
Energy per query
0.35 Wh
CO2 per query
0.05 g
Water per query
0.38 mL
Processing location
Google global network (64% renewable)
Provider
Google
Category
Text / Chat
Grid carbon intensity
300 g CO2/kWh (64% renewable)
How does Gemini 2.5 Flash compare?
Detailed Breakdown
Energy Consumption
Gemini 2.5 Flash is optimised for speed and cost, consuming approximately 0.35 Wh per median query. It supports adaptive thinking but defaults to minimal reasoning overhead for simple queries. At roughly 1.5x the base Gemini median (0.24 Wh), it's significantly more efficient than 2.5 Pro while still supporting chain-of-thought reasoning when needed.
Power Source & Carbon
Runs on Google's custom TPU infrastructure. Google matches 100% of its electricity use with renewable energy purchases, although the underlying grid mix averages 64% renewable, and its data centres operate at a fleet-wide PUE of 1.10.
Water Usage
At approximately 0.38 mL per query, Gemini 2.5 Flash has a very low water footprint.
What does your Gemini 2.5 Flash usage cost the planet?
Use our calculator to estimate your personal environmental footprint based on how often you use Gemini 2.5 Flash.
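A back-of-the-envelope version of that estimate can be sketched in Python, scaling the per-query figures quoted on this page (0.35 Wh, 0.05 g CO2, 0.38 mL water) to a year of usage. The function name and the 20-queries-per-day example are illustrative, not part of the calculator itself:

```python
# Per-query figures quoted on this page (median text prompt, ~300 tokens).
ENERGY_WH = 0.35   # energy per query (Wh)
CO2_G = 0.05       # CO2 per query (g)
WATER_ML = 0.38    # water per query (mL)

def yearly_footprint(queries_per_day: int) -> dict:
    """Scale the per-query figures to a year of usage."""
    n = queries_per_day * 365
    return {
        "queries": n,
        "energy_kwh": n * ENERGY_WH / 1000,
        "co2_kg": n * CO2_G / 1000,
        "water_l": n * WATER_ML / 1000,
    }

# 20 queries a day works out to roughly 2.6 kWh, 0.37 kg CO2,
# and 2.8 L of water over a year.
print(yearly_footprint(20))
```

Even at heavy daily usage, the annual totals stay small: a few kWh of electricity and a few litres of water.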
Calculate My Compute

Frequently Asked Questions
How much energy does Gemini 2.5 Flash use per query?
Each Gemini 2.5 Flash query consumes approximately 0.35 Wh of energy. This is about the same as a traditional Google search (~0.3 Wh).
What is Gemini 2.5 Flash's carbon footprint?
Based on the average carbon intensity of Google's global network (64% renewable), each query produces approximately 0.05 g of CO2. The grids serving this network average 300 g CO2/kWh with 64% renewable energy.
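The conversion itself is a one-liner: energy in Wh times grid intensity in g CO2/kWh. Note that applying the raw 300 g CO2/kWh intensity to 0.35 Wh yields about 0.1 g, roughly double the published 0.05 g; the lower figure presumably already credits Google's renewable energy purchases (an assumption on our part, not stated explicitly on this page):

```python
def co2_grams(energy_wh: float, intensity_g_per_kwh: float) -> float:
    """Location-based emissions: energy (Wh) x grid intensity (g CO2/kWh)."""
    return energy_wh / 1000 * intensity_g_per_kwh

# Raw location-based figure, before any renewable-energy credits.
print(round(co2_grams(0.35, 300), 3))  # → 0.105
```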
How much water does Gemini 2.5 Flash use?
Each query consumes approximately 0.38 mL of water, primarily used for cooling the data centers that process the request.
How does Gemini 2.5 Flash compare to a Google search?
A Gemini 2.5 Flash query uses slightly more energy than a Google search: approximately 0.35 Wh versus roughly 0.3 Wh, or about 1.2x.
Technical Details
Architecture
Multimodal Transformer (MoE, adaptive thinking)
Context window
1,000,000 tokens
Release date
2025-05-20
Open source
No
Training data cutoff
2025-03