
Jamba 1.5 Large Environmental Impact


Hybrid SSM-Transformer architecture for efficient long context

Architecture: SSM-Transformer Hybrid (Mamba + Attention)
Parameters: 398B
Context: 256,000 tokens
Provider: AI21 Labs

Energy per query

1.0 Wh

~3x the energy of a Google search (0.3 Wh)

CO₂ per query

0.45 g

US East (Virginia) grid (450 g CO₂/kWh)

Water per query

2.1 mL

~476 queries to fill 1 litre

Processing location

Cloud (US)

Provider

AI21 Labs

Category

Text / Chat

Grid carbon intensity

450 g CO₂/kWh (25% renewable)

How does Jamba 1.5 Large compare?

Ranked #54 of 152 models by energy per query

[Chart: energy per query, 0–1 Wh scale, comparing LLaMA 3.2 1B, Gemini 1.5 Pro, GPT-4.1 Nano, and Jamba 1.5 Large against Google search (0.3 Wh)]

Detailed Breakdown

Energy Consumption

Jamba 1.5 Large uses a novel SSM-Transformer hybrid architecture that combines Mamba state-space layers with traditional attention. This gives it significantly better efficiency for long contexts — O(n) rather than O(n²) scaling. At ~1.0 Wh for short queries, it's competitive with smaller models, and the efficiency advantage grows with longer contexts.
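To make the scaling claim concrete, here is a minimal, illustrative sketch of why a linear-time SSM scan beats quadratic attention at long context. The hidden size and state size below are placeholder values, not Jamba's actual dimensions:

    # Rough, illustrative per-layer compute scaling at sequence length n.
    # d (hidden size) and state are placeholders, not Jamba 1.5 Large's real dims.

    def attention_cost(n: int, d: int = 8192) -> float:
        """Self-attention: every token attends to every other -> O(n^2 * d)."""
        return n * n * d

    def ssm_cost(n: int, d: int = 8192, state: int = 16) -> float:
        """Mamba-style SSM: one scan over the sequence -> O(n * d * state)."""
        return n * d * state

    for n in (1_000, 10_000, 100_000, 256_000):
        ratio = attention_cost(n) / ssm_cost(n)
        print(f"n={n:>7,}: attention/SSM cost ratio ~ {ratio:,.0f}x")

In this toy model the advantage grows linearly with n, which is why the hybrid design pays off most at the 256,000-token end of the context window.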

Power Source & Carbon

Available as open source or via the AI21 API. Because the weights can be self-hosted, the carbon footprint depends on the grid powering the deployment; the figure above assumes a US East (Virginia) grid at 450 g CO₂/kWh with 25% renewable energy. The hybrid architecture also makes the model attractive for long-context inference on more modest hardware.
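The CO₂ figure is derived by multiplying estimated energy per query by the grid's carbon intensity. A minimal sketch using this page's numbers (which are estimates, not figures published by AI21 Labs):

    # CO₂ per query = energy per query (kWh) × grid carbon intensity (g CO₂/kWh)
    ENERGY_WH_PER_QUERY = 1.0   # this page's estimate
    GRID_G_CO2_PER_KWH = 450    # US East (Virginia) grid

    co2_g = ENERGY_WH_PER_QUERY / 1000 * GRID_G_CO2_PER_KWH
    print(f"{co2_g:.2f} g CO₂ per query")  # -> 0.45 g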

Water Usage

At ~2.1 mL per query, Jamba is water-efficient relative to its 398B parameter count, thanks to the SSM components reducing overall compute.
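The "queries to fill one litre" figure above is simply the reciprocal of per-query water use:

    # 1 litre = 1000 mL; at ~2.1 mL per query:
    WATER_ML_PER_QUERY = 2.1
    print(round(1000 / WATER_ML_PER_QUERY))  # -> 476 queries per litre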

About Jamba 1.5 Large

Jamba 1.5 Large is an open-source text and chat model from AI21 Labs, released on August 22, 2024, that runs well below the category average for energy consumption at 1.0 Wh per query. Because its weights are publicly available, it can be self-hosted on any infrastructure, meaning its carbon footprint depends entirely on where and how you choose to run it. At 398B parameters, it uses a hybrid SSM-Transformer architecture designed for efficient long-context inference.

These figures are estimates derived from hardware specifications and API benchmarks — AI21 Labs has not published official energy data for Jamba 1.5 Large. Actual consumption may vary significantly depending on batching, quantisation, and infrastructure optimisations that we cannot observe from outside.

Key Insights

Open-source weights — can be self-hosted on infrastructure you control

What does your Jamba 1.5 Large usage cost the planet?

Use our calculator to estimate your personal environmental footprint based on how often you use Jamba 1.5 Large.

Calculate My Compute
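For reference, the arithmetic behind such an estimate is straightforward. A minimal sketch using this page's per-query figures, where queries_per_day is whatever matches your own usage:

    # Yearly footprint from the per-query estimates on this page (all approximate).
    ENERGY_WH = 1.0    # Wh per query
    CO2_G = 0.45       # g CO₂ per query
    WATER_ML = 2.1     # mL per query

    def yearly_footprint(queries_per_day: int) -> dict:
        q = queries_per_day * 365
        return {
            "queries": q,
            "energy_kWh": q * ENERGY_WH / 1000,
            "co2_kg": q * CO2_G / 1000,
            "water_L": q * WATER_ML / 1000,
        }

    print(yearly_footprint(queries_per_day=20))
    # -> ~7.3 kWh, ~3.3 kg CO₂, and ~15 L of water per year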

Frequently Asked Questions

How much energy does Jamba 1.5 Large use per query?

Each Jamba 1.5 Large query consumes approximately 1.0 Wh of energy, roughly 3x that of a traditional Google search (~0.3 Wh).

What is Jamba 1.5 Large's carbon footprint?

Based on the carbon intensity of the US cloud region where queries are processed, each query produces approximately 0.45 g of CO₂. The grid in this region has a carbon intensity of 450 g CO₂/kWh with 25% renewable energy.

How much water does Jamba 1.5 Large use?

Each query consumes approximately 2.1 mL of water, primarily used for cooling the data centres that process the request.

How does Jamba 1.5 Large compare to a Google search?

A Jamba 1.5 Large query uses roughly 3x as much energy as a Google search: approximately 0.3 Wh for the search versus 1.0 Wh for Jamba 1.5 Large.

Technical Details

Architecture

SSM-Transformer Hybrid (Mamba + Attention)

Parameters

398B

Context window

256,000 tokens

Release date

2024-08-22

Open source

Yes

Training data cutoff

2024-08