Modern Protocol

Native HTTP/2

Multiplexed connections. Single TCP stream. Up to 50% faster for global audiences.
No nginx required. No extra hops.

Multiplexing · Header Compression · Stream Priority

The HTTP/1.1 Problem

HTTP/1.1 can only send one request per connection at a time. Browsers work around this by opening 6-8 parallel connections per host, but that's still a bottleneck. High-latency connections (intercontinental, mobile) suffer the most.

HTTP/1.1 (6 connections, 50 images):

Connection 1: [img1]──────[img7]──────[img13]──────...
Connection 2: [img2]──────[img8]──────[img14]──────...
Connection 3: [img3]──────[img9]──────[img15]──────...
Connection 4: [img4]──────[img10]─────[img16]──────...
Connection 5: [img5]──────[img11]─────[img17]──────...
Connection 6: [img6]──────[img12]─────[img18]──────...

• 9 round trips needed (⌈50 images / 6 connections⌉)
• Each round trip costs the full network latency
• 500ms latency × 9 trips = 4.5 seconds of waiting

HTTP/2 Multiplexing

HTTP/2 sends all requests over a single connection simultaneously. 50 images? One connection, one round trip for all of them. Latency is paid once instead of nine times.

HTTP/2 (1 connection, 50 images):

Connection 1: [img1][img2][img3][img4][img5]...[img50]
              ─────────────────────────────────────────
              All 50 requests sent in parallel
              All 50 responses received together

• 1 round trip needed (all requests multiplexed)
• 500ms latency × 1 trip = 500ms of waiting
• 9x fewer round trips than HTTP/1.1
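The back-of-the-envelope math behind the two diagrams can be sketched in a few lines. This is a simplified model that counts only serialized round trips and ignores bandwidth, TLS setup, and server time:

```python
import math

def http1_wait(images: int, connections: int, latency_s: float) -> float:
    """HTTP/1.1: one request at a time per connection, so fetching all
    images takes ceil(images / connections) serial round trips."""
    trips = math.ceil(images / connections)
    return trips * latency_s

def http2_wait(images: int, latency_s: float) -> float:
    """HTTP/2: all requests multiplexed on one connection -> one round trip."""
    return 1 * latency_s

# 50 images, 6 browser connections, 500ms round-trip latency
print(http1_wait(50, 6, 0.5))  # 4.5 seconds (9 round trips)
print(http2_wait(50, 0.5))     # 0.5 seconds (1 round trip)
```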

Real Benchmark Results

Test: 50 Images (~800KB each)

Latency                    HTTP/2    HTTP/1.1    Improvement
0ms (local)                1.23s     1.45s       +15%
50ms (regional)            1.54s     2.06s       +25%
100ms (cross-country)      2.34s     4.12s       +43%
200ms (international)      4.56s     8.23s       +45%
500ms (intercontinental)   11.43s    22.72s      +50%

Higher latency = bigger HTTP/2 advantage. Critical for global audiences.
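The Improvement column is simply the relative reduction in total load time, and can be recomputed from the two timing columns:

```python
def improvement(http2_s: float, http1_s: float) -> int:
    """Percent reduction in load time moving from HTTP/1.1 to HTTP/2."""
    return round((http1_s - http2_s) / http1_s * 100)

# (HTTP/2, HTTP/1.1) timings from the benchmark table above
for h2, h1 in [(1.23, 1.45), (1.54, 2.06), (2.34, 4.12),
               (4.56, 8.23), (11.43, 22.72)]:
    print(f"+{improvement(h2, h1)}%")  # +15%, +25%, +43%, +45%, +50%
```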

Who Benefits Most

🌍 Global E-commerce

Customers in Asia accessing US servers. 200-500ms base latency. HTTP/2 cuts perceived load time in half.

50% faster
📱 Mobile Users

High latency on cellular networks. Single connection = less battery, faster loads, fewer dropped requests.

Better UX
🖼️ Image-Heavy Sites

Product galleries, portfolios, media sites. 50+ images per page fetched in a single round trip with multiplexing.

9x fewer trips

API Backends

Multiple API calls per page. All requests sent in parallel, responses arrive together.

Lower TTFB

Zero Configuration Required

HTTP/2 is enabled by default when TLS is configured. No special setup needed.

# trident.toml

[server]
listen = "0.0.0.0:443"

[server.tls]
cert_file = "/etc/trident/cert.pem"
key_file = "/etc/trident/key.pem"
# HTTP/2 automatically enabled!

# Optional: fine-tuning
[server.http2]
max_concurrent_streams = 250     # Default: 250
initial_window_size = "64KB"     # Default: 64KB
max_frame_size = "16KB"          # Default: 16KB
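Together, the first two knobs bound how much un-acknowledged response data a single connection can carry: per-stream window × concurrent streams. A quick sanity check of the defaults above (a simplification that ignores HTTP/2's separate connection-level flow-control window, which can cap this further):

```python
max_concurrent_streams = 250
initial_window_size = 64 * 1024   # 64KB, in bytes

# Upper bound on in-flight response data per connection:
# each stream may have up to one window un-acknowledged.
in_flight = max_concurrent_streams * initial_window_size
print(in_flight // (1024 * 1024), "MiB")  # prints: 15 MiB
```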

Client Connections

  • HTTP/2 with TLS (ALPN negotiation)
  • Automatic fallback to HTTP/1.1
  • No client changes required
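The automatic fallback comes from TLS ALPN: during the handshake the client advertises the protocols it speaks, and the server selects its preferred one from that list. A minimal sketch of the selection rule (not Trident's actual implementation; in real ALPN, a client that offers protocols with no overlap gets a handshake failure, while a client that sends no ALPN at all simply gets HTTP/1.1):

```python
def alpn_select(server_prefs: list[str], client_offers: list[str]) -> str:
    """Pick the first server-preferred protocol the client also offered."""
    for proto in server_prefs:
        if proto in client_offers:
            return proto
    return "http/1.1"  # client sent no usable ALPN list -> legacy HTTP/1.1

print(alpn_select(["h2", "http/1.1"], ["h2", "http/1.1"]))  # h2
print(alpn_select(["h2", "http/1.1"], ["http/1.1"]))        # http/1.1
```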

Backend Connections

  • HTTP/2 to backends (if supported)
  • Connection pooling
  • Multiplexed backend requests

HTTP/2 Availability

Product            HTTP/2 Client      HTTP/2 Backend
Trident Velocity   ✓ Native           ✓ Native
Varnish OSS        ✗ Not available*   ✗ Not available*
Varnish Plus       ✗ Not available*   ✗ Not available*
Nginx (proxy)      ✓ Available        ✓ Available
HAProxy            ✓ Available        ✗ Not available

* Using Varnish with HTTP/2 requires nginx in front, adding latency and complexity

Simplified Architecture

Varnish + HTTP/2

Client (HTTP/2)
  ↓
Nginx (TLS termination)
  ↓
Varnish (HTTP/1.1 only)
  ↓
Backend

• Extra hop = extra latency
• Two services to maintain
• Two configs to manage
• Two points of failure

Trident

Client (HTTP/2)
  ↓
Trident (native HTTP/2 + TLS)
  ↓
Backend

• Direct path = lowest latency
• Single service
• Single config
• Simpler operations

Modern Protocols. Zero Effort.

HTTP/2 support is included in all Trident plans.