Native HTTP/2
Multiplexed requests. A single TCP connection. Up to 50% faster globally.
No nginx required. No extra hops.
The HTTP/1.1 Problem
HTTP/1.1 serves one request at a time per connection (pipelining exists but is effectively unused). Browsers compensate by opening about six parallel connections per host, but that's still a bottleneck. High-latency connections (intercontinental, mobile) suffer the most.
HTTP/1.1 (6 connections, 50 images):
Connection 1: [img1]──────[img7]──────[img13]──────...
Connection 2: [img2]──────[img8]──────[img14]──────...
Connection 3: [img3]──────[img9]──────[img15]──────...
Connection 4: [img4]──────[img10]─────[img16]──────...
Connection 5: [img5]──────[img11]─────[img17]──────...
Connection 6: [img6]──────[img12]─────[img18]──────...
• ~9 round trips needed (⌈50 images ÷ 6 connections⌉ = 9)
• Each round trip = one full network latency
• 500ms latency × 9 trips ≈ 4.5 seconds of waiting
HTTP/2 Multiplexing
HTTP/2 multiplexes all requests over a single connection. 50 images? One connection, one round trip to request them all. The latency cost is paid once instead of once per batch.
HTTP/2 (1 connection, 50 images):
Connection 1: [img1][img2][img3][img4][img5]...[img50]
─────────────────────────────────────────
All 50 requests sent in parallel
All 50 responses received together
• 1 round trip needed (all requests multiplexed)
• 500ms latency × 1 trip = 500ms of waiting
• 8–9× faster than HTTP/1.1
Real Benchmark Results
Test: 50 Images (~800KB each)
| Latency | HTTP/2 | HTTP/1.1 | Improvement |
|---|---|---|---|
| 0ms (local) | 1.23s | 1.45s | +15% |
| 50ms (regional) | 1.54s | 2.06s | +25% |
| 100ms (cross-country) | 2.34s | 4.12s | +43% |
| 200ms (international) | 4.56s | 8.23s | +45% |
| 500ms (intercontinental) | 11.43s | 22.72s | +50% |
Higher latency = bigger HTTP/2 advantage. Critical for global audiences.
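The trend in the table is what a simple round-trip count predicts. A back-of-the-envelope sketch (function names are ours; it assumes latency dominates and ignores transfer time):

```python
import math

IMAGES = 50
BROWSER_CONNECTIONS = 6  # typical per-host limit for HTTP/1.1

def http1_wait_ms(latency_ms: float) -> float:
    # Each connection carries one request per round trip, so the
    # busiest connection makes ceil(50 / 6) = 9 serial trips.
    return latency_ms * math.ceil(IMAGES / BROWSER_CONNECTIONS)

def http2_wait_ms(latency_ms: float) -> float:
    # All requests are multiplexed into a single round trip.
    return latency_ms

for latency in (50, 100, 200, 500):
    print(f"{latency}ms link: HTTP/1.1 ~{http1_wait_ms(latency):.0f}ms, "
          f"HTTP/2 ~{http2_wait_ms(latency):.0f}ms")
```

Transfer time and bandwidth aren't free, which is why the measured advantage tops out around 50% rather than the 9× this idealized model suggests.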
Who Benefits Most
Global E-commerce
Customers in Asia accessing US servers. 200-500ms base latency. HTTP/2 cuts perceived load time in half.
Mobile Users
High latency on cellular networks. Single connection = less battery, faster loads, fewer dropped requests.
Image-Heavy Sites
Product galleries, portfolios, media sites. 50+ images per page, all fetched in a single round trip with multiplexing.
API Backends
Multiple API calls per page. All requests sent in parallel, responses arrive together.
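The "sent in parallel, responses arrive together" behavior is ordinary concurrency. A toy asyncio sketch (no real network involved; `asyncio.sleep` stands in for one round trip, and is an illustration, not Trident's API) shows why overlapping requests beats serializing them:

```python
import asyncio
import time

LATENCY = 0.05  # one simulated round trip: 50 ms

async def fetch(i: int) -> int:
    # Stand-in for a single multiplexed request: it only costs latency.
    await asyncio.sleep(LATENCY)
    return i

async def main() -> float:
    start = time.perf_counter()
    await asyncio.gather(*(fetch(i) for i in range(50)))
    return time.perf_counter() - start

elapsed = asyncio.run(main())
# All 50 waits overlap, so the total stays close to one round trip,
# not 50 × 50 ms = 2.5 s of serialized waiting.
print(f"{elapsed:.3f}s")
```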
Zero Configuration Required
HTTP/2 is enabled by default when TLS is configured. No special setup needed.
# trident.toml
[server]
listen = "0.0.0.0:443"
[server.tls]
cert_file = "/etc/trident/cert.pem"
key_file = "/etc/trident/key.pem"
# HTTP/2 automatically enabled!
# Optional: fine-tuning
[server.http2]
max_concurrent_streams = 250 # Default: 250
initial_window_size = "64KB" # Default: 64KB
max_frame_size = "16KB" # Default: 16KB
Client Connections
• HTTP/2 with TLS (ALPN negotiation)
• Automatic fallback to HTTP/1.1
• No client changes required
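The ALPN negotiation mentioned above is a standard TLS extension, not anything Trident-specific. A minimal sketch of how a server offers both protocols, using Python's stdlib `ssl` module (illustrative only; Trident's internals may differ):

```python
import ssl

# Server-side TLS context, set up the way an HTTP/2-capable proxy does it.
# "h2" and "http/1.1" are the IANA-registered ALPN protocol IDs.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # HTTP/2 requires TLS 1.2+
ctx.set_alpn_protocols(["h2", "http/1.1"])     # offer h2 first, 1.1 fallback

# After the handshake, each side can read the negotiated protocol:
#   tls_socket.selected_alpn_protocol()  ->  "h2", "http/1.1", or None
```

Clients that don't speak HTTP/2 simply never select "h2", which is what makes the fallback to HTTP/1.1 automatic.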
Backend Connections
• HTTP/2 to backends (if supported)
• Connection pooling
• Multiplexed backend requests
HTTP/2 Availability
| Product | HTTP/2 Client | HTTP/2 Backend |
|---|---|---|
| Trident Velocity | ✓ Native | ✓ Native |
| Varnish OSS | ✗ Not available | ✗ Not available |
| Varnish Plus | ✗ Not available | ✗ Not available |
| Nginx (proxy) | ✓ Available | ✓ Available |
| HAProxy | ✓ Available | ✗ Not available |
* Using Varnish with HTTP/2 requires nginx in front, adding latency and complexity
Simplified Architecture
Varnish + HTTP/2
Client (HTTP/2)
↓
Nginx (TLS termination)
↓
Varnish (HTTP/1.1 only)
↓
Backend
• Extra hop = extra latency
• Two services to maintain
• Two configs to manage
• Two points of failure
Trident
Client (HTTP/2)
↓
Trident (native HTTP/2 + TLS)
↓
Backend
• Direct path = lowest latency
• Single service
• Single config
• Simpler operations
Modern Protocols. Zero Effort.
HTTP/2 support is included in all Trident plans.