🧪

The Mad Scientist Lab

Where good ideas go to become features.
And questionable ideas go to be debated over coffee.

This is our public roadmap, idea board, and occasional fever dream journal.
Early adopters get to vote on what we build next.

Status tiers: Seriously Considering · On The Roadmap · Thinking About It · Fever Dream · Maybe Someday

Edge Template Rendering (ETR)

Seriously Considering · Rating: 9/10

TL;DR: Cache HTML templates, inject dynamic data at the edge. Bypass your entire frontend framework.

Imagine your product page loads in 50ms with real-time prices. The template is cached, only a tiny JSON API call fetches the dynamic bits. Your Magento installation weeps with joy.

Use case: E-commerce pages with dynamic pricing/stock
Craziness: 🌶️🌶️🌶️🌶️🌶️
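The core trick can be sketched in a few lines. This is a minimal illustration, not Trident's implementation: the HTML shell is assumed to be served straight from cache, and only a small dict of dynamic fields (fetched via a JSON API) gets injected at the edge.

```python
from string import Template

# Hypothetical sketch: the template below stands in for a cached HTML shell.
CACHED_TEMPLATE = Template(
    "<h1>$name</h1><p>Price: $price</p><p>In stock: $stock</p>"
)

def render_at_edge(dynamic: dict) -> str:
    """Inject fresh data (e.g. price/stock from a tiny JSON call)
    into the cached template, skipping the frontend framework entirely."""
    return CACHED_TEMPLATE.substitute(dynamic)

html = render_at_edge({"name": "Widget", "price": "$19.99", "stock": "42"})
```

The win is that the expensive part (rendering the template) is cached once, while the per-request work shrinks to one substitution.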

Edge Side Includes (ESI)

On The Roadmap · Rating: 8/10

TL;DR: Industry-standard fragment assembly at the edge. Because sometimes you need to Frankenstein your pages.

Assemble pages from multiple cached fragments. Header from one cache entry, product grid from another, footer from a third. It's like LEGO for HTML.

Use case: Complex pages with independently-cached sections
Craziness: 🌶️🌶️🌶️🌶️🌶️
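The include syntax comes from the ESI 1.0 language specification; the fragment paths below are made up for illustration. Each `<esi:include>` is resolved by the edge, with every fragment cached on its own TTL:

```html
<!-- Page shell: one cache entry. Each fragment: its own entry, own TTL. -->
<html>
  <body>
    <esi:include src="/fragments/header" />
    <esi:include src="/fragments/product-grid" />
    <esi:include src="/fragments/footer" />
  </body>
</html>
```

Cache the grid for 60 seconds and the footer for a week, and the shell never needs re-rendering at all.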

Cache Warming API

On The Roadmap · Rating: 7/10

TL;DR: Pre-heat your cache like a pizza oven before the dinner rush.

Import your sitemap, hit an API endpoint, and watch Trident crawl and cache your entire site. Deploy on Friday afternoon with confidence. (Just kidding, never deploy on Friday.)

Use case: Post-deployment cache population
Craziness: 🌶️🌶️🌶️🌶️🌶️
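Extracting the URL list from a sitemap is the straightforward half; here's a self-contained sketch using Python's stdlib XML parser. The warming endpoint mentioned in the comment is an assumption — the API isn't designed yet.

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def urls_from_sitemap(xml_text: str) -> list[str]:
    """Pull every <loc> URL out of a standard sitemap."""
    root = ET.fromstring(xml_text)
    return [loc.text for loc in root.iter(f"{SITEMAP_NS}loc")]

sitemap = """<?xml version="1.0"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://shop.example.com/</loc></url>
  <url><loc>https://shop.example.com/products/widget</loc></url>
</urlset>"""

urls = urls_from_sitemap(sitemap)
# Each URL would then be submitted to a (hypothetical) warming endpoint
# so Trident crawls and caches it before real traffic arrives.
```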

Smart Tiering (L1/L2/L3 Cache)

Thinking About It · Rating: 8/10

TL;DR: RAM for the hot stuff, SSD for the warm, S3 for the "wait, people still access this?"

Your product images don't need to live in RAM next to your API responses. Let the cache be smart about where things live. Memory is expensive, SSDs are cheap, S3 is basically free.

Use case: Large catalogs with varying access patterns
Craziness: 🌶️🌶️🌶️🌶️🌶️
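A tiered setup might be configured along these lines — every key name below is illustrative, since there's no actual schema yet:

```yaml
# Hypothetical tiering config — field names are assumptions, not Trident's real schema.
tiers:
  l1:
    backend: memory
    max_size: 2GiB        # hot: API responses, small frequently-hit objects
  l2:
    backend: ssd
    path: /var/cache/trident
    max_size: 200GiB      # warm: rendered pages, product images
  l3:
    backend: s3
    bucket: trident-cold-cache  # cold: the long tail nobody admits to accessing
promotion:
  on_hits: 3              # promote an object after 3 hits in a lower tier
```

The interesting design question is the promotion/demotion policy, not the storage backends themselves.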

GraphQL-Aware Caching

On The Roadmap · Rating: 7/10

TL;DR: Finally, caching that understands your GraphQL queries instead of treating them like angry POST requests.

Parse GraphQL queries, normalize them, cache by operation. Automatic cache invalidation based on types. Your Apollo Server will send you a thank-you card.

Use case: Modern storefronts with GraphQL APIs
Craziness: 🌶️🌶️🌶️🌶️🌶️
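The key idea — two textually different but equivalent queries should hit the same cache entry — can be shown with a deliberately simplified normalizer. A real implementation would parse the query into an AST; collapsing whitespace is just enough to demonstrate the point:

```python
import hashlib
import re

def cache_key(query: str, variables: str = "{}") -> str:
    """Normalize a GraphQL query (whitespace-only here; a real version
    would normalize the AST) and derive a stable cache key from it."""
    normalized = re.sub(r"\s+", " ", query).strip()
    digest = hashlib.sha256((normalized + variables).encode()).hexdigest()
    return f"gql:{digest[:16]}"

a = cache_key("query GetProduct { product(id: 1) { name price } }")
b = cache_key("""
query GetProduct {
  product(id: 1) { name price }
}
""")
assert a == b  # same operation, same key, regardless of formatting
```

Type-based invalidation would then map each cached key to the GraphQL types it touched, so a `Product` mutation purges every entry that read `Product`.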

ML Cache Prediction

Fever Dream · Rating: ???

TL;DR: Let the robots decide what to cache. What could possibly go wrong?

Train a model on your traffic patterns. Predict what users will request next. Pre-warm before they even click. Probably overkill. Definitely cool. Might require a PhD.

Use case: When you have too much time and too many GPUs
Craziness: 🌶️🌶️🌶️🌶️🌶️

Full VCL-like DSL

Maybe Someday · Rating: 6/10

TL;DR: A proper scripting language for cache logic. For when JSON config isn't enough to express your pain.

Lexer, parser, bytecode compiler, runtime interpreter. 6 months of work. Probably unnecessary for 90% of users. But that 10%? They'd love it. We're watching you, VCL power users.

Use case: Complex routing and manipulation logic
Craziness: 🌶️🌶️🌶️🌶️🌶️
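For a feel of what that 10% would get, here's an entirely hypothetical snippet in a VCL-flavored syntax — nothing below exists, and the final language (if it ever ships) could look completely different:

```
# Hypothetical syntax. None of this is implemented.
sub recv {
    if (req.path ~ "^/api/") {
        set req.ttl = 5s;          # short TTL for API responses
    }
    if (req.header("Cookie") ~ "session=") {
        return(pass);              # never cache logged-in traffic
    }
    return(lookup);
}
```

That's the pitch: logic that JSON config can only express as a wall of nested conditions becomes three readable statements.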

Multi-Server Clustering

On The Roadmap (H2 2026) · Rating: 9/10

TL;DR: Multiple Trident instances, one distributed cache. The cache mesh of your dreams.

Purge once, purge everywhere. Consistent hashing for cache distribution. Automatic failover. The thing enterprises ask about in every sales call.

Use case: High-availability deployments
Craziness: 🌶️🌶️🌶️🌶️🌶️
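The distribution half rests on consistent hashing: keys map onto a ring of virtual nodes, so adding or removing an instance only remaps a fraction of the keyspace. A minimal sketch (instance names invented, and md5 chosen only for brevity):

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring with virtual nodes."""

    def __init__(self, nodes: list[str], vnodes: int = 64):
        # Each physical node gets `vnodes` points on the ring for even spread.
        self.ring: list[tuple[int, str]] = []
        for node in nodes:
            for i in range(vnodes):
                h = int(hashlib.md5(f"{node}#{i}".encode()).hexdigest(), 16)
                self.ring.append((h, node))
        self.ring.sort()
        self._points = [h for h, _ in self.ring]

    def node_for(self, key: str) -> str:
        """Walk clockwise from the key's hash to the next ring point."""
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        idx = bisect.bisect(self._points, h) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["trident-1", "trident-2", "trident-3"])
owner = ring.node_for("product:42")  # deterministic: same key, same node
```

"Purge once, purge everywhere" then becomes: hash the key, find its owner, and broadcast the purge to the ring.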

Obligatory Disclaimer

Everything on this page is subject to change, pivoting, or being abandoned at 2 AM when we realize it was a terrible idea. That said, if you're an early adopter and you REALLY want something here, let us know. We take bribes in the form of good feature arguments and ⭐ on GitLab.

Got an idea we haven't thought of?

Early adopters get direct access to shape Trident's roadmap. Your feature request could be the next item on this page.