The Mad Scientist Lab
Where good ideas go to become features.
And questionable ideas go to be debated over coffee.
This is our public roadmap, idea board, and occasional fever dream journal.
Early adopters get to vote on what we build next.
Edge Template Rendering (ETR)
TL;DR: Cache HTML templates, inject dynamic data at the edge. Bypass your entire frontend framework.
Imagine your product page loads in 50ms with real-time prices. The template is cached; only a tiny JSON API call fetches the dynamic bits. Your Magento installation weeps with joy.
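A minimal sketch of the idea (the template markup, field names, and JSON payload are all invented for illustration): the HTML shell lives in the edge cache, and only the dynamic fields get injected per request.

```python
from string import Template

# Hypothetical cached template: the full HTML shell is served from the
# edge cache; only price and stock come from a small JSON call.
CACHED_TEMPLATE = Template(
    "<h1>$name</h1><p class='price'>$price</p><p>$stock in stock</p>"
)

def render_at_edge(template: Template, dynamic: dict) -> str:
    """Inject the per-request JSON payload into the cached template."""
    return template.substitute(dynamic)

# This dict stands in for a real-time pricing API response.
html = render_at_edge(
    CACHED_TEMPLATE, {"name": "Widget", "price": "$9.99", "stock": 42}
)
```

The point is the split: the heavy HTML never leaves the cache, and the origin only has to answer a tiny JSON request.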
Edge Side Includes (ESI)
TL;DR: Industry-standard fragment assembly at the edge. Because sometimes you need to Frankenstein your pages.
Assemble pages from multiple cached fragments. Header from one cache entry, product grid from another, footer from a third. It's like LEGO for HTML.
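The LEGO metaphor in miniature, as a sketch (the fragment URLs and cache contents are made up, and a real ESI processor handles much more of the spec: `esi:try`, `esi:choose`, per-fragment TTLs):

```python
import re

# Toy fragment cache: each page piece is a separate cache entry.
FRAGMENT_CACHE = {
    "/fragments/header": "<header>Trident</header>",
    "/fragments/footer": "<footer>Trident</footer>",
}

PAGE = (
    '<esi:include src="/fragments/header"/>'
    "<main>Products</main>"
    '<esi:include src="/fragments/footer"/>'
)

def assemble(page: str, cache: dict) -> str:
    """Replace each <esi:include> tag with its cached fragment."""
    return re.sub(
        r'<esi:include src="([^"]+)"\s*/>',
        lambda m: cache.get(m.group(1), ""),
        page,
    )

assembled = assemble(PAGE, FRAGMENT_CACHE)
```

Because each fragment is its own cache entry, purging the product grid never touches the header or footer.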
Cache Warming API
TL;DR: Pre-heat your cache like a pizza oven before the dinner rush.
Import your sitemap, hit an API endpoint, and watch Trident crawl and cache your entire site. Deploy on Friday afternoon with confidence. (Just kidding, never deploy on Friday.)
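The sitemap-import half of that flow is straightforward to sketch; the warming endpoint itself doesn't exist yet, so the only concrete piece here is extracting every `<loc>` URL so each one can be POSTed to whatever the endpoint ends up being called:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def urls_from_sitemap(sitemap_xml: str) -> list[str]:
    """Pull every <loc> out of a standard sitemap document."""
    root = ET.fromstring(sitemap_xml)
    return [loc.text for loc in root.iter(f"{SITEMAP_NS}loc")]

SAMPLE = (
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'
    "<url><loc>https://example.com/</loc></url>"
    "<url><loc>https://example.com/products</loc></url>"
    "</urlset>"
)

urls = urls_from_sitemap(SAMPLE)
```

From there, the warming API would presumably take each URL and crawl it ahead of traffic.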
Smart Tiering (L1/L2/L3 Cache)
TL;DR: RAM for the hot stuff, SSD for the warm, S3 for the "wait, people still access this?"
Your product images don't need to live in RAM next to your API responses. Let the cache be smart about where things live. Memory is expensive, SSDs are cheap, S3 is basically free.
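A toy version of the tiering logic (dicts standing in for RAM, SSD, and S3, with no eviction or size accounting): look up the fastest tier first, and promote anything that gets a hit.

```python
class TieredCache:
    """Toy L1/L2/L3 lookup: fastest tier first, promote on hit."""

    def __init__(self):
        # Stand-ins for RAM / SSD / S3 respectively.
        self.l1, self.l2, self.l3 = {}, {}, {}

    def get(self, key):
        for tier in (self.l1, self.l2, self.l3):
            if key in tier:
                value = tier[key]
                self.l1[key] = value  # hot items migrate up to RAM
                return value
        return None

    def put(self, key, value, tier="l3"):
        """New objects can land in a cheap tier and earn their way up."""
        getattr(self, tier)[key] = value
```

The real work is in the policy (when to demote, how big each tier is), but the lookup path is exactly this cascade.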
GraphQL-Aware Caching
TL;DR: Finally, caching that understands your GraphQL queries instead of treating them like angry POST requests.
Parse GraphQL queries, normalize them, cache by operation. Automatic cache invalidation based on types. Your Apollo Server will send you a thank-you card.
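The "normalize them" step is the one that makes this work at all: two textually different but semantically identical queries must land on the same cache key. A minimal sketch (whitespace-collapse only; a real implementation would parse the query and sort fields):

```python
import hashlib
import json
import re

def cache_key(query: str, variables: dict) -> str:
    """Collapse whitespace so reformatted copies of the same query share a key."""
    normalized = re.sub(r"\s+", " ", query).strip()
    payload = normalized + json.dumps(variables, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

compact = cache_key("query { product(id: 1) { name } }", {})
pretty = cache_key("query {\n  product(id: 1) {\n    name\n  }\n}", {})
```

Variables are serialized with sorted keys for the same reason: `{"a": 1, "b": 2}` and `{"b": 2, "a": 1}` are the same request.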
ML Cache Prediction
TL;DR: Let the robots decide what to cache. What could possibly go wrong?
Train a model on your traffic patterns. Predict what users will request next. Pre-warm before they even click. Probably overkill. Definitely cool. Might require a PhD.
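It doesn't have to start with a PhD. A first-order Markov model over session paths (counting which URL tends to follow which) already gives you a pre-warm candidate per page, and is a plausible baseline before anything fancier:

```python
from collections import Counter, defaultdict

class NextPagePredictor:
    """First-order Markov model: count which URL tends to follow which."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, session):
        """Feed one session's ordered list of URLs into the counts."""
        for current, following in zip(session, session[1:]):
            self.transitions[current][following] += 1

    def predict(self, url):
        """Most likely next URL, or None if we've never seen this page."""
        nxt = self.transitions[url]
        return nxt.most_common(1)[0][0] if nxt else None

p = NextPagePredictor()
p.observe(["/home", "/product/1", "/checkout"])
p.observe(["/home", "/product/1"])
```

Pre-warming `predict(current_url)` on every request is the whole "before they even click" trick.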
Full VCL-like DSL
TL;DR: A proper scripting language for cache logic. For when JSON config isn't enough to express your pain.
Lexer, parser, bytecode compiler, runtime interpreter. 6 months of work. Probably unnecessary for 90% of users. But that 10%? They'd love it. We're watching you, VCL power users.
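The lexer is the easy first month of those six. A sketch of what tokenizing a VCL-ish rule could look like (the token grammar and the sample rule syntax are entirely invented; nothing here is a committed design):

```python
import re

# Invented token grammar for illustration; keywords must precede IDENT.
TOKEN_SPEC = [
    ("KEYWORD", r"\b(if|set|return)\b"),
    ("IDENT",   r"[A-Za-z_][\w.]*"),
    ("NUMBER",  r"\d+"),
    ("STRING",  r'"[^"]*"'),
    ("OP",      r"[{}()=~;]"),
    ("SKIP",    r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(src: str):
    """Turn source text into (token_type, lexeme) pairs, dropping whitespace."""
    return [
        (m.lastgroup, m.group())
        for m in MASTER.finditer(src)
        if m.lastgroup != "SKIP"
    ]

tokens = tokenize('if (req.path ~ "/api") { set ttl = 60; }')
```

The parser, bytecode compiler, and runtime behind it are where the other five months go.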
Multi-Server Clustering
TL;DR: Multiple Trident instances, one distributed cache. The cache mesh of your dreams.
Purge once, purge everywhere. Consistent hashing for cache distribution. Automatic failover. The thing enterprises ask about in every sales call.
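The consistent-hashing piece can be sketched in a few lines (node names and virtual-node count are arbitrary here): each key hashes onto a ring and belongs to the next node clockwise, so adding or removing a node only remaps that node's slice of keys.

```python
import bisect
import hashlib

class HashRing:
    """Consistent-hash ring with virtual nodes for smoother key distribution."""

    def __init__(self, nodes, vnodes=64):
        self.ring = []  # sorted list of (hash, node)
        for node in nodes:
            for i in range(vnodes):
                self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        """First ring position at or after the key's hash, wrapping around."""
        idx = bisect.bisect(self.ring, (self._hash(key),)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["cache-a", "cache-b", "cache-c"])
```

"Purge once, purge everywhere" then reduces to routing the purge to `node_for(key)` plus a broadcast for wildcard purges.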
Obligatory Disclaimer
Everything on this page is subject to change, pivoting, or being abandoned at 2 AM when we realize it was a terrible idea. That said, if you're an early adopter and you REALLY want something here, let us know. We take bribes in the form of good feature arguments and ⭐ on GitLab.
Got an idea we haven't thought of?
Early adopters get direct access to shape Trident's roadmap. Your feature request could be the next item on this page.