Nobody caches enough.
A cache is fast shared storage that sits between your application and your database. Instead of asking your database for the same user profile 10,000 times per hour, you ask once and remember the answer. Simple idea. Massive impact. Caching is the single highest-leverage optimization most applications can make.
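The "ask once and remember the answer" idea is the classic cache-aside pattern. Here is a minimal sketch; the dict stands in for a shared cache, and `fetch_user_from_db` is a stand-in for a real database query:

```python
# Minimal cache-aside sketch. The dict plays the role of the cache;
# fetch_user_from_db simulates a round-trip to the database.
cache = {}
db_queries = 0

def fetch_user_from_db(user_id):
    global db_queries
    db_queries += 1  # count round-trips to the "database"
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    # 1. Check the cache first.
    if user_id in cache:
        return cache[user_id]
    # 2. On a miss, hit the database and remember the answer.
    profile = fetch_user_from_db(user_id)
    cache[user_id] = profile
    return profile

# 10,000 reads of the same profile cost exactly one database query.
for _ in range(10_000):
    get_user(42)
print(db_queries)  # → 1
```

Real deployments swap the dict for a networked store with TTLs and invalidation, but the shape of the logic stays this simple.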
So why doesn't everyone cache aggressively? Because caching is expensive, and expensive things get rationed.
# The cache tax
Traditional managed Redis® costs orders of magnitude more per gigabyte than cloud storage or SSDs. When you're paying that premium, you think carefully about what goes in there. Session tokens make the cut. Rate limiters, obviously. But that user profile you fetch on every page load? That product catalog? Those stay in the database, because "they don't get hit that often."
So you ration. One cache shared across dev, staging, and production. Key prefixes to fake isolation between tenants. Careful decisions about what's "worth" caching and what isn't.
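The prefix trick looks innocuous in code, which is part of the problem. A sketch (the naming scheme here is illustrative, not any particular convention):

```python
# Faking isolation in one shared cache with key prefixes.
# Every caller must remember the prefix, and nothing enforces it:
# one un-prefixed write or careless flush crosses the boundary.

def make_key(env, tenant_id, key):
    # e.g. "prod:tenant-42:user:7"
    return f"{env}:tenant-{tenant_id}:{key}"

print(make_key("staging", 42, "user:7"))  # → staging:tenant-42:user:7
```

Isolation by convention, not by construction: the cache itself has no idea that `staging` and `prod` are supposed to be different worlds.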
We've seen this pattern before. When computers were expensive, we time-shared them. When storage was expensive, we compressed everything. Scarcity shapes behavior, and right now, cache scarcity is shaping how we build software in ways we've stopped noticing.
We want to build a world where cache is free.
Not cheap. Not "affordable at scale." Free enough that you stop thinking about the cost and start thinking about what you'd build if it were unlimited.
# What we believe
Two truths about infrastructure make this possible.
Access patterns are fractal. The famous 80/20 rule says 20% of your keys get 80% of your reads. But within that hot 20%, it's 80/20 again. And again. Your top 1% of keys might handle 50% of all requests. Your top 0.1% might handle 25%.
The same applies to who's accessing: most tenants are idle, most branches are stale, most agents are between tasks. Most caches are idle most of the time. Most keys are cold most of the time. The hot data that actually needs to be in RAM is a tiny, shifting fraction of the whole.
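The fractal claim is easy to check with a back-of-the-envelope model. The sketch below assumes key popularity follows a Zipf distribution with exponent 1 (a common idealization; real workloads vary, and the exact shares depend on the skew):

```python
# Rough model of how Zipf-like access patterns concentrate traffic.
# Assumes the i-th most popular key is requested with frequency ~ 1/i.

def zipf_share(top_k, n_keys):
    """Fraction of requests served by the top_k hottest of n_keys."""
    top = sum(1 / i for i in range(1, top_k + 1))
    total = sum(1 / i for i in range(1, n_keys + 1))
    return top / total

n = 1_000_000
for pct in (0.20, 0.01, 0.001):
    k = int(n * pct)
    print(f"top {pct:.1%} of keys -> {zipf_share(k, n):.0%} of requests")
```

Under this idealized distribution the top 1% of a million keys serves well over half of all requests, and the concentration repeats at every scale you zoom into, which is the point: the working set that truly needs RAM is tiny.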
Capital efficiency matters. The fastest option isn't always the right option. We have four modes of moving cargo: trains, ships, trucks, and planes. Each serves a purpose. You don't air-freight coal, and you don't send organs by container ship.
Forcing developers to choose between a McLaren and a dump truck is bad design. Most applications need a Honda Accord: good-enough performance at a price that lets you actually use it.
Everything else follows from these.
# Why now
This shift isn't hypothetical. It's already happened to the rest of your infrastructure.
Vercel made deploys so cheap that every PR gets its own preview environment. Nobody shares staging anymore.
Neon and Turso made databases so cheap that every branch gets its own copy. Database branching went from "enterprise feature" to table stakes.
Fly Machines, Modal, AWS Lambda, and Cloudflare Workers made compute scale to zero. You pay for what you use, not what you provision.
Same pattern each time: an architectural shift collapses costs, and new workflows become obvious.
Cache is the last holdout.
# What must exist
The cache primitive that solves this will have certain properties:
Free to create. Spinning up a cache costs nothing. No provisioning delay.
Cost scales with usage. Pay for what you use, not peak capacity.
Built on cheap primitives. NVMe, not just RAM.
Exceptional performance. Meet or approach RAM speed for real workloads.
Instant wake-up. Sleeping caches come back in milliseconds.
Redis® compatible. Drop-in. No migration pain.
# What becomes possible
When cache is free, you stop sharing and start isolating.
Not one cache per company. Not one per team. One per branch. One per developer. One per PR. One per test run. One per tenant. One per agent. One per task.
A junior dev spins up a cache to test a feature. It costs nothing. When they're done, it disappears. They never thought about the cost.
This sounds extravagant by today's standards. But so did "a computer on every desk" in 1975. So did "a phone in every pocket" in 1995. Cost changes what people try.
You stop asking "should we cache this?" and start asking "why wouldn't we?"
We're building this. We're in private alpha. Join us to help build the future.