How It Works
At a high level, memocache checks cache stores in priority order, returns data immediately when it can, and refreshes stale entries in the background.
Read flow
- The cache reads stores from first to last.
- If no store has the key, queryFn runs and the result is written to every store.
- If a store has the key and the entry is still fresh, the cached value is returned immediately.
- If a store has the key but the entry is stale, the stale value is returned and a background revalidation starts.
- If a lower-priority store returns a fresh hit, memocache backfills higher-priority stores in the background.
- If the entry is past its ttl, it is treated as expired and fresh data must be fetched.
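The read flow above can be sketched roughly as follows. This is a minimal illustration, not memocache's actual internals; `Store`, `Entry`, and `readThrough` are illustrative names, and the real library handles deduplication and error cases this sketch omits.

```typescript
// Illustrative sketch of the multi-store read flow (not memocache's API).
type Entry<T> = { value: T; storedAt: number };

interface Store<T> {
  get(key: string): Entry<T> | undefined;
  set(key: string, entry: Entry<T>): void;
}

async function readThrough<T>(
  stores: Store<T>[],
  key: string,
  queryFn: () => Promise<T>,
  fresh: number, // ms the entry is considered fresh
  ttl: number,   // ms the entry is kept at all
): Promise<T> {
  const now = Date.now();
  for (let i = 0; i < stores.length; i++) {
    const entry = stores[i].get(key);
    if (!entry) continue;
    const age = now - entry.storedAt;
    if (age > ttl) continue; // past ttl: treat as expired, keep looking
    if (age <= fresh) {
      // Fresh hit: backfill higher-priority stores before returning.
      for (let j = 0; j < i; j++) stores[j].set(key, entry);
      return entry.value;
    }
    // Stale hit: return the stale value, revalidate in the background.
    void queryFn().then((value) => {
      const refreshed = { value, storedAt: Date.now() };
      for (const store of stores) store.set(key, refreshed);
    });
    return entry.value;
  }
  // Miss everywhere: run queryFn and write the result to every store.
  const value = await queryFn();
  const entry = { value, storedAt: Date.now() };
  for (const store of stores) store.set(key, entry);
  return value;
}
```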
fresh vs ttl
Timeline:

[0 --- fresh --- ttl --- infinity]

[0, fresh]   -> serve from cache
[fresh, ttl] -> serve stale data + revalidate in background
[ttl, inf]   -> cache miss, fetch fresh data

fresh: how long cached data is considered fresh.
ttl: how long cached data is kept at all.
createCache() defaults to 30 * Time.Second for fresh and 5 * Time.Minute for ttl.
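The window logic above can be expressed as a small classifier. This is an illustrative helper using the default values, not part of memocache's API:

```typescript
// Sketch of the freshness window, using the defaults described above
// (classify is an illustrative helper, not a memocache export).
const FRESH_MS = 30 * 1000;   // 30 * Time.Second
const TTL_MS = 5 * 60 * 1000; // 5 * Time.Minute

type State = "fresh" | "stale" | "expired";

function classify(ageMs: number, fresh = FRESH_MS, ttl = TTL_MS): State {
  if (ageMs <= fresh) return "fresh"; // serve from cache
  if (ageMs <= ttl) return "stale";   // serve stale + revalidate
  return "expired";                   // cache miss, fetch fresh data
}
```

With the defaults, an entry that is 10 seconds old is served as-is, a 1-minute-old entry is served while a background refresh runs, and a 10-minute-old entry is a miss.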
Why this exists
Without a helper like memocache, most cached functions repeat the same work:
- derive a stable key
- check for a hit
- handle misses
- write results
- pick TTLs
- decide whether stale data can be served
- repeat the whole flow across multiple stores
memocache centralizes that logic and generates stable keys from the function and its arguments, while still allowing explicit query keys through cacheQuery().
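The key-generation idea can be sketched in a few lines. The real memocache key format may differ; `stableKey` is an illustrative name, and a production implementation would also normalize object key order:

```typescript
// Illustrative sketch of deriving a stable key from a function name and
// its arguments (not memocache's actual key format).
function stableKey(fnName: string, args: unknown[]): string {
  // JSON.stringify of an array is order-sensitive, which is what we
  // want; object arguments would need key-order normalization.
  return `${fnName}:${JSON.stringify(args)}`;
}
```

The same function called with the same arguments always produces the same key, which is what makes transparent memoization possible.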
Key safety
For memoized functions, include any data that changes the result in the function arguments. That includes identifiers that might otherwise be implied by auth or request context. If the output varies by customerId, that customerId should be part of the cache key.
Use cacheQuery() when you want full control over the query key.
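To illustrate why this matters, here is a minimal, hypothetical memoizer (`memoize` is a stand-in for memocache's key derivation, not its API). Because customerId is an explicit argument, it becomes part of the derived key, so two customers never share an entry:

```typescript
// Hypothetical memoizer: keys are derived from the function name and
// its arguments, mirroring the key-safety rule above.
function memoize<A extends unknown[], R>(
  name: string,
  fn: (...args: A) => R,
): (...args: A) => R {
  const cache = new Map<string, R>();
  return (...args: A) => {
    const key = `${name}:${JSON.stringify(args)}`;
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key)!;
  };
}

// Good: customerId is an argument, so each customer gets its own entry.
// (Reading customerId from ambient request context instead would let
// one customer's cached result leak to another.)
const getReport = memoize(
  "getReport",
  (customerId: string) => `report for ${customerId}`,
);
```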