Node.js Interview Prep
Database Integration

Caching with Redis

The Cache-Aside Pattern and Beyond

LinkedIn Hook

"Your database is on fire. Again."

Every request hits Postgres. Every page load runs the same SELECT over and over. The query planner is begging for mercy, your p99 latency is climbing, and your CFO is asking why the RDS bill doubled this quarter.

The fix is not a bigger database. The fix is not asking the database in the first place.

Redis is an in-memory key-value store that answers in microseconds. Put it in front of your slow queries, your session lookups, your rate limiters, your hot product pages — and 90% of your traffic never touches Postgres again.

But naive caching is a trap. Cache stampedes can take down your database the moment a popular key expires. Stale data can show users yesterday's price. Tag-based invalidation gets messy fast.

In Lesson 7.4, I break down the cache-aside pattern, TTL strategies, pub/sub invalidation, and how to avoid the cache stampede that brought down a Black Friday checkout last year.

Read the full lesson -> [link]

#NodeJS #Redis #Caching #BackendEngineering #SystemDesign #InterviewPrep




What You'll Learn

  • Why caching matters — reducing database load and shrinking response times
  • Core Redis commands: GET, SET, EXPIRE, DEL, INCR
  • Connecting to Redis from Node.js with the ioredis client
  • The cache-aside pattern and how to implement a getOrFetch helper
  • Write-through and write-behind strategies (and when to use each)
  • Cache invalidation: TTL, tag-based, and event-based
  • Storing user sessions in Redis instead of memory
  • Pub/sub basics for cross-instance cache busting
  • How to avoid cache stampedes when a hot key expires

The Sticky Note Analogy — Why You Don't Re-Read the Manual

Imagine you work at a bakery. Every time someone orders the chocolate cake, you have to walk to the back office, open a thick recipe binder, find the page, read the price, and walk back. It takes thirty seconds. Now imagine a hundred customers an hour all ordering the chocolate cake. You'd never make it through the day.

So you grab a sticky note. You write "Chocolate cake — $12" and stick it on the fridge by the counter. Now when a customer asks, you glance at the fridge. Half a second. Done. The recipe binder (the database) is still there for rare items, but the common ones live on a sticky note (the cache) right where you need them.

That sticky note is Redis: a small, fast store that lives right by the counter instead of back in the office. And the rule "if the price changes, update the sticky note OR throw it away" is cache invalidation, the hardest part of the whole job.

Caches work because real-world traffic is not uniform. A small number of items get the vast majority of requests. The Pareto principle is alive and well in production: 20% of your data gets 80% of the lookups. Cache that 20% and you've solved most of your scaling problem.

+---------------------------------------------------------------+
|              WITHOUT CACHE (The Problem)                      |
+---------------------------------------------------------------+
|                                                                |
|   Client ----request----> App Server ----query----> Postgres   |
|                                                          |     |
|   Client <---response---- App Server <---rows---------- +     |
|                                                                |
|   Every request hits the database.                             |
|   100 req/s = 100 queries/s = melting CPU.                     |
|   p99 latency: 250 ms.                                         |
|                                                                |
+---------------------------------------------------------------+

+---------------------------------------------------------------+
|              CACHE-ASIDE (The Solution)                       |
+---------------------------------------------------------------+
|                                                                |
|                       +-------+                                |
|   Client --req-> App--| Redis |--HIT--> return cached value    |
|                       +-------+                                |
|                          |                                     |
|                         MISS                                   |
|                          |                                     |
|                          v                                     |
|                       Postgres ---rows---> App                 |
|                                              |                 |
|                                              v                 |
|                                          SET in Redis          |
|                                          (with TTL)            |
|                                              |                 |
|                                              v                 |
|                                          return to client      |
|                                                                |
|   First request: DB hit + cache fill (slow).                   |
|   Next 999 requests: cache hit (fast).                         |
|   p99 latency: 5 ms. DB load: -99%.                            |
|                                                                |
+---------------------------------------------------------------+

Why Cache — The Two Goals

Caching has exactly two jobs, and every cache decision should serve at least one of them.

Goal 1: Reduce database load. Databases are expensive to scale. A single Postgres instance can handle a few thousand queries per second before you start sharding, replicating, or moving to a managed service that costs four figures a month. Redis can handle a hundred thousand operations per second on a small instance. Moving 95% of reads to Redis is the cheapest scaling lever in your toolbox.

Goal 2: Speed up responses. A cached lookup takes 0.5-2 ms. A Postgres query with a join takes 20-200 ms. A third-party API call takes 200-2000 ms. Caching turns slow operations into fast ones, which directly improves user-perceived performance and conversion rates.

If a cache decision doesn't measurably help one of these goals, don't cache it. Caching has real costs: memory, complexity, and the eternal headache of invalidation.


Redis Basics — The Five Commands You'll Use Every Day

Redis has hundreds of commands, but five of them — SET, GET, EXPIRE, DEL, and INCR — cover 90% of caching work. The table below adds three close companions (SETEX, TTL, EXISTS) you'll reach for almost as often.

+---------------------------------------------------------------+
|              CORE REDIS COMMANDS                              |
+---------------------------------------------------------------+
|                                                                |
|   SET key value           Store a value at a key               |
|   GET key                 Retrieve the value at a key          |
|   EXPIRE key seconds      Set a TTL on an existing key         |
|   SETEX key secs value    SET + EXPIRE in one atomic op        |
|   DEL key [key ...]       Delete one or more keys              |
|   INCR key                Atomically increment an integer      |
|   TTL key                 Seconds remaining before expiry      |
|   EXISTS key              1 if key exists, 0 otherwise         |
|                                                                |
+---------------------------------------------------------------+

A quick walkthrough in the redis-cli:

127.0.0.1:6379> SET user:42 "Alice"
OK
127.0.0.1:6379> GET user:42
"Alice"
127.0.0.1:6379> EXPIRE user:42 60
(integer) 1
127.0.0.1:6379> TTL user:42
(integer) 58
127.0.0.1:6379> INCR page:views
(integer) 1
127.0.0.1:6379> INCR page:views
(integer) 2
127.0.0.1:6379> DEL user:42
(integer) 1

Two things to notice. First, keys are just strings — there is no schema, no table, no migration. Convention is everything. Use colons as namespace separators (user:42, product:slug:awesome-widget, session:abc123). Second, INCR is atomic. Two processes incrementing the same counter at the same time will both see the correct result. This makes Redis ideal for rate limiters and counters.
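That atomicity is exactly what a rate limiter needs. Below is a minimal fixed-window sketch built on INCR + EXPIRE; `isRateLimited` and the `fakeRedis` stand-in are hypothetical names for illustration, and in production you would pass the shared ioredis client instead of the fake.

```javascript
// Hypothetical fixed-window rate limiter built on INCR + EXPIRE.
// `client` is any object exposing incr(key) and expire(key, secs);
// in production that would be the shared ioredis client.
async function isRateLimited(client, userId, limit = 10, windowSecs = 60) {
  const key = `ratelimit:${userId}`;
  // INCR is atomic, so concurrent requests cannot race each other
  const count = await client.incr(key);
  if (count === 1) {
    // First hit in this window: start the clock
    await client.expire(key, windowSecs);
  }
  return count > limit;
}

// Minimal in-memory stand-in for ioredis, for demonstration only.
// It ignores expiry; a real client deletes the key after windowSecs.
function fakeRedis() {
  const store = new Map();
  return {
    async incr(key) {
      const next = (store.get(key) || 0) + 1;
      store.set(key, next);
      return next;
    },
    async expire(_key, _secs) {
      return 1;
    },
  };
}
```

The 11th call within the window returns true; once the key expires in real Redis, the counter resets and the window starts fresh.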


Connecting from Node.js — ioredis

The ioredis client is the de facto standard Redis client for Node.js. It supports promises, pipelining, cluster mode, sentinel, and pub/sub out of the box.

// db/redis.js
// ioredis is fully promise-based and handles reconnection automatically
const Redis = require('ioredis');

// Create a single shared client for the whole application.
// ioredis lazily connects on the first command.
const redis = new Redis({
  host: process.env.REDIS_HOST || '127.0.0.1',
  port: Number(process.env.REDIS_PORT) || 6379,
  password: process.env.REDIS_PASSWORD || undefined,
  // Retry strategy: exponential backoff capped at 2 seconds
  retryStrategy: (times) => Math.min(times * 50, 2000),
  // Fail fast if Redis is unreachable for too long
  maxRetriesPerRequest: 3,
});

// Log connection lifecycle events for observability
redis.on('connect', () => console.log('[redis] connected'));
redis.on('error', (err) => console.error('[redis] error', err.message));

module.exports = redis;

Why one shared client? Each new Redis() opens a TCP connection. Creating a client per request would exhaust file descriptors and add 1-2 ms of handshake overhead to every operation. Share one client across the whole process. ioredis queues concurrent commands on that single connection and Redis answers each one in well under a millisecond, so concurrent callers do not meaningfully block each other; for true batching, use an explicit pipeline().


The Cache-Aside Pattern — A Reusable Helper

Cache-aside (also called "lazy loading") is the most common caching pattern. The application is responsible for both reading from and writing to the cache. Redis itself knows nothing about your database.

The flow is always the same:

  1. Try to read from Redis.
  2. If found ("cache hit") — return it.
  3. If not found ("cache miss") — query the database, store the result in Redis with a TTL, and return it.

You will write this pattern dozens of times. Wrap it in a helper.

// cache/getOrFetch.js
const redis = require('../db/redis');

/**
 * Cache-aside helper.
 * @param {string} key       Redis key (use namespaced format like "user:42")
 * @param {number} ttl       Seconds until the cached value expires
 * @param {Function} fetcher Async function that loads the value on a miss
 */
async function getOrFetch(key, ttl, fetcher) {
  // Step 1: try the cache first
  const cached = await redis.get(key);
  if (cached !== null) {
    // Cache hit -- parse JSON and return immediately
    return JSON.parse(cached);
  }

  // Step 2: cache miss -- run the expensive fetcher (DB query, API call, etc.)
  const fresh = await fetcher();

  // Step 3: store the fresh value with a TTL so it eventually expires.
  // SETEX is atomic: SET + EXPIRE in a single round trip.
  // We stringify because Redis values are bytes, not objects.
  await redis.setex(key, ttl, JSON.stringify(fresh));

  return fresh;
}

module.exports = getOrFetch;

Using it in a route handler:

// routes/products.js
const express = require('express');
const getOrFetch = require('../cache/getOrFetch');
const db = require('../db/postgres');

const router = express.Router();

router.get('/products/:slug', async (req, res, next) => {
  try {
    const { slug } = req.params;

    // Cache the product for 5 minutes (300 seconds).
    // Hot products will be served from Redis 99% of the time.
    const product = await getOrFetch(
      `product:slug:${slug}`,
      300,
      async () => {
        // This fetcher only runs on a cache miss
        const { rows } = await db.query(
          'SELECT * FROM products WHERE slug = $1',
          [slug]
        );
        return rows[0] || null;
      }
    );

    if (!product) return res.status(404).json({ error: 'Not found' });
    res.json(product);
  } catch (err) {
    next(err);
  }
});

module.exports = router;

That's the entire pattern. One helper, one TTL, dramatic performance gains.


Write-Through vs Write-Behind — The Other Two Patterns

Cache-aside is read-driven. The other two patterns are write-driven and used less often, but interviewers love asking about them.

Write-through. Every write goes to the cache and the database, synchronously, in the same operation. The cache is always consistent with the database because they are updated together. The trade-off is write latency: every write pays the cost of two systems instead of one. Use write-through when read traffic dominates and you can never tolerate stale data.

Write-behind (write-back). Writes go to the cache immediately and are flushed to the database asynchronously, in batches. Writes are extremely fast because the database is not on the critical path. The trade-off is durability: if Redis crashes before the flush, you lose those writes. Use write-behind for high-volume, low-importance data — analytics counters, view counts, last-seen timestamps.

+---------------------------------------------------------------+
|              CACHE WRITE STRATEGIES                           |
+---------------------------------------------------------------+
|                                                                |
|   CACHE-ASIDE       App writes DB. Cache invalidated on next   |
|                     read or via DEL. Most common.              |
|                                                                |
|   WRITE-THROUGH     App writes cache + DB synchronously.       |
|                     Always consistent. Slower writes.          |
|                                                                |
|   WRITE-BEHIND      App writes cache only. Background job      |
|                     flushes to DB later. Fastest writes,       |
|                     risk of data loss on crash.                |
|                                                                |
+---------------------------------------------------------------+
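The write-behind row above is easy to sketch with a coalescing buffer. This is a minimal illustration, not a production queue: `createWriteBehind` and `persistBatch` are hypothetical names, and in a real deployment the buffer would live in Redis itself (e.g. a hash) so it survives app restarts, with flush() run on a timer.

```javascript
// Minimal write-behind sketch. Writes land in an in-memory buffer
// immediately; flush() drains the buffer to the "database" in one batch.
// `persistBatch` stands in for a real bulk INSERT/UPDATE.
function createWriteBehind(persistBatch) {
  const buffer = new Map(); // key -> latest value (coalesces repeat writes)

  return {
    // Fast path: just record the write; the DB is not on the critical path
    write(key, value) {
      buffer.set(key, value);
    },
    // Slow path, run on a timer or size threshold
    async flush() {
      if (buffer.size === 0) return 0;
      const batch = [...buffer.entries()];
      buffer.clear();
      await persistBatch(batch); // one round trip instead of N
      return batch.length;
    },
  };
}
```

Note the durability trade-off in miniature: anything written after the last flush is lost if the process dies, which is why this pattern suits view counts, not payments.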

Cache Invalidation — The Hard Part

There are only two hard things in computer science: cache invalidation and naming things. The naming part is a joke. The invalidation part is not.

You have three main strategies, and most production systems use all three together.

1. TTL (Time-To-Live)

The simplest and most reliable strategy. Every cached entry has an expiration time. After that time, Redis deletes it automatically. The next read triggers a fresh fetch. You accept that data may be stale for up to TTL seconds.

// SETEX = SET + EXPIRE in one atomic command.
// Cache the user profile for 10 minutes.
await redis.setex('user:42', 600, JSON.stringify(profile));

// Equivalent two-command form (NOT atomic, avoid):
// await redis.set('user:42', JSON.stringify(profile));
// await redis.expire('user:42', 600);

Pick TTLs based on how stale the data can be:

  • Public product listings: 5-15 minutes
  • User profiles: 1-5 minutes
  • Authentication tokens: matches token lifetime
  • Static config: 1 hour or more
  • Real-time prices: 5-30 seconds, or do not cache at all
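One way to keep those guidelines honest is to centralize them in a single policy map instead of scattering magic numbers across call sites. A small sketch, assuming hypothetical category names (tune both the categories and the values for your own data):

```javascript
// Hypothetical TTL policy table mirroring the guidelines above.
// Keeping it in one place makes staleness windows auditable.
const TTL_SECONDS = {
  'product-listing': 10 * 60, // 5-15 min: public, slow-changing
  'user-profile': 2 * 60,     // 1-5 min: users notice staleness
  'static-config': 60 * 60,   // 1 hour+: rarely changes
  'realtime-price': 15,       // 5-30 s: barely cacheable
};

function ttlFor(category) {
  const ttl = TTL_SECONDS[category];
  if (ttl === undefined) {
    // Fail loudly rather than silently caching forever
    throw new Error(`No TTL policy for category: ${category}`);
  }
  return ttl;
}
```

Usage: `await redis.setex(key, ttlFor('user-profile'), JSON.stringify(profile))`.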

2. Tag-Based Invalidation

Sometimes you need to invalidate a group of related keys at once. For example, when a product is updated, you want to invalidate the product page, the category listing, and the search results that mention it. Redis sets are perfect for this.

// cache/tags.js
const redis = require('../db/redis');

// Cache a value AND tag it with one or more invalidation tags.
// We use a Redis SET per tag to track which keys belong to it.
async function cacheWithTags(key, value, ttl, tags) {
  const pipeline = redis.pipeline();

  // Store the actual cached value with a TTL
  pipeline.setex(key, ttl, JSON.stringify(value));

  // For each tag, add the key to a set named "tag:<tagName>"
  // so we can later find every key that belongs to this tag.
  for (const tag of tags) {
    pipeline.sadd(`tag:${tag}`, key);
    // Tag sets get a longer TTL than individual keys
    // so they outlive the keys they track.
    pipeline.expire(`tag:${tag}`, ttl * 2);
  }

  await pipeline.exec();
}

// Invalidate every key that was tagged with this tag.
async function invalidateTag(tag) {
  // Read all keys that belong to this tag
  const keys = await redis.smembers(`tag:${tag}`);
  if (keys.length === 0) return 0;

  // Delete the cached values AND the tag set itself in one batch
  const pipeline = redis.pipeline();
  pipeline.del(...keys);
  pipeline.del(`tag:${tag}`);
  await pipeline.exec();

  return keys.length;
}

module.exports = { cacheWithTags, invalidateTag };

Usage:

// When caching a product page, tag it with the product ID and category
await cacheWithTags(
  'page:product:42',
  renderedHtml,
  600,
  ['product:42', 'category:widgets']
);

// Later, when product 42 is updated, invalidate everything tagged with it
await invalidateTag('product:42');

3. Event-Based Invalidation

For multi-instance deployments, you need to invalidate caches across every server when a write happens. Redis pub/sub is the classic tool for this. We'll cover pub/sub in detail below.


Session Storage in Redis

By default, Express stores sessions in memory. This breaks the moment you scale beyond one process: a user logs in on server A, then their next request hits server B which has no session. Worse, restarting the server logs everyone out.

Redis fixes this. Every server reads sessions from the same Redis instance.

// app.js
const express = require('express');
const session = require('express-session');
const RedisStore = require('connect-redis').default;
const redis = require('./db/redis');

const app = express();

app.use(
  session({
    // Tell express-session to use Redis instead of memory
    store: new RedisStore({
      client: redis,
      prefix: 'sess:', // namespace so session keys don't collide
      ttl: 86400,      // 1 day in seconds
    }),
    secret: process.env.SESSION_SECRET,
    resave: false,            // don't write back unchanged sessions
    saveUninitialized: false, // don't save empty sessions
    cookie: {
      httpOnly: true,
      secure: process.env.NODE_ENV === 'production',
      maxAge: 86400000, // 1 day in ms (matches Redis TTL above)
    },
  })
);

module.exports = app;

Now any server in your fleet can read any user's session. Restarting a server doesn't log anyone out. Horizontal scaling just works.


Pub/Sub — Cross-Instance Cache Busting

Redis pub/sub lets one process broadcast messages to many subscribers. The classic use case in caching: when one server invalidates a key, every other server hears about it and clears its local in-process cache.

// cache/pubsub.js
const Redis = require('ioredis');

// IMPORTANT: pub/sub uses a dedicated connection.
// A subscribed client cannot run normal commands like GET/SET.
// So we create TWO clients: one publisher, one subscriber.
const publisher = new Redis();
const subscriber = new Redis();

const CHANNEL = 'cache:invalidate';

// In-process cache (e.g. an LRU map) that each server keeps locally.
// Declared before the message handler that clears it.
const localCache = new Map();

// Subscribe to the invalidation channel on this server
subscriber.subscribe(CHANNEL, (err, count) => {
  if (err) console.error('[pubsub] subscribe failed', err);
  else console.log(`[pubsub] subscribed to ${count} channels`);
});

// Whenever any server publishes a key, every subscriber receives it
subscriber.on('message', (channel, message) => {
  if (channel !== CHANNEL) return;
  const { key } = JSON.parse(message);
  console.log(`[pubsub] invalidating local cache for ${key}`);
  // Clear the in-process cache entry for this key
  localCache.delete(key);
});

// Call this whenever you mutate data and need to bust the cache fleet-wide
async function broadcastInvalidate(key) {
  await publisher.publish(CHANNEL, JSON.stringify({ key }));
}

module.exports = { broadcastInvalidate, localCache };

When a PUT /products/42 lands on server A, server A updates Postgres, calls broadcastInvalidate('product:42'), and within milliseconds servers B, C, and D have all dropped their stale entry. The next request on any server triggers a fresh fetch.

Pub/sub is fire and forget — there is no delivery guarantee. If a server is offline when a message is published, it misses it. For mission-critical invalidation, use Redis Streams or a dedicated message bus instead.


Avoiding Cache Stampedes — The Thundering Herd

Picture this: a popular product page is cached for 5 minutes. The TTL expires at exactly 12:00:00. At 12:00:00.001, a thousand requests for that page arrive simultaneously. Every single one finds the cache empty. Every single one runs the same expensive database query. Your database falls over.

This is a cache stampede (also called the thundering herd, or dogpile effect). It is the single most common way that "we added Redis" turns into "we took down production."

Three defenses, in order of complexity:

1. Add jitter to TTLs. Instead of all entries expiring at exactly the same moment, add random variance.

// Instead of a fixed TTL, add +/- 10% jitter so keys
// don't all expire at the same instant after a deploy.
function jitteredTtl(base) {
  const jitter = Math.floor(base * 0.1 * (Math.random() * 2 - 1));
  return base + jitter;
}

await redis.setex('product:42', jitteredTtl(300), payload);

2. Use a lock so only one request rebuilds. When the cache misses, grab a short Redis lock. Whoever gets the lock fetches from the DB and refills the cache. Everyone else waits a few milliseconds and re-reads from the cache.

// cache/getOrFetchLocked.js
const redis = require('../db/redis');

async function getOrFetchLocked(key, ttl, fetcher) {
  const cached = await redis.get(key);
  if (cached !== null) return JSON.parse(cached);

  // Try to acquire a short-lived lock for this key.
  // SET key value NX EX 5 = "set if not exists, expire in 5 seconds".
  const lockKey = `lock:${key}`;
  const gotLock = await redis.set(lockKey, '1', 'EX', 5, 'NX');

  if (!gotLock) {
    // Someone else is already rebuilding. Wait briefly and retry.
    await new Promise((r) => setTimeout(r, 50));
    return getOrFetchLocked(key, ttl, fetcher);
  }

  try {
    const fresh = await fetcher();
    await redis.setex(key, ttl, JSON.stringify(fresh));
    return fresh;
  } finally {
    // Always release the lock, even if the fetcher threw
    await redis.del(lockKey);
  }
}

module.exports = getOrFetchLocked;

3. Refresh-ahead. Before the TTL expires, a background job refreshes the value. The cache is never empty, so the stampede never happens. Best for a small set of known-hot keys.
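A minimal refresh-ahead sketch, assuming a hypothetical `startRefreshAhead` helper: a background loop re-runs each fetcher and overwrites the cache before the TTL runs out, so readers never see a miss. `cache` is any object with a `setex(key, ttl, value)` method, such as the shared ioredis client.

```javascript
// Refresh-ahead for a small set of known-hot keys. The interval is
// deliberately shorter than the TTL so entries are refreshed while
// still warm (here: every 200s for a 300s TTL).
function startRefreshAhead(cache, entries, { ttl = 300, intervalMs = 200000 } = {}) {
  async function refreshAll() {
    for (const { key, fetcher } of entries) {
      try {
        const fresh = await fetcher();
        // Overwrite with a full TTL; readers keep hitting a warm key
        await cache.setex(key, ttl, JSON.stringify(fresh));
      } catch (err) {
        // Leave the old value in place; it is valid until its own TTL
        console.error(`[refresh-ahead] ${key} failed:`, err.message);
      }
    }
  }
  const timer = setInterval(refreshAll, intervalMs);
  return { refreshAll, stop: () => clearInterval(timer) };
}
```

Because a failed fetch leaves the previous value untouched, a flaky upstream degrades to slightly staler data rather than a stampede.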


Common Mistakes

1. No TTL on cached values. Forgetting EXPIRE (or using plain SET instead of SETEX) means cached entries live forever. Redis fills up, starts evicting under memory pressure, and you spend Saturday debugging why the eviction policy is dropping the wrong keys. Always set a TTL — even a long one is better than none.

2. Caching without solving cache stampedes. Adding Redis without protection turns a slow database into a slow database that gets hammered every time a popular key expires. Use jittered TTLs, locks, or refresh-ahead for hot keys. Never assume "it's cached, we're safe."

3. Stale data because of missing invalidation. You cache a user profile for 10 minutes. The user updates their email. They reload the page. They see their old email. They submit a support ticket. Always invalidate (DEL key) on writes, or accept the staleness window in your TTL choice — and document it.

4. One Redis client per request. Calling new Redis() for every HTTP request opens a new TCP connection each time, exhausts file descriptors under load, and adds handshake latency. Create one client at startup and share it.

5. Using a subscribed client for normal commands. Once a Redis client calls SUBSCRIBE, it enters subscriber mode and rejects GET/SET/DEL. Always create a separate client for pub/sub work, distinct from your main command client.


Interview Questions

1. "Walk me through the cache-aside pattern. Why is it the most common caching strategy?"

Cache-aside is read-driven and lazy. On a read, the application first checks Redis. If the key exists ("cache hit"), it returns the value immediately. If the key is missing ("cache miss"), the application queries the database, stores the result in Redis with a TTL, and returns the result. Writes go directly to the database, and the cache is either invalidated (DEL) or allowed to expire on its own. It is the most common pattern because it is simple, the cache stays out of the write path (so failures in Redis never break writes), and only data that is actually requested gets cached — which keeps memory usage proportional to actual demand. The main downside is the first request after a miss is always slow, and you must handle cache stampedes when popular keys expire.

2. "What is a cache stampede and how do you prevent it?"

A cache stampede happens when a hot cached key expires and a flood of concurrent requests all miss the cache simultaneously. Each one independently runs the expensive backing query, hammering the database with hundreds or thousands of duplicate queries within milliseconds. Three defenses: First, add jitter to TTLs so keys expire at slightly different times instead of all at once after a deploy. Second, use a Redis-based lock — when a cache misses, the first request acquires a short lock with SET NX EX, fetches from the database, fills the cache, and releases the lock; concurrent requests detect the lock and briefly wait for the cache to be repopulated. Third, refresh-ahead: a background job refreshes hot keys before they expire, so the cache is never actually empty for them. In production you typically combine jitter with locking.

3. "Explain the difference between write-through, write-behind, and cache-aside. When would you pick each?"

Cache-aside leaves the cache out of the write path entirely — the application writes the database and either invalidates or refreshes the cache separately. It's simple and the cache failing never breaks writes. Use it as your default. Write-through writes to the cache and the database synchronously in the same operation; reads are always consistent because both stores are updated together, but every write pays double latency. Use write-through when reads dominate and you cannot tolerate even brief staleness — for example, financial display data. Write-behind writes only to the cache, and a background process flushes to the database asynchronously in batches. Writes are extremely fast because the database is off the critical path, but you risk losing recent writes if Redis crashes before the flush. Use write-behind for high-volume, loss-tolerant data like view counters, analytics events, or last-seen timestamps.

4. "How would you invalidate every cached page that mentions a specific product?"

Plain TTL is not enough — you need targeted invalidation. The standard approach is tag-based. When you cache a page, you also add the cache key to one or more Redis sets named after the tags it belongs to. For a product page, you'd add the cache key to tag:product:42 and tag:category:widgets. When product 42 is updated, you call SMEMBERS tag:product:42 to get every cache key that touches it, then DEL them in a single pipelined batch. In a multi-server deployment, you also publish an invalidation message on a Redis pub/sub channel so every application instance can drop any in-process caches it holds for the same data. For mission-critical invalidation where pub/sub's at-most-once delivery is too weak, use Redis Streams or a dedicated message bus.

5. "Why store sessions in Redis instead of in memory? What changes when you do?"

In-memory sessions break horizontal scaling. If a user logs in on server A and their next request lands on server B, server B has no record of the session and treats them as logged out. Sticky sessions partially fix this but couple users to specific servers, defeating load balancing and breaking when a server restarts. Storing sessions in Redis solves both problems: every server reads from the same shared store, so any server can handle any request, and a server restart or crash doesn't log anyone out. The trade-off is one extra network round trip per request to fetch the session. In practice this is sub-millisecond and worth it. With connect-redis and express-session, the swap is one line of configuration. The session cookie still lives in the browser, but the server-side state lives in Redis with a TTL matching the cookie's max age.


Quick Reference — Redis Caching Cheat Sheet

+---------------------------------------------------------------+
|              REDIS COMMANDS CHEAT SHEET                       |
+---------------------------------------------------------------+
|                                                                |
|   READS                                                        |
|   GET key                  Read a value                        |
|   EXISTS key               1 if key exists                     |
|   TTL key                  Seconds until expiry (-1 = none)    |
|                                                                |
|   WRITES                                                       |
|   SET key value            Store a value (no TTL)              |
|   SETEX key secs value     Store with TTL (atomic)             |
|   SET key value NX EX 5    Set if not exists + 5s TTL (lock)   |
|   INCR key                 Atomic increment                    |
|                                                                |
|   DELETES                                                      |
|   DEL key [key ...]        Delete one or more keys             |
|   EXPIRE key seconds       Add/change TTL on existing key      |
|                                                                |
|   SETS (for tag-based invalidation)                            |
|   SADD tag:foo key1 key2   Add keys to a tag set               |
|   SMEMBERS tag:foo         List keys in the tag set            |
|                                                                |
|   PUB/SUB                                                      |
|   PUBLISH channel message  Broadcast a message                 |
|   SUBSCRIBE channel        Receive messages (dedicated conn)   |
|                                                                |
+---------------------------------------------------------------+

+---------------------------------------------------------------+
|              KEY RULES                                         |
+---------------------------------------------------------------+
|                                                                |
|   1. Always set a TTL -- use SETEX, never plain SET            |
|   2. Namespace keys with colons: user:42, product:slug:foo     |
|   3. One shared client per process (not per request)           |
|   4. Add jitter to TTLs to avoid simultaneous expiry           |
|   5. Use SET NX EX as a lock to prevent stampedes              |
|   6. Pub/sub needs a dedicated subscriber connection           |
|   7. Invalidate on write OR accept the TTL staleness window    |
|   8. Cache-aside is the default; write-through if needed       |
|   9. Tag sets outlive the keys they track (longer TTL)         |
|  10. JSON.stringify on write, JSON.parse on read               |
|                                                                |
+---------------------------------------------------------------+
Strategy                 Read Speed            Write Speed          Consistency           Best For
Cache-aside              Fast (after warmup)   Normal               Eventual (TTL)        General purpose
Write-through            Fast                  Slow (2 writes)      Strong                Read-heavy critical data
Write-behind             Fast                  Very fast            Weak (risk of loss)   Counters, analytics
TTL only                 Fast                  N/A                  Bounded staleness     Public, slow-changing data
Tag-based invalidation   Fast                  Normal + tag write   Strong on write       Pages with related entities
Event-based (pub/sub)    Fast                  Normal + publish     Strong (best effort)  Multi-instance fleets



This is Lesson 7.4 of the Node.js Interview Prep Course -- 10 chapters, 42 lessons.
