V8 & Memory Management
How Node.js Uses Your RAM
LinkedIn Hook
"Your Node.js process just crashed at 3 AM with JavaScript heap out of memory. Again."
You scale the pods. You bump --max-old-space-size. The crashes slow down — but they never stop. Because the real enemy isn't RAM. It's a closure you wrote six months ago that's quietly holding references to every request your server has ever seen.
Most Node.js developers never learn how V8 actually stores their objects. They treat the heap like a black box, blame "the garbage collector" when things go wrong, and reach for bigger servers instead of bigger brains.
But V8 is not magic. It's a precise machine with two compilers (Ignition and TurboFan), four heap spaces (new space, old space, large object space, code space), and two garbage collectors (Scavenger and Mark-Sweep-Compact). Once you understand how they fit together, OOM crashes stop being mysterious — they become debuggable.
In Lesson 1.4, I break down V8's internals, how generational GC really works, what process.memoryUsage() is actually telling you, and how to take a heap snapshot that catches leaks red-handed.
Read the full lesson -> [link]
#NodeJS #V8 #MemoryManagement #BackendDevelopment #InterviewPrep
What You'll Learn
- How V8 sits inside Node.js and why Node.js inherits V8's limits
- The JIT compilation pipeline: Ignition interpreter to TurboFan optimizer
- Hidden classes and inline caches — why object shape matters
- The V8 heap structure: new space, old space, large object space, code space
- Generational garbage collection: Scavenger vs Mark-Sweep-Compact
- The --max-old-space-size flag and why bumping it is rarely the real fix
- How to identify memory leaks (growing old space, retained closures, global caches)
- Taking heap snapshots with --inspect and the heapdump module
- Common leak patterns and how to avoid them
The Warehouse Analogy — Why V8 Has Two Shelves
Imagine you run a warehouse. Every day, hundreds of packages arrive. Most of them — maybe 95% — are temporary: they get unpacked, the contents get used, and the empty boxes go straight to the recycler the same day. A few, though, are valuable long-term inventory: machinery, records, spare parts. Those get moved to the back shelves and stay for months or years.
A smart warehouse manager doesn't store everything the same way. Short-term packages live on a small fast-access shelf near the loading dock, swept clean several times a day. Long-term inventory lives on a huge back shelf that's only reorganized occasionally — because reorganizing it is expensive.
That's exactly how V8 manages memory. Most JavaScript objects are short-lived: request objects, temporary strings, intermediate calculations. They're born, used once, and die in milliseconds. A small number — your cache, your server instance, your connection pool — live for the entire lifetime of the process. V8 puts them in different "shelves" of the heap and cleans each shelf with a different strategy.
The small fast shelf is new space (young generation), swept by a fast copying GC called the Scavenger. The big back shelf is old space, cleaned less often by a slower but thorough Mark-Sweep-Compact collector. Objects that survive two Scavenger passes get "promoted" from the new shelf to the old shelf — just like packages that don't leave in the first 48 hours get moved to long-term storage.
When your app leaks memory, it's almost always because objects that should have died in new space are being kept alive by a reference you forgot about. They get promoted to old space, and old space keeps growing until you hit --max-old-space-size and crash.
+---------------------------------------------------------------+
| V8 INSIDE NODE.JS |
+---------------------------------------------------------------+
| |
| +---------------------------------------------------------+ |
| | NODE.JS PROCESS | |
| | | |
| | +----------------+ +-------------------------+ | |
| | | libuv | | V8 ENGINE | | |
| | | (event loop, |<----->| | | |
| | | thread pool) | | +-------------------+ | | |
| | +----------------+ | | Ignition (interp) | | | |
| | | +-------------------+ | | |
| | +----------------+ | +-------------------+ | | |
| | | Node.js C++ |<----->| | TurboFan (JIT) | | | |
| | | bindings | | +-------------------+ | | |
| | +----------------+ | +-------------------+ | | |
| | | | GC (Scavenger + | | | |
| | | | Mark-Sweep-Compact)| | | |
| | | +-------------------+ | | |
| | | +-------------------+ | | |
| | | | V8 HEAP | | | |
| | | +-------------------+ | | |
| | +-------------------------+ | |
| +---------------------------------------------------------+ |
| |
+---------------------------------------------------------------+
Napkin AI Visual Prompt: "Dark gradient (#0a1a0a -> #0d2e16). A large 'Node.js' outer container in Node green (#68a063). Inside, two main boxes: 'libuv' on the left (event loop, thread pool) and 'V8 Engine' on the right. Inside V8, nested boxes labeled 'Ignition', 'TurboFan', 'GC', and 'Heap'. Arrows show data flowing between libuv and V8. Amber (#ffb020) glow around the 'Heap' box. White monospace labels throughout."
The JIT Compilation Pipeline — Ignition and TurboFan
V8 does not simply interpret JavaScript. It runs a two-tier pipeline that trades startup speed for peak performance.
Tier 1: Ignition (the interpreter)
When V8 first sees your JavaScript, it parses it into an AST, then generates bytecode for its internal interpreter called Ignition. Bytecode runs immediately — no waiting for optimization. This is why Node.js starts fast even for large applications.
Ignition also collects type feedback as it runs. Every time a function executes, Ignition records the shapes of the arguments it saw, the types of values it returned, and how often the function was called. This profile is the raw material TurboFan uses later.
Tier 2: TurboFan (the optimizing compiler)
When a function becomes "hot" — usually after being called a few thousand times — V8 hands it to TurboFan, the optimizing compiler. TurboFan uses the type feedback Ignition collected to generate optimized machine code that assumes those types will stay the same. A function that always received two integers gets compiled with integer-add machine instructions, skipping all the dynamic dispatch JavaScript normally requires.
But here's the catch: if your "hot" function suddenly receives a string where it used to get a number, TurboFan's assumptions break. V8 triggers a deoptimization, throws away the optimized code, and falls back to Ignition bytecode. Your function just got slower.
+---------------------------------------------------------------+
| V8 JIT COMPILATION PIPELINE |
+---------------------------------------------------------------+
| |
| Source Code -> Parser -> AST -> Ignition Bytecode |
| | |
| v |
| +-------------------+ |
| | Run + profile | |
| | (type feedback) | |
| +-------------------+ |
| | |
| function hot? (many calls) |
| | |
| v |
| +-------------------+ |
| | TurboFan | |
| | (optimized code) | |
| +-------------------+ |
| | |
| type assumptions broken? |
| | |
| v |
| +-------------------+ |
| | Deoptimize | |
| | (back to bytecode) |
| +-------------------+ |
| |
+---------------------------------------------------------------+
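A minimal sketch of the optimize/deoptimize cycle described above (the function and iteration counts are illustrative; V8's actual hotness thresholds are internal and version-dependent):

```javascript
// add() always sees two numbers during warm-up, so Ignition's type
// feedback stays monomorphic and TurboFan can emit integer arithmetic.
function add(a, b) {
  return a + b;
}

// Warm-up: thousands of calls with stable types make add() "hot".
let total = 0;
for (let i = 0; i < 100000; i++) {
  total += add(i, 1);
}

// Type change: string operands violate the compiled code's assumptions.
// V8 deoptimizes add() back to bytecode; the result is still correct,
// just slower until the function re-stabilizes.
const label = add('user-', '42'); // 'user-42'
```

Run this with `node --trace-opt --trace-deopt` to watch V8 log the optimization and the subsequent deopt (the log format varies by version).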
Hidden Classes and Inline Caches (briefly)
JavaScript objects look like hash maps, but V8 treats them like structs. When you write { x: 1, y: 2 }, V8 creates a hidden class (also called a "shape" or "map") that describes the object's layout. Two objects created with the same properties in the same order share the same hidden class — and that means TurboFan can compile property accesses to direct memory offsets instead of hash lookups.
Inline caches are the mechanism that remembers "last time I accessed .x on an object with hidden class C, the offset was 16." Next time, V8 skips the lookup entirely. This is the single biggest reason modern JavaScript is fast.
The practical rule: always create objects with the same properties in the same order. Adding a property later, deleting a property, or mixing insertion orders creates new hidden classes and blows up the inline cache — silently slowing your hot path.
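The rule in code form (function names are illustrative):

```javascript
// GOOD: every point gets its properties in the same order, so all
// points share one hidden class and property loads stay monomorphic.
function makePoint(x, y) {
  return { x, y };
}

// BAD: a different insertion order produces a different hidden class
// for objects that "look" identical. Any call site that sees both
// shapes becomes polymorphic and its inline cache degrades.
function makePointFlipped(x, y) {
  const p = {};
  p.y = y; // different first property -> different shape transition chain
  p.x = x;
  return p;
}

// Semantically equal, but V8 tracks them as two distinct shapes.
const a = makePoint(1, 2);
const b = makePointFlipped(1, 2);
```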
The V8 Heap — Four Spaces, One Process
When Node.js starts, V8 allocates a heap divided into several logical "spaces." Each space has a specific purpose and its own allocation and collection strategy.
+---------------------------------------------------------------+
| V8 HEAP STRUCTURE |
+---------------------------------------------------------------+
| |
| +-----------------------+ +-----------------------------+ |
| | NEW SPACE | | OLD SPACE | |
| | (young generation) | | (long-lived objects) | |
| | | | | |
| | ~1-8 MB | | ~1.4 GB default (64-bit) | |
| | | | | |
| | +----------+ | | cache, server instance, | |
| | | From | | | connection pool, etc. | |
| | +----------+ | | | |
| | +----------+ | | collected by | |
| | | To | | | Mark-Sweep-Compact | |
| | +----------+ | | | |
| | | | | |
| | collected by | | | |
| | Scavenger (fast) | | | |
| +-----------------------+ +-----------------------------+ |
| |
| +-----------------------+ +-----------------------------+ |
| | LARGE OBJECT SPACE | | CODE SPACE | |
| | | | | |
| | objects > ~512 KB | | JIT-compiled machine code | |
| | allocated directly, | | (from TurboFan) | |
| | never moved | | | |
| +-----------------------+ +-----------------------------+ |
| |
+---------------------------------------------------------------+
New Space (Young Generation)
Small, roughly 1-8 MB by default. Split into two halves called from-space and to-space. All new object allocations happen here. When from-space fills up, V8 runs the Scavenger: it copies every still-reachable object into to-space, then swaps the roles. Dead objects are simply left behind — no explicit free, no marking, no sweeping. This is called a Cheney-style copying collector, and it's extremely fast because it only touches live objects.
Old Space
This is where long-lived objects live. Any object that survives two Scavenger passes in new space gets promoted to old space. Old space is where your caches, module-level variables, HTTP server instance, and database connection pools live. Old space defaults to roughly 1.4 GB on 64-bit systems in older Node.js versions, and higher on recent versions. It's also the number you change with --max-old-space-size.
Large Object Space
Any allocation larger than ~512 KB bypasses new space entirely and goes directly into large object space. Large objects are never copied (too expensive) — they're allocated in their own pages and freed in place. Big buffers, giant strings, and large arrays end up here.
Code Space
Where V8 stores the JIT-compiled machine code produced by TurboFan. You rarely think about code space, but if you dynamically generate a lot of functions (e.g., template engines compiling at runtime), code space can grow significantly.
Generational Garbage Collection
V8's GC design rests on one empirical observation called the generational hypothesis: most objects die young. If you allocate 1,000,000 objects, roughly 950,000 of them will be garbage within milliseconds. Only a few thousand will survive long enough to matter. V8 exploits this by collecting new space aggressively and old space lazily.
The Scavenger (Minor GC)
Runs against new space only. Very fast — usually under 1 ms. Triggered whenever new space fills up. Steps:
- Walk the roots (the stack, globals, handles) to find every live object in new space.
- Copy every reachable object from from-space to to-space.
- Update all pointers to point to the new locations.
- Swap from-space and to-space. Anything left in old from-space is garbage — just abandoned.
- Objects that survive two Scavenger passes are promoted into old space.
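The steps above can be sketched as a toy semispace copy. Plain JavaScript objects stand in for heap cells here; this is an illustration of the algorithm, not V8's implementation:

```javascript
// Each "cell" is { value, refs: [otherCells] }. Scavenging copies only
// what is reachable from the roots; garbage is never even visited.
function scavenge(roots) {
  const toSpace = [];
  const forwarded = new Map(); // old cell -> its copy (forwarding pointer)

  function copy(cell) {
    if (forwarded.has(cell)) return forwarded.get(cell); // already moved
    const clone = { value: cell.value, refs: [] };
    forwarded.set(cell, clone); // record before recursing (handles cycles)
    toSpace.push(clone);
    clone.refs = cell.refs.map(copy); // update pointers to new locations
    return clone;
  }

  // After copying, the old from-space is abandoned wholesale.
  return { toSpace, roots: roots.map(copy) };
}

// a -> b is live; c is garbage and is never touched by the collector.
const cellB = { value: 'b', refs: [] };
const cellA = { value: 'a', refs: [cellB] };
const cellC = { value: 'c', refs: [] };
const result = scavenge([cellA]);
// result.toSpace holds copies of a and b only; c was simply left behind.
```

Notice the cost is proportional to *live* data, which is why the Scavenger is so cheap when most objects are already dead.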
Mark-Sweep-Compact (Major GC)
Runs against old space. Much slower — tens to hundreds of milliseconds on a large heap. Triggered when old space gets close to the limit. Steps:
- Mark: walk the object graph from roots and mark every reachable object.
- Sweep: walk old space and free every unmarked object.
- Compact: optionally move live objects together to reduce fragmentation.
Modern V8 runs most of this work incrementally and concurrently on background threads, so the main thread pauses are much shorter than in older Node.js versions. But the fundamental cost remains: the more live data you have in old space, the slower major GC becomes.
The --max-old-space-size Flag
By default, Node.js caps old space at roughly 1.4 GB on 64-bit systems (it varies by version and architecture). You can raise this cap:
# Raise old space cap to 4 GB
node --max-old-space-size=4096 server.js
# Check current defaults and all V8 options
node --v8-options | grep -i "max.*size"
This is almost never the right fix for a memory leak. Bumping --max-old-space-size just delays the crash — if your app leaks, it will still crash, just later and with a bigger heap dump. Use the flag when your app legitimately needs more memory (large caches, big data processing), not as a Band-Aid for a leak you haven't investigated.
A useful rule: set the flag to roughly 75% of the container's memory limit. If your Docker container has 2 GB, set --max-old-space-size=1536. This leaves headroom for Buffers (which live outside the V8 heap), native modules, and the OS.
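The arithmetic as a tiny helper (the 75% ratio is the rule of thumb above, not a V8 requirement; the function name is illustrative):

```javascript
// Derive a --max-old-space-size value (in MB) from a container limit.
function oldSpaceFlagMb(containerLimitMb, ratio = 0.75) {
  return Math.floor(containerLimitMb * ratio);
}

console.log(oldSpaceFlagMb(2048)); // 1536 -> --max-old-space-size=1536
console.log(oldSpaceFlagMb(4096)); // 3072
```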
Example 1: A Classic Memory Leak — The Global Cache
// A leaky in-memory cache.
// Every request adds an entry. Nothing is ever removed.
// Old space grows forever until the process is killed by OOM.
const cache = new Map();
function handleRequest(req, res) {
// Use the full URL as the cache key (includes query string, so unbounded)
const key = req.url;
if (cache.has(key)) {
return res.end(cache.get(key));
}
// Simulate an expensive computation producing a large payload
const payload = computeExpensiveResponse(req);
// Store forever — this is the leak
cache.set(key, payload);
res.end(payload);
}
// Why this leaks:
// 1. Every unique URL becomes a new Map entry.
// 2. The Map is module-scoped, so entries live in OLD SPACE.
// 3. Nothing ever calls cache.delete() — entries are permanently reachable.
// 4. Old space grows on every new request until --max-old-space-size is hit.
The fix: bound the cache. Use an LRU policy, a TTL, or a size cap. A Map with no eviction is not a cache — it's a memory leak with extra steps.
// A bounded LRU-style cache using a simple size limit.
// Entries beyond MAX are evicted in insertion order.
const MAX = 1000;
const cache = new Map();
function set(key, value) {
// If the key already exists, delete it so re-insertion moves it to the end
if (cache.has(key)) cache.delete(key);
cache.set(key, value);
// Evict the oldest entry (first key) when we exceed the cap
if (cache.size > MAX) {
const oldest = cache.keys().next().value;
cache.delete(oldest);
}
}
function get(key) {
if (!cache.has(key)) return undefined;
// Touch: delete and re-set to mark as most recently used
const value = cache.get(key);
cache.delete(key);
cache.set(key, value);
return value;
}
Example 2: WeakMap for Reference-Safe Caching
When you want to attach data to an object without preventing that object from being garbage collected, use a WeakMap. Keys in a WeakMap are held weakly — if no other references to the key exist, both the key and its value become eligible for GC automatically.
// A per-request metadata store that does NOT leak.
// When the request object is garbage collected,
// the WeakMap entry disappears automatically.
const requestMetadata = new WeakMap();
function onRequest(req, res) {
// Attach metadata keyed by the request object itself
requestMetadata.set(req, {
startTime: Date.now(),
userId: req.headers['x-user-id'],
traceId: req.headers['x-trace-id'],
});
res.on('finish', () => {
const meta = requestMetadata.get(req);
const duration = Date.now() - meta.startTime;
console.log(`request ${meta.traceId} took ${duration}ms`);
// No need to call requestMetadata.delete(req).
// Once the req object is no longer referenced elsewhere,
// the WeakMap entry becomes unreachable and GC collects it.
});
}
// Why this is safe:
// - WeakMap keys must be objects.
// - WeakMap holds keys WEAKLY — they do not prevent GC.
// - No explicit cleanup needed. No risk of a growing Map.
// - WeakMap is NOT iterable, which is a feature: you cannot
// accidentally walk over dead entries.
Example 3: Reading process.memoryUsage()
Node.js exposes live heap statistics through process.memoryUsage(). This is your first line of defense when diagnosing memory problems.
// Print memory usage every 5 seconds.
// Watch which numbers grow over time — growth in heapUsed
// (specifically in old space) is the signature of a leak.
function formatMB(bytes) {
return (bytes / 1024 / 1024).toFixed(2) + ' MB';
}
setInterval(() => {
const mem = process.memoryUsage();
console.log({
// Resident Set Size: total memory allocated to the Node.js process
// by the OS, including heap, code, and C++ objects
rss: formatMB(mem.rss),
// Total size of the V8 heap (new + old + large object + code)
heapTotal: formatMB(mem.heapTotal),
// Actual JavaScript objects currently alive in the heap
heapUsed: formatMB(mem.heapUsed),
// Memory used by C++ objects bound to JavaScript objects
// (Buffers, TLS sockets, native addons)
external: formatMB(mem.external),
// Memory allocated for ArrayBuffers and SharedArrayBuffers
arrayBuffers: formatMB(mem.arrayBuffers),
});
}, 5000);
// Interpretation:
// - heapUsed climbing continuously -> likely a JavaScript-level leak
// - external/arrayBuffers climbing -> likely a Buffer leak
// - rss climbing but heapUsed flat -> likely a native-module leak
// - all numbers flat after warm-up -> healthy steady state
A stable, healthy Node.js process shows a sawtooth pattern: heapUsed rises between GCs and drops after each major GC, oscillating around a stable baseline. A leaking process shows a staircase: heapUsed rises, drops partially, and each new baseline is higher than the last.
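That distinction can be checked mechanically. A sketch of a staircase detector over collected heapUsed samples (the tolerance and the local-minimum heuristic are illustrative):

```javascript
// Given periodic heapUsed samples (bytes), approximate the post-GC
// baselines as local minima, then check whether they keep climbing.
function looksLikeStaircase(samples, toleranceBytes = 1024 * 1024) {
  const troughs = [];
  for (let i = 1; i < samples.length - 1; i++) {
    if (samples[i] < samples[i - 1] && samples[i] <= samples[i + 1]) {
      troughs.push(samples[i]); // a post-GC low point
    }
  }
  if (troughs.length < 2) return false;
  // Healthy sawtooth: troughs roughly flat.
  // Staircase: every trough sits noticeably above the previous one.
  return troughs.every(
    (t, i) => i === 0 || t - troughs[i - 1] > toleranceBytes
  );
}

const MB = 1024 * 1024;
const sawtooth = [10, 20, 10, 20, 10, 20, 10].map((x) => x * MB);
const staircase = [10, 20, 12, 22, 14, 24, 16].map((x) => x * MB);
```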
Example 4: Taking a Heap Snapshot
When process.memoryUsage() confirms you're leaking, the next step is to capture a heap snapshot — a complete dump of every live object in the V8 heap, which you can open in Chrome DevTools to find who is holding what.
Option A: The built-in v8 module (no extra dependency)
// Programmatic heap snapshot — works in any modern Node.js version.
// Triggered by SIGUSR2, a signal commonly used for diagnostics.
const v8 = require('v8');
const fs = require('fs');
function takeSnapshot() {
  const filename = `heap-${Date.now()}.heapsnapshot`;
  // getHeapSnapshot() returns a readable stream of the snapshot data
  const stream = v8.getHeapSnapshot();
  const out = fs.createWriteStream(filename);
  stream.pipe(out);
  out.on('finish', () => {
    console.log(`heap snapshot written to ${filename}`);
  });
}
// Trigger on SIGUSR2 so you can dump the heap on demand
// without restarting the process:
//
// kill -SIGUSR2 <pid>
//
process.on('SIGUSR2', takeSnapshot);
Option B: The --inspect flag + Chrome DevTools
# Start the process with the inspector enabled
node --inspect server.js
# Open Chrome and visit:
# chrome://inspect
#
# Click "inspect" under your Node.js target,
# go to the Memory panel, and click "Take heap snapshot".
# You can take multiple snapshots and diff them — this is
# how you find what is growing between two points in time.
Reading a snapshot
In Chrome DevTools' Memory panel, the Comparison view between two snapshots shows you which object types grew. The Retainers panel shows the chain of references keeping an object alive — this is how you find the closure, Map, or global that owns your leaked data. Look for:
- Objects with huge Retained Size that you didn't expect to exist.
- Long retention chains ending in a Closure (context) — a classic sign of a captured variable.
- (array) or Map entries that are many times larger than they should be.
Common Mistakes
1. Treating --max-old-space-size as a fix for leaks.
Bumping the heap limit buys you time, not a solution. If your process leaked 500 MB in two hours at the old limit, it will leak 1 GB in four hours at the new limit. Always investigate a leak before touching the flag.
2. Caching with an unbounded Map.
Using new Map() as a cache with no eviction policy is the #1 Node.js memory leak in production. Always pick an eviction strategy: LRU, TTL, or a hard size cap. Prefer a real library (lru-cache) over rolling your own.
3. Registering event listeners without removing them.
emitter.on(...) inside a request handler, a timer, or a long-lived loop accumulates listeners forever. Every listener captures its closure, which captures everything it references. Always pair .on() with .off(), or use .once() when possible.
4. Global arrays that log or buffer forever.
const requests = []; requests.push(req); inside a handler is an immediate, obvious leak. Less obvious variants: debug buffers, rolling metrics arrays, in-memory queues with no drain. If it's module-scoped and it grows, it leaks.
5. Mixing object shapes in hot paths.
Creating objects with the same properties in different orders, or adding/deleting properties at runtime, creates many hidden classes and destroys inline caches. This isn't a leak — it's a silent performance cliff. Always initialize every property in the same order, even if the value is null.
6. Assuming Buffers live in the V8 heap.
Buffers (and other native memory) live outside the V8 heap. A Buffer leak won't show up in heapUsed — you'll see it in external and rss. If heapUsed is flat but rss is climbing, you're leaking native memory, not JavaScript objects.
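You can verify this directly (the 50 MB allocation size is arbitrary):

```javascript
// Allocate 50 MB of Buffer memory and watch which counters move.
const before = process.memoryUsage();
const buf = Buffer.alloc(50 * 1024 * 1024);
const after = process.memoryUsage();

const mb = (n) => (n / 1024 / 1024).toFixed(1);
console.log('heapUsed delta:', mb(after.heapUsed - before.heapUsed), 'MB');
console.log('external delta:', mb(after.external - before.external), 'MB');
// The ~50 MB shows up in external/arrayBuffers, while heapUsed barely
// moves: only the small Buffer wrapper object lives in the V8 heap.
```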
Interview Questions
1. "What are the different spaces in the V8 heap, and why does V8 separate them?"
V8 divides the heap into several spaces because different objects have different lifetimes and access patterns, and one-size-fits-all GC is slow. The main spaces are: new space (young generation), small (~1-8 MB), where all new allocations happen and the fast Scavenger runs; old space, large (~1.4 GB default on 64-bit), where long-lived objects live after being promoted from new space, collected by Mark-Sweep-Compact; large object space, where allocations larger than ~512 KB go directly (never copied, too expensive); and code space, where TurboFan stores JIT-compiled machine code. The separation exists because of the generational hypothesis: most objects die young, so collecting new space aggressively and old space lazily is far faster than collecting everything uniformly.
2. "What is the difference between the Scavenger and Mark-Sweep-Compact?"
The Scavenger is a minor GC that runs only on new space. It uses a Cheney-style copying algorithm: it copies every live object from from-space to to-space, then swaps them. Dead objects are abandoned, not explicitly freed. It runs in under 1 ms because it only touches live objects, and since most new-space objects are dead, there's very little to copy. Mark-Sweep-Compact is a major GC that runs on old space. It walks the entire object graph from roots (mark), frees everything unmarked (sweep), and optionally compacts live objects to reduce fragmentation (compact). It's much slower than the Scavenger — tens to hundreds of milliseconds — because old space is much larger and most old-space objects are alive. Modern V8 runs major GC incrementally and concurrently, so the main-thread pause is shorter than the total GC work.
3. "What is the role of Ignition and TurboFan in V8?"
Ignition is V8's bytecode interpreter. When V8 parses JavaScript, it generates Ignition bytecode that runs immediately — this is why Node.js starts fast. Ignition also collects type feedback as it runs: for each function, it records the argument types, return types, and call frequency. TurboFan is V8's optimizing compiler. When a function becomes "hot" (called thousands of times), V8 hands it to TurboFan, which uses Ignition's type feedback to generate optimized machine code that assumes those types will stay stable. The result is massively faster execution — close to native speed for well-behaved code. If the type assumptions break (e.g., you pass a string to a function that always got numbers), V8 deoptimizes: it throws away the optimized code and falls back to Ignition bytecode until the function re-stabilizes.
4. "How would you identify and fix a memory leak in a Node.js process?"
First, confirm the leak exists by logging process.memoryUsage() over time. A leak looks like a rising heapUsed baseline — a staircase pattern instead of a sawtooth. Second, localize it by taking heap snapshots at two points in time, either with node --inspect plus Chrome DevTools' Memory panel or programmatically with v8.getHeapSnapshot(). Compare the two snapshots: Chrome's Comparison view shows which object types grew. Third, trace retention by looking at the Retainers panel in DevTools — it shows the chain of references keeping each leaked object alive. The chain usually ends at a closure, a global Map, an event emitter, or a module-scoped array. Fourth, fix the root cause: add eviction to unbounded caches, remove stale listeners with .off(), use WeakMap/WeakRef when appropriate, and null out large references when you're done with them. Only raise --max-old-space-size after confirming the growth is legitimate working-set growth, not a leak.
5. "What does --max-old-space-size control, and when should you change it?"
--max-old-space-size sets the maximum size (in megabytes) of V8's old space — the heap region where long-lived objects live. The default is roughly 1.4 GB on 64-bit systems in older Node.js versions, higher in recent ones. When old space hits this cap and Mark-Sweep-Compact can't free enough memory, Node.js crashes with JavaScript heap out of memory. You should raise it when your application legitimately needs more memory — large in-memory caches, big data processing, or large variable-size working sets. You should not raise it as a fix for a memory leak; that just postpones the crash. A common production setting is roughly 75% of the container's memory limit (e.g., --max-old-space-size=1536 inside a 2 GB container), leaving headroom for Buffers, native memory, and the OS. Note that this flag only controls old space — it doesn't limit Buffers, ArrayBuffers, or native addon memory, which live outside the V8 heap.
Quick Reference — V8 & Memory Cheat Sheet
+---------------------------------------------------------------+
| V8 HEAP SPACES |
+---------------------------------------------------------------+
| |
| NEW SPACE small, ~1-8 MB |
| new allocations, Scavenger GC |
| |
| OLD SPACE large, ~1.4 GB default |
| promoted objects, Mark-Sweep-Compact |
| |
| LARGE OBJECT objects > ~512 KB |
| SPACE never moved, allocated in own pages |
| |
| CODE SPACE TurboFan-compiled machine code |
| |
+---------------------------------------------------------------+
+---------------------------------------------------------------+
| GARBAGE COLLECTORS |
+---------------------------------------------------------------+
| |
| SCAVENGER (minor GC) |
| runs on: new space |
| speed: sub-millisecond |
| method: Cheney copy from-space -> to-space |
| promote: objects surviving 2 passes -> old space |
| |
| MARK-SWEEP-COMPACT (major GC) |
| runs on: old space |
| speed: 10s to 100s of ms (mostly concurrent/incremental) |
| method: mark reachable, sweep dead, compact live |
| |
+---------------------------------------------------------------+
+---------------------------------------------------------------+
| COMMANDS & FLAGS |
+---------------------------------------------------------------+
| |
| node --max-old-space-size=4096 server.js |
| raise old space cap to 4 GB |
| |
| node --inspect server.js |
| enable DevTools inspector for heap snapshots |
| |
| node --trace-gc server.js |
| log every GC event to stderr |
| |
| node --v8-options |
| list every V8 flag available |
| |
| kill -SIGUSR2 <pid> |
| trigger heap snapshot (if handler registered) |
| |
+---------------------------------------------------------------+
+---------------------------------------------------------------+
| LEAK DETECTION CHECKLIST |
+---------------------------------------------------------------+
| |
| 1. Log process.memoryUsage() every 5s in production |
| 2. Watch for staircase in heapUsed (not sawtooth) |
| 3. Take 2+ heap snapshots with --inspect |
| 4. Compare snapshots in DevTools Memory panel |
| 5. Inspect Retainers to find the owning reference |
| 6. Fix unbounded caches, stale listeners, global arrays |
| 7. Use WeakMap/WeakRef for reference-safe caching |
| |
+---------------------------------------------------------------+
| Symptom | Likely Cause | Where to Look |
|---|---|---|
| heapUsed staircase | JS-level leak | Heap snapshot, Retainers panel |
| external climbing | Buffer leak | Native allocations, streams not closed |
| rss climbing, heapUsed flat | Native module leak | C++ addons, libuv handles |
| Long GC pauses | Old space too large | --trace-gc, reduce retained set |
| OOM after hours | Slow leak | Snapshot diff over time |
| OOM at startup | Working set > limit | Raise --max-old-space-size |
Prev: Lesson 1.3 -- libuv and the Thread Pool Next: Lesson 1.5 -- Blocking vs Non-Blocking
This is Lesson 1.4 of the Node.js Interview Prep Course -- 10 chapters, 42 lessons.