Node.js Interview Prep
Asynchronous Patterns

Worker Threads

CPU Parallelism in Node.js

LinkedIn Hook

"Your Node.js server handles 10,000 concurrent requests effortlessly -- until one user uploads a 20MB image for resizing. Suddenly every other request hangs for 4 seconds. Why?"

Because Node.js runs your JavaScript on a single thread. The event loop is brilliant at juggling I/O, but the moment you ask it to do real CPU work -- image processing, PDF generation, bcrypt, heavy regex, JSON over megabytes -- everything else stops. The loop is blocked.

For years the only escape was spawning a child process: expensive, slow to start, and awkward to communicate with. Then Node added worker_threads: real OS threads, sharing the same process, talking via fast message passing or even shared memory. True parallelism, finally, inside a single Node process.

Most developers never touch them. The ones who do can keep an API responsive while crunching numbers in the background -- and that's exactly the kind of thing senior interviewers love to dig into.

In Lesson 5.4, I break down worker_threads end to end: when to reach for them, how postMessage and SharedArrayBuffer actually work, how to build a worker pool, and how it compares to child_process and cluster.

Read the full lesson -> [link]

#NodeJS #WorkerThreads #Concurrency #BackendEngineering #InterviewPrep




What You'll Learn

  • Why and when single-threaded Node.js is not enough (CPU-bound vs I/O-bound)
  • The worker_threads module API: Worker, parentPort, workerData, isMainThread
  • How postMessage and on('message') communicate between threads
  • Sharing memory with SharedArrayBuffer and Atomics for zero-copy state
  • Using transferList to move large buffers without copying them
  • Inline workers via the eval flag or data URLs (no separate file)
  • How to build a minimal worker pool, and when to reach for piscina
  • The 3-way comparison: worker_threads vs child_process vs cluster

The Kitchen Analogy — Hiring a Prep Cook

Imagine a chef running a busy restaurant alone. He's incredibly fast at coordinating: he takes orders, plates dishes, answers the phone, talks to suppliers. As long as every task is short, he keeps up. He is the Node.js event loop -- a single, fast, reactive worker.

Now a customer orders something that requires 30 minutes of dicing onions. The chef can do it. But while he's chopping, nobody else gets served. Orders pile up. Phone rings unanswered. The whole restaurant freezes -- not because the chef is slow, but because he's the only one, and he's busy.

The chef's solution is obvious: hire a prep cook. Send him into the back kitchen with the onions. He works in parallel, using his own knife, his own board, his own hands. When he's done, he passes the chopped onions through the kitchen window. The chef never stopped serving customers.

That is exactly what a worker thread is. A second JavaScript runtime, with its own event loop and its own V8 isolate, running in parallel inside the same Node.js process. Your main thread keeps serving HTTP requests. The worker chops the onions -- resizes the image, hashes the password, parses the gigantic CSV -- and hands the result back through postMessage. The kitchen never freezes again.

+---------------------------------------------------------------+
|           SINGLE THREAD (The Problem)                         |
+---------------------------------------------------------------+
|                                                                |
|  Main thread:  [req1][req2][======= RESIZE IMAGE =======][req3]|
|                                       ^                        |
|                                       |                        |
|                                req2,req3 BLOCKED               |
|                                for 4000 ms                     |
|                                                                |
+---------------------------------------------------------------+

+---------------------------------------------------------------+
|           WORKER THREADS (The Solution)                       |
+---------------------------------------------------------------+
|                                                                |
|  Main thread:    [req1][req2][req3][req4][req5][req6][req7]    |
|                            \                       /          |
|                             postMessage      message          |
|                              \                 /              |
|  Worker thread:               [== RESIZE IMAGE ==]             |
|                                                                |
|  Main loop never blocks. CPU work runs in parallel.            |
|                                                                |
+---------------------------------------------------------------+

When the Single Thread Isn't Enough

Node.js is famously good at I/O-bound workloads -- HTTP servers, databases, file streaming, message queues. The event loop handles thousands of concurrent sockets because it never waits: it hands the work to the kernel and processes the next thing.

But the moment your code spends time computing instead of waiting, the same architecture becomes a liability. The event loop is JavaScript's only worker, and JavaScript is single-threaded. Every millisecond it spends hashing a password is a millisecond no other request can be served.

Typical CPU-bound tasks that justify a worker thread:

  • Image processing -- resizing, cropping, format conversion (sharp, jimp)
  • PDF generation -- rendering large reports with pdfkit or puppeteer-style work
  • Cryptography -- bcrypt rounds, scrypt, AES on big payloads, signing
  • Heavy regex -- catastrophic backtracking on user-supplied patterns
  • Parsing or transforming large data -- multi-MB JSON, CSV, XML
  • Compression -- gzip/brotli on big buffers (the sync variants)
  • Machine learning inference -- running a small model on each request

The rule of thumb: if a single function call takes more than ~50 ms of pure JavaScript CPU work, it's a candidate for a worker. Below that, the cost of message-passing usually outweighs the parallelism benefit.


The worker_threads Module API

worker_threads is a built-in Node.js module (stable since Node 12). The four most important exports:

// Built-in module, no install required
const {
  Worker,        // Constructor for spawning a new worker thread
  isMainThread,  // true in the main thread, false in a worker
  parentPort,    // MessagePort to talk back to the parent (worker side)
  workerData,    // Initial data passed when the worker was created
} = require('node:worker_threads');

A worker is just another JavaScript file (or inline string). It runs in a separate V8 isolate -- a separate heap, separate event loop, separate globalThis. The two threads share nothing by default. They communicate exclusively through messages, which are structured-cloned (the same algorithm browsers use for postMessage).


Example 1 -- Fibonacci in a Worker (main + worker file)

A classic CPU-bound benchmark: computing Fibonacci recursively. On the main thread it would block the event loop for seconds. In a worker it runs in parallel.

// fib-worker.js  -- runs inside the worker thread
// This file has no HTTP server, no Express, just compute.
const { parentPort, workerData } = require('node:worker_threads');

// Pure CPU work -- intentionally naive recursive fib
function fib(n) {
  if (n < 2) return n;
  return fib(n - 1) + fib(n - 2);
}

// workerData was passed by the parent in `new Worker(..., { workerData })`
const result = fib(workerData.n);

// Send the result back to the parent and exit
parentPort.postMessage({ n: workerData.n, result });

// main.js  -- the main Node.js process
const { Worker } = require('node:worker_threads');
const path = require('node:path');

// Helper that wraps Worker creation in a Promise
function runFib(n) {
  return new Promise((resolve, reject) => {
    // Spawn a new worker thread, pointing at the worker file
    const worker = new Worker(path.resolve(__dirname, 'fib-worker.js'), {
      workerData: { n },   // Initial payload available as `workerData` inside
    });

    // Listen for messages coming from the worker
    worker.on('message', resolve);

    // Worker errors (uncaught exceptions inside the worker)
    worker.on('error', reject);

    // Non-zero exit code means the worker crashed
    worker.on('exit', (code) => {
      if (code !== 0) reject(new Error(`Worker stopped with code ${code}`));
    });
  });
}

// The main thread stays responsive while fib(42) runs in parallel
console.time('fib');
runFib(42).then((msg) => {
  console.log('Got result:', msg);
  console.timeEnd('fib');
});

// This log appears IMMEDIATELY -- the main thread is not blocked
console.log('Main thread is still free to do other work');

The key observation: the console.log after runFib prints before the result, because the heavy computation runs on a separate OS thread. The main event loop never paused.


Example 2 -- postMessage and Bidirectional Data Transfer

Workers are not one-shot. You can keep them alive and exchange many messages in both directions. Each side has a MessagePort -- the parent gets one from the Worker instance, the child gets one as parentPort.

// echo-worker.js
const { parentPort } = require('node:worker_threads');

// Listen for messages from the parent
parentPort.on('message', (msg) => {
  // Echo back with some transformation
  // Note: every message is structured-cloned, not shared
  parentPort.postMessage({
    received: msg,
    processedAt: Date.now(),
    upper: typeof msg.text === 'string' ? msg.text.toUpperCase() : null,
  });
});

// Optional: graceful shutdown when the parent says "close"
parentPort.on('close', () => {
  // Final cleanup before the worker exits
});

// main.js
const { Worker } = require('node:worker_threads');

// Long-lived worker -- created once, reused for many requests
const worker = new Worker('./echo-worker.js');

// Handle every message coming back from the worker
worker.on('message', (response) => {
  console.log('Worker said:', response);
});

// Send three messages -- the worker processes them in order
worker.postMessage({ text: 'hello' });
worker.postMessage({ text: 'parallel' });
worker.postMessage({ text: 'world' });

// When you are done with the worker, terminate it explicitly.
// Otherwise the Node process will not exit.
setTimeout(() => worker.terminate(), 1000);

Transferring Buffers Without Copying

By default, postMessage copies the data (structured clone). For large ArrayBuffer payloads -- megabytes of pixels, audio samples, parsed protobuf -- copying is wasteful. The second argument to postMessage is a transferList: ownership of those buffers is moved to the other side, with zero copy. After transfer, the sending side can no longer access them.

// Allocate a 10 MB buffer in the main thread
const big = new ArrayBuffer(10 * 1024 * 1024);

// Transfer ownership to the worker -- no memory copy happens.
// `big` becomes detached on this side after the call.
worker.postMessage({ pixels: big }, [big]);

// Trying to use `big` here would throw -- it has been moved.

Example 3 -- SharedArrayBuffer and Atomics

transferList moves a buffer once. SharedArrayBuffer is something stronger: a chunk of memory visible to both threads at the same time. No messages, no copies -- both threads read and write the same bytes. This is real shared-memory parallelism, and it requires Atomics to coordinate safely.

A worked example: a counter that two threads increment in parallel.

// counter-worker.js
const { parentPort, workerData } = require('node:worker_threads');

// Wrap the shared buffer in a typed array view.
// Both threads see the same underlying bytes.
const counter = new Int32Array(workerData.shared);

// Atomically increment the counter 1,000,000 times
for (let i = 0; i < 1_000_000; i++) {
  // Atomics.add returns the OLD value and applies the update
  // safely even if another thread is writing at the same time.
  Atomics.add(counter, 0, 1);
}

parentPort.postMessage('done');

// main.js
const { Worker } = require('node:worker_threads');

// Allocate 4 bytes of SHARED memory (one Int32 slot)
const shared = new SharedArrayBuffer(4);
const counter = new Int32Array(shared);

// Spawn two workers, both pointing at the same shared buffer
function spawn() {
  return new Promise((resolve) => {
    const w = new Worker('./counter-worker.js', {
      workerData: { shared },   // SharedArrayBuffer is shared, not copied
    });
    w.on('message', resolve);
  });
}

Promise.all([spawn(), spawn()]).then(() => {
  // Each worker added 1,000,000 -- total should be exactly 2,000,000
  // Without Atomics this number would be wrong due to race conditions.
  console.log('Final counter:', Atomics.load(counter, 0));
});

Without Atomics.add, two threads writing counter[0]++ would race: read-modify-write is not atomic, so updates would be lost and the final number would be unpredictable. Atomics provides the primitives -- add, sub, compareExchange, wait, notify -- that make shared memory safe.

Security note: SharedArrayBuffer requires cross-origin isolation in browsers, but in Node.js it works out of the box. Use it sparingly -- shared mutable state is the hardest kind of code to debug.


Inline Workers — No Separate File

Sometimes a separate .js file is overkill. The Worker constructor accepts an eval: true option that lets you pass the worker source as a string.

const { Worker } = require('node:worker_threads');

// Inline worker source -- runs as if it were its own file
const workerSource = `
  const { parentPort, workerData } = require('node:worker_threads');
  // Compute square of the number passed in
  parentPort.postMessage(workerData.n * workerData.n);
`;

const worker = new Worker(workerSource, {
  eval: true,                 // Treat the first argument as code, not a path
  workerData: { n: 9 },
});

worker.on('message', (sq) => console.log('Square:', sq));  // 81

This pattern is great for libraries that ship a single file and need to spawn helpers without asking users to manage extra paths. Another option is a data: URL pointing at a JavaScript MIME type, useful for ESM workers.


Example 4 -- A Minimal Worker Pool

Spawning a fresh worker per task is wasteful: starting a worker takes 30-80 ms. The right pattern is a pool: a fixed number of long-lived workers, and a queue of jobs handed out as workers become free.

// pool.js  -- a tiny but functional worker pool
const { Worker } = require('node:worker_threads');
const os = require('node:os');

class WorkerPool {
  constructor(workerFile, size = os.cpus().length) {
    this.workers = [];   // All workers, busy or idle
    this.idle = [];      // Workers ready to accept a new job
    this.queue = [];     // Pending jobs waiting for a free worker

    // Pre-spawn `size` workers up front
    for (let i = 0; i < size; i++) {
      const worker = new Worker(workerFile);
      this.workers.push(worker);
      this.idle.push(worker);
    }
  }

  // Submit a job -- returns a Promise that resolves with the worker's reply
  run(job) {
    return new Promise((resolve, reject) => {
      const task = { job, resolve, reject };

      // If a worker is free, dispatch immediately. Otherwise queue.
      if (this.idle.length > 0) {
        this._dispatch(this.idle.pop(), task);
      } else {
        this.queue.push(task);
      }
    });
  }

  _dispatch(worker, task) {
    // One-shot listeners for this specific job
    const onMessage = (result) => {
      cleanup();
      task.resolve(result);
      this._release(worker);
    };
    const onError = (err) => {
      cleanup();
      task.reject(err);
      this._release(worker);
    };
    const cleanup = () => {
      worker.off('message', onMessage);
      worker.off('error', onError);
    };

    worker.on('message', onMessage);
    worker.on('error', onError);
    worker.postMessage(task.job);
  }

  _release(worker) {
    // If jobs are waiting, hand the worker the next one immediately
    if (this.queue.length > 0) {
      this._dispatch(worker, this.queue.shift());
    } else {
      this.idle.push(worker);
    }
  }

  // Clean shutdown -- terminate every worker
  async destroy() {
    await Promise.all(this.workers.map((w) => w.terminate()));
  }
}

module.exports = WorkerPool;

// usage.js
const WorkerPool = require('./pool');

const pool = new WorkerPool('./fib-worker.js', 4);  // 4 parallel workers

// Submit 10 jobs at once -- the pool will run 4 in parallel,
// queueing the rest until a worker becomes free.
const jobs = [40, 41, 42, 39, 38, 40, 41, 42, 39, 38];
Promise.all(jobs.map((n) => pool.run({ n }))).then((results) => {
  console.log(results);
  pool.destroy();
});

This is the same pattern that production libraries use. Piscina is the de facto standard worker pool for Node.js -- it adds task cancellation, timeouts, transferList plumbing, dynamic resizing, and graceful drain. Unless you have a very specific reason to roll your own, use Piscina:

// npm install piscina
const Piscina = require('piscina');
const pool = new Piscina({ filename: './fib-worker.js' });

// Each `run()` returns a Promise that resolves with the worker's result
// (await it inside an async function, or chain .then())
const result = await pool.run({ n: 42 });

worker_threads vs child_process vs cluster

Node.js gives you three different ways to escape the single-thread limit, and they exist for different reasons. Senior interviews almost always ask you to compare them.

+----------------------------------------------------------------------+
|          THREE WAYS TO PARALLELIZE IN NODE.JS                        |
+----------------------------------------------------------------------+
|                                                                       |
|                  worker_threads     child_process       cluster       |
|                  --------------     -------------       -------       |
|  Unit            OS thread          OS process          OS process    |
|  Memory model    Same V8 isolate    Separate process    Separate proc |
|                  per worker, but    with its own        with its own  |
|                  same process       heap and PID        heap and PID  |
|                                                                       |
|  Startup cost    ~30-80 ms          ~100-300 ms         ~100-300 ms   |
|  Memory cost     Lower (shared      Higher (full        Higher (full  |
|                  process)           Node process)       Node process) |
|                                                                       |
|  Communication   postMessage,       stdin/stdout pipes, IPC channel   |
|                  SharedArrayBuffer, IPC channel,        + auto load   |
|                  transferList       send()              balancing on  |
|                                                         a shared port |
|                                                                       |
|  Shared memory   YES                NO                  NO            |
|                  (SharedArrayBuffer)                                  |
|                                                                       |
|  Best for        CPU-bound JS work  Running other       Scaling an    |
|                  inside one Node    binaries (ffmpeg,   HTTP server   |
|                  process            python, git);       across CPU    |
|                                     isolating crashes   cores         |
|                                                                       |
|  Crash isolation Weak (one bad      Strong (process     Strong        |
|                  worker can corrupt boundary)           (process      |
|                  shared memory)                         boundary)     |
|                                                                       |
|  Built on        libuv threads      OS fork/exec        child_process |
|                                                         + net IPC     |
|                                                                       |
+----------------------------------------------------------------------+

A simple decision tree:

  • Need to run pure JavaScript CPU work fast? -> worker_threads
  • Need to call an external program (ffmpeg, ImageMagick, python)? -> child_process.spawn
  • Need to scale a stateless HTTP server across CPU cores? -> cluster (or, in production, just run multiple Node processes behind a load balancer / use PM2)

These are not mutually exclusive. A real backend often uses cluster to fan out across cores and worker_threads inside each cluster worker for CPU offloading.


Common Mistakes

1. Spawning a new worker per request. Worker startup costs 30-80 ms and allocates a fresh V8 heap. Doing it for every HTTP request makes things slower than just running the work on the main thread. Always use a pool (yours, or Piscina) and reuse workers.

2. Forgetting to terminate workers. A Worker instance keeps the Node process alive until you call worker.terminate() or the worker exits on its own. If you spawn workers in a script and never terminate them, the process hangs forever. In long-running servers, attach worker.on('exit', ...) and respawn if a worker dies unexpectedly.

3. Sending huge buffers without transferList. By default postMessage copies the data via structured clone. Sending a 100 MB buffer copies 100 MB. Pass the buffer in the second argument as a transferable to move ownership instead -- it's effectively free.

4. Using SharedArrayBuffer without Atomics. Plain reads and writes on shared memory are racy. Two threads doing arr[0]++ will lose updates. Always use Atomics.add, Atomics.compareExchange, etc., for any cell that more than one thread touches.

5. Putting I/O-bound work in workers. Workers help CPU-bound code. If your bottleneck is waiting on a database or HTTP API, moving it to a worker just adds message-passing overhead -- the event loop was already idle during the wait. Workers are for compute, not for I/O.

6. Sharing complex objects expecting them to mutate. Structured clone copies. If you postMessage an object to a worker, mutate it inside the worker, and expect the parent to see the changes -- it won't. The worker has its own copy. Use SharedArrayBuffer if you need real shared state.


Interview Questions

1. "When should you use worker_threads instead of just running code on the main thread?"

Use worker_threads when you have a CPU-bound task that takes long enough to noticeably block the event loop -- typically more than ~50 ms of pure JavaScript work. Examples: image resizing, PDF generation, bcrypt with high cost, parsing very large JSON or CSV files, heavy regex on user input, cryptographic operations on big payloads. The goal is to keep the main thread free to handle incoming requests while the heavy compute runs in parallel on another OS thread. Do not use workers for I/O-bound tasks like database queries or HTTP calls; the event loop already handles those efficiently, and adding worker overhead would only slow things down.

2. "Explain the difference between postMessage with transferList and SharedArrayBuffer."

postMessage with a transferList moves ownership of an ArrayBuffer from one thread to the other. After the transfer, the sender can no longer access that buffer -- it's detached. The data is not copied, so it's effectively free, but only one thread owns it at a time. SharedArrayBuffer is fundamentally different: the same memory is simultaneously visible to both threads. Neither side gives anything up; they read and write the same bytes concurrently. That power requires Atomics operations to avoid race conditions. Use transferList when you want to hand off a large buffer and have only one side use it at a time (e.g. handing pixels to a worker for processing). Use SharedArrayBuffer when multiple threads must read or update the same state continuously (e.g. a shared counter, lock-free queue, or shared cache).

3. "Compare worker_threads, child_process, and cluster. When do you reach for each?"

All three give you parallelism, but at different boundaries. worker_threads creates a new OS thread inside the same Node process, with its own V8 isolate. It has the lowest startup cost, the lowest memory overhead, and is the only one that supports shared memory -- ideal for CPU-bound JavaScript work. child_process forks an entirely separate OS process, which can run any program (including a different binary like ffmpeg or python). It has stronger isolation but no shared memory and higher startup cost. cluster is a higher-level wrapper around child_process specifically designed to scale a network server across CPU cores: it spawns N worker processes and load-balances incoming connections across them on a shared port. Decision rule: pure JS compute -> worker_threads; external binary or strict isolation -> child_process; scaling an HTTP server -> cluster (or a process manager like PM2 in production).

4. "Why is creating a new worker for every request a bad idea, and how do you fix it?"

Worker creation is expensive. Spinning up a fresh V8 isolate, parsing the worker file, and initializing the new event loop typically costs 30-80 milliseconds and a few megabytes of memory. If you create a worker per HTTP request, you pay that cost on every single request -- and for short tasks, the overhead can exceed the actual work, making your code slower than the single-threaded version. The fix is a worker pool: pre-spawn a fixed number of long-lived workers (usually equal to os.cpus().length) and dispatch incoming jobs to whichever worker is idle, queueing the rest. Each worker handles many tasks over its lifetime, so the startup cost is amortized over thousands of jobs. In production, use the Piscina library, which implements this pattern with cancellation, timeouts, transferable plumbing, and dynamic sizing.

5. "What is Atomics for, and what happens if you ignore it when using SharedArrayBuffer?"

Atomics provides indivisible read-modify-write operations on SharedArrayBuffer views, plus low-level synchronization primitives like Atomics.wait and Atomics.notify. Operations like Atomics.add, Atomics.compareExchange, and Atomics.store are guaranteed to complete as a single, uninterruptible step from the perspective of every other thread. If you ignore Atomics and use ordinary JavaScript on shared memory -- e.g. arr[0]++ from two threads -- you get lost updates: both threads read the same old value, both increment locally, both write back, and one of the increments vanishes. The final number becomes nondeterministic and depends on timing. There is no warning and no exception; the program just produces wrong answers. Anytime more than one thread can touch the same shared cell, every read and write to that cell must go through Atomics.


Quick Reference — Worker Threads Cheat Sheet

+---------------------------------------------------------------+
|           WORKER_THREADS CHEAT SHEET                          |
+---------------------------------------------------------------+
|                                                                |
|  IMPORTS:                                                      |
|  const { Worker, isMainThread, parentPort, workerData }        |
|    = require('node:worker_threads');                           |
|                                                                |
|  SPAWN A WORKER (file):                                        |
|  const w = new Worker('./worker.js', { workerData: {...} });   |
|                                                                |
|  SPAWN A WORKER (inline):                                      |
|  new Worker(sourceString, { eval: true });                     |
|                                                                |
|  PARENT -> WORKER:                                             |
|  worker.postMessage(data)                                      |
|  worker.postMessage(data, [bigArrayBuffer])  // transfer       |
|                                                                |
|  WORKER -> PARENT:                                             |
|  parentPort.postMessage(result)                                |
|                                                                |
|  LISTEN:                                                       |
|  worker.on('message', fn)   // parent side                     |
|  parentPort.on('message',fn)// worker side                     |
|  worker.on('error', fn)                                        |
|  worker.on('exit', code => ...)                                |
|                                                                |
|  SHARED MEMORY:                                                |
|  const sab = new SharedArrayBuffer(1024);                      |
|  const view = new Int32Array(sab);                             |
|  Atomics.add(view, 0, 1);                                      |
|  Atomics.load(view, 0);                                        |
|                                                                |
|  TERMINATE:                                                    |
|  await worker.terminate();                                     |
|                                                                |
+---------------------------------------------------------------+

+---------------------------------------------------------------+
|           KEY RULES                                            |
+---------------------------------------------------------------+
|                                                                |
|  1. Use workers for CPU-bound work, never for I/O              |
|  2. Pool workers -- never spawn one per request                |
|  3. Use transferList for large buffers (zero copy)             |
|  4. Use SharedArrayBuffer + Atomics for shared state           |
|  5. Always terminate workers or the process won't exit         |
|  6. Pool size ~= os.cpus().length is a good default            |
|  7. For production, prefer Piscina over a hand-rolled pool     |
|  8. worker_threads != child_process != cluster -- pick right   |
|                                                                |
+---------------------------------------------------------------+
  Feature                   Main thread only        worker_threads
  CPU work blocks server    Yes                     No (runs in parallel)
  Memory overhead           Lowest                  +few MB per worker
  Shared state              Trivial (same heap)     Via SharedArrayBuffer
  Startup cost              Zero                    ~30-80 ms per worker
  Ideal workload            I/O-bound               CPU-bound
  Crash isolation           None                    Weak (same process)
  Built-in pooling          N/A                     No (use Piscina)

Prev: Lesson 5.3 -- Error Handling Strategy
Next: Lesson 6.1 -- Express Basics and Middleware


This is Lesson 5.4 of the Node.js Interview Prep Course -- 10 chapters, 42 lessons.
