Node.js Interview Prep
Testing

Testing Strategies

Pyramid, TDD, and Killing Flakes

LinkedIn Hook

"Your test suite has 4,000 tests. It takes 47 minutes to run. And it still misses bugs in production."

Sound familiar? Most Node.js teams write tests the wrong way around. They start with end-to-end tests because "they feel real," then bolt unit tests on later when the suite gets too slow to run on every commit.

The result? A pyramid that's upside down. Slow feedback loops. Flaky tests that fail randomly at 3am. Mocks that hide real bugs. And a CI pipeline so brittle that developers pass --skip-tests just to merge a PR.

The fix is not "more tests." The fix is the right shape of tests. A solid backend testing strategy is 70% unit, 20% integration, 10% end-to-end. Fast at the bottom, broad at the top, and ruthlessly deterministic at every layer.

In Lesson 9.3, I cover the testing pyramid for Node services, the arrange-act-assert pattern, TDD red-green-refactor with Jest, when to mock vs integrate, how to test auth flows, a real GitHub Actions pipeline, and the seven causes of flaky tests (and how to kill each one).

Read the full lesson -> [link]

#NodeJS #Testing #TDD #CICD #BackendDevelopment #InterviewPrep


Testing Strategies thumbnail


What You'll Learn

  • The testing pyramid for backend services and why most teams get it inverted
  • The arrange-act-assert pattern and why every test should follow it
  • How to test error scenarios (network failure, DB down, invalid input, timeouts)
  • Testing authentication flows: login, JWT verification, session expiry, RBAC
  • Building a CI/CD pipeline with GitHub Actions that runs tests on every PR
  • The TDD red-green-refactor cycle applied to a real Node.js function
  • When to mock vs when to integrate — the decision rules that actually work
  • The seven causes of flaky tests and a checklist to eliminate each one

The Restaurant Kitchen Analogy — Why Pyramid Shape Matters

Imagine a restaurant kitchen that only does full dress rehearsals. Every time the chef wants to verify a new sauce, the entire staff comes in, every table is set, real customers are seated, the full menu is prepared, and at the end someone tastes the sauce. If the sauce is wrong, you've wasted three hours, ten people, and a hundred dollars of ingredients to learn one fact.

That is what testing looks like when your suite is mostly end-to-end tests. Slow feedback. Expensive failures. And when something breaks, you have no idea which of the fifty moving parts caused it.

A real kitchen works differently. The line cook tastes the sauce alone, with one spoon, in five seconds. The sous chef tastes the dish (sauce + protein + sides) before plating. And only the final dress rehearsal — done rarely — verifies the whole experience. Fast feedback at the smallest unit, broader feedback as confidence grows, full-stack verification only as a final gate.

That is the testing pyramid: lots of cheap, fast unit tests at the bottom; some integration tests in the middle; a tiny number of end-to-end tests at the top. Each layer catches a different class of bug, and each layer pays for itself in feedback speed.

+---------------------------------------------------------------+
|              THE BACKEND TESTING PYRAMID                       |
+---------------------------------------------------------------+
|                                                                |
|                          /\                                    |
|                         /  \                                   |
|                        / E2E\         ~5-10%   slow, brittle   |
|                       /------\        seconds-minutes per test |
|                      /        \                                |
|                     / INTEGR.  \      ~20%     medium speed    |
|                    /------------\     hits real DB, queues     |
|                   /              \                             |
|                  /     UNIT       \   ~70%     fast, isolated  |
|                 /                  \  millis per test          |
|                +--------------------+                          |
|                                                                |
|  Bottom layer: pure functions, services with mocked deps       |
|  Middle:       route + service + real DB (test container)      |
|  Top:          full HTTP + DB + auth + maybe browser           |
|                                                                |
|  Goal: catch bugs at the LOWEST layer that can find them.      |
|                                                                |
+---------------------------------------------------------------+

Napkin AI Visual Prompt: "Dark gradient (#0a1a0a -> #0d2e16). Three-tier pyramid: bottom tier wide and green (#68a063) labeled 'UNIT 70%', middle tier amber (#ffb020) labeled 'INTEGRATION 20%', top tier narrow white labeled 'E2E 10%'. To the right, a speed gauge: bottom = milliseconds, middle = seconds, top = minutes. To the left, a cost gauge: bottom = cheap, top = expensive. White monospace labels."


The Arrange-Act-Assert Pattern

Every test, regardless of layer, should follow the same three-step shape: Arrange the world, Act on the system, Assert the outcome. This is not a style preference — it is a debugging tool. When a test fails, you can immediately tell whether the setup broke, the call broke, or the expectation broke.

// tests/unit/calculateDiscount.test.js
// AAA pattern: every test reads top-to-bottom in three blocks.

const { calculateDiscount } = require('../../src/pricing');

describe('calculateDiscount', () => {
  test('applies 10% off for orders over $100', () => {
    // ARRANGE -- build the inputs and any state the test needs
    const order = {
      items: [
        { price: 60, qty: 1 },
        { price: 50, qty: 1 },
      ],
      customerTier: 'standard',
    };

    // ACT -- invoke exactly ONE thing under test
    const result = calculateDiscount(order);

    // ASSERT -- verify the observable outcome, nothing else
    expect(result.discountPercent).toBe(10);
    expect(result.finalTotal).toBe(99);
  });

  test('returns zero discount for empty cart', () => {
    // ARRANGE
    const order = { items: [], customerTier: 'standard' };

    // ACT
    const result = calculateDiscount(order);

    // ASSERT
    expect(result.discountPercent).toBe(0);
    expect(result.finalTotal).toBe(0);
  });
});

Rules that make AAA actually work:

  1. One Act per test. If you have two act calls, you have two tests.
  2. No assertions inside Arrange. Setup that needs assertions belongs in a helper.
  3. Assertions are about outcomes, not implementation. Don't assert "this private method was called" — assert "the result was X".
  4. Blank lines (or comments) between blocks. The eye should land on each phase instantly.
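
A minimal sketch of rule 2, assuming a hypothetical `buildOrder` helper (not from this lesson): the builder owns the "normal" shape of an order, so each Arrange block states only the field that matters to that test, and no assertions ever live in setup.

```javascript
// Hypothetical builder helper for the Arrange phase.
// Defaults describe a typical order; tests override only what they care about.
function buildOrder(overrides = {}) {
  return {
    items: [
      { price: 60, qty: 1 },
      { price: 50, qty: 1 },
    ],
    customerTier: 'standard',
    ...overrides,
  };
}

// Usage inside a Jest test (shape only):
//   const order = buildOrder({ customerTier: 'gold' });
//   const result = calculateDiscount(order);
//   expect(result.discountPercent).toBe(15);
```

The payoff: when the order shape grows a field, you update one builder instead of fifty Arrange blocks.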

Testing Error Scenarios — Where Bugs Actually Live

Happy-path tests are easy. The bugs that take down production live in error paths: network blips, database failovers, malformed input, timeouts, partial writes. A good suite spends more time on errors than on the happy path.

// tests/unit/userService.test.js
// Testing how the service handles failures of its dependencies.

const { UserService } = require('../../src/services/userService');

describe('UserService.getUser - error scenarios', () => {
  let service;
  let mockRepo;
  let mockHttpClient;

  beforeEach(() => {
    // Fresh mocks per test -- never share mock state across tests
    mockRepo = {
      findById: jest.fn(),
    };
    mockHttpClient = {
      get: jest.fn(),
    };
    service = new UserService(mockRepo, mockHttpClient);
  });

  test('throws NotFoundError when DB returns null', async () => {
    // ARRANGE -- simulate a missing row
    mockRepo.findById.mockResolvedValue(null);

    // ACT + ASSERT -- expect a typed error, not a generic Error
    await expect(service.getUser('u_123'))
      .rejects
      .toThrow('User u_123 not found');
  });

  test('throws DatabaseError when DB connection fails', async () => {
    // ARRANGE -- simulate the DB being completely down
    mockRepo.findById.mockRejectedValue(new Error('ECONNREFUSED'));

    // ACT + ASSERT
    await expect(service.getUser('u_123'))
      .rejects
      .toThrow('Database unavailable');
  });

  test('falls back to cached data when external profile API times out', async () => {
    // ARRANGE -- DB works, external API hangs
    mockRepo.findById.mockResolvedValue({ id: 'u_123', name: 'Ada' });
    mockHttpClient.get.mockRejectedValue(new Error('ETIMEDOUT'));

    // ACT
    const user = await service.getUser('u_123');

    // ASSERT -- service degrades gracefully instead of crashing
    expect(user.name).toBe('Ada');
    expect(user.profileEnriched).toBe(false);
  });

  test('rejects invalid user IDs before touching the DB', async () => {
    // ACT + ASSERT -- input validation runs first, DB never called
    await expect(service.getUser(''))
      .rejects
      .toThrow('Invalid user id');

    expect(mockRepo.findById).not.toHaveBeenCalled();
  });

  test('rejects SQL-injection-shaped IDs', async () => {
    await expect(service.getUser("'; DROP TABLE users;--"))
      .rejects
      .toThrow('Invalid user id');
  });
});

Error scenarios every backend service should test:

  • Dependency returns null/undefined when something was expected
  • Dependency throws (DB down, network refused, timeout)
  • Input is missing, empty, wrong type, too long, malformed
  • Input contains injection payloads (SQL, NoSQL, command)
  • Concurrent calls produce a race (use Promise.all to provoke it)
  • Auth token is missing, expired, wrong audience, wrong signature
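
The Promise.all bullet is worth seeing in code. This is an illustrative sketch, not code from the lesson: `incrementNaive` is a hypothetical read-modify-write counter whose read and write are separated by an await, so two concurrent calls overwrite each other.

```javascript
// Hypothetical lost-update race, provoked deterministically with Promise.all.
const store = { count: 0 };

// Simulated async I/O: each helper yields to the event loop once.
const readCount = async () => {
  await new Promise((r) => setImmediate(r));
  return store.count;
};
const writeCount = async (value) => {
  await new Promise((r) => setImmediate(r));
  store.count = value;
};

async function incrementNaive() {
  const current = await readCount(); // both callers read the same value...
  await writeCount(current + 1);     // ...so one increment is silently lost
}

async function provokeRace() {
  store.count = 0;
  await Promise.all([incrementNaive(), incrementNaive()]);
  return store.count; // 1, not 2 -- the race lost an update
}
```

A test that runs the two increments sequentially would pass; Promise.all is what exposes the bug.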

Testing Authentication Flows

Auth bugs are the most expensive bugs in a Node service. Every auth path — login, refresh, logout, role check, expiry — needs explicit tests. Treat auth like a cryptographic primitive: if it isn't tested, assume it's broken.

// tests/integration/auth.test.js
// Integration test against a real Express app, real JWT, mocked user repo.

const request = require('supertest');
const jwt = require('jsonwebtoken');
const { buildApp } = require('../../src/app');

describe('Authentication flow', () => {
  let app;
  const SECRET = 'test-secret-do-not-use-in-prod';
  const userRepo = {
    findByEmail: jest.fn(),
  };

  beforeEach(() => {
    process.env.JWT_SECRET = SECRET;
    app = buildApp({ userRepo });
  });

  test('POST /login returns a JWT for valid credentials', async () => {
    // ARRANGE -- a known user with a hashed password
    userRepo.findByEmail.mockResolvedValue({
      id: 'u_1',
      email: 'ada@example.com',
      // placeholder hash -- a real test would generate this with
      // bcrypt.hashSync('correct-horse', 10)
      passwordHash: '$2b$10$abcdefghijklmnopqrstuv',
      role: 'admin',
    });

    // ACT
    const res = await request(app)
      .post('/login')
      .send({ email: 'ada@example.com', password: 'correct-horse' });

    // ASSERT
    expect(res.status).toBe(200);
    expect(res.body.token).toBeDefined();

    // Verify the token is actually valid and contains the right claims
    const decoded = jwt.verify(res.body.token, SECRET);
    expect(decoded.sub).toBe('u_1');
    expect(decoded.role).toBe('admin');
  });

  test('POST /login rejects wrong password with 401', async () => {
    userRepo.findByEmail.mockResolvedValue({
      id: 'u_1',
      email: 'ada@example.com',
      passwordHash: '$2b$10$abcdefghijklmnopqrstuv',
    });

    const res = await request(app)
      .post('/login')
      .send({ email: 'ada@example.com', password: 'wrong' });

    expect(res.status).toBe(401);
    expect(res.body.token).toBeUndefined();
  });

  test('protected route rejects requests with no token', async () => {
    const res = await request(app).get('/admin/users');
    expect(res.status).toBe(401);
  });

  test('protected route rejects expired tokens', async () => {
    // ARRANGE -- forge a token that expired one second ago
    const expired = jwt.sign(
      { sub: 'u_1', role: 'admin' },
      SECRET,
      { expiresIn: '-1s' }
    );

    const res = await request(app)
      .get('/admin/users')
      .set('Authorization', `Bearer ${expired}`);

    expect(res.status).toBe(401);
  });

  test('protected route rejects tokens with wrong role (RBAC)', async () => {
    // A valid token, but role is "user" not "admin"
    const userToken = jwt.sign({ sub: 'u_2', role: 'user' }, SECRET);

    const res = await request(app)
      .get('/admin/users')
      .set('Authorization', `Bearer ${userToken}`);

    expect(res.status).toBe(403);
  });

  test('protected route rejects tokens signed with the wrong secret', async () => {
    const forged = jwt.sign({ sub: 'u_1', role: 'admin' }, 'attacker-secret');

    const res = await request(app)
      .get('/admin/users')
      .set('Authorization', `Bearer ${forged}`);

    expect(res.status).toBe(401);
  });
});

CI/CD Test Pipeline — GitHub Actions

A test suite that runs only on the developer's laptop is not a test suite. It's a hope. CI runs your tests on every push, on every PR, in a clean environment, with the same Node version everyone else uses. Here is a production-grade GitHub Actions pipeline.

# .github/workflows/test.yml
# Runs lint, unit, integration, and coverage on every PR and push to main.

name: Test

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest

    # Real Postgres for integration tests -- not a mock
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
          POSTGRES_DB: test_db
        ports:
          - 5432:5432
        # Wait until Postgres is actually ready before running tests
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    strategy:
      matrix:
        # Run on multiple Node versions to catch version-specific breakage
        node-version: [18.x, 20.x, 22.x]

    steps:
      - name: Checkout source
        uses: actions/checkout@v4

      - name: Setup Node ${{ matrix.node-version }}
        uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Lint
        run: npm run lint

      - name: Run unit tests
        run: npm run test:unit -- --ci --coverage

      - name: Run integration tests
        run: npm run test:integration -- --ci
        env:
          DATABASE_URL: postgres://test:test@localhost:5432/test_db
          JWT_SECRET: test-secret-not-real

      - name: Enforce coverage threshold
        run: npm run test:coverage -- --coverageThreshold='{"global":{"lines":80,"branches":75}}'

      - name: Upload coverage report
        if: matrix.node-version == '20.x'
        uses: actions/upload-artifact@v4
        with:
          name: coverage-report
          path: coverage/

What this pipeline buys you:

  • Every PR is tested in a clean Linux environment, not the author's MacBook
  • Real Postgres catches bugs that mocked DBs hide
  • Multi-version matrix catches Node 18 -> 20 -> 22 regressions
  • Coverage threshold prevents PRs from quietly tanking test quality
  • Lint runs first so style failures don't waste test time

TDD Red-Green-Refactor in Node.js

Test-Driven Development is not "write tests after the code." It is a three-phase loop: write a failing test (Red), write the minimum code to make it pass (Green), then improve the code without changing behavior (Refactor). Each cycle is small — minutes, not hours.

// ============================================================
// CYCLE 1: RED -- write the failing test FIRST
// ============================================================
// tests/unit/slugify.test.js
const { slugify } = require('../../src/slugify');

test('converts a simple title to a slug', () => {
  expect(slugify('Hello World')).toBe('hello-world');
});

// Run: npm test
// Result: FAIL -- module './src/slugify' does not exist. Good. We are RED.

// ============================================================
// CYCLE 1: GREEN -- minimum code to pass, nothing more
// ============================================================
// src/slugify.js
function slugify(input) {
  return input.toLowerCase().replace(' ', '-');
}
module.exports = { slugify };

// Run: npm test
// Result: PASS. We are GREEN. Do NOT add more features yet.

// ============================================================
// CYCLE 2: RED -- add the next failing test
// ============================================================
test('handles multiple spaces', () => {
  expect(slugify('Hello   Big   World')).toBe('hello-big-world');
});
// FAIL -- our naive replace only handles one space.

// CYCLE 2: GREEN
function slugify(input) {
  return input.toLowerCase().replace(/\s+/g, '-');
}
// PASS.

// ============================================================
// CYCLE 3: RED -- strip non-alphanumerics
// ============================================================
test('strips punctuation', () => {
  expect(slugify("Node.js: Tips & Tricks!")).toBe('node-js-tips-tricks');
});
// FAIL.

// CYCLE 3: GREEN
function slugify(input) {
  return input
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')  // runs of non-slug chars become one dash
    .replace(/^-|-$/g, '');       // trim edge dashes
}
// PASS. "Node.js" becomes "node-js" because the dot turns into a dash,
// and runs like ": " or " & " collapse into a single dash.

// ============================================================
// REFACTOR -- clean up without changing behavior
// ============================================================
// All tests still pass after each change. We can rename, extract, simplify.
const SLUG_NON_ALNUM = /[^a-z0-9]+/g;  // anything that can't appear in a slug
const SLUG_EDGES = /^-|-$/g;           // leading/trailing dashes

function slugify(input) {
  return input
    .toLowerCase()
    .replace(SLUG_NON_ALNUM, '-')
    .replace(SLUG_EDGES, '');
}
// Run all tests -- still GREEN. Refactor is safe.

Why TDD works: the test exists before the code, so the code is forced to be testable. Forced testability produces small functions, clear inputs/outputs, and minimal hidden state. You also get a regression suite for free, because every line of production code was written to satisfy a test.


When to Mock vs When to Integrate

This is the question that separates junior and senior testers. Mock too much and your tests pass while production burns. Mock too little and your suite takes 30 minutes to run.

+---------------------------------------------------------------+
|              MOCK vs INTEGRATE -- DECISION TABLE              |
+---------------------------------------------------------------+
|                                                                |
|  MOCK when the dependency is...                                |
|    * Slow         (network, 3rd-party API, email)              |
|    * Non-deterministic  (Date.now, Math.random, clocks)        |
|    * Costly       (Stripe charges, SMS sends)                  |
|    * External     (GitHub API, AWS SES, weather service)       |
|    * Hard to provoke  (specific error codes, rate limits)      |
|                                                                |
|  INTEGRATE when the dependency is...                           |
|    * Your own DB queries  (the schema IS the contract)         |
|    * Your own HTTP routes (Express middleware chain)           |
|    * Your own queue logic (BullMQ producer/consumer)           |
|    * Your own auth middleware                                  |
|    * SQL or ORM behavior  (joins, transactions, locks)         |
|                                                                |
|  RULE OF THUMB:                                                |
|    Mock things you do not OWN.                                 |
|    Integrate things you DO own.                                |
|                                                                |
+---------------------------------------------------------------+
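
Applied to the "non-deterministic" row, the rule usually means injecting the clock instead of calling Date.now() directly. A minimal sketch, assuming a hypothetical isTokenExpired that is not from this lesson:

```javascript
// Hypothetical expiry check. The clock is a parameter (defaulting to the
// real Date.now) so a test can pass a fixed clock and the check becomes
// fully deterministic -- no sleeps, no "run it again and hope".
function isTokenExpired(token, clock = Date.now) {
  return token.expiresAt <= clock();
}

// Production call:  isTokenExpired(token)
// In a test:        isTokenExpired({ expiresAt: 100 }, () => 200)  // always true
```

The same shape works for Math.random and ID generators: take the source of non-determinism as a parameter with a real-world default.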

Killing Flaky Tests

A flaky test is one that passes sometimes and fails sometimes with no code change. Flaky tests are worse than no tests — they teach the team to ignore red builds, which hides real failures. Every flake has a cause. Find it and kill it.

+---------------------------------------------------------------+
|              THE SEVEN CAUSES OF FLAKY TESTS                  |
+---------------------------------------------------------------+
|                                                                |
|  1. TIME -- Date.now / setTimeout / "wait 100ms then check"   |
|     FIX: jest.useFakeTimers(), inject a clock                  |
|                                                                |
|  2. ORDER -- test A leaves state that test B reads             |
|     FIX: reset DB / mocks in beforeEach, never share state     |
|                                                                |
|  3. RANDOMNESS -- Math.random, UUIDs, faker without a seed     |
|     FIX: seed the RNG, inject id generators                    |
|                                                                |
|  4. NETWORK -- real HTTP calls in unit tests                   |
|     FIX: nock, msw, or pure mocks                              |
|                                                                |
|  5. RACES -- async operations awaited incorrectly              |
|     FIX: await every promise; lint with no-floating-promises   |
|                                                                |
|  6. SHARED RESOURCES -- two tests hit the same DB row          |
|     FIX: per-test schemas, transactions rolled back, UUIDs     |
|                                                                |
|  7. ENVIRONMENT -- works on Mac, fails on Linux CI             |
|     FIX: run CI image locally with Docker; pin Node version    |
|                                                                |
|  GOLDEN RULE: never retry a flaky test. FIX it or DELETE it.   |
|                                                                |
+---------------------------------------------------------------+
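
Cause #1 usually looks like "sleep a bit, then check". jest.useFakeTimers() is one fix; the structural fix is to await the operation's own promise so wall-clock time never matters. A hedged sketch, with startJob standing in for any async operation:

```javascript
// startJob is hypothetical: any async operation that finishes "soon".
function startJob() {
  return new Promise((resolve) => setTimeout(() => resolve('done'), 5));
}

// FLAKY: passes only when 10ms happens to be enough on this machine/CI box.
async function checkFlaky() {
  let result;
  startJob().then((r) => { result = r; });
  await new Promise((r) => setTimeout(r, 10)); // arbitrary sleep = flake
  return result; // may or may not be 'done' yet
}

// DETERMINISTIC: await the job itself. No sleep, no timing assumption.
async function checkDeterministic() {
  return startJob();
}
```
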

Common Mistakes

1. Inverting the pyramid (too many E2E tests). Teams reach for end-to-end tests because they "feel real," but E2E tests are slow, brittle, and expensive to debug. A 10-minute E2E suite kills your feedback loop. Push as much coverage as possible down to unit tests. Use E2E only for the handful of critical user journeys (login, checkout, signup). Aim for 70/20/10 unit/integration/e2e.

2. Testing implementation details instead of behavior. Bad: expect(service._privateMethod).toHaveBeenCalled(). Good: expect(result).toEqual({...}). If you assert that a private method was called, your test breaks every time you refactor — even when behavior is unchanged. Test the observable outcome (return value, DB state, HTTP response), not the path the code took to get there.
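
A minimal sketch of the contrast, using a hypothetical applyDiscount (the helper names in the comments are made up):

```javascript
// Hypothetical function under test. Its internals are free to change.
function applyDiscount(total) {
  const rate = total > 100 ? 0.1 : 0; // internal detail, could be extracted tomorrow
  return Math.round(total * (1 - rate));
}

// BAD -- breaks on every refactor, even when behavior is identical:
//   const spy = jest.spyOn(pricing, '_rateFor');   // _rateFor is internal
//   applyDiscount(110);
//   expect(spy).toHaveBeenCalled();

// GOOD -- pins only the observable outcome:
//   expect(applyDiscount(110)).toBe(99);
//   expect(applyDiscount(90)).toBe(90);
```
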

3. Over-mocking. Mocking your own database, your own ORM, your own routes. The result is a test suite that always passes while production has bugs in the seams between modules. Mock external services. Integrate your own code with a real test database (Docker container in CI). The seams are where bugs live.

4. Ignoring or retrying flaky tests. Adding jest.retryTimes(3) is admitting defeat. A test that passes "usually" is a test that catches bugs "usually." Track flaky tests in a quarantine list, fix them within a sprint, or delete them. Never let flake become normal.

5. No CI gating. Tests that run only on developer laptops are not enforced. Without a CI job that blocks merge on test failure, tests rot. Set up GitHub Actions (or GitLab CI / CircleCI) to run on every PR, fail the PR check on red, and require green checks before merge.


Interview Questions

1. "Explain the testing pyramid for a Node.js backend service. Why does shape matter?"

The pyramid has three layers. The base — about 70% of tests — is unit tests: pure functions and services with their dependencies mocked, running in milliseconds. The middle — about 20% — is integration tests: real database, real HTTP routes, real middleware chain, running in seconds. The top — about 10% — is end-to-end tests: full system including external services, running in minutes. Shape matters because feedback speed compounds: unit tests run on every save, integration tests on every commit, E2E tests on every PR. An inverted pyramid (mostly E2E) means slow feedback, brittle tests, and an unmaintainable suite. The pyramid shape catches each class of bug at the cheapest layer that can find it.

2. "When should you mock a dependency vs use the real thing?"

Mock things you do not own: third-party APIs (Stripe, Twilio), external services, slow or expensive resources, non-deterministic sources (clocks, RNG). Integrate things you do own: your own database queries, your own HTTP routes, your own queue handlers, your own auth middleware. The reason is contracts. Your own code's contract IS your test surface — if you mock it, you're testing the mock instead of the code. External code has contracts you control through versioning, so a recorded mock is fine. A practical heuristic: mock the network boundary, integrate everything inside it.

3. "What is a flaky test and how do you eliminate one?"

A flaky test passes and fails non-deterministically with no code change. Flakes have seven typical causes: time-dependent code (Date.now, setTimeout-based waits), test order dependencies (leftover state), unseeded randomness, real network calls, unawaited promises, shared database resources, and environment differences (Mac dev vs Linux CI). To eliminate: identify the cause by running the test 100 times in a row, then fix the root — fake timers for time, beforeEach resets for state, seeded RNGs for randomness, nock/msw for network, lint rules for floating promises, per-test data isolation for shared resources, Docker images for environment parity. Never reach for jest.retryTimes — that hides the bug instead of fixing it.

4. "Walk me through TDD red-green-refactor with a real example."

You start in Red: write a failing test for behavior that doesn't exist yet — for example, expect(slugify('Hello World')).toBe('hello-world'). Run it; it fails because the function isn't implemented. You move to Green: write the absolute minimum code to make the test pass — return input.toLowerCase().replace(' ', '-'). Run; it passes. You resist the urge to add more features. Then Refactor: clean up the code without changing behavior — extract regex constants, rename variables, simplify logic. Run all tests after each change; they must all stay green. Then start the next cycle with another failing test. Each cycle takes minutes. The discipline ensures every line of production code exists to satisfy a test, your code stays small and testable, and you build a regression suite as a byproduct.

5. "How would you set up a CI pipeline for a Node.js service, and what would it run?"

I'd use GitHub Actions with a workflow triggered on push to main and on every PR. It runs on a matrix of Node versions (18, 20, 22) on Ubuntu. It spins up real service containers — Postgres, Redis — so integration tests hit real dependencies. The pipeline steps are: checkout, setup-node with npm cache, npm ci, lint, unit tests with coverage, integration tests against the containers, then a coverage threshold gate (e.g., 80% lines, 75% branches). Coverage reports upload as artifacts. The job is required for PR merge, so a red build blocks the merge button. For larger services I'd split unit and integration into parallel jobs to keep wall-clock time low, and add a separate nightly job for slow E2E tests that don't need to run on every PR.


Quick Reference — Testing Strategy Cheat Sheet

+---------------------------------------------------------------+
|              TESTING STRATEGY CHEAT SHEET                      |
+---------------------------------------------------------------+
|                                                                |
|  PYRAMID RATIOS:                                               |
|    Unit         ~70%   millis    mock externals               |
|    Integration  ~20%   seconds   real DB, real routes         |
|    E2E          ~10%   minutes   full stack, critical flows   |
|                                                                |
|  EVERY TEST:                                                   |
|    1. ARRANGE  -- build inputs and state                       |
|    2. ACT      -- one call to the system under test            |
|    3. ASSERT   -- one observable outcome                       |
|                                                                |
|  TDD CYCLE:                                                    |
|    RED      write a failing test                               |
|    GREEN    minimum code to pass                               |
|    REFACTOR clean up, keep tests green                         |
|                                                                |
|  MOCK vs INTEGRATE:                                            |
|    Mock things you do NOT own                                  |
|    Integrate things you DO own                                 |
|                                                                |
+---------------------------------------------------------------+

+---------------------------------------------------------------+
|              KILL FLAKY TESTS                                  |
+---------------------------------------------------------------+
|                                                                |
|  1. Time          -> jest.useFakeTimers()                      |
|  2. Order         -> beforeEach reset                          |
|  3. Randomness    -> seed the RNG                              |
|  4. Network       -> nock / msw                                |
|  5. Async races   -> await every promise                       |
|  6. Shared data   -> per-test isolation                        |
|  7. Environment   -> Docker, pin Node version                  |
|                                                                |
|  Never retry flakes. Fix or delete.                            |
|                                                                |
+---------------------------------------------------------------+

Layer        Speed    Scope                 Mocks               When to Use
Unit         ms       One function/class    All deps            Pure logic, branches, edge cases
Integration  seconds  Route + service + DB  External APIs only  Wiring, SQL, middleware
E2E          minutes  Full system           None                Critical user journeys only
Contract     seconds  API boundary          Provider stub       Versioned consumer/provider pairs
Load         minutes  One endpoint          None                Capacity planning, perf regression

Prev: Lesson 9.2 -- Integration Testing
Next: Lesson 10.1 -- Process Management with PM2 and Clustering


This is Lesson 9.3 of the Node.js Interview Prep Course -- 10 chapters, 42 lessons.
