Node.js Interview Prep
Production and Scaling

Dockerizing Node.js

Multi-Stage Builds, Tini, and Images Under 150 MB

LinkedIn Hook

"Your Node.js Docker image is 1.4 GB, runs as root, ignores SIGTERM, and your Kubernetes cluster kills it after 30 seconds every deploy."

Most teams write a Dockerfile once, copy it forever, and never look at it again. The result is a 1.4 GB image that ships gcc, python, every dev dependency, and the entire git history of node_modules. It runs as root because nobody removed the default user. It ignores SIGTERM because npm start swallows signals. And every rolling deploy drops in-flight requests because the process never gets a chance to finish them.

The fix is not exotic. A multi-stage Dockerfile separates the build from the runtime. A .dockerignore keeps node_modules and .git out of the build context. A non-root user closes the most common container escape. tini becomes PID 1 and forwards signals correctly. A HEALTHCHECK directive lets the orchestrator know when your app is actually ready. And graceful SIGTERM handling in your Node code drains in-flight requests before exit.

Do these six things and your image drops from 1.4 GB to under 150 MB, your deploys stop dropping requests, and your security scanner stops screaming.

In Lesson 10.4, I break down Dockerizing Node.js: multi-stage builds, signal handling, image optimization, health checks, and a docker-compose setup for local development with Postgres and Redis.

Read the full lesson -> [link]

#NodeJS #Docker #DevOps #Kubernetes #ContainerSecurity #BackendDevelopment #InterviewPrep




What You'll Learn

  • Why a multi-stage Dockerfile is the only sensible way to ship Node.js to production
  • How .dockerignore keeps your build context small and your secrets out of the image
  • Running Node.js as a non-root user and why it matters for security
  • Signal handling in containers — SIGTERM, PID 1, and why tini exists
  • How to optimize image size with alpine, distroless, and npm ci --omit=dev
  • The HEALTHCHECK directive and how orchestrators use it for liveness and readiness
  • A complete docker-compose.yml setup for local development with Postgres and Redis
  • Graceful shutdown in Node.js so rolling deploys never drop in-flight requests

The Shipping Container Analogy — Why Docker Forces Discipline

Before shipping containers existed, cargo was loaded onto ships piece by piece — barrels of oil, sacks of grain, crates of machinery, all stacked by dockworkers who had to know how to handle each item. A single ship could take a week to load. Theft was rampant. Damage was constant. Every port had its own equipment, its own procedures, its own labor union.

The shipping container changed everything. A standardized 20-foot steel box. Anything fits inside. The crane does not care what is in the box — it just lifts the box. The truck does not care, the train does not care, the ship does not care. Load once, ship anywhere, with the same equipment everywhere. The contents are the developer's problem. The box is the platform's problem.

A Docker image is exactly that container. Inside, you put your Node.js app and everything it needs to run — the right Node version, your code, your node_modules, your config defaults. Outside, the orchestrator (Docker, Kubernetes, ECS) treats every image identically. Same start command, same health probe, same signal handling, same resource limits. The contents are your problem. The runtime is the platform's problem.

But just like real shipping containers, Docker only delivers on its promise if you respect the discipline. Stuff a 20-foot container with junk you do not need and you waste fuel on every voyage. Leave the door unlocked and your cargo gets stolen. Forget to label which way is up and your cargo arrives broken. Dockerizing Node.js correctly is mostly about not making these mistakes.

+---------------------------------------------------------------+
|           NAIVE DOCKER IMAGE (The Problem)                    |
+---------------------------------------------------------------+
|                                                                |
|  FROM node:20             <- 1.1 GB base image                 |
|  COPY . .                 <- copies .git, tests, .env          |
|  RUN npm install          <- installs devDependencies          |
|  CMD ["npm", "start"]     <- npm swallows SIGTERM              |
|                                                                |
|  Result:                                                       |
|   - 1.4 GB image                                               |
|   - Runs as root                                               |
|   - Includes build tools, gcc, python                          |
|   - Drops requests on every deploy                             |
|   - Secrets baked into layers                                  |
|                                                                |
+---------------------------------------------------------------+

+---------------------------------------------------------------+
|           OPTIMIZED IMAGE (The Goal)                          |
+---------------------------------------------------------------+
|                                                                |
|  FROM node:20-alpine AS build  <- 180 MB build stage           |
|  FROM node:20-alpine AS runtime <- final stage, prod deps only |
|  USER node                      <- non-root                    |
|  ENTRYPOINT ["/sbin/tini", "--"] <- signals work               |
|  HEALTHCHECK ...                <- orchestrator-aware          |
|                                                                |
|  Result:                                                       |
|   - 120 MB image                                               |
|   - Non-root user                                              |
|   - Production deps only                                       |
|   - Graceful drain on SIGTERM                                  |
|   - Build cache hits on every code change                      |
|                                                                |
+---------------------------------------------------------------+

Napkin AI Visual Prompt: "Dark gradient (#0a1a0a -> #0d2e16). Two side-by-side Docker container illustrations: LEFT labeled 'Naive 1.4 GB' is bloated, overflowing with red icons (gcc, python, .git, devDeps), padlock broken, root user crown. RIGHT labeled 'Optimized 120 MB' is slim and tidy, Node green (#68a063) outline, amber (#ffb020) padlock closed, 'node' user tag, tini PID 1 label. White monospace labels. Amber arrow showing the optimization journey."


The Naive Dockerfile — What Not to Do

Let us start with the Dockerfile most teams ship in their first version. It works, in the sense that the container starts and serves traffic. It is also a security and performance disaster.

# Dockerfile.naive — DO NOT COPY THIS
# Every line below has at least one problem. Read on for the fixes.

FROM node:20

# Copies the entire build context, including .git, tests, .env, README, etc.
# Anything not in .dockerignore ends up in the image layer.
WORKDIR /app
COPY . .

# Installs ALL dependencies including devDependencies. Build tools, test
# frameworks, type definitions — none of which production needs.
RUN npm install

# Runs as the default root user. A container escape grants root on the host.
# npm start spawns a child process that does not forward signals.
EXPOSE 3000
CMD ["npm", "start"]

Counting the problems:

  1. node:20 base image is ~1.1 GB. It includes Debian, build toolchains, and everything you do not need at runtime.
  2. COPY . . busts the layer cache on every code change. Even a one-character README edit forces npm install to run again.
  3. No .dockerignore. node_modules, .git, .env, and coverage/ all get copied into the build context and into the image.
  4. npm install brings devDependencies. TypeScript, Jest, ESLint, and their transitive trees all end up in the production image.
  5. No USER directive. The container runs as root.
  6. CMD ["npm", "start"] breaks signals. npm spawns a child Node process and does not forward SIGTERM. Kubernetes sends SIGTERM, npm ignores it, and after 30 seconds the orchestrator sends SIGKILL — dropping every in-flight request.
  7. No HEALTHCHECK. The orchestrator has no idea when the app is actually ready.

Now let us fix every one of these.


The Optimized Multi-Stage Dockerfile

A multi-stage build uses two (or more) FROM directives. The first stage is the build stage — it has all the tools needed to install and compile dependencies. The second stage is the runtime stage — it copies only the artifacts it needs from the build stage and discards everything else. The final image contains only the runtime stage. Build tools, devDependencies, and source maps never make it into production.

# Dockerfile — production-ready Node.js image
# Multi-stage build: separate "build" and "runtime" stages so the final
# image contains only what is needed to run the app, not to build it.

# ---------- Stage 1: build ----------
# Use a specific version tag, never "latest". Alpine keeps the base small.
FROM node:20-alpine AS build

# Working directory inside the container. /app is the convention.
WORKDIR /app

# Copy ONLY the manifest files first. This is the key to layer caching:
# if package.json and package-lock.json have not changed, Docker reuses
# the cached "npm ci" layer on every subsequent build.
COPY package.json package-lock.json ./

# npm ci is faster and stricter than npm install — it requires a lockfile
# and produces a deterministic install. We need devDependencies here
# because the build step (TypeScript compile, bundling) needs them.
RUN npm ci

# Now copy the source code. Because this is a SEPARATE layer from the
# install, code-only changes do not invalidate the npm ci cache above.
COPY . .

# Run the build script (TypeScript compile, bundling, etc.). Adjust to
# whatever your project uses. The output goes to /app/dist by convention.
RUN npm run build

# Reinstall dependencies, this time without devDependencies. The result
# is a production-only node_modules that we will copy into the runtime
# stage. --omit=dev replaces the older --production flag.
RUN npm ci --omit=dev && npm cache clean --force

# ---------- Stage 2: runtime ----------
# Start from a fresh, minimal base. Nothing from the build stage carries
# over unless we explicitly COPY --from=build it.
FROM node:20-alpine AS runtime

# Install tini — a tiny init system that becomes PID 1 inside the
# container. Tini reaps zombie processes and forwards signals like
# SIGTERM to the Node.js process so graceful shutdown actually works.
RUN apk add --no-cache tini

# Set NODE_ENV so frameworks like Express enable production optimizations
# (template caching, minimal error output, etc.).
ENV NODE_ENV=production

WORKDIR /app

# Copy the production node_modules and the build output from the build
# stage. Nothing else from the build stage exists in this image — no
# source code, no devDependencies, no build tools, no .git history.
COPY --from=build --chown=node:node /app/node_modules ./node_modules
COPY --from=build --chown=node:node /app/dist ./dist
COPY --from=build --chown=node:node /app/package.json ./package.json

# Switch to the built-in non-root "node" user that ships with the
# official Node.js images. Any container escape is contained to a
# non-privileged user instead of host root.
USER node

# Document the port the app listens on. EXPOSE is informational only —
# the orchestrator still has to publish it.
EXPOSE 3000

# HEALTHCHECK lets Docker (and orchestrators that respect it) probe the
# app. The command must exit 0 when healthy, non-zero when not.
# We use a tiny inline Node script to avoid adding curl to the image.
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
  CMD node -e "require('http').get('http://localhost:3000/healthz', (r) => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"

# tini becomes PID 1 and execs our Node process as a child. Signals
# delivered to the container (SIGTERM, SIGINT) are forwarded correctly.
# Use the JSON "exec form" so there is no shell in between.
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["node", "dist/server.js"]

A few subtle but important details:

  • COPY package.json package-lock.json ./ before COPY . . is the single biggest layer-caching win. Code changes happen on every commit; dependency changes happen rarely. By copying manifests first, the slow npm ci step caches across hundreds of code-only builds.
  • npm ci --omit=dev in the build stage, followed by copying that node_modules to the runtime stage, gives you a production-only dependency tree without ever shipping devDependencies.
  • --chown=node:node on the COPY directives means the files belong to the non-root node user from the moment they land in the image — no separate chown step that doubles the layer size.
  • apk add --no-cache tini on Alpine adds tini in a single layer without leaving package index files behind.
  • The HEALTHCHECK uses a tiny inline Node script instead of curl or wget so we do not have to install another binary.

.dockerignore — The File Everyone Forgets

The .dockerignore file controls what gets sent from your local directory to the Docker daemon as the build context. Anything in the build context is fair game for COPY directives — and anything you forget to exclude can leak into image layers, slow down builds, and bake secrets into the registry.

# .dockerignore — keep the build context small and the image clean

# Version control — never goes in the image
.git
.gitignore
.gitattributes

# Local node_modules — must be installed fresh in the build stage
# so binaries match the container's Linux/musl, not your host macOS
node_modules
npm-debug.log*

# Build output from previous local builds
dist
build
coverage
.nyc_output

# Editor and OS junk
.vscode
.idea
.DS_Store
Thumbs.db

# Environment files — NEVER ship a real .env into an image
.env
.env.*
!.env.example

# Test artifacts
*.test.js
*.spec.js
__tests__
__snapshots__

# Documentation
README.md
docs
*.md

# CI and tooling
.github
.gitlab-ci.yml
.circleci
Dockerfile*
docker-compose*.yml

Three reasons every Node.js project needs this file from day one:

  1. node_modules from your host machine has the wrong binaries. If you built bcrypt or sharp on macOS and copy that node_modules into a Linux container, it crashes at startup. Excluding node_modules forces the container to install fresh ones for the right platform.
  2. .env files contain secrets. Without .dockerignore, COPY . . happily bakes your local .env into a layer that is uploaded to your registry forever.
  3. A smaller build context is a faster build. Sending 200 MB of node_modules and .git to the Docker daemon on every build is slow. A clean .dockerignore cuts the context to a few megabytes.

Graceful SIGTERM Handling in Node.js

A Docker container, when stopped, receives SIGTERM. The orchestrator then waits a grace period (Kubernetes default: 30 seconds) before sending SIGKILL. During the grace period, your app should:

  1. Stop accepting new connections.
  2. Let in-flight requests finish.
  3. Close database connections, flush logs, drain queues.
  4. Exit cleanly.

Most apps do none of this. They keep accepting requests until SIGKILL chops the process mid-response.

// server.js — graceful shutdown for HTTP servers
// Combine this with tini as PID 1 so SIGTERM actually reaches Node.

const express = require('express');
const app = express();

// A simple health endpoint that the Docker HEALTHCHECK probes.
// During shutdown, we flip a flag so this returns 503 and the load
// balancer stops sending us new traffic immediately.
let isShuttingDown = false;

app.get('/healthz', (req, res) => {
  if (isShuttingDown) {
    return res.status(503).json({ status: 'shutting_down' });
  }
  return res.status(200).json({ status: 'ok' });
});

app.get('/', async (req, res) => {
  // Simulate a request that takes a moment to finish
  await new Promise((r) => setTimeout(r, 1000));
  res.json({ hello: 'world' });
});

// Start the server and capture the returned http.Server instance so we
// can call its .close() method during shutdown.
const server = app.listen(3000, () => {
  console.log('Listening on 3000 as PID', process.pid);
});

// Graceful shutdown handler — runs on SIGTERM (Docker stop, k8s rolling
// deploy) and SIGINT (Ctrl+C in local dev).
async function shutdown(signal) {
  console.log(`Received ${signal}, starting graceful shutdown`);

  // Safety net, registered up front: if anything below hangs, we still
  // exit before the orchestrator escalates to SIGKILL. .unref() lets the
  // process exit naturally without this timer holding the event loop open.
  setTimeout(() => {
    console.error('Graceful shutdown timed out, forcing exit');
    process.exit(1);
  }, 25_000).unref();

  // 1. Flip the health flag so the load balancer marks us unhealthy and
  //    stops routing new requests within one health-check interval.
  isShuttingDown = true;

  // 2. Stop accepting NEW connections and wait for in-flight requests to
  //    finish. server.close() fires its callback once all sockets are idle.
  try {
    await new Promise((resolve, reject) => {
      server.close((err) => (err ? reject(err) : resolve()));
    });
    console.log('HTTP server closed cleanly');
  } catch (err) {
    console.error('Error during server.close', err);
    process.exit(1);
  }

  // 3. Only now, with no requests in flight, close database connections,
  //    flush logs, etc. Wrap each in its own try/catch so one failure does
  //    not block the others.
  try {
    // await db.end()
    // await redis.quit()
    // await logger.flush()
  } catch (err) {
    console.error('Error closing dependencies', err);
  }

  // 4. Exit cleanly so the orchestrator records a graceful stop.
  process.exit(0);
}

process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT'));

The key insight: without tini, none of this matters. If npm start is PID 1, it ignores SIGTERM by default (Linux protects PID 1 from signals it has no handler for). With tini as PID 1, the signal arrives at your Node process, your handler runs, and the orchestrator sees a clean exit instead of a forced kill.
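You can see the difference a handler makes without Docker at all. The sketch below sends SIGTERM to its own process as a stand-in for docker stop (POSIX signals, so Linux or macOS):

```javascript
// With a handler registered, SIGTERM produces an orderly exit instead of
// the default termination. In a container, "docker stop" or a Kubernetes
// rolling deploy is what delivers this signal.
process.on('SIGTERM', () => {
  console.log('SIGTERM received, shutting down cleanly');
  process.exit(0);
});

// Stand-in for the orchestrator: deliver the signal to ourselves.
process.kill(process.pid, 'SIGTERM');

// Keep the event loop alive long enough for the signal to arrive.
setTimeout(() => console.log('never reached'), 5000);
```

Run it with node and it logs the shutdown line and exits 0; comment out the handler and the same signal kills the process on the spot.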


docker-compose for Local Development

In production you ship a single image to Kubernetes. In local development you want the same image plus its supporting services — a database, a cache, maybe a message broker. docker-compose declares the entire dev stack in one YAML file and brings it up with a single command.

# docker-compose.yml — local development stack
# Run with: docker compose up
# The "app" service is rebuilt from your local Dockerfile; the database
# and cache use stock images so you do not have to install them locally.

services:
  app:
    # Build from the Dockerfile in the current directory.
    # Target the build stage explicitly so you get devDependencies and
    # source maps for local debugging — production deploys still use
    # the runtime stage via "docker build" without --target.
    build:
      context: .
      dockerfile: Dockerfile
      target: build
    # Mount the source directory so code changes are visible without
    # rebuilding. The /app/node_modules volume prevents the host's
    # node_modules from shadowing the container's installed copy.
    volumes:
      - ./src:/app/src
      - /app/node_modules
    ports:
      - '3000:3000'
    environment:
      NODE_ENV: development
      DATABASE_URL: postgres://app:app@postgres:5432/app_dev
      REDIS_URL: redis://redis:6379
      LOG_LEVEL: debug
    # Wait for postgres and redis to be healthy before starting the app.
    # condition: service_healthy requires the dependencies to define
    # their own healthchecks (see below).
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    # Run the dev script with file watching instead of the production CMD
    command: ['npm', 'run', 'dev']

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app_dev
    # Persist data across "docker compose down" runs
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - '5432:5432'
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U app -d app_dev']
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    ports:
      - '6379:6379'
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
      interval: 5s
      timeout: 3s
      retries: 5

# Named volume so postgres data survives container recreation
volumes:
  postgres_data:

A few things this setup gets right:

  • depends_on with condition: service_healthy means the app does not start until Postgres and Redis are actually accepting connections. Without it, your app boots first, fails its DB connection, crashes, restarts, and races the database forever.
  • The volume mount /app/node_modules is an anonymous volume that shadows the host's node_modules directory inside the container. This prevents your macOS-built native modules from clobbering the Linux ones installed during the image build.
  • target: build uses the build stage of the multi-stage Dockerfile, which still has devDependencies and source maps. Production deploys do not pass --target, so they get the runtime stage.

Image Size — From 1.4 GB to 120 MB

Image size matters for three reasons: pull time on every deploy, registry storage costs, and attack surface. Smaller images have fewer CVEs because they ship fewer packages.

+---------------------------------------------------------------+
|           IMAGE SIZE OPTIMIZATION LADDER                      |
+---------------------------------------------------------------+
|                                                                |
|  node:20            ->  ~1.1 GB   (Debian, full build tools)  |
|  node:20-slim       ->  ~240 MB   (Debian minimal)            |
|  node:20-alpine     ->  ~180 MB   (Alpine, musl libc)         |
|  distroless/nodejs  ->  ~150 MB   (no shell, no package mgr)  |
|                                                                |
|  + multi-stage build       -> -200 MB (drop devDependencies)  |
|  + npm ci --omit=dev       -> -50 MB                          |
|  + npm cache clean --force -> -30 MB                          |
|  + .dockerignore            -> -varies (no .git, no tests)    |
|                                                                |
|  Final: ~120 MB for a typical Express + Postgres app          |
|                                                                |
+---------------------------------------------------------------+

Alpine uses musl libc instead of glibc, which is smaller and has fewer dependencies. Most Node.js apps work on Alpine without changes. The exception is native modules that link against glibc — if you hit one, switch that one project to node:20-slim (Debian minimal) instead.

Distroless images from Google contain only the language runtime and your app — no shell, no package manager, no apt, no apk, nothing to exec into for an attacker. They are slightly smaller than Alpine and significantly more secure, but you cannot docker exec -it ... sh into them for debugging. For production, distroless is the gold standard. For dev, alpine is friendlier.
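If you go the distroless route, the runtime stage changes shape. A hedged sketch, assuming a build stage like the one above; image names and tags follow Google's distroless project and change over time, so verify before relying on them:

```dockerfile
# ---------- Stage 2 (distroless variant) ----------
FROM gcr.io/distroless/nodejs20-debian12 AS runtime
ENV NODE_ENV=production
WORKDIR /app
COPY --from=build --chown=nonroot:nonroot /app/node_modules ./node_modules
COPY --from=build --chown=nonroot:nonroot /app/dist ./dist
# Distroless images ship a "nonroot" user (uid 65532); the "node" user
# from the official Node images does not exist here.
USER nonroot
# The distroless nodejs image already sets "node" as its entrypoint, so
# CMD is only the script and its arguments. There is no shell in the
# image, so the exec (JSON) form is mandatory.
CMD ["dist/server.js"]
```

Note there is no apk and no tini here. A registered process.on('SIGTERM') handler still fires even when Node is PID 1, so graceful shutdown keeps working; what you lose is zombie reaping, so pass --init to docker run if your app spawns child processes.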


Common Mistakes

1. Running as root. The default user inside most base images is root. A container escape, an exploited dependency, or a malicious npm package now has root inside the container — and depending on your runtime configuration, possibly on the host. Add USER node (the official Node images ship with a pre-created node user) and never run production containers as root. CIS benchmarks and every container security scanner flag this on day one.

2. Copying source before installing dependencies. COPY . . followed by RUN npm ci means every code change invalidates the npm cache layer, and every build re-downloads and re-installs your entire dependency tree. The fix is to COPY package.json package-lock.json ./ first, run npm ci, and only then COPY . . for the source. This single change can take builds from three minutes to ten seconds for code-only edits.

3. Missing .dockerignore. Without it, COPY . . slurps in your local node_modules (with the wrong native binaries), your .git history (megabytes of compressed history), your .env file (secrets in a published image), your coverage/ reports, and your IDE config. Every Node.js repo needs a .dockerignore from the very first commit, and the bare minimum should exclude node_modules, .git, .env*, and dist.

4. Not handling SIGTERM in the Node process. Kubernetes and Docker send SIGTERM when stopping a container, then wait a grace period before sending SIGKILL. If your Node process has no SIGTERM handler, it ignores the signal entirely (Node has no default handler for it), and the orchestrator hard-kills you 30 seconds later — dropping every in-flight request. Add a process.on('SIGTERM', ...) handler that calls server.close() to drain in-flight requests and closes database pools cleanly.

5. Using node (or npm start) as PID 1 without tini. Linux gives PID 1 special treatment — it ignores signals it has no handler for, and it is responsible for reaping zombie child processes. Node.js was not designed to be PID 1 and does neither correctly. npm start is even worse because it spawns a child process and does not forward signals to it at all. The fix is tini (or dumb-init) as PID 1: ENTRYPOINT ["/sbin/tini", "--"] followed by CMD ["node", "server.js"]. Tini reaps zombies, forwards signals, and exits with your Node process's exit code.


Interview Questions

1. "Why use a multi-stage Docker build for Node.js? What goes in each stage?"

A multi-stage build separates the environment that builds the app from the environment that runs it. The build stage starts from a Node.js image, copies the source, installs all dependencies including devDependencies, runs the build script (TypeScript compilation, bundling, asset generation), and then reinstalls dependencies with --omit=dev to produce a production-only node_modules. The runtime stage starts from a fresh minimal base image, copies only the production node_modules and the built output from the build stage, sets NODE_ENV=production, switches to a non-root user, and defines the entrypoint and healthcheck. The benefit is that the final image contains zero build tools, zero devDependencies, zero source maps, and zero intermediate artifacts. A typical Express app drops from 1.4 GB to around 120 MB this way, with smaller attack surface, faster pulls, and fewer CVEs flagged by security scanners.

2. "Why should a Docker container not run as root, and how do you fix it for Node.js?"

Containers are isolated by Linux namespaces, but the isolation is not perfect. A kernel vulnerability, a misconfigured volume mount, or a privileged escape can grant the container's user real privileges on the host. If that user is root inside the container, they get root outside too. Beyond escapes, root inside the container also means any vulnerability in your dependencies — a malicious npm package, an RCE bug in a library — runs with full filesystem write access to anything the container can see. The fix for Node.js is straightforward: the official node:* base images ship with a pre-created unprivileged node user. Add USER node near the end of your Dockerfile, use --chown=node:node on COPY directives so the files are owned by that user, and your runtime drops privileges before the first line of JavaScript executes. Container security scanners and CIS benchmarks flag root containers on day one — this is table stakes.

3. "Why do Node.js containers need tini? What happens if you do not use it?"

Linux gives PID 1 — the first process inside a PID namespace — two special responsibilities. First, it ignores any signal it does not have an explicit handler for. Second, it is responsible for reaping zombie child processes. Node.js is not designed to be PID 1 and does neither well. By default, Node.js does not handle SIGTERM, so when Kubernetes stops a container and sends SIGTERM, your Node process ignores it and gets SIGKILL'd 30 seconds later — dropping every in-flight request. And if you spawn child processes from Node, their exit statuses pile up as zombies because Node does not reap them. Tini is a tiny init system (about 24 KB) that becomes PID 1 instead. It registers signal handlers that forward SIGTERM, SIGINT, and friends to your Node process as a regular child signal, and it reaps zombies properly. With tini as ENTRYPOINT ["/sbin/tini", "--"] and Node as CMD, your graceful shutdown handler actually runs and rolling deploys stop dropping requests. Without tini, you can still set --init on docker run, which injects a similar shim, but inside the Dockerfile is the portable answer.

4. "How does graceful shutdown work in Kubernetes, and what does the Node.js code need to do?"

When Kubernetes decides to stop a pod (rolling deploy, scale-down, node drain), it does three things in order. First, it removes the pod from the service's endpoints list so the load balancer stops sending new traffic. Second, it calls the pod's preStop hook if one is defined and waits for it to complete. Third, it sends SIGTERM to PID 1 inside the container and starts a grace period (default 30 seconds, controlled by terminationGracePeriodSeconds). After the grace period expires, SIGKILL is sent and the pod is forcibly killed. The Node.js code needs to do four things during the grace period: stop the readiness probe from returning healthy (so any straggler load balancer stops routing new traffic), call server.close() so the HTTP server stops accepting new connections but lets in-flight requests finish, close database pools and flush logs, and exit cleanly. Wrap the whole thing in a setTimeout(..., 25000).unref() safety net so a hung dependency does not block the shutdown past the grace period. Combine this with tini as PID 1 so SIGTERM actually reaches your handler in the first place.
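That sequence can be pinned down as a pod spec fragment. The field names are standard Kubernetes; the image tag and sleep duration are illustrative:

```yaml
spec:
  # How long Kubernetes waits between SIGTERM and SIGKILL (default 30s).
  terminationGracePeriodSeconds: 30
  containers:
    - name: app
      image: my-app:1.2.3        # illustrative
      lifecycle:
        preStop:
          exec:
            # Brief pause so endpoint removal propagates to every load
            # balancer before SIGTERM reaches the process.
            command: ['sleep', '5']
```

The preStop sleep papers over the race between endpoint removal and signal delivery; the in-process drain logic still does the real work.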

5. "How would you reduce the size of a Node.js Docker image that is currently 1.4 GB?"

I would attack it on five fronts in order of impact. First, switch the base image from node:20 to node:20-alpine (or gcr.io/distroless/nodejs20 for production), which alone drops the base from 1.1 GB to about 180 MB. Second, convert to a multi-stage build where the final stage copies only the production node_modules and the built artifacts from the build stage — this eliminates devDependencies, build tools, source code, and intermediate files. Third, run npm ci --omit=dev && npm cache clean --force in the stage that produces the production deps, so the npm cache does not leak into the layer. Fourth, add a strict .dockerignore that excludes node_modules, .git, .env*, dist, coverage, *.md, and tests — this both shrinks the build context and prevents accidental file leaks. Fifth, audit dependencies with npm ls --prod --depth=0 and remove anything that is not actually used at runtime; transitively, this often drops another 20-30 MB. After all five, a typical Express + Postgres app lands around 120 MB, pulls in a couple of seconds, and shows almost no CVEs in container scanners.


Quick Reference — Dockerizing Node.js Cheat Sheet

+---------------------------------------------------------------+
|           DOCKERFILE CHECKLIST                                |
+---------------------------------------------------------------+
|                                                                |
|  BASE IMAGE:                                                   |
|   - Use node:20-alpine or distroless                           |
|   - Pin the version, never use "latest"                        |
|                                                                |
|  LAYER ORDER:                                                  |
|   1. COPY package.json package-lock.json                       |
|   2. RUN npm ci                                                |
|   3. COPY . .                                                  |
|   4. RUN npm run build                                         |
|                                                                |
|  MULTI-STAGE:                                                  |
|   - Build stage installs all deps and compiles                 |
|   - Runtime stage copies only dist + prod node_modules         |
|   - npm ci --omit=dev for production deps                      |
|                                                                |
|  SECURITY:                                                     |
|   - USER node  (never root)                                    |
|   - --chown=node:node on COPY                                  |
|   - No secrets in layers, no .env in image                     |
|                                                                |
|  SIGNALS:                                                      |
|   - apk add --no-cache tini                                    |
|   - ENTRYPOINT ["/sbin/tini", "--"]                            |
|   - CMD ["node", "dist/server.js"]                             |
|   - process.on('SIGTERM', shutdown) in code                    |
|                                                                |
|  HEALTHCHECK:                                                  |
|   - HEALTHCHECK with inline node http probe                    |
|   - /healthz endpoint flips to 503 during shutdown             |
|                                                                |
|  .dockerignore: node_modules .git .env* dist coverage          |
|                                                                |
+---------------------------------------------------------------+

+---------------------------------------------------------------+
|           KEY RULES                                            |
+---------------------------------------------------------------+
|                                                                |
|  1. Multi-stage builds are non-negotiable                      |
|  2. Pin the base image version                                 |
|  3. Copy package.json before source for layer caching          |
|  4. Always have a .dockerignore                                |
|  5. Run as the non-root "node" user                            |
|  6. Use tini as PID 1 so SIGTERM works                         |
|  7. Handle SIGTERM in Node and drain via server.close()        |
|  8. Define a HEALTHCHECK and a /healthz endpoint               |
|  9. NODE_ENV=production in the runtime stage                   |
| 10. docker-compose with healthchecks for local dev             |
|                                                                |
+---------------------------------------------------------------+
Concern         | Wrong Way                 | Right Way
----------------+---------------------------+----------------------------
Base image      | FROM node:latest          | FROM node:20-alpine
Build approach  | Single stage              | Multi-stage build
Layer caching   | COPY . . first            | COPY package*.json first
Dependencies    | npm install               | npm ci --omit=dev
User            | root (default)            | USER node
PID 1           | node or npm start         | tini -- node server.js
Signals         | Ignored                   | process.on('SIGTERM', ...)
Shutdown        | Hard-kill at SIGKILL      | server.close() drain
Health probe    | None                      | HEALTHCHECK + /healthz
Build context   | Everything                | Strict .dockerignore
Local dev       | Install Postgres on host  | docker compose up
Image size      | 1.4 GB                    | ~120 MB

Prev: Lesson 10.3 -- Environment Configuration
Next: Lesson 10.5 -- Node.js Interview Questions


This is Lesson 10.4 of the Node.js Interview Prep Course -- 10 chapters, 42 lessons.
