Next.js Interview Prep
Deployment and Production

Deploying Next.js -- Vercel vs Self-Hosting

LinkedIn Hook

"Your Next.js app runs perfectly on localhost. You push to production. Image optimization is broken, ISR stopped revalidating, and your cold starts are 4 seconds long."

What happened? You self-hosted a framework that was designed around a specific platform's runtime primitives -- and nobody told you which features survive the jump.

Next.js is deceptively portable. You can deploy it to Vercel with one click, run it on a $5 VPS with next start, package it into a Docker image, or export a fully static site to S3. But every target has a different contract: different caching, different image handling, different ISR semantics, different support for server actions and edge runtime.

The interview question isn't "how do you deploy Next.js?" It's "why did you pick that host, and what did you lose by picking it?"

In Lesson 8.3, I break down the four main deployment paths -- Vercel, Node self-host, Docker standalone, and static export -- and the tradeoffs between edge and Node runtimes that decide your P95 latency.

Read the full lesson -> [link]

#NextJS #DevOps #Deployment #Vercel #Docker #Kubernetes #InterviewPrep



What You'll Learn

  • Why Vercel is the "reference platform" for Next.js and what zero-config actually means
  • How preview deployments, edge functions, and ISR work on Vercel
  • How to self-host Next.js with next start on a Node.js server
  • How to containerize Next.js using output: 'standalone' for minimal Docker images
  • How static export (output: 'export') works and which features it silently disables
  • The real tradeoffs between Vercel, Netlify, AWS, Docker/Kubernetes, and Cloudflare Pages
  • Edge runtime vs Node runtime -- latency, cold starts, and API compatibility
  • Why ISR and the Data Cache behave differently when you self-host

The Hotel vs House Analogy

Deploying Next.js is like choosing where to live.

Vercel is a five-star hotel. You walk in with a suitcase (your Git repo), and everything is done for you. Fresh sheets every night (preview deployments per PR). Room service that appears instantly (edge functions). The front desk forwards your mail globally (automatic CDN). The price of convenience is that you can't knock down walls -- you follow the hotel's rules about what you can bring in, how long requests can run, and which kitchen equipment is allowed.

Self-hosting with Node.js is owning a house. You pick the neighborhood (AWS, GCP, a closet under your desk). You control every outlet and every pipe. You can paint the walls any color -- run custom background workers, use any database driver, bind to any port. But you also mow the lawn: patching the OS, rotating certificates, configuring load balancers, and debugging why the cache invalidated at 3 AM.

Docker is a prefab house on a trailer. You build it once in a controlled factory (your CI pipeline) and drop it on any lot -- Kubernetes, ECS, a bare-metal server, a developer's laptop. It runs the same everywhere, which makes it the standard for teams that need reproducibility across environments.

Static export is a photograph of your house. Beautiful, lightweight, instantly deliverable, but completely frozen. You can mail copies to anyone (S3, Cloudflare Pages, GitHub Pages), and they look identical. But you can't change a light bulb remotely -- there's no server, so anything that needs to run code at request time (ISR, server actions, image optimization) is gone.

+---------------------------------------------------------------+
|           FOUR WAYS TO SHIP NEXT.JS                           |
+---------------------------------------------------------------+
|                                                               |
|   VERCEL            -> Hotel   (zero-config, managed)         |
|   NODE SELF-HOST    -> House   (full control, full chores)    |
|   DOCKER STANDALONE -> Prefab  (portable, reproducible)       |
|   STATIC EXPORT     -> Photo   (frozen, cheapest to serve)    |
|                                                               |
|   Picking wrong means paying for things you don't use         |
|   -- OR losing features you thought you had.                  |
|                                                               |
+---------------------------------------------------------------+

Path 1 -- Deploying to Vercel

Vercel is the company that created Next.js, so every framework feature is supported on day one. "Zero-config" isn't marketing -- it means Vercel reads your next.config.js and automatically maps every feature (ISR, server actions, middleware, edge functions, image optimization) to the right infrastructure primitive without you writing a single YAML file.

The Zero-Config Deploy

# Option A: connect a Git repository
# 1. Push your Next.js app to GitHub/GitLab/Bitbucket
# 2. Click "Import Project" in the Vercel dashboard
# 3. Vercel detects Next.js and sets build command to "next build"
# 4. Every push to main triggers a production deployment
# 5. Every push to a branch triggers a preview deployment

# Option B: deploy from the CLI
npm install -g vercel
vercel           # creates a preview deployment
vercel --prod    # promotes to production

No Dockerfile. No build pipeline. No load balancer. Vercel compiles your app, splits it into static assets + serverless functions + edge functions, pushes static content to its global CDN, and routes dynamic requests to the closest region.

What Vercel Gives You Automatically

Preview deployments per pull request. Every PR gets its own isolated URL (your-app-git-feature-xyz.vercel.app). Designers and product managers can review the exact branch without running anything locally. The preview URL posts as a comment on the PR.

Edge functions. Routes marked with export const runtime = 'edge' run on Vercel's Edge Network -- V8 isolates distributed to 30+ regions. Cold start is ~5ms (vs ~200ms for a Node Lambda). Perfect for auth middleware, A/B tests, and geolocation-based redirects.
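As a sketch of the kind of logic that belongs on the edge, here is a geolocation redirect decision pulled into a plain function so it runs without Node APIs. The country rule and paths are made-up examples; in middleware.js you would feed it request.geo?.country and request.nextUrl.pathname, then return NextResponse.redirect() when it yields a target:

```javascript
// Edge-friendly routing decision: pure, no Node APIs, no filesystem.
// The locale rule below is purely illustrative.
function geoRedirectTarget(country, pathname) {
  // Hypothetical rule: send German visitors into the /de locale tree
  if (country === 'DE' && !pathname.startsWith('/de')) {
    return '/de' + pathname;
  }
  return null; // no redirect needed
}
```

Keeping the decision pure also makes it trivial to unit test, which matters because edge middleware runs on every matched request.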

Incremental Static Regeneration (ISR). Vercel's CDN natively understands Next.js's revalidate directive. When a cached page expires, the first request triggers a background regeneration while serving the stale page -- true stale-while-revalidate, globally distributed, with no configuration.
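The stale-while-revalidate behavior is worth being able to state precisely. A toy model of the decision for a page built with revalidate = 60 -- this is not Vercel's actual code, just the semantics:

```javascript
// Toy model of ISR's stale-while-revalidate semantics.
// Not real Next.js internals -- just the observable behavior.
function isrServe(cachedAtMs, revalidateSeconds, nowMs) {
  const ageSeconds = (nowMs - cachedAtMs) / 1000;
  if (ageSeconds <= revalidateSeconds) {
    return { body: 'cached', regenerateInBackground: false }; // fresh hit
  }
  // Expired: the visitor still gets the cached page instantly,
  // and a background rebuild refreshes the cache for the next visitor.
  return { body: 'cached', regenerateInBackground: true };
}
```

Note that no visitor ever waits for a rebuild: the first request after expiry is served stale and merely triggers regeneration.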

Automatic Image Optimization. next/image routes through Vercel's image optimizer, which resizes and converts images to WebP/AVIF on the fly and caches them on the CDN.

// next.config.js -- Vercel deployment
// Nothing special needed. Vercel reads this file directly.
/** @type {import('next').NextConfig} */
const nextConfig = {
  // Optional: enable experimental features
  experimental: {
    serverActions: { bodySizeLimit: '2mb' },
  },
  // Image domains for external images
  images: {
    remotePatterns: [
      { protocol: 'https', hostname: 'cdn.example.com' },
    ],
  },
};

module.exports = nextConfig;

The Vercel Tradeoff

You're tied to Vercel's pricing model (function invocations, bandwidth, image transformations) and their execution limits (serverless functions cap at 10s on the free tier, 60s on Pro, 300s on Enterprise). For a side project or startup, this is almost always cheaper and faster than self-hosting. For a high-traffic enterprise app with unusual compute needs, the bill can surprise you.
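If one specific route needs more than the default timeout, Next.js supports raising it per route with a segment config export (capped by your plan's ceiling). The route path below is hypothetical:

```javascript
// app/api/slow/route.js -- hypothetical long-running route.
// Ask the platform for up to 60 seconds for this route only.
export const maxDuration = 60;

export async function GET() {
  // ... long-running work would go here
  return Response.json({ ok: true });
}
```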


Path 2 -- Self-Hosting with Node.js

Next.js ships with a production server you can run anywhere Node.js runs. This is the path for teams that already have infrastructure, need to run inside a VPC, or want to avoid platform lock-in.

The Basic Flow

# 1. Build the app once (creates the .next directory)
npm run build

# 2. Start the production server on port 3000
npm run start

# Behind the scenes, these map to:
# next build  -> compiles pages, bundles JS, pre-renders static HTML
# next start  -> boots a Node.js HTTP server that serves .next output

// package.json
{
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start -p 3000",
    "start:prod": "NODE_ENV=production next start -p $PORT"
  }
}

Putting It Behind a Reverse Proxy

You almost never expose next start directly to the internet. A reverse proxy (Nginx, Caddy, HAProxy) handles TLS termination, HTTP/2, and static asset caching:

# /etc/nginx/sites-available/my-app
server {
  listen 443 ssl http2;
  server_name example.com;

  ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

  # Let Nginx serve Next's static assets directly (faster than proxying)
  location /_next/static/ {
    alias /var/www/my-app/.next/static/;
    expires 1y;
    add_header Cache-Control "public, immutable";
  }

  # Proxy everything else to the Node.js process
  location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}
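If you'd rather not manage certificates by hand, Caddy gets you the same proxy with automatic Let's Encrypt provisioning in a few lines (the domain is a placeholder):

```
# Caddyfile -- TLS certificates are provisioned and renewed automatically
example.com {
    reverse_proxy 127.0.0.1:3000
}
```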

What You Lose (and What You Must Rebuild)

Self-hosting Node works, but you inherit responsibility for things Vercel hides:

  • Image optimization still works because Next.js bundles a Sharp-based optimizer, but it runs on your server's CPU. For high traffic, you'll want to offload to a CDN.
  • ISR and the Data Cache write to the local filesystem by default. If you run more than one instance (horizontal scaling), each instance has its own cache -- stale data diverges across pods. You need a shared cache handler (see the "Caching" section below).
  • Process management. next start does not daemonize. You need pm2, systemd, or a container orchestrator to keep it alive and restart on crash.
  • Zero-downtime deploys. You need a rolling update strategy (blue/green, canary) -- Vercel does this for free.

Path 3 -- Docker with output: 'standalone'

Standard next build leaves your app dependent on the full node_modules directory -- hundreds of megabytes of packages, many of them dev dependencies or transitive packages never imported at runtime. Docker images built this way easily land at 1GB+.

Next.js has a built-in solution: output: 'standalone'. At build time, it traces exactly which files are required at runtime and copies them into .next/standalone, along with a minimal server.js and a pruned node_modules. The result is a self-contained directory you can ship in a 150MB image.

Enabling Standalone Output

// next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  // Creates .next/standalone with only the files needed to run the app
  output: 'standalone',

  // Optional: disable the built-in image optimizer if you use a CDN instead
  // images: { unoptimized: true },
};

module.exports = nextConfig;

The Multi-Stage Dockerfile

# syntax=docker/dockerfile:1.6
# Stage 1: install dependencies
FROM node:20-alpine AS deps
WORKDIR /app
# Copy only the manifest files first to leverage Docker layer caching
COPY package.json package-lock.json ./
RUN npm ci

# Stage 2: build the Next.js app
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Build creates .next/standalone and .next/static
RUN npm run build

# Stage 3: final runtime image (tiny)
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
ENV PORT=3000
ENV HOSTNAME=0.0.0.0

# Create a non-root user for security
RUN addgroup --system --gid 1001 nodejs \
 && adduser  --system --uid 1001 nextjs

# Copy the standalone output (tiny server.js + pruned deps)
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
# Static assets are NOT inside standalone -- copy them separately
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
# Public folder (images, favicon, etc.)
COPY --from=builder --chown=nextjs:nodejs /app/public ./public

USER nextjs
EXPOSE 3000

# server.js is the entry point generated by standalone output
CMD ["node", "server.js"]

Key things to notice:

  1. The final image has no npm, no TypeScript, no build tools -- just Node.js and the traced runtime files.
  2. .next/static and public/ are copied separately because standalone output does not include them (they are meant to be served by a CDN, or copied explicitly).
  3. HOSTNAME=0.0.0.0 is required inside containers -- the default (localhost) won't accept connections from outside the container.
  4. Running as a non-root user is standard container security hygiene.
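A .dockerignore is the usual companion to this Dockerfile: it keeps the build context small and stops local artifacts from invalidating layer caches. The entries below are typical, not exhaustive:

```
# .dockerignore -- keep the build context small and the layer cache stable
node_modules
.next
.git
*.md
.env*
```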

Building and Running

# Build the image
docker build -t my-next-app:latest .

# Run locally
docker run -p 3000:3000 --env-file .env.production my-next-app:latest

# Check image size -- should be ~150-250 MB with standalone output
docker images my-next-app

Path 4 -- Static Export (output: 'export')

Static export turns your Next.js app into a folder of plain HTML, CSS, and JS files. No server. You can host the output on S3, Cloudflare Pages, GitHub Pages, Nginx, or any file server on earth.

// next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  // Generates an "out/" directory with pure static files
  output: 'export',

  // next/image's optimizer needs a server, so disable it for static export
  images: { unoptimized: true },

  // Optional: add a trailing slash to generate /about/index.html
  // (better compatibility with static hosts like S3)
  trailingSlash: true,
};

module.exports = nextConfig;

# Build -> creates ./out with all static files
npm run build

# Preview locally
npx serve out

What Static Export Silently Disables

This is the single most common source of "wait, why isn't this working in production?" bugs. When you enable output: 'export', Next.js refuses to use -- or silently degrades -- the following features:

  • No ISR. There's no server to revalidate. Pages are fixed at build time. To update content, you re-run the build.
  • No Server Actions. They require a runtime. Forms must POST to a separate API.
  • No next/image optimization. You must set images.unoptimized: true, which means the browser downloads full-resolution originals.
  • No Route Handlers (API routes). Nothing to execute them.
  • No Middleware. Middleware runs per-request, and there's no per-request runtime.
  • No dynamic routes without generateStaticParams. Every route must be enumerable at build time.
  • No headers(), cookies(), or request-time data. Everything is frozen.
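For dynamic routes, "enumerable at build time" means every path must come from generateStaticParams. A sketch -- the slugs are hard-coded purely for illustration; a real app would read them from a CMS or the filesystem during the build:

```javascript
// app/blog/[slug]/page.js -- static export requires enumerable params.
// Slugs here are hypothetical.
export function generateStaticParams() {
  const slugs = ['hello-world', 'shipping-nextjs'];
  return slugs.map((slug) => ({ slug }));
}

export default function Page({ params }) {
  // A real page would return JSX; a string keeps this sketch dependency-free.
  return `Post: ${params.slug}`;
}
```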

Static export is the right answer for documentation sites, marketing pages, and blogs with fully build-time content. It's the wrong answer for anything that needs fresh data, user authentication, or forms.


Edge Runtime vs Node Runtime

Next.js lets you choose which runtime executes each route or middleware file. The choice matters enormously for latency and cost.

// app/api/hello/route.ts
// Run this handler on the Edge runtime (V8 isolates, global)
export const runtime = 'edge';

export async function GET() {
  return Response.json({ hello: 'world' });
}

// app/api/report/route.ts
// Default -- runs on Node.js (full npm ecosystem, regional)
export const runtime = 'nodejs';

import { PDFDocument } from 'pdf-lib'; // Node-only package

export async function POST(request: Request) {
  // ... generate a PDF report
}

+---------------------------------------------------------------+
|          EDGE RUNTIME vs NODE RUNTIME                         |
+---------------------------------------------------------------+
|                                                                |
|                  EDGE                  NODE                    |
|  Cold start:     ~5 ms                 ~100-300 ms             |
|  Distribution:   30+ regions (global)  1 region per instance   |
|  Size limit:     1-4 MB (tight)        250 MB+ (lambdas)       |
|  APIs available: Web standard (fetch)  Full Node.js stdlib     |
|  npm packages:   Pure JS only          Any package             |
|  File system:    None                  fs module works         |
|  Streaming:      Yes (first-class)     Yes                     |
|                                                                |
|  USE EDGE FOR:                                                 |
|  - Middleware (auth, redirects, A/B tests)                     |
|  - Simple JSON APIs                                            |
|  - Low-latency user-location-aware routes                      |
|                                                                |
|  USE NODE FOR:                                                 |
|  - Anything needing fs, crypto, or native modules              |
|  - PDF generation, image processing, DB drivers                |
|  - Routes that depend on the npm ecosystem                     |
|                                                                |
+---------------------------------------------------------------+

Caching Considerations When Self-Hosting

On Vercel, the Data Cache and ISR are backed by an internal distributed cache that works across regions. When you self-host, the default cache handler writes to the local filesystem (.next/cache). This is fine for a single instance but breaks in two scenarios:

  1. Multiple replicas. Two pods behind a load balancer each have their own .next/cache folder. If pod A revalidates a page, pod B still serves the stale version. Users refreshing see the page flip-flop.
  2. Ephemeral filesystems. Containers on Kubernetes, ECS, or Cloud Run have disks that vanish on restart. Cache cold-starts on every deploy.

The fix is a custom cache handler that points to a shared store like Redis:

// next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'standalone',

  // Delegate ISR + Data Cache to a shared backend instead of the filesystem
  cacheHandler: require.resolve('./cache-handler.js'),
  cacheMaxMemorySize: 0, // disable in-memory LRU -- go straight to Redis
};

module.exports = nextConfig;

// cache-handler.js (skeleton -- use @neshca/cache-handler in real projects)
// A custom cache handler implements get/set/revalidateTag against Redis.
const { createClient } = require('redis');

const client = createClient({ url: process.env.REDIS_URL });
client.connect().catch(console.error);

module.exports = class CacheHandler {
  async get(key) {
    const value = await client.get(key);
    return value ? JSON.parse(value) : null;
  }
  async set(key, data, ctx) {
    await client.set(key, JSON.stringify({ value: data, lastModified: Date.now() }));
    // Index this key under each of its tags so revalidateTag can find it later
    for (const tag of ctx?.tags ?? []) {
      await client.sAdd(`tag:${tag}`, key);
    }
  }
  async revalidateTag(tag) {
    // Invalidate all keys associated with this tag
    const keys = await client.sMembers(`tag:${tag}`);
    if (keys.length) await client.del(keys);
  }
};

Without this, self-hosted Next.js works for a single pod -- but scales poorly the moment you add replicas.


Choosing a Host -- The Honest Tradeoff Matrix

+------------------------------------------------------------------------+
|                    NEXT.JS HOSTING OPTIONS                             |
+------------------------------------------------------------------------+
|                                                                         |
|  VERCEL                                                                 |
|   + Every feature works out of the box                                  |
|   + Preview deploys, ISR, edge, image opt -- zero config                |
|   + Global CDN included                                                 |
|   - Pricing scales with traffic (can get expensive)                     |
|   - Serverless function time limits                                     |
|   - Vendor lock-in for platform primitives                              |
|                                                                         |
|  NETLIFY                                                                |
|   + Similar DX to Vercel, supports most Next features via adapter       |
|   + Good free tier                                                      |
|   - ISR and edge support lag slightly behind Vercel                     |
|   - Some newer features require workarounds                             |
|                                                                         |
|  AWS (EC2 / ECS / Lambda / Amplify)                                     |
|   + Full control, integrates with rest of AWS (RDS, S3, VPC)            |
|   + Enterprise-grade compliance options                                 |
|   - You build the pipeline, the caching layer, and the CDN              |
|   - Amplify has quirks; Lambda has cold starts and size limits          |
|                                                                         |
|  DOCKER / KUBERNETES                                                    |
|   + Maximum portability and reproducibility                             |
|   + Runs anywhere K8s runs (GKE, EKS, self-managed)                     |
|   - You own scaling, caching handler, and zero-downtime deploys         |
|   - Heaviest operational burden                                         |
|                                                                         |
|  CLOUDFLARE PAGES (with @cloudflare/next-on-pages)                      |
|   + Free global edge network, fast cold starts                          |
|   + Great for edge-first apps                                           |
|   - Node.js API compatibility is limited (workerd, not Node)            |
|   - ISR support is newer; some features still unsupported               |
|                                                                         |
|  STATIC EXPORT (S3 / GitHub Pages / any static host)                    |
|   + Cheapest possible hosting, trivial to scale                         |
|   - No ISR, no server actions, no middleware, no image optimization     |
|   - Only viable for fully static content                                |
|                                                                         |
+------------------------------------------------------------------------+

Common Mistakes

1. Deploying with next dev instead of next build && next start. next dev runs an unoptimized development server with HMR, verbose logging, and no minification -- it is dramatically slower, uses far more memory, and exposes your unminified source. Production always means next build followed by next start.

2. Forgetting to copy .next/static and public/ into the Docker image. With output: 'standalone', the generated server.js expects static files to be adjacent to it. If you only copy .next/standalone, every CSS/JS chunk returns 404 in production. Always copy .next/static and public/ as separate COPY steps.

3. Using output: 'export' with features that require a server. Developers enable static export for "performance," then discover middleware stopped running, image optimization broke, and ISR pages are frozen. Static export is a strict subset of Next.js -- audit your feature list before enabling it.

4. Running multiple self-hosted replicas without a shared cache handler. The default filesystem-backed cache isolates per-instance. ISR works, but each pod revalidates independently, so users see inconsistent content. Configure a Redis-backed cache handler before scaling horizontally.

5. Assuming Edge runtime is always faster. Edge has lower cold starts but lacks most of Node's APIs. Developers pick runtime = 'edge' for an API route, then can't use their database driver, can't read from the filesystem, and end up proxying to another service. Use edge for middleware and simple Web-API handlers; use Node for anything that touches the npm ecosystem.


Interview Questions

1. "Why is Vercel called the 'reference platform' for Next.js, and what does zero-config deploy actually mean?"

Vercel is built by the same company that builds Next.js, so every Next.js feature -- ISR, server actions, middleware, edge functions, next/image optimization, preview deployments -- is implemented first on Vercel. "Zero-config" means Vercel reads your next.config.js, detects which features you're using, and automatically maps each one to its internal infrastructure: static pages go to the global CDN, dynamic routes become serverless functions, middleware runs on the edge network, and images route through the image optimizer. You don't write a Dockerfile, a CI pipeline, or load balancer rules. The tradeoff is pricing -- function invocations and bandwidth are billed per use, which can be cheaper than self-hosting at low-to-medium scale but expensive at very high scale.

2. "What does output: 'standalone' do, and why is it important for Docker deployments?"

output: 'standalone' tells next build to trace exactly which files are needed at runtime and copy them into .next/standalone, along with a minimal server.js entry point and a pruned node_modules containing only runtime dependencies. Without it, shipping Next.js in Docker requires copying the full node_modules directory -- often 500MB to 1GB -- because you can't easily tell which transitive deps are actually needed at runtime. Standalone output typically cuts the final image from 1GB to ~150-250MB. Important caveat: .next/standalone does not include .next/static or the public/ folder, so the Dockerfile must copy those separately into positions adjacent to server.js.

3. "What breaks when you use output: 'export', and when is static export the right choice?"

Static export disables every feature that requires a runtime: ISR (no revalidation), server actions (no server), API route handlers, middleware, next/image optimization (you must pass images.unoptimized: true), headers() and cookies(), and any dynamic route without generateStaticParams. What you get in exchange is a folder of plain HTML/CSS/JS files you can host on S3, GitHub Pages, Cloudflare Pages, or any static host -- effectively free, infinitely scalable, with no servers to manage. It's the right choice for documentation sites, marketing pages, and blogs where all content is known at build time. It's the wrong choice for anything user-authenticated, anything with forms that mutate data, or anything that needs fresh data between builds.

4. "If you self-host Next.js on Kubernetes with three replicas, why might ISR behave incorrectly, and how do you fix it?"

Next.js's default cache handler writes ISR pages and the Data Cache to the local filesystem at .next/cache. When you run three replicas, each pod has its own .next/cache directory. If pod A serves a request that triggers revalidation, only pod A's cache gets the updated content -- pods B and C continue serving the old version until their caches expire independently. Users hitting a load balancer see the page flip between old and new randomly. The fix is to configure a custom cache handler (via cacheHandler in next.config.js) that writes to a shared backend like Redis. The community package @neshca/cache-handler is the standard implementation. You also want cacheMaxMemorySize: 0 to bypass the in-memory LRU so all instances read from the shared store.

5. "When should you pick the Edge runtime over the Node runtime for a Next.js route, and what are the downsides?"

The Edge runtime uses V8 isolates deployed to Vercel's (or Cloudflare's) global edge network. Cold starts are ~5ms vs ~100-300ms for a Node.js serverless function, and requests are served from the closest region instead of a single central region. This is ideal for middleware (auth checks, geolocation redirects, A/B tests), simple JSON APIs, and streaming endpoints where latency matters most. The downside is that the Edge runtime is not Node.js -- it's a Web-standard runtime with fetch, Request, Response, crypto.subtle, but no fs, no child_process, no native modules, and a small code size limit (typically 1-4MB). Packages that depend on Node APIs simply won't run. The rule of thumb: edge for request-shaping and light JSON work; Node for anything that touches databases, file systems, native libraries, or the broader npm ecosystem.


Quick Reference -- Deployment Cheat Sheet

+---------------------------------------------------------------+
|           NEXT.JS DEPLOYMENT CHEAT SHEET                      |
+---------------------------------------------------------------+
|                                                                |
|  VERCEL:                                                       |
|  git push -> automatic deploy                                  |
|  Preview URL per PR, ISR + edge + image opt all free           |
|                                                                |
|  NODE SELF-HOST:                                               |
|  next build && next start -p 3000                              |
|  Put Nginx/Caddy in front for TLS + static caching             |
|  Use pm2 or systemd to keep the process alive                  |
|                                                                |
|  DOCKER STANDALONE:                                            |
|  next.config.js:  output: 'standalone'                         |
|  Copy .next/standalone + .next/static + public/ into image     |
|  Run:  node server.js                                          |
|                                                                |
|  STATIC EXPORT:                                                |
|  next.config.js:  output: 'export', images.unoptimized: true   |
|  next build -> ./out/  (upload to S3 / Pages / any host)       |
|                                                                |
|  EDGE RUNTIME:                                                 |
|  export const runtime = 'edge'                                 |
|  Fast cold start, global, but NO Node APIs                     |
|                                                                |
|  SELF-HOST CACHING:                                            |
|  Set cacheHandler in next.config.js for multi-replica ISR      |
|  Use Redis (e.g. @neshca/cache-handler) as shared backend      |
|                                                                |
+---------------------------------------------------------------+

+---------------------------------------------------------------+
|           KEY RULES                                            |
+---------------------------------------------------------------+
|                                                                |
|  1. Never run `next dev` in production                         |
|  2. Use output: 'standalone' for any Docker build              |
|  3. Copy .next/static and public/ separately in Dockerfile     |
|  4. Run as a non-root user inside containers                   |
|  5. Audit features before enabling output: 'export'            |
|  6. Configure a shared cache handler before scaling replicas   |
|  7. Edge = speed + limits; Node = power + cold starts          |
|                                                                |
+---------------------------------------------------------------+
Target              | ISR     | Server Actions | Image Opt | Edge | Cost Model    | Ops Burden
--------------------+---------+----------------+-----------+------+---------------+-----------
Vercel              | Yes     | Yes            | Yes       | Yes  | Usage-based   | None
Netlify             | Yes     | Yes            | Yes       | Yes  | Usage-based   | Low
AWS (ECS/Lambda)    | Yes     | Yes            | Yes       | No*  | Usage + infra | High
Docker / Kubernetes | Yes     | Yes            | Yes       | No   | Infra-based   | Highest
Cloudflare Pages    | Partial | Partial        | Partial   | Yes  | Free + usage  | Low
Static Export       | No      | No             | No        | No   | Near-zero     | Minimal

*AWS CloudFront Functions are not Node-compatible -- treated separately from Next.js Edge.


Prev: Lesson 8.2 -- Environment Variables Next: Lesson 8.4 -- Monitoring and Error Tracking


This is Lesson 8.3 of the Next.js Interview Prep Course -- 8 chapters, 33 lessons.
