Node.js Interview Prep
Database Integration

Database Patterns -- Repositories, Transactions, Migrations

LinkedIn Hook

"Your service layer imports pg directly. Your controllers know what a SQL JOIN looks like. Your tests need a real Postgres instance just to run."

That's not a database -- that's a leaky abstraction wearing a trench coat.

Most Node.js codebases drift into the same trap: raw queries scattered across routes, no transaction boundaries, hard deletes that destroy audit trails, and a "migrations" folder that's actually just a graveyard of .sql files no one trusts to run twice.

The fix isn't a fancier ORM. It's a small set of patterns every senior backend engineer reaches for: repositories to hide SQL behind named methods, transactions to make multi-step writes atomic, migrations to evolve schema with confidence, soft deletes to keep history, and a shared connection pool instead of opening a fresh client per request.

In Lesson 7.3, I walk through each pattern with real pg and Prisma code -- and explain the connection-per-request anti-pattern that quietly destroys throughput in production.

Read the full lesson -> [link]

#NodeJS #Backend #Postgres #Prisma #DatabaseDesign #InterviewPrep




What You'll Learn

  • The repository pattern -- isolating data access behind a named API
  • How to run multi-statement transactions with raw pg (BEGIN / COMMIT / ROLLBACK)
  • Prisma's $transaction for atomic multi-table writes
  • Migration up/down skeletons for drizzle-kit, prisma migrate, and knex migrate
  • Seeding strategies for dev, test, and demo data
  • Soft deletes with a deletedAt column and how to scope queries automatically
  • Audit columns (createdAt, updatedAt, createdBy) and why they belong on every table
  • Why "connection per request" kills throughput, and how a shared pool fixes it

The Library Analogy -- Why Patterns Matter

Imagine a public library where every visitor walks into the back room, opens the card catalog themselves, climbs the shelves, and grabs whatever book they want. No checkout desk. No log of who took what. If two people grab the same book at the same time, one of them just loses. If a book is "removed", it's burned -- no record it ever existed.

That's a Node.js app with raw queries scattered across controllers.

Now imagine the same library with a front desk (the repository) where you ask for a book by name. A checkout system (the transaction) makes sure that borrowing a book and updating your account happen together -- or not at all. A renovation log (migrations) tracks every shelf that was added or moved, so a new branch can be built identically. Books that are "withdrawn" get a withdrawnAt stamp (soft delete) instead of being shredded, so you can still answer "did this book ever exist?". And there's one set of librarians (the connection pool) handling thousands of visitors -- not one librarian hired and fired per visitor.

Same library. Same books. Wildly different operational reality.

+---------------------------------------------------------------+
|           CHAOTIC ACCESS (The Anti-Pattern)                   |
+---------------------------------------------------------------+
|                                                                |
|  Controller -> new Client() -> SQL string -> client.end()      |
|  Controller -> new Client() -> SQL string -> client.end()      |
|  Controller -> new Client() -> SQL string -> client.end()      |
|                                                                |
|  - SQL leaks into routes                                       |
|  - No transactions across calls                                |
|  - New TCP + auth handshake on every request                   |
|  - Hard deletes destroy history                                |
|                                                                |
+---------------------------------------------------------------+

+---------------------------------------------------------------+
|           STRUCTURED ACCESS (The Pattern)                     |
+---------------------------------------------------------------+
|                                                                |
|  Controller -> UserRepository.findById(id)                     |
|                       |                                        |
|                       v                                        |
|             Shared Pool (10 clients, reused)                   |
|                       |                                        |
|                       v                                        |
|                  Postgres                                      |
|                                                                |
|  - SQL hidden behind named methods                             |
|  - Transactions wrap multi-step writes                         |
|  - Connections reused, not recreated                           |
|  - Soft deletes preserve audit trail                           |
|                                                                |
+---------------------------------------------------------------+

Napkin AI Visual Prompt: "Dark gradient (#0a1a0a -> #0d2e16). Two stacked panels. TOP panel labeled 'Anti-pattern': three controllers each with their own database client icon, red #ef4444 arrows, scattered SQL strings. BOTTOM panel labeled 'Repository pattern': three controllers funneling into a single 'UserRepository' box, then into one 'Pool' box, then into Postgres. Green #68a063 highlights, amber #ffb020 labels for 'transaction' and 'migration'. White monospace text."


The Repository Pattern -- A Data Access Layer

A repository is a class (or module) whose only job is to translate between domain concepts ("find user by email", "list active orders for customer") and SQL. The rest of your app never sees a query string.

Why bother?

  • One place to change SQL. Add an index? Rename a column? You touch one file, not thirty controllers.
  • Testable. You can mock UserRepository in unit tests instead of spinning up Postgres.
  • Named operations. userRepo.findActiveByEmail(email) reads better than a 12-line SELECT inline in a route handler.
  • Enforces invariants. Soft-delete scoping, audit columns, and tenant filtering live inside the repo, so callers can't forget them.

Example 1 -- A UserRepository class with pg

// src/repositories/UserRepository.js
// All SQL for the "users" table lives here. Nothing else in the app
// should write a query against the users table directly.

class UserRepository {
  // The repository receives a shared pool, NOT a fresh client.
  // This makes it trivial to swap a transaction client in later.
  constructor(pool) {
    this.pool = pool;
  }

  // Find a single user by id. Returns null if not found or soft-deleted.
  async findById(id) {
    const sql = `
      SELECT id, email, name, created_at, updated_at
      FROM users
      WHERE id = $1
        AND deleted_at IS NULL
      LIMIT 1
    `;
    const { rows } = await this.pool.query(sql, [id]);
    return rows[0] ?? null;
  }

  // Find by email -- used by the login flow.
  async findByEmail(email) {
    const sql = `
      SELECT id, email, password_hash, name
      FROM users
      WHERE email = $1
        AND deleted_at IS NULL
      LIMIT 1
    `;
    const { rows } = await this.pool.query(sql, [email]);
    return rows[0] ?? null;
  }

  // Insert a new user and return the created row.
  // Audit columns (created_at, updated_at) are filled by DB defaults.
  async create({ email, name, passwordHash }) {
    const sql = `
      INSERT INTO users (email, name, password_hash)
      VALUES ($1, $2, $3)
      RETURNING id, email, name, created_at
    `;
    const { rows } = await this.pool.query(sql, [email, name, passwordHash]);
    return rows[0];
  }

  // Soft delete -- never actually remove the row.
  async softDelete(id) {
    const sql = `
      UPDATE users
      SET deleted_at = NOW(),
          updated_at = NOW()
      WHERE id = $1
        AND deleted_at IS NULL
    `;
    const result = await this.pool.query(sql, [id]);
    return result.rowCount === 1;
  }
}

module.exports = { UserRepository };

Notice three things: (1) every SELECT filters deleted_at IS NULL, so soft-deleted rows are invisible by default; (2) the constructor takes a pool, not a connection string -- the repo doesn't know how to connect, it just knows how to query; (3) controllers will call userRepo.findByEmail(...) and never see SQL.
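To make the "testable" claim concrete, here's a sketch of the same idea with a fake pool swapped in. The repository only depends on an object with a query(sql, params) method, so a unit test can hand it a stub instead of a live pg Pool. The trimmed-down class, the fakePool stub, and the commented Express route are illustrative, not the full code above.

```javascript
// A minimal repository (trimmed to one method for the sketch).
// Anything with an async query(sql, params) method satisfies it.
class UserRepository {
  constructor(pool) {
    this.pool = pool;
  }

  async findById(id) {
    const { rows } = await this.pool.query(
      'SELECT id, email, name FROM users WHERE id = $1 AND deleted_at IS NULL LIMIT 1',
      [id]
    );
    return rows[0] ?? null;
  }
}

// Fake pool: records the SQL it was given and returns canned rows.
// No Postgres instance required.
const fakePool = {
  calls: [],
  async query(sql, params) {
    this.calls.push({ sql, params });
    return { rows: [{ id: 1, email: 'ada@example.com', name: 'Ada' }] };
  },
};

const repo = new UserRepository(fakePool);

// In an Express app, a controller would call the same named method
// and never see SQL:
//   app.get('/users/:id', async (req, res) => {
//     const user = await repo.findById(req.params.id);
//     if (!user) return res.status(404).end();
//     res.json(user);
//   });

module.exports = { UserRepository, repo, fakePool };
```

The same constructor injection is what lets a transaction-scoped client stand in for the pool later: anything query-shaped fits.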


Transactions -- All or Nothing

A transaction groups multiple statements so they either all succeed or all fail. Without one, a partial failure leaves your database in a corrupt state -- money debited from one account but never credited to the other, an order created without its line items, a user record without its profile row.

The contract with Postgres is simple: BEGIN, run your statements, then COMMIT (success) or ROLLBACK (failure). The critical rule with pg: all statements must run on the same client, not on the pool. The pool may hand out a different physical connection for each pool.query(), and BEGIN only applies to one connection.

Example 2 -- A transaction with raw pg

// src/services/orderService.js
// Creating an order touches THREE tables: orders, order_items, and
// inventory. If any step fails, none of them should persist.

async function createOrder(pool, { userId, items }) {
  // Check out one client from the pool. Every statement below
  // MUST run on this same client for the transaction to apply.
  const client = await pool.connect();

  try {
    await client.query('BEGIN');

    // 1. Insert the order header
    const orderResult = await client.query(
      `INSERT INTO orders (user_id, status)
       VALUES ($1, 'pending')
       RETURNING id`,
      [userId]
    );
    const orderId = orderResult.rows[0].id;

    // 2. Insert each line item
    for (const item of items) {
      await client.query(
        `INSERT INTO order_items (order_id, product_id, qty, price)
         VALUES ($1, $2, $3, $4)`,
        [orderId, item.productId, item.qty, item.price]
      );

      // 3. Decrement inventory -- fails if not enough stock
      const stockResult = await client.query(
        `UPDATE inventory
         SET qty = qty - $1
         WHERE product_id = $2
           AND qty >= $1
         RETURNING qty`,
        [item.qty, item.productId]
      );

      if (stockResult.rowCount === 0) {
        // Not enough stock -- throw to trigger ROLLBACK in catch
        throw new Error(`Out of stock: product ${item.productId}`);
      }
    }

    // All statements succeeded -- make them permanent
    await client.query('COMMIT');
    return orderId;
  } catch (err) {
    // Any failure rolls back ALL three tables atomically
    await client.query('ROLLBACK');
    throw err;
  } finally {
    // ALWAYS release the client back to the pool, even on error.
    // Forgetting this leaks a connection forever.
    client.release();
  }
}

module.exports = { createOrder };

The try / catch / finally shape is non-negotiable: BEGIN and COMMIT in try, ROLLBACK in catch, client.release() in finally. Skip the finally and one bad request slowly drains your pool until the app hangs.

Example 3 -- Prisma $transaction for transfer-money

Prisma hides the BEGIN/COMMIT plumbing behind $transaction. There are two flavors: an array form (run several independent writes atomically) and an interactive callback form (use the result of one query inside the next). Money transfers need the callback form.

// src/services/walletService.ts
// Transfer money between two wallets. Both updates must commit
// together or both must roll back -- no partial transfers, ever.

import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function transferMoney(fromUserId: string, toUserId: string, amount: number) {
  if (amount <= 0) throw new Error('Amount must be positive');

  // The callback form: Prisma passes a transaction-scoped client (tx).
  // Every query through `tx` runs inside one BEGIN/COMMIT block.
  return prisma.$transaction(async (tx) => {
    // 1. Lock and read the sender's wallet
    const sender = await tx.wallet.findUnique({
      where: { userId: fromUserId },
    });

    if (!sender) throw new Error('Sender wallet not found');
    if (sender.balance < amount) throw new Error('Insufficient funds');

    // 2. Debit the sender
    const updatedSender = await tx.wallet.update({
      where: { userId: fromUserId },
      data: { balance: { decrement: amount } },
    });

    // 3. Credit the recipient
    const updatedRecipient = await tx.wallet.update({
      where: { userId: toUserId },
      data: { balance: { increment: amount } },
    });

    // 4. Record the transfer in an audit log
    await tx.transferLog.create({
      data: {
        fromUserId,
        toUserId,
        amount,
      },
    });

    // Whatever we return becomes the result of $transaction
    return { updatedSender, updatedRecipient };
  }, {
    // Optional: tighten the isolation level for financial work
    isolationLevel: 'Serializable',
    // Optional: bound the transaction so a stuck query can't hold
    // a connection forever
    timeout: 5000,
  });
}

If any step throws -- "insufficient funds", a database constraint violation, a network blip -- Prisma issues ROLLBACK automatically. The wallets, the log, and any side-effects inside tx either all commit or none do.
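For contrast, here's a sketch of the array form: a list of independent operations that don't read each other's results, run in one BEGIN/COMMIT. The notification and auditLog models are illustrative (not from this lesson's schema), and the Prisma client is passed in as a parameter to keep the function easy to test.

```javascript
// src/services/notificationService.js
// Array form of $transaction: a list of independent Prisma operations
// executed atomically. If any rejects, all of them roll back.
// Use this form only when no operation needs another's result.

async function markAllRead(prisma, userId) {
  const [updated] = await prisma.$transaction([
    // 1. Flip every unread notification for this user
    prisma.notification.updateMany({
      where: { userId, readAt: null },
      data: { readAt: new Date() },
    }),
    // 2. Record the action in an audit log -- committed together with (1)
    prisma.auditLog.create({
      data: { userId, action: 'notifications_read_all' },
    }),
  ]);
  return updated.count; // how many notifications were flipped
}

module.exports = { markAllRead };
```

Neither operation depends on the other's output, so the array form is enough; the moment step 2 would need step 1's result, you'd reach for the callback form instead.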


Migrations -- Evolving Schema with Confidence

A migration is a versioned, reviewable script that changes the database schema. Each migration has an up (apply the change) and a down (undo it). Together they form an ordered timeline that can rebuild any environment from scratch and stays in sync across every developer's laptop, CI, staging, and production.

The three popular Node.js tool families:

  • drizzle-kit -- generates SQL migration files from your TypeScript schema definitions.
  • prisma migrate -- generates SQL migration folders from your schema.prisma file.
  • knex migrate -- hand-written JS files where up and down use the Knex query builder.

All three follow the same mental model: a migrations table in the database tracks which files have been applied, and the CLI runs only the new ones in order.
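The day-to-day CLI commands for the three families look roughly like this (exact subcommand names vary a little between versions, especially for drizzle-kit):

```shell
# Knex: run all pending migrations / roll back the last batch
npx knex migrate:latest
npx knex migrate:rollback

# Prisma: create + apply a new migration in dev;
# apply already-committed migrations in CI / production
npx prisma migrate dev --name create_users
npx prisma migrate deploy

# Drizzle: generate SQL files from the TypeScript schema, then apply them
npx drizzle-kit generate
npx drizzle-kit migrate
```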

Example 4 -- Migration up/down skeleton

// migrations/20260412_120000_create_users.js
// Knex-style migration. The same up/down shape applies to
// drizzle-kit and prisma migrate -- only the syntax differs.

// "up" applies the change. Run on `knex migrate:latest`.
exports.up = async function (knex) {
  await knex.schema.createTable('users', (table) => {
    // Primary key
    table.uuid('id').primary().defaultTo(knex.raw('gen_random_uuid()'));

    // Business columns
    table.string('email').notNullable().unique();
    table.string('name').notNullable();
    table.string('password_hash').notNullable();

    // Audit columns -- on EVERY table, no exceptions
    table.timestamp('created_at').notNullable().defaultTo(knex.fn.now());
    table.timestamp('updated_at').notNullable().defaultTo(knex.fn.now());

    // Soft delete column -- nullable by design
    table.timestamp('deleted_at').nullable();
  });

  // Index the soft-delete column so "WHERE deleted_at IS NULL"
  // queries stay fast on large tables.
  await knex.schema.raw(
    `CREATE INDEX users_active_idx ON users (id) WHERE deleted_at IS NULL`
  );
};

// "down" reverses the change. Run on `knex migrate:rollback`.
// Down migrations are debated -- some teams skip them in production
// and rely on forward-only "fix it with another migration" instead.
exports.down = async function (knex) {
  await knex.schema.raw(`DROP INDEX IF EXISTS users_active_idx`);
  await knex.schema.dropTableIfExists('users');
};

The equivalent in prisma migrate is generated from your schema.prisma file by running prisma migrate dev --name create_users, which produces a folder with a migration.sql containing the same CREATE TABLE and CREATE INDEX statements. With drizzle-kit, you edit your TypeScript schema and run drizzle-kit generate to emit the SQL.

Seeding -- Predictable Starting Data

Seeding populates a fresh database with known rows. There are three flavors and each belongs in a different file:

  • Reference data (countries, currencies, roles) -- required for the app to boot. Run on every environment.
  • Dev seeds (a few fake users, sample products) -- only on local and demo. Easy to log in, easy to reset.
  • Test seeds (fixtures for integration tests) -- inserted in beforeEach, deleted in afterEach.

// seeds/01_reference_data.js
exports.seed = async function (knex) {
  // Idempotent: clear then insert. Safe to run repeatedly.
  await knex('roles').del();
  await knex('roles').insert([
    { id: 1, name: 'admin' },
    { id: 2, name: 'user' },
    { id: 3, name: 'guest' },
  ]);
};
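The test-seed flavor from the list above might look like this: a pair of helpers wired into a test runner's lifecycle hooks. The table names, fixture values, and file path are illustrative.

```javascript
// tests/fixtures.js
// Test-seed helpers: insert a known fixture before each test and wipe
// it after, so every integration test starts from the same state.

async function seedTestData(knex) {
  await knex('users').insert({ id: 'u1', email: 'test@example.com', name: 'Test' });
  await knex('products').insert({ id: 'p1', name: 'Widget', price: 999 });
}

async function cleanTestData(knex) {
  // Delete children before parents so FK constraints don't complain
  await knex('order_items').del();
  await knex('orders').del();
  await knex('products').del();
  await knex('users').del();
}

// In a Jest/Vitest suite, wire them to the lifecycle hooks:
//   beforeEach(() => seedTestData(knex));
//   afterEach(() => cleanTestData(knex));

module.exports = { seedTestData, cleanTestData };
```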

Soft Deletes -- Never Lose History

A hard delete removes a row forever. A soft delete sets a deleted_at timestamp instead. The row stays on disk; the application just learns to ignore it.

Why soft delete?

  • Audit trails. "Did user X exist last March?" still has an answer.
  • Foreign keys keep working. An order that pointed at a now-deleted product still resolves.
  • Undo is free. Set deleted_at = NULL and the row reappears.
  • Compliance. GDPR "right to erasure" can be satisfied by a separate scrub job that runs on a schedule.

The cost: every read query must remember to filter deleted_at IS NULL. Forget once and a deleted user logs back in. The fix is to centralize the filter -- inside the repository, or as a Prisma middleware, or as a Postgres view.

Example 5 -- Soft delete scope

// src/db/prismaClient.ts
// Apply a global Prisma middleware that automatically:
//   1. Turns delete() into update({ deletedAt: now })
//   2. Adds "deletedAt: null" to every find/findMany
// Callers write normal Prisma code and never see the filter.

import { PrismaClient } from '@prisma/client';

export const prisma = new PrismaClient();

// Models that support soft delete
const SOFT_DELETE_MODELS = new Set(['User', 'Post', 'Comment']);

prisma.$use(async (params, next) => {
  if (!params.model || !SOFT_DELETE_MODELS.has(params.model)) {
    return next(params);
  }

  // 1. Convert delete -> update with deletedAt
  if (params.action === 'delete') {
    params.action = 'update';
    params.args.data = { deletedAt: new Date() };
  }

  if (params.action === 'deleteMany') {
    params.action = 'updateMany';
    params.args.data = { ...(params.args.data ?? {}), deletedAt: new Date() };
  }

  // 2. Add "deletedAt: null" to every read
  if (params.action === 'findUnique' || params.action === 'findFirst') {
    params.action = 'findFirst';
    params.args.where = { ...(params.args.where ?? {}), deletedAt: null };
  }

  if (params.action === 'findMany') {
    params.args = params.args ?? {};
    params.args.where = { ...(params.args.where ?? {}), deletedAt: null };
  }

  return next(params);
});

After this middleware is in place, prisma.user.findMany() silently excludes deleted users, and prisma.user.delete({ where: { id } }) silently sets deletedAt. To find deleted rows for an admin tool, bypass the middleware with a raw query or a dedicated findDeleted method. (Note that in newer Prisma versions $use middleware is deprecated in favor of client extensions via $extends; the same scoping logic ports over as a query extension.)


Audit Columns -- Free History on Every Table

Every table in a serious system has the same four columns:

  • created_at TIMESTAMP NOT NULL DEFAULT NOW() -- when this row was inserted.
  • updated_at TIMESTAMP NOT NULL DEFAULT NOW() -- when it was last modified, refreshed by a trigger or your ORM.
  • created_by UUID -- which user (or service) inserted it.
  • updated_by UUID -- which user touched it last.

Plus deleted_at for soft delete. These columns cost nothing to add up front and are excruciating to backfill later. They power audit logs, "last edited by" UI, debugging ("when did this row first get the wrong value?"), and compliance reports. Add them to every migration template and forget about them.


Connection-Per-Request -- The Anti-Pattern

This is the single biggest performance bug in junior Node.js codebases:

// ANTI-PATTERN -- do not do this
app.get('/users/:id', async (req, res) => {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();              // TCP + TLS + auth handshake
  const result = await client.query('SELECT * FROM users WHERE id = $1', [req.params.id]);
  await client.end();                  // tear it all down again
  res.json(result.rows[0]);
});

Every request opens a brand new TCP connection, runs a TLS handshake, authenticates against Postgres, runs one query, then closes everything. On a busy endpoint that's 50-100ms of pure overhead per request, and Postgres's max_connections defaults to just 100, so under load you start seeing "too many clients" errors instead of responses.

The fix is a shared pool initialized once at boot:

// CORRECT -- one pool, reused for the lifetime of the process
const { Pool } = require('pg');

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10,                    // up to 10 clients kept open
  idleTimeoutMillis: 30_000,  // close idle clients after 30s
});

app.get('/users/:id', async (req, res) => {
  // pool.query() borrows a client, runs the query, returns it
  const result = await pool.query('SELECT * FROM users WHERE id = $1', [req.params.id]);
  res.json(result.rows[0]);
});

The pool keeps a small number of connections warm and reuses them across thousands of requests. Tune max based on (Postgres max_connections) / (number of app instances) -- usually 5 to 20 per Node process is a sane default.
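That sizing rule of thumb can be written down. This helper is purely illustrative arithmetic (not a pg API): it divides the server's connection budget across app instances, holding a few connections back for psql sessions, migrations, and monitoring.

```javascript
// How many pool clients each Node process may open without the whole
// fleet exceeding Postgres's max_connections. Pure arithmetic -- no pg API.
function poolSizePerProcess({ maxConnections, appInstances, reserved = 10 }) {
  const budget = maxConnections - reserved;      // keep some for admin tools
  const perInstance = Math.floor(budget / appInstances);
  return Math.max(1, Math.min(perInstance, 20)); // clamp to a sane 1..20 range
}

// e.g. default max_connections = 100, 8 app instances, 10 reserved:
// floor((100 - 10) / 8) = 11 clients per process
```

Plug the result into `new Pool({ max: ... })` at boot; if you later add app instances, shrink max accordingly or put PgBouncer in front.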


Common Mistakes

1. Running transaction statements on the pool instead of a checked-out client. pool.query('BEGIN') followed by another pool.query('UPDATE ...') may run on two different physical connections. The BEGIN applies to the first, and the UPDATE runs outside any transaction. Always pool.connect() to grab a single client, run every statement on that client, and client.release() in a finally.

2. Forgetting client.release() in the error path. If your transaction throws and you only release the client on the happy path, that connection is leaked forever. After enough errors, the pool is exhausted and the app hangs. Always release in finally.

3. Hard deleting rows in tables with foreign keys. Hard deleting a User cascades to orders, comments, audit logs -- and now your reports are missing data and your support team can't answer "what did this customer do?". Soft delete by default; hard delete only via a separate, deliberate purge job.

4. Writing SQL in route handlers instead of repositories. The moment a SELECT statement appears in a controller, your codebase has started rotting. Adding an index or renaming a column now means searching every file for the table name. Push every query into a repository method with a clear name.

5. No down migration and no migration tool at all. "I'll just run this ALTER TABLE on prod" is how production drifts from staging which drifts from local. Use a real migration tool, commit the files, and let CI run them. The down migration is optional in practice but the migration log table is not.


Interview Questions

1. "Explain the repository pattern. What problem does it solve in a Node.js backend?"

The repository pattern is a data access layer that hides SQL (or ORM calls) behind named, domain-meaningful methods like findActiveByEmail or softDelete. The rest of the application -- controllers, services, jobs -- only knows about the repository's API, never about tables or queries. It solves three problems. First, it centralizes SQL: when you need to add an index, change a column name, or apply a tenant filter, you edit one file instead of grepping for table names across the codebase. Second, it makes business logic testable -- you can mock UserRepository in unit tests without spinning up Postgres. Third, it enforces invariants like soft-delete scoping and audit columns by encoding them once in the repo, so callers can't forget them. The repository is also the natural place to swap a transaction-scoped client in for the pool, which keeps transaction handling clean.

2. "Walk me through running a multi-statement transaction with the pg library. What's the most common bug?"

You check out a single client from the pool with pool.connect(), then run client.query('BEGIN'), then your statements, then client.query('COMMIT') -- all inside a try. In the catch, you call client.query('ROLLBACK') and rethrow. In a finally block, you call client.release() so the client returns to the pool. The most common bug is running statements on pool.query() instead of client.query(). The pool may hand each call a different physical connection, so the BEGIN applies to one connection while the UPDATE runs on another -- outside any transaction. The second most common bug is forgetting client.release() in the error path, which leaks connections until the pool is exhausted and the app hangs.

3. "What does Prisma's $transaction do, and when would you use the callback form instead of the array form?"

$transaction runs multiple Prisma operations inside a single Postgres transaction -- one BEGIN, one COMMIT, automatic ROLLBACK on any thrown error. The array form takes a list of independent Prisma promises and is fine when none of the operations depend on each other's results: "create these three rows together or not at all". The callback form takes an async (tx) => { ... } function and gives you a transaction-scoped client tx. You use the callback form whenever step 2 needs the result of step 1 -- a transfer between two wallets that first reads the sender's balance, decides whether the transfer is allowed, then writes both wallets and an audit log. Only the callback form lets you make decisions inside the transaction, and it accepts options like isolationLevel and timeout.

4. "Why use soft deletes instead of hard deletes? What are the downsides?"

Soft deletes set a deleted_at timestamp instead of removing the row, so history, audit trails, and foreign key relationships keep working. You can answer "did this user ever exist?", undelete by setting the column back to NULL, and your reports don't lose data when a customer is removed. The downsides are real: every read query must remember to filter deleted_at IS NULL, or deleted rows leak back into the UI. Disk usage grows because nothing is ever removed. Unique constraints get awkward -- you can't have a unique index on email if a deleted user with that email still occupies the slot. The fixes are to centralize the filter (in a repository, a Prisma middleware, or a database view), schedule a periodic purge job for GDPR compliance, and use partial indexes like CREATE UNIQUE INDEX ON users (email) WHERE deleted_at IS NULL to make uniqueness apply only to live rows.

5. "Why is opening a new database connection per HTTP request a bad idea? What do you use instead?"

Every new connection to Postgres requires a TCP handshake, a TLS handshake, and authentication -- typically 50-100 milliseconds of pure overhead before a single query runs. On a busy endpoint that dwarfs the actual query time, and Postgres has a hard cap on max_connections (100 by default) so you'll start refusing requests under load. Instead you use a shared connection pool, created once at process startup, sized to something like 5-20 clients per Node instance. The pool keeps a small set of connections warm; when a request comes in, the pool hands out an idle client, the query runs, and the client returns to the pool. Across thousands of requests you pay the handshake cost a handful of times instead of once per request. The pool also bounds your concurrency: if all clients are busy, new queries queue instead of overwhelming the database.


Quick Reference -- Database Patterns Cheat Sheet

+---------------------------------------------------------------+
|           DATABASE PATTERNS CHEAT SHEET                       |
+---------------------------------------------------------------+
|                                                                |
|  REPOSITORY:                                                   |
|  class UserRepository {                                        |
|    constructor(pool) { this.pool = pool }                      |
|    findById(id) { ... }                                        |
|    create({...}) { ... }                                       |
|    softDelete(id) { ... }                                      |
|  }                                                             |
|                                                                |
|  TRANSACTION (pg):                                             |
|  const client = await pool.connect()                           |
|  try {                                                         |
|    await client.query('BEGIN')                                 |
|    ...                                                         |
|    await client.query('COMMIT')                                |
|  } catch (e) {                                                 |
|    await client.query('ROLLBACK'); throw e                     |
|  } finally {                                                   |
|    client.release()                                            |
|  }                                                             |
|                                                                |
|  TRANSACTION (Prisma):                                         |
|  prisma.$transaction(async (tx) => { ... })                    |
|                                                                |
|  MIGRATION:                                                    |
|  exports.up   = async (knex) => { ... }                        |
|  exports.down = async (knex) => { ... }                        |
|                                                                |
|  SOFT DELETE:                                                  |
|  UPDATE t SET deleted_at = NOW() WHERE id = $1                 |
|  SELECT ... WHERE deleted_at IS NULL                           |
|                                                                |
|  AUDIT COLUMNS:                                                |
|  created_at, updated_at, created_by, updated_by, deleted_at    |
|                                                                |
|  CONNECTION POOL:                                              |
|  const pool = new Pool({ max: 10 })   // ONCE at boot          |
|                                                                |
+---------------------------------------------------------------+

+---------------------------------------------------------------+
|           KEY RULES                                            |
+---------------------------------------------------------------+
|                                                                |
|  1. SQL lives in repositories, never in controllers            |
|  2. Transactions run on a checked-out client, not the pool     |
|  3. Always release the client in a finally block               |
|  4. Every table has created_at, updated_at, deleted_at         |
|  5. Soft delete by default; hard delete only via purge jobs    |
|  6. Centralize the deleted_at filter (repo or middleware)      |
|  7. Migrations are versioned files, not ad-hoc ALTER TABLEs    |
|  8. One shared pool per process -- never a client per request  |
|                                                                |
+---------------------------------------------------------------+
Concern                  Anti-Pattern                   Pattern
-----------------------  -----------------------------  ----------------------------------
Where SQL lives          Inline in controllers          Inside a repository class
Multi-step writes        Separate pool.query calls      BEGIN/COMMIT on one client
Schema changes           Ad-hoc ALTER TABLE on prod     Versioned migration files
Deletes                  DELETE FROM ...                UPDATE ... SET deleted_at = NOW()
History                  "Check the backups"            created_at, updated_at, audit log
Connections              new Client() per request       Shared Pool created at boot
Transactions in Prisma   Sequential await calls         prisma.$transaction(...)
Test data                Manual SQL in a doc            Seed scripts checked into git

Prev: Lesson 7.2 -- ORMs and Query Builders Next: Lesson 7.4 -- Caching with Redis


This is Lesson 7.3 of the Node.js Interview Prep Course -- 10 chapters, 42 lessons.
