Networking Interview Prep
OSI Model

Transport & Session Layers (Layers 4–5)

LinkedIn Hook

You have heard of TCP. You have heard of UDP. You have probably even heard of the "3-way handshake."

But do you know which OSI layer they live on — and why that distinction matters in an interview?

Layer 4 (Transport) is the post office of the internet. TCP is the courier that gets a signature at the door and re-delivers if you are not home. UDP is the kid dropping flyers from a plane — fast, cheap, no guarantees.

Layer 5 (Session) is the thing almost nobody can explain correctly — and that makes it the perfect interview trap. (Hint: it has nothing to do with HTTP cookies or express-session.)

In this lesson: exactly what Layer 4 does (segments, ports, multiplexing, TCP reliability, flow control, congestion control), what Layer 5 actually is (not what you think), and two Node.js code examples that make both layers concrete.

Read the full lesson → [link]

#Networking #TCP #OSIModel #BackendEngineering #SystemDesign #InterviewPrep




What You'll Learn

  • How Layer 4 (Transport) breaks data into segments, assigns port numbers, and uses multiplexing to let multiple apps share one IP address
  • How TCP guarantees delivery with sequence numbers, acknowledgments, retransmission, flow control, and congestion control — and when UDP is the better choice
  • What Layer 5 (Session) actually is in the OSI model — not HTTP cookies, but true session establishment, maintenance, and termination
  • Which real-world protocols live at each layer, and how modern stacks collapsed the session layer into the application layer

Layer 4 — Transport

The Post Office Analogy

Think of Layer 3 (Network) as the city's addressing system — it gets your data to the right building (IP address). But a building has many rooms. Layer 4 is the post office worker who reads the room number on the envelope and delivers your mail to the right apartment.

That room number is the port. The process of delivering to the right room is multiplexing. And whether you pay for tracked courier service or anonymous flyer delivery depends on whether you choose TCP or UDP.


Segments, Not Packets

By the time data is handed down from Layer 5 to Layer 4, it gets a new name and a new structure. Layer 4 does not work with packets (that is a Layer 3 concept) — it works with segments (TCP) or datagrams (UDP).

A TCP segment is the unit of data at the Transport layer. It contains:

  • Source port — which application sent this (e.g., ephemeral port 52431 in the client browser)
  • Destination port — which application should receive this (e.g., 443 for HTTPS)
  • Sequence number — the position of this segment in the overall byte stream
  • Acknowledgment number — the sequence number of the next byte this side expects to receive (a cumulative acknowledgment)
  • Flags — SYN, ACK, FIN, RST, PSH, URG (control bits for connection lifecycle)
  • Window size — how many bytes the receiver is willing to accept right now (flow control)
  • Checksum — error detection
  • Data — the actual payload chunk

The Layer 3 packet wraps the entire TCP segment as its payload and carries it across the network. The distinction matters in interviews: if someone asks about segments, they are talking about Layer 4. Packets are Layer 3.
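To make the field list above concrete, here is a sketch that packs the core TCP header fields into a 20-byte Buffer and reads them back. This is illustrative only: in practice the kernel builds and parses this header, and the checksum and options are omitted here.

```javascript
// Sketch: pack the core TCP header fields into a 20-byte buffer and read
// them back. Illustrative only — the kernel does this in real stacks.

const FLAGS = { FIN: 0x01, SYN: 0x02, RST: 0x04, PSH: 0x08, ACK: 0x10, URG: 0x20 };

function buildTcpHeader({ srcPort, dstPort, seq, ack, flags, window }) {
  const buf = Buffer.alloc(20);      // minimum TCP header: 20 bytes
  buf.writeUInt16BE(srcPort, 0);     // bytes 0–1:  source port
  buf.writeUInt16BE(dstPort, 2);     // bytes 2–3:  destination port
  buf.writeUInt32BE(seq, 4);         // bytes 4–7:  sequence number
  buf.writeUInt32BE(ack, 8);         // bytes 8–11: acknowledgment number
  buf.writeUInt8(5 << 4, 12);        // byte 12:    data offset = 5 words (20 bytes)
  buf.writeUInt8(flags, 13);         // byte 13:    URG/ACK/PSH/RST/SYN/FIN bits
  buf.writeUInt16BE(window, 14);     // bytes 14–15: window size (flow control)
  // bytes 16–17 checksum and 18–19 urgent pointer left as 0 in this sketch
  return buf;
}

// A SYN segment like the first step of the 3-way handshake:
const syn = buildTcpHeader({
  srcPort: 52431, dstPort: 443, seq: 1000, ack: 0,
  flags: FLAGS.SYN, window: 65535,
});

console.log(syn.readUInt16BE(0));                    // 52431 — source port
console.log(syn.readUInt16BE(2));                    // 443   — destination port
console.log((syn.readUInt8(13) & FLAGS.SYN) !== 0);  // true  — SYN bit set
```

The offsets mirror the header diagram in the cheat sheet at the end of this lesson: two 16-bit ports, then two 32-bit numbers, then the control byte and window.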


Port Numbers and Multiplexing

An IP address identifies a machine. A port number identifies a specific process running on that machine.

When your laptop has a browser, a Slack client, and a VS Code extension all making network requests simultaneously, they all share the same IP address. The Transport layer keeps their traffic separate using port pairs:

Browser tab → source port 52431 → server port 443
Slack       → source port 52890 → server port 443
VS Code ext → source port 53012 → server port 443

The server's response back to each client includes the destination port that matches the client's source port — so the OS delivers each response to the right process. This is multiplexing: many logical connections over one IP address.

Port ranges you must know:

Range           Name                        Examples
0 – 1023        Well-known / system ports   80 (HTTP), 443 (HTTPS), 22 (SSH), 25 (SMTP)
1024 – 49151    Registered ports            5432 (PostgreSQL), 6379 (Redis), 3306 (MySQL)
49152 – 65535   Ephemeral / dynamic ports   Assigned by the OS for outbound client connections
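The three ranges can be expressed as a small helper. A sketch: the boundaries follow the IANA ranges in the table above, and the function name is ours.

```javascript
// Sketch: classify a port number into the IANA-defined ranges above.
function classifyPort(port) {
  if (!Number.isInteger(port) || port < 0 || port > 65535) {
    throw new RangeError(`Invalid port: ${port}`); // ports are 16-bit values
  }
  if (port <= 1023) return "well-known";
  if (port <= 49151) return "registered";
  return "ephemeral";
}

console.log(classifyPort(443));   // "well-known" — HTTPS convention
console.log(classifyPort(5432));  // "registered" — PostgreSQL default
console.log(classifyPort(52431)); // "ephemeral"  — OS-assigned client port
```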

TCP — Reliable, Ordered, Connection-Oriented

TCP (Transmission Control Protocol) is the tracked courier. Before a single byte of data is sent, TCP establishes a connection through the 3-way handshake:

Client                           Server
  │── SYN (seq=x) ────────────────►│   "I want to connect, my seq starts at x"
  │◄── SYN-ACK (seq=y, ack=x+1) ───│   "OK, my seq starts at y, I acknowledge x"
  │── ACK (ack=y+1) ──────────────►│   "Acknowledged — connection open"
  │                                │
  │═══════ DATA TRANSFER ══════════│
  │                                │
  │── FIN ────────────────────────►│   "I'm done sending"
  │◄── FIN-ACK ────────────────────│   "Acknowledged, I'm also done"

(Termination is really a four-step exchange: FIN, ACK, FIN, ACK. It is often compressed into the FIN / FIN-ACK pair shown here.)

Reliability mechanisms TCP provides:

  1. Sequence numbers — every byte is numbered; the receiver uses these to detect missing data and reorder out-of-sequence segments.

  2. Acknowledgments (ACK) — the receiver sends back an ACK for each segment received, confirming its sequence number. If the sender does not receive an ACK within a timeout period, it retransmits.

  3. Retransmission — lost or corrupted segments are re-sent automatically. The sender maintains a retransmission timer; if it expires before an ACK arrives, the segment is resent.

  4. Flow control (Window Size) — the receiver advertises its buffer size in the "Window" field of each segment. The sender cannot transmit more data than the receiver's current window allows. If the receiver's buffer fills up, it sets the window to 0, pausing the sender until it is ready.

  5. Congestion control — separate from flow control, this protects the network itself (not just the receiver). TCP uses algorithms like slow start, congestion avoidance, and fast retransmit to detect and respond to network congestion, backing off when packet loss is detected.
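A toy simulation can make two of these mechanisms tangible: the sender numbers every byte, stops transmitting when the advertised window is exhausted, and resumes when a cumulative ACK opens the window again. This is sender-side bookkeeping only, not a real TCP stack; the class and method names are ours.

```javascript
// Toy model of TCP sequence numbers + flow control — just the sender-side
// bookkeeping described above, not a real TCP implementation.
class ToySender {
  constructor(windowSize) {
    this.nextSeq = 0;          // next byte number to send
    this.lastAcked = 0;        // highest cumulative ACK received
    this.window = windowSize;  // receiver-advertised window (bytes)
    this.inFlight = [];        // unacknowledged segments
  }

  // Bytes we may still put on the wire: window minus unacknowledged bytes
  available() {
    return this.window - (this.nextSeq - this.lastAcked);
  }

  send(data) {
    if (data.length > this.available()) {
      return null; // window full — sender must wait for an ACK
    }
    const segment = { seq: this.nextSeq, data };
    this.inFlight.push(segment); // kept around in case retransmission is needed
    this.nextSeq += data.length;
    return segment;
  }

  // Receiver ACKs cumulatively: "I have everything up to byte `ackNum`"
  receiveAck(ackNum, newWindow) {
    this.lastAcked = ackNum;
    this.window = newWindow; // receiver re-advertises its buffer space
    this.inFlight = this.inFlight.filter((s) => s.seq + s.data.length > ackNum);
  }
}

const sender = new ToySender(10);   // receiver advertises a 10-byte window
console.log(sender.send("hello"));  // { seq: 0, data: 'hello' }
console.log(sender.send("world"));  // { seq: 5, data: 'world' }
console.log(sender.send("!"));      // null — 10-byte window exhausted
sender.receiveAck(10, 10);          // receiver consumed all 10 bytes
console.log(sender.send("!"));      // { seq: 10, data: '!' }
```

The `null` return is the flow-control moment: the receiver's buffer is full, so the sender pauses until an ACK re-opens the window.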


UDP — Fast, Lightweight, Connectionless

UDP (User Datagram Protocol) is the kid dropping flyers from a plane. There is no handshake, no acknowledgment, no retransmission, no ordering guarantee.

A UDP datagram has only four fields in its header: source port, destination port, length, and checksum. That is it. The entire header is 8 bytes versus TCP's minimum 20 bytes.
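You can see how small that header is by packing one. A sketch, illustrative only: real datagram headers are written by the kernel, and the checksum is left at zero here.

```javascript
// Sketch: the entire UDP header is four 16-bit fields — 8 bytes total.
function buildUdpHeader(srcPort, dstPort, payloadLength) {
  const buf = Buffer.alloc(8);
  buf.writeUInt16BE(srcPort, 0);            // bytes 0–1: source port
  buf.writeUInt16BE(dstPort, 2);            // bytes 2–3: destination port
  buf.writeUInt16BE(8 + payloadLength, 4);  // bytes 4–5: length (header + data)
  buf.writeUInt16BE(0, 6);                  // bytes 6–7: checksum (0 in this sketch)
  return buf;
}

const header = buildUdpHeader(53012, 53, 32); // e.g. a 32-byte DNS query
console.log(header.length);           // 8 — versus TCP's minimum of 20
console.log(header.readUInt16BE(2));  // 53 — destination port (DNS)
console.log(header.readUInt16BE(4));  // 40 — 8-byte header + 32-byte payload
```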

When UDP wins:

Use Case               Why UDP
Live video streaming   A dropped frame is better than a stalled video waiting for retransmission
Online gaming          Stale position data is useless — better to drop it and send the current state
DNS queries            Single request/response — the TCP handshake would add an extra round trip
VoIP / audio calls     A slight audio glitch beats a perceptible delay from retransmission
QUIC (HTTP/3)          Implements its own reliability on top of UDP for better control

The rule of thumb: if the application can tolerate some loss and needs low latency, use UDP. If correctness and completeness matter more than speed, use TCP.


Layer 5 — Session

What a Session Actually Is (Not What You Think)

Here is the common interview trap: a developer hears "session layer" and immediately thinks of HTTP session cookies, express-session, JWT tokens, or localStorage. Those are all application-layer concepts. They have nothing to do with OSI Layer 5.

Think of Layer 5 as a phone call manager. A phone call has three phases: you dial and wait for the other person to pick up (establishment), you talk back and forth (maintenance), and then one of you hangs up (termination). The session layer manages this lifecycle for communication sessions between two applications — independently of the data format (Layer 6) or the application logic (Layer 7).

A session in the OSI sense is a semi-permanent, interactive information interchange between two communicating devices. It tracks:

  • Which side is allowed to transmit right now (dialog control)
  • How to recover if the connection is temporarily interrupted (synchronization checkpoints)
  • How to properly terminate the exchange when done

Session Establishment, Maintenance, and Termination

Establishment: The session layer negotiates the parameters of the session — who goes first, what type of communication (simplex, half-duplex, full-duplex), and synchronization points. It essentially says: "we are starting a session, here are the rules."

Maintenance: During an active session, Layer 5 inserts synchronization checkpoints into the data stream. If the connection drops mid-transfer, the session can be resumed from the last checkpoint rather than restarting from the beginning. This is why a large NFS file transfer can survive a brief network blip without corrupting the file.

Termination: The session layer handles graceful shutdown — ensuring both parties agree the session is complete before tearing down the connection. This is different from a TCP FIN — the session layer termination is at the logical application session level.


Real Protocols at Layer 5

In the OSI reference model, these protocols operate at or near Layer 5:

Protocol                      What It Does
RPC (Remote Procedure Call)   Manages the request-reply session for calling functions on remote machines — session setup, parameter passing, response handling
NetBIOS                       Legacy Windows protocol for session management between machines on a LAN — name resolution and session establishment
SMB (Server Message Block)    Windows file sharing; establishes and maintains sessions between Windows clients and file servers
NFS (Network File System)     Unix/Linux file sharing; session management for persistent file access across the network
SQL sessions                  When your application connects to PostgreSQL, the database driver establishes a session — authentication, transaction context, cursor state — this is Layer 5 behavior
PPTP                          Point-to-Point Tunneling Protocol for VPNs; establishes and manages tunnel sessions

How Modern Protocols Collapsed the Session Layer

In practice, the clean OSI separation between Layer 5, 6, and 7 largely disappeared with TCP/IP. Modern protocols often handle session management internally within the application layer:

  • HTTP/1.1 collapses all of Layers 5, 6, and 7 into one. The Connection: keep-alive header is Layer 5 behavior (maintaining the session across multiple requests) implemented at Layer 7.
  • HTTP/2 takes this further — it manages persistent sessions with multiplexed streams, all within a single TLS connection. The stream management is conceptually Layer 5 work done inside the application protocol.
  • WebSockets upgrade an HTTP connection into a persistent bidirectional channel. The handshake (Upgrade: websocket) and the ongoing connection maintenance are session-layer behaviors expressed as application protocol messages.
  • TLS sessions — TLS includes session resumption (via session IDs or tickets) so a returning client does not need to redo the full handshake. This is pure Layer 5 session management, but implemented within the security protocol.
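The resumption idea in the last bullet can be sketched as a ticket cache: a returning client presents a ticket and skips the expensive full handshake. This is a simplified model of the session-layer concept, not the actual TLS ticket mechanism; the class and method names are ours.

```javascript
// Simplified model of session resumption — a returning client presents a
// ticket and skips the full handshake. Not real TLS, just the idea.
class ResumptionCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.tickets = new Map(); // ticket → { clientId, issuedAt }
  }

  fullHandshake(clientId) {
    // Expensive path: key exchange, certificate verification, etc.
    const ticket = `tkt-${clientId}-${Math.random().toString(36).slice(2)}`;
    this.tickets.set(ticket, { clientId, issuedAt: Date.now() });
    return { resumed: false, ticket };
  }

  connect(clientId, ticket) {
    const entry = ticket && this.tickets.get(ticket);
    if (entry && entry.clientId === clientId &&
        Date.now() - entry.issuedAt < this.ttlMs) {
      return { resumed: true, ticket };    // cheap path: session resumed
    }
    return this.fullHandshake(clientId);   // unknown/expired ticket → full handshake
  }
}

const cache = new ResumptionCache(5 * 60 * 1000);
const first = cache.connect("client-a", null);
console.log(first.resumed);                            // false — full handshake
const second = cache.connect("client-a", first.ticket);
console.log(second.resumed);                           // true — resumed from ticket
```

Note where this lives: it is session-lifecycle logic, yet it would ship inside the security or application protocol, which is exactly the "collapsed Layer 5" point above.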

The key insight for interviews: the concept of a session (establishment, maintenance, termination) is real and important. The idea that it lives in a cleanly separate Layer 5 is a teaching model. In the real world, TCP/IP stacks and application protocols share these responsibilities across layers.


Code Example — Node.js TCP Server and Client (Layer 4)

This example demonstrates Layer 4 concepts directly: a raw TCP connection, port numbers, data transmission as a stream that gets chunked into segments, and connection lifecycle (establishment and termination).

// ---- TCP SERVER (Layer 4 concepts) ----
// Uses Node.js built-in 'net' module — raw TCP, no HTTP abstraction

const net = require("net");

const PORT = 9000;
let connectionCount = 0;

const server = net.createServer((socket) => {
  connectionCount++;
  const connectionId = connectionCount;

  // Layer 4: source port and destination port identify this unique connection
  console.log(
    `[Server] Connection #${connectionId} established` +
    ` from ${socket.remoteAddress}:${socket.remotePort}` +   // client's ephemeral port
    ` → server port ${socket.localPort}`                      // our listening port
  );

  // Layer 4: TCP delivers data as a stream — Node gives us chunks (segments)
  socket.on("data", (chunk) => {
    const message = chunk.toString("utf8");
    console.log(`[Server] Received segment: "${message}" (${chunk.length} bytes)`);

    // Echo back an application-level "ACK" — TCP's real ACKs happen
    // automatically at Layer 4; this response is just application data
    const response = `ACK | Echo: "${message}" | ServerTime: ${Date.now()}`;
    socket.write(response, "utf8");
    console.log(`[Server] Sent response (${Buffer.byteLength(response)} bytes)`);
  });

  // Layer 4: TCP FIN — peer initiated graceful connection termination
  socket.on("end", () => {
    console.log(`[Server] Connection #${connectionId} — FIN received, closing`);
    socket.end(); // send our own FIN
  });

  socket.on("error", (err) => {
    console.error(`[Server] Connection #${connectionId} error: ${err.message}`);
  });
});

server.listen(PORT, "127.0.0.1", () => {
  console.log(`[Server] Listening on 127.0.0.1:${PORT}`);
  console.log(`[Server] Waiting for TCP connections...`);
});
// ---- TCP CLIENT (Layer 4 concepts) ----
// Run this in a separate terminal after starting the server

const net = require("net");

const SERVER_HOST = "127.0.0.1";
const SERVER_PORT = 9000;

const client = net.createConnection({ host: SERVER_HOST, port: SERVER_PORT }, () => {
  // Layer 4: 3-way handshake complete — connection is established
  console.log(`[Client] TCP connection established`);
  console.log(`[Client] My ephemeral port: ${client.localPort}`);   // OS-assigned ephemeral
  console.log(`[Client] Server port: ${client.remotePort}`);        // well-known port

  // Send first segment
  const msg1 = "Hello from Layer 4";
  console.log(`\n[Client] Sending: "${msg1}"`);
  client.write(msg1, "utf8");
});

// Receive server response (server's segment)
client.on("data", (chunk) => {
  console.log(`[Client] Server responded: "${chunk.toString("utf8")}"`);

  // Send a second segment after receiving the first response
  const msg2 = "Goodbye — sending FIN";
  console.log(`\n[Client] Sending: "${msg2}"`);
  client.write(msg2, "utf8");

  // Initiate graceful TCP termination (sends FIN)
  setTimeout(() => {
    console.log("[Client] Initiating connection teardown (FIN)");
    client.end();
  }, 200);
});

client.on("end", () => {
  console.log("[Client] Connection fully closed (FIN-ACK received)");
});

client.on("error", (err) => {
  console.error(`[Client] Error: ${err.message}`);
});

Server output:

[Server] Listening on 127.0.0.1:9000
[Server] Waiting for TCP connections...
[Server] Connection #1 established from 127.0.0.1:54821 → server port 9000
[Server] Received segment: "Hello from Layer 4" (18 bytes)
[Server] Sent response (60 bytes)
[Server] Received segment: "Goodbye — sending FIN" (23 bytes)
[Server] Sent response (65 bytes)
[Server] Connection #1 — FIN received, closing

Client output:

[Client] TCP connection established
[Client] My ephemeral port: 54821
[Client] Server port: 9000

[Client] Sending: "Hello from Layer 4"
[Client] Server responded: "ACK | Echo: "Hello from Layer 4" | ServerTime: 1745228400000"

[Client] Sending: "Goodbye — sending FIN"
[Client] Initiating connection teardown (FIN)
[Client] Connection fully closed (FIN-ACK received)

Notice the ephemeral port 54821 — the OS assigned this dynamically to the client process. The server's port 9000 is fixed and listening. Together with the two IP addresses, this port pair forms the 4-tuple that uniquely identifies the TCP connection on both machines.


Code Example — Session Management Simulation (Layer 5)

This example simulates Layer 5 session behavior: establishing a session with authentication, maintaining it across multiple exchanges with a session ID, and terminating it gracefully. This mirrors what protocols like NFS, SQL database connections, and RPC frameworks do at the session layer.

// Simulating OSI Layer 5 Session behavior:
// - Session establishment (authentication + session ID assignment)
// - Session maintenance (state tracking, token validation per exchange)
// - Session termination (graceful close, state cleanup)

// ---- SESSION MANAGER (represents the Layer 5 session layer) ----

class SessionManager {
  constructor() {
    this.sessions = new Map(); // sessionId → session state
  }

  // Phase 1: ESTABLISHMENT — authenticate and create session
  establish(clientId, credentials) {
    console.log(`[Session] Establishment request from client: ${clientId}`);

    // Simulate credential check (authentication happens at session establishment)
    if (credentials.password !== "secret123") {
      console.log(`[Session] Authentication failed for ${clientId}`);
      return { success: false, error: "AUTH_FAILED" };
    }

    const sessionId = `SES-${Date.now()}-${Math.random().toString(36).slice(2, 7)}`;
    const session = {
      sessionId,
      clientId,
      establishedAt: Date.now(),
      lastActivity: Date.now(),
      checkpoints: [],        // synchronization points for crash recovery
      dialogTurn: "client",   // who is allowed to transmit (dialog control)
      active: true,
    };

    this.sessions.set(sessionId, session);

    console.log(`[Session] Session established — ID: ${sessionId}`);
    console.log(`[Session] Dialog control: ${session.dialogTurn} goes first`);

    return { success: true, sessionId };
  }

  // Phase 2: MAINTENANCE — validate session, update state, manage dialog control
  exchange(sessionId, message, sender) {
    const session = this.sessions.get(sessionId);

    if (!session || !session.active) {
      return { success: false, error: "SESSION_NOT_FOUND_OR_CLOSED" };
    }

    // Dialog control: enforce whose turn it is
    if (session.dialogTurn !== sender) {
      return {
        success: false,
        error: `DIALOG_VIOLATION — it is ${session.dialogTurn}'s turn to transmit`,
      };
    }

    // Session maintenance: update last activity (keep-alive tracking)
    session.lastActivity = Date.now();

    // Record a synchronization checkpoint on every exchange; announce one
    // every 3 exchanges (crash recovery could resume from the latest)
    session.checkpoints.push({ at: Date.now(), message });
    if (session.checkpoints.length % 3 === 0) {
      console.log(
        `[Session] Checkpoint #${session.checkpoints.length} recorded` +
        ` — session can resume from here if interrupted`
      );
    }

    // Switch dialog control to the other party (half-duplex simulation)
    session.dialogTurn = sender === "client" ? "server" : "client";

    console.log(
      `[Session] [${sessionId}] ${sender} → "${message}"` +
      ` | Next turn: ${session.dialogTurn}`
    );

    return { success: true, received: message };
  }

  // Phase 3: TERMINATION — graceful session close, state cleanup
  terminate(sessionId, initiator) {
    const session = this.sessions.get(sessionId);

    if (!session) {
      return { success: false, error: "SESSION_NOT_FOUND" };
    }

    const duration = Date.now() - session.establishedAt;
    console.log(`[Session] Termination initiated by: ${initiator}`);
    console.log(`[Session] Session ${sessionId} duration: ${duration}ms`);
    console.log(`[Session] Total checkpoints recorded: ${session.checkpoints.length}`);

    session.active = false;
    this.sessions.delete(sessionId);

    console.log(`[Session] Session ${sessionId} terminated and state cleaned up`);
    return { success: true, closedBy: initiator };
  }
}

// ---- SIMULATION ----

async function runSessionSimulation() {
  const manager = new SessionManager();

  console.log("=== PHASE 1: SESSION ESTABLISHMENT ===");
  const result = manager.establish("client-node-app", { password: "secret123" });
  if (!result.success) {
    console.error("Cannot continue — session not established");
    return;
  }
  const { sessionId } = result;

  console.log("\n=== PHASE 2: SESSION MAINTENANCE (data exchange) ===");

  // Half-duplex exchanges — each party waits its turn
  manager.exchange(sessionId, "REQUEST: fetch user profile", "client");
  manager.exchange(sessionId, "RESPONSE: { id: 42, name: 'Alice' }", "server");

  manager.exchange(sessionId, "REQUEST: update email to alice@new.com", "client");
  manager.exchange(sessionId, "RESPONSE: email updated successfully", "server");

  manager.exchange(sessionId, "REQUEST: fetch order history", "client");
  manager.exchange(sessionId, "RESPONSE: [order-1, order-2, order-3]", "server");

  // Demonstrate a dialog violation: after the server's response, dialog
  // control passed back to the client, so the server may not transmit again
  const violation = manager.exchange(sessionId, "SERVER SPEAKING OUT OF TURN", "server");
  if (!violation.success) {
    console.log(`[Session] Correctly blocked: ${violation.error}`);
  }

  console.log("\n=== PHASE 3: SESSION TERMINATION ===");
  manager.terminate(sessionId, "client");
}

runSessionSimulation();

Output:

=== PHASE 1: SESSION ESTABLISHMENT ===
[Session] Establishment request from client: client-node-app
[Session] Session established — ID: SES-1745228400000-k9x2m
[Session] Dialog control: client goes first

=== PHASE 2: SESSION MAINTENANCE (data exchange) ===
[Session] [SES-1745228400000-k9x2m] client → "REQUEST: fetch user profile" | Next turn: server
[Session] [SES-1745228400000-k9x2m] server → "RESPONSE: { id: 42, name: 'Alice' }" | Next turn: client
[Session] Checkpoint #3 recorded — session can resume from here if interrupted
[Session] [SES-1745228400000-k9x2m] client → "REQUEST: update email to alice@new.com" | Next turn: server
[Session] [SES-1745228400000-k9x2m] server → "RESPONSE: email updated successfully" | Next turn: client
[Session] [SES-1745228400000-k9x2m] client → "REQUEST: fetch order history" | Next turn: server
[Session] Checkpoint #6 recorded — session can resume from here if interrupted
[Session] [SES-1745228400000-k9x2m] server → "RESPONSE: [order-1, order-2, order-3]" | Next turn: client
[Session] Correctly blocked: DIALOG_VIOLATION — it is client's turn to transmit

=== PHASE 3: SESSION TERMINATION ===
[Session] Termination initiated by: client
[Session] Session SES-1745228400000-k9x2m duration: 12ms
[Session] Total checkpoints recorded: 6
[Session] Session SES-1745228400000-k9x2m terminated and state cleaned up

The session ID, dialog control, checkpoints, and graceful termination are all OSI Layer 5 concepts — none of this is about HTTP cookies or browser storage.



Common Mistakes

  • Confusing the TCP 3-way handshake with the OSI Session layer. The SYN/SYN-ACK/ACK handshake is a Layer 4 TCP mechanism — it establishes a reliable transport connection. The OSI Session layer (Layer 5) operates above this and manages the logical application session. The TCP handshake is a prerequisite for the session, not the session itself. Many candidates say "the session layer does the handshake" — this is wrong and will cost you in an interview.

  • Thinking HTTP session cookies equal OSI Layer 5. When an interviewer asks about the session layer, they mean the OSI concept: connection management, dialog control, and synchronization checkpoints in protocols like NFS, SMB, and SQL connections. express-session, JWT, and Set-Cookie: sessionId=abc are application layer (Layer 7) mechanisms — they have no relation to OSI Layer 5.

  • Confusing port numbers with application protocols. Port 443 is not HTTPS — it is where HTTPS happens to listen by convention. Port 5432 is not PostgreSQL — it is where PostgreSQL defaults to listening. A port is a Layer 4 addressing mechanism (a number from 0 to 65535). The application protocol that uses a port is a Layer 7 concept. You can run any service on any port; the well-known assignments are just conventions.


Interview Questions

Q: What is the difference between a TCP segment and an IP packet? Which OSI layer does each belong to?

A TCP segment is the Protocol Data Unit (PDU) at Layer 4 (Transport). It contains the source port, destination port, sequence number, acknowledgment number, flags, window size, and payload. An IP packet is the PDU at Layer 3 (Network). It contains the source IP, destination IP, TTL, protocol field, and its payload — which is the entire TCP segment. When data moves down the OSI stack, each layer wraps the layer above's data inside its own header. So the relationship is: IP packet payload = TCP segment, and TCP segment payload = application data. They are nested, not interchangeable terms.

Q: Explain TCP flow control versus TCP congestion control. Why are two separate mechanisms needed?

They solve different problems. Flow control protects the receiver — it prevents the sender from overwhelming the receiver's buffer. The receiver advertises its available buffer in the TCP Window field; the sender respects this limit and backs off when the window shrinks to zero. Congestion control protects the network — it prevents any single connection from saturating shared links and causing packet loss for everyone. TCP detects congestion through packet loss (timeout or duplicate ACKs) and uses algorithms like slow start, congestion avoidance, and fast retransmit to reduce its sending rate. You need both because a receiver might have a large buffer (flow control permits fast sending) while the network between them is congested (congestion control should slow sending down).
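The congestion-control half of this answer is easy to visualize with a toy AIMD-style simulation: the congestion window grows until loss is detected, then collapses, regardless of what the receiver's window would allow. This is a simplification of TCP Reno-style behavior, not a faithful model.

```javascript
// Toy congestion-control model: slow start doubles cwnd each round until the
// slow-start threshold, then growth turns linear; on loss, the threshold
// halves and cwnd resets. A simplification of TCP Reno-style behavior.
function nextCwnd(cwnd, ssthresh, lossDetected) {
  if (lossDetected) {
    const newSsthresh = Math.max(Math.floor(cwnd / 2), 1);
    return { cwnd: 1, ssthresh: newSsthresh };  // back off hard
  }
  if (cwnd < ssthresh) {
    return { cwnd: cwnd * 2, ssthresh };        // slow start: exponential growth
  }
  return { cwnd: cwnd + 1, ssthresh };          // congestion avoidance: linear growth
}

let state = { cwnd: 1, ssthresh: 8 };
const history = [];
for (let round = 0; round < 7; round++) {
  const loss = round === 4;                     // pretend round 4 loses a packet
  state = nextCwnd(state.cwnd, state.ssthresh, loss);
  history.push(state.cwnd);
}
console.log(history); // [ 2, 4, 8, 9, 1, 2, 4 ]
```

The trace shows both phases: exponential growth to 8, a linear step to 9, the collapse to 1 on loss, then exponential growth again toward the new, lower threshold.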

Q: Why would you choose UDP over TCP for a video streaming application?

In live video, timeliness matters more than completeness. If a TCP segment containing video frame 42 is lost, TCP will retransmit it. But by the time the retransmission arrives, frames 43 through 60 are already waiting — the player has to buffer them, causing a perceptible stall. With UDP, a dropped frame is simply skipped. The player shows a brief glitch (or the codec interpolates) and immediately continues with frame 43. The result is smoother, lower-latency playback. Additionally, UDP's smaller header (8 bytes vs TCP's 20+ bytes) reduces per-packet overhead at high bitrates. Modern streaming protocols like WebRTC and QUIC (HTTP/3) use UDP and implement selective reliability at the application layer — getting the best of both worlds.

Q: What does the OSI Session layer actually do, and can you name two protocols that operate at Layer 5?

The OSI Session layer (Layer 5) manages the lifecycle of a communication session between two applications: it handles establishment (negotiating session parameters and authenticating), maintenance (dialog control — which side transmits when — and inserting synchronization checkpoints so interrupted transfers can resume), and termination (gracefully closing the session when both parties are done). Two protocols that operate at Layer 5: RPC (Remote Procedure Call), which manages the session state around a remote function call — setup, parameter exchange, and teardown; and SMB (Server Message Block), the Windows file-sharing protocol, which establishes and maintains authenticated sessions between clients and file servers. In modern TCP/IP stacks, session layer behavior is typically absorbed into the application layer — HTTP/2 stream management and WebSocket connection maintenance are examples.

Q: A client opens a connection to a server. The server is running three services: a web server on port 443, a PostgreSQL database on port 5432, and an SSH daemon on port 22. How does the OS know which process should receive each incoming TCP segment?

The OS uses the 4-tuple to uniquely identify each TCP connection: (source IP, source port, destination IP, destination port). Each incoming segment contains this 4-tuple in its header. The OS maintains a socket table mapping 4-tuples to process file descriptors. When a segment arrives for destination port 443, the OS looks it up in the socket table and delivers it to the web server process. When a segment arrives for destination port 5432, it goes to the PostgreSQL process. Two segments could even come from the same client IP with different source (ephemeral) ports — they would still be distinguished by the full 4-tuple and delivered to separate processes or connection handlers. This is Layer 4 multiplexing: one IP address, many simultaneous logical connections, each dispatched to the right application.
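The lookup described above can be modeled as a map keyed by the 4-tuple. A sketch of the OS socket table, with made-up addresses and process names:

```javascript
// Sketch of OS-level demultiplexing: key the socket table by the connection
// 4-tuple and dispatch each incoming segment by exact match.
const socketTable = new Map();

function tupleKey(srcIp, srcPort, dstIp, dstPort) {
  return `${srcIp}:${srcPort}->${dstIp}:${dstPort}`;
}

function register(srcIp, srcPort, dstIp, dstPort, processName) {
  socketTable.set(tupleKey(srcIp, srcPort, dstIp, dstPort), processName);
}

function dispatch(seg) {
  const key = tupleKey(seg.srcIp, seg.srcPort, seg.dstIp, seg.dstPort);
  return socketTable.get(key) ?? "no matching socket (RST would be sent)";
}

// Same client IP, three services, three established connections:
register("10.0.0.5", 52431, "10.0.0.1", 443,  "nginx");
register("10.0.0.5", 52890, "10.0.0.1", 5432, "postgres");
register("10.0.0.5", 53012, "10.0.0.1", 22,   "sshd");

console.log(dispatch({ srcIp: "10.0.0.5", srcPort: 52890, dstIp: "10.0.0.1", dstPort: 5432 }));
// "postgres" — matched on the full 4-tuple, not just the destination port
```

Two connections from the same client IP to the same destination port still land in different table entries, because their ephemeral source ports differ.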


Quick Reference — Cheat Sheet

Layer 4 vs Layer 5 Comparison

Property             Layer 4 — Transport                                Layer 5 — Session
PDU name             Segment (TCP) / Datagram (UDP)                     Data
Core responsibility  End-to-end delivery between processes              Session lifecycle management
Addressing           Port numbers (0–65535)                             Session IDs / tokens
Connection           TCP (connection-oriented), UDP (connectionless)    Established on top of a transport connection
Reliability          TCP: sequencing, ACK, retransmission; UDP: none    Synchronization checkpoints for crash recovery
Flow management      Window size (flow control), CWND (congestion)      Dialog control (who transmits when)
Key protocols        TCP, UDP, SCTP, DCCP                               RPC, NetBIOS, SMB, NFS, SQL sessions, PPTP
Modern relevance     Extremely high — TCP/UDP underpin everything       Absorbed into application layer in TCP/IP stacks
Interview trap       Segments ≠ packets; TCP handshake lives here       Not HTTP cookies; not browser sessions

TCP Segment Structure

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
┌─────────────────────────────┬─────────────────────────────────────┐
│       Source Port (16)      │         Destination Port (16)       │
├─────────────────────────────┴─────────────────────────────────────┤
│                      Sequence Number (32)                         │
├───────────────────────────────────────────────────────────────────┤
│                    Acknowledgment Number (32)                     │
├─────────┬────────┬─┬─┬─┬─┬─┬─┬───────────────────────────────────┤
│Data Off.│ Rsrvd  │U│A│P│R│S│F│           Window Size (16)        │
│  (4)    │  (4)   │R│C│S│S│Y│I│                                   │
│         │        │G│K│H│T│N│N│                                   │
├─────────┴────────┴─┴─┴─┴─┴─┴─┴───────────┬───────────────────────┤
│              Checksum (16)                │   Urgent Pointer (16) │
├───────────────────────────────────────────┴───────────────────────┤
│                    Options (0–320 bits, if any)                   │
├───────────────────────────────────────────────────────────────────┤
│                         Data (Payload)                            │
└───────────────────────────────────────────────────────────────────┘

Field         Bits  Purpose
──────────    ────  ──────────────────────────────────────────────────
Source Port    16   Sending application's port (often ephemeral)
Dest Port      16   Receiving application's port (often well-known)
Seq Number     32   Byte offset of first byte in this segment
Ack Number     32   Next byte number the sender expects to receive
Data Offset     4   Where the data starts (header length in 32-bit words)
Flags           6   URG ACK PSH RST SYN FIN — connection control bits
Window Size    16   Receiver's available buffer space (flow control)
Checksum       16   Error detection across header + data
Urgent Ptr     16   Offset to urgent data (only if URG flag set)
Options     0–320   MSS, window scaling, timestamps, SACK (optional)

TCP vs UDP at a Glance

TCP                               UDP
─────────────────────────         ──────────────────────────
Connection-oriented               Connectionless
3-way handshake required          No handshake — fire and forget
Ordered delivery guaranteed       No ordering guarantee
Retransmission on loss            No retransmission
Flow control (window size)        No flow control
Congestion control (CWND)         No congestion control
Header: 20–60 bytes               Header: 8 bytes (fixed)
Higher latency                    Lower latency
Use: HTTP, HTTPS, SSH, SMTP       Use: DNS, video, gaming, VoIP, QUIC

Previous: Lesson 2.2 — Data Link & Network & Physical Layers (Layers 1–3) → Next: Lesson 2.4 — Presentation & Application Layers (Layers 6–7) →


This is Lesson 2.3 of the Networking Interview Prep Course — 8 chapters, 32 lessons.
