Non-Repeatable Read and READ COMMITTED
The Same Query, Two Different Answers
LinkedIn Hook
"You ran the same SELECT twice inside one transaction. The numbers changed. Nobody touched your code. Welcome to non-repeatable reads."
Most developers learn that transactions are "all or nothing" and stop there. Then production hits a strange bug: a report query reads a customer's balance, does some math, reads the same balance again, and the two values disagree. No bug in the application. No race condition in your code. The problem is the default isolation level -- and almost nobody notices until the numbers in an invoice stop adding up.
Postgres ships with READ COMMITTED as the default isolation level, and it does exactly what the name promises: each statement sees only data committed before that statement started. Run the same SELECT twice in one transaction with another session committing in between, and you get two different snapshots. This is called a non-repeatable read, and it is a feature, not a bug -- but only if you know it exists.
MySQL InnoDB defaults to REPEATABLE READ instead, so engineers who learned on MySQL get blindsided when they switch to Postgres. The same code that produced consistent reads on MySQL silently starts producing drifting reads on Postgres, and the bug only shows up when two users hit the system at the same time.
In Lesson 11.4, I break down non-repeatable reads with two-session SQL demos you can paste into psql, the exact mechanics of READ COMMITTED in Postgres, why the default differs from MySQL, and when to upgrade isolation versus when to live with the drift.
Read the full lesson -> [link]
#PostgreSQL #Databases #Transactions #IsolationLevels #BackendDevelopment #InterviewPrep
What You'll Learn
- What a non-repeatable read is and how it differs from a dirty read
- How Postgres READ COMMITTED takes a fresh snapshot per statement, not per transaction
- A two-session psql demo where the same SELECT returns two different rows
- Why MySQL InnoDB defaults to REPEATABLE READ and Postgres defaults to READ COMMITTED
- How SELECT ... FOR UPDATE and SELECT ... FOR SHARE lock rows to prevent drift
- When non-repeatable reads are acceptable and when they cause real correctness bugs
- How to upgrade a single transaction to REPEATABLE READ without changing the global default
The Bank Statement Analogy -- Why the Same Question Gets Two Answers
Imagine you walk up to a bank teller and ask "what is my balance?" The teller checks the screen and says "$1,000." You pause for a moment to do some math in your head -- maybe you are deciding whether to withdraw cash. While you are thinking, your roommate at a different branch deposits $500 into the same joint account. You ask the teller again, just to confirm: "what is my balance?" The teller checks the screen again and says "$1,500."
You did not move. You did not touch anything. You asked the exact same question to the exact same teller and got two different answers. From your point of view, the bank "changed its story" mid-conversation. But from the bank's point of view, nothing went wrong: each lookup correctly reported the balance at the moment the lookup happened. The teller never lied. The teller just gave you the freshest committed truth, twice, and the truth changed in between.
That is exactly what a non-repeatable read is. Inside one of your database transactions, you run the same SELECT twice. Between the two queries, a different transaction commits a change to the same row. Your second query sees the new value. The transaction did not break, the database did not corrupt anything, and you did not lose any data -- but the assumption "if I read it once, I can read it again and get the same answer" turned out to be wrong.
+---------------------------------------------------------------+
| NON-REPEATABLE READ -- THE PROBLEM |
+---------------------------------------------------------------+
| |
| Session A (your transaction) |
| ---------------------------- |
| BEGIN; |
| SELECT balance FROM accounts WHERE id = 1; -> 1000 |
| |
| (you start computing something based on 1000) |
| |
| Session B (another user) |
| ------------------------ |
| BEGIN; |
| UPDATE accounts |
| SET balance = 1500 |
| WHERE id = 1; |
| COMMIT; |
| |
| SELECT balance FROM accounts WHERE id = 1; -> 1500 |
| |
| Same row. Same transaction. Different answer. |
| This is a NON-REPEATABLE READ. |
| |
+---------------------------------------------------------------+
The key insight: a non-repeatable read is not a dirty read. In Lesson 11.3 we saw dirty reads, where you see another transaction's uncommitted changes -- those changes might be rolled back, leaving you with values that never officially existed. Non-repeatable reads only show you data that was fully committed by the other transaction. The data is real. It just was not real yet the first time you asked.
What "Non-Repeatable Read" Actually Means
A read is repeatable if running the same SELECT twice inside the same transaction returns the same data both times. A read is non-repeatable if a row you previously read has been modified or deleted by another committed transaction by the time you read it again.
There are three distinct read anomalies the SQL standard cares about, and they form a hierarchy:
+---------------------------------------------------------------+
| THE THREE READ ANOMALIES |
+---------------------------------------------------------------+
| |
| 1. DIRTY READ |
| - You see another transaction's UNCOMMITTED changes. |
| - Those changes might be rolled back later. |
| - Prevented by: READ COMMITTED and stricter levels. |
| |
| 2. NON-REPEATABLE READ |
| - You re-read a row and it has CHANGED or DISAPPEARED. |
| - The change was committed by another transaction. |
| - Prevented by: REPEATABLE READ and stricter levels. |
| |
| 3. PHANTOM READ |
| - You re-run a range query and NEW rows appear. |
| - Another transaction inserted rows matching your WHERE. |
| - Prevented by: SERIALIZABLE (and Postgres REPEATABLE |
| READ, which is actually snapshot isolation). |
| |
+---------------------------------------------------------------+
A non-repeatable read is specifically about a row you already read changing under you. It is not about new rows showing up (that is a phantom read, covered in Lesson 11.5) and it is not about seeing values that were never committed (that is a dirty read).
Napkin AI Visual Prompt: "Dark gradient (#0a0f1f -> #111a2e). Three stacked horizontal lanes labelled 'Dirty Read', 'Non-Repeatable Read', 'Phantom Read'. Each lane has two terminal windows side by side connected by a pink (#ff5c8a) arrow. The 'Non-Repeatable Read' lane is highlighted in sky blue (#4fc3f7) with a glowing border. White monospace labels. The first lane shows uncommitted data leaking, the second shows a row's value changing, the third shows new rows appearing in a range. Title at top: 'Three Read Anomalies'."
Postgres Default -- READ COMMITTED, Per-Statement Snapshots
Postgres ships with READ COMMITTED as the default isolation level for every new transaction. You can confirm this from psql:
-- Check the default isolation level for new transactions
SHOW default_transaction_isolation;
Sample output:
default_transaction_isolation
-------------------------------
read committed
(1 row)
Here is the critical mechanic that trips up almost everyone: under READ COMMITTED, each statement gets its own snapshot of the database, taken at the moment the statement starts. The transaction does not get one stable snapshot for its entire lifetime. Every SELECT, every UPDATE, every DELETE looks at the database fresh.
+---------------------------------------------------------------+
| READ COMMITTED -- ONE SNAPSHOT PER STATEMENT |
+---------------------------------------------------------------+
| |
| BEGIN; |
| SELECT ... -> snapshot taken at statement start (S1) |
| SELECT ... -> snapshot taken at statement start (S2) |
| SELECT ... -> snapshot taken at statement start (S3) |
| COMMIT; |
| |
| Between any two statements, other transactions can commit |
| changes that S2 and S3 will see. There is NO stability |
| across statements -- only within a single statement. |
| |
+---------------------------------------------------------------+
This is exactly the behaviour that produces non-repeatable reads. The first SELECT sees the row as it existed at time T1. Some other session commits at T2. Your second SELECT runs at T3, sees the brand new committed value, and reports it without complaint.
READ COMMITTED does prevent dirty reads -- you will never see another transaction's uncommitted writes -- so it is strictly safer than READ UNCOMMITTED. But it does not prevent the same row from changing under you, and it does not prevent new rows from appearing in a range query. Those are tradeoffs Postgres made deliberately, in exchange for higher concurrency: per-statement snapshots are cheap, never block on conflicts, and never throw serialization errors.
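You can actually watch the per-statement snapshot move. The sketch below uses pg_current_snapshot(), which is available in PostgreSQL 13 and later; the snapshot values shown are illustrative, not exact:

```sql
-- Sketch: observing that READ COMMITTED takes a new snapshot per statement.
-- Requires PostgreSQL 13+ for pg_current_snapshot().
BEGIN;  -- default isolation: READ COMMITTED
SELECT pg_current_snapshot();  -- e.g. 748:748:
-- (another session commits a write here)
SELECT pg_current_snapshot();  -- the snapshot advances, e.g. 748:750:
COMMIT;
-- At REPEATABLE READ, both calls would return the identical snapshot.
```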
Two-Session Demo -- Watch a Non-Repeatable Read Happen
Open two psql sessions to the same database. The cleanest way is two terminals; if you only have one, two separate connections (for example, two SQL editor tabs in a tool like DBeaver) work just as well. We will create a tiny accounts table and watch the anomaly happen live.
Setup
-- Run this once in either session to set up the demo
CREATE TABLE accounts (
id INTEGER PRIMARY KEY,
owner TEXT NOT NULL,
balance NUMERIC(12, 2) NOT NULL
);
INSERT INTO accounts (id, owner, balance) VALUES
(1, 'Alice', 1000.00),
(2, 'Bob', 500.00);
-- Confirm the starting state
SELECT * FROM accounts ORDER BY id;
Sample output:
id | owner | balance
----+-------+---------
1 | Alice | 1000.00
2 | Bob | 500.00
(2 rows)
Demo 1 -- The Classic Non-Repeatable Read
Run the statements in the order shown by the timeline. Session A is your "report" transaction; Session B is another user updating the row.
+---------------------------------------------------------------+
| TIMELINE -- DEMO 1 |
+---------------------------------------------------------------+
| |
| T1 Session A: BEGIN; SELECT balance ... -> 1000.00 |
| T2 Session B: BEGIN; UPDATE balance=1500; COMMIT; |
| T3 Session A: SELECT balance ... -> 1500.00 |
| T4 Session A: COMMIT; |
| |
| Same SELECT in Session A returned two different values. |
| |
+---------------------------------------------------------------+
Session A (your transaction):
-- Session A, statement 1
BEGIN;
-- Default isolation is READ COMMITTED, so we do not need to set it
SELECT balance FROM accounts WHERE id = 1;
Sample output:
balance
---------
1000.00
(1 row)
Now switch to Session B and commit a change while Session A is still open.
Session B (another user):
-- Session B, runs while Session A's transaction is still open
BEGIN;
UPDATE accounts SET balance = 1500.00 WHERE id = 1;
COMMIT;
Sample output:
UPDATE 1
COMMIT
Switch back to Session A and re-run the exact same query inside the same transaction.
Session A (continued):
-- Session A, statement 2 -- same query, same transaction
SELECT balance FROM accounts WHERE id = 1;
Sample output:
balance
---------
1500.00
(1 row)
-- Session A, finish up
COMMIT;
The same SELECT returned 1000.00 the first time and 1500.00 the second time, all inside one transaction. That is a non-repeatable read in the wild. Session A did nothing wrong -- it is just running at READ COMMITTED, which permits this.
Demo 2 -- Non-Repeatable Read via DELETE
A non-repeatable read is not just about UPDATE. If another session deletes the row, your second read returns zero rows.
-- Reset the data first
UPDATE accounts SET balance = 1000.00 WHERE id = 1;
Session A:
BEGIN;
SELECT id, owner, balance FROM accounts WHERE id = 1;
Sample output:
id | owner | balance
----+-------+---------
1 | Alice | 1000.00
(1 row)
Session B (while A is still open):
BEGIN;
DELETE FROM accounts WHERE id = 1;
COMMIT;
Session A (continued):
SELECT id, owner, balance FROM accounts WHERE id = 1;
Sample output:
id | owner | balance
----+-------+---------
(0 rows)
COMMIT;
The row is gone, even though Session A never deleted it and never even got a chance to react. From Session A's perspective the row "existed and then did not exist," all within a single transaction. This is the same anomaly -- a row Session A previously read is no longer in the same state on the second read.
Demo 3 -- Fixing It With REPEATABLE READ
Postgres lets you upgrade an individual transaction to a stricter isolation level without changing the global default. Use BEGIN ISOLATION LEVEL REPEATABLE READ (or SET TRANSACTION ISOLATION LEVEL REPEATABLE READ immediately after BEGIN).
Reset the data first:
INSERT INTO accounts (id, owner, balance) VALUES (1, 'Alice', 1000.00)
ON CONFLICT (id) DO UPDATE SET balance = 1000.00;
Session A:
-- Open a transaction at REPEATABLE READ instead of READ COMMITTED
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT balance FROM accounts WHERE id = 1;
Sample output:
balance
---------
1000.00
(1 row)
Session B (while A is still open):
BEGIN;
UPDATE accounts SET balance = 9999.00 WHERE id = 1;
COMMIT;
Session A (continued):
-- Same query, same transaction, but now at REPEATABLE READ
SELECT balance FROM accounts WHERE id = 1;
Sample output:
balance
---------
1000.00
(1 row)
COMMIT;
-- After COMMIT, a fresh transaction sees the new value
SELECT balance FROM accounts WHERE id = 1;
Sample output:
balance
---------
9999.00
(1 row)
At REPEATABLE READ, Postgres takes one snapshot at the first statement of the transaction and reuses it for every subsequent read. Session A keeps seeing 1000.00 even though Session B already committed 9999.00. The non-repeatable read anomaly is gone.
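One caveat worth knowing: REPEATABLE READ does not just stabilize reads. If Session A had tried to write the row that Session B already changed, Postgres would abort it rather than let it write based on a stale snapshot. A sketch of what that looks like:

```sql
-- Sketch: under REPEATABLE READ, writing a row that another transaction
-- modified after your snapshot fails with a serialization error.
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT balance FROM accounts WHERE id = 1;   -- snapshot sees 1000.00
-- (Session B commits an UPDATE to id = 1 here)
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-- ERROR:  could not serialize access due to concurrent update
ROLLBACK;
-- The application should catch this error and retry the whole transaction.
```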
Demo 4 -- Locking the Row With SELECT ... FOR UPDATE
Sometimes you cannot upgrade the whole transaction (maybe you need READ COMMITTED elsewhere), but you do want to make sure a specific row will not change under you. The targeted fix is SELECT ... FOR UPDATE, which acquires a row-level lock that blocks other writers until you commit.
Reset:
UPDATE accounts SET balance = 1000.00 WHERE id = 1;
Session A:
BEGIN;
-- FOR UPDATE locks the row against any other writer
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
Sample output:
balance
---------
1000.00
(1 row)
Session B (while A is still open):
BEGIN;
-- This UPDATE blocks until Session A commits or rolls back
UPDATE accounts SET balance = 9999.00 WHERE id = 1;
Session B hangs waiting on the lock. Session A is free to keep reading and computing.
Session A (continued):
-- Re-read the locked row -- guaranteed unchanged
SELECT balance FROM accounts WHERE id = 1;
Sample output:
balance
---------
1000.00
(1 row)
-- Do whatever business logic you needed to do
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;
The moment Session A commits, Session B's UPDATE unblocks and runs against the new value (900.00), not the original 1000.00. Both transactions see consistent state, and the two writes happen in a defined order instead of one silently clobbering the other. This pattern -- read with FOR UPDATE, modify, commit -- is the workhorse of correct money-handling code in Postgres at READ COMMITTED.
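The FOR SHARE variant mentioned in the lesson outline is the read-only cousin of FOR UPDATE: it blocks writers but lets other readers take the same shared lock, which is the right choice when several transactions need a row to stay put without any of them modifying it. A minimal sketch:

```sql
-- Sketch: FOR SHARE takes a shared row lock.
-- Other sessions may also SELECT ... FOR SHARE the same row concurrently,
-- but any UPDATE/DELETE (or FOR UPDATE) must wait until we commit.
BEGIN;
SELECT balance FROM accounts WHERE id = 1 FOR SHARE;
-- ... read-only business logic that depends on this row not changing ...
COMMIT;  -- writers blocked on this row are now free to proceed
```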
Postgres vs MySQL InnoDB -- Different Defaults, Different Surprises
The single most confusing thing for engineers moving between Postgres and MySQL is that the default isolation level is different. Code that worked one way on MySQL can silently misbehave on Postgres, and vice versa.
+---------------------------------------------------------------+
| DEFAULT ISOLATION LEVELS |
+---------------------------------------------------------------+
| |
| PostgreSQL |
| Default: READ COMMITTED |
| - One snapshot per STATEMENT |
| - Allows non-repeatable reads |
| - Allows phantom reads |
| |
| MySQL InnoDB |
| Default: REPEATABLE READ |
| - One snapshot per TRANSACTION |
| - Prevents non-repeatable reads |
| - Mostly prevents phantoms (gap locks) |
| |
| SQL Server |
| Default: READ COMMITTED (locking, not snapshot) |
| |
| Oracle |
| Default: READ COMMITTED (statement-level snapshots) |
| |
+---------------------------------------------------------------+
If you write a multi-statement report on MySQL and rely on the fact that "reads inside one transaction always agree," you will get burned the day you port it to Postgres. The reverse is milder: developers who build on Postgres assume reads can drift and write defensive code (locking, single-statement queries, explicit snapshots) that is redundant on MySQL, but at least not actively harmful.
The right takeaway is simple: never assume the default isolation level. Read your database's documentation, set the level explicitly when correctness matters, and know which anomalies your default actually permits.
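Checking rather than assuming is a one-liner on each engine. A sketch (note: the MySQL system variable is @@transaction_isolation on 8.0+; older versions used @@tx_isolation):

```sql
-- PostgreSQL: default for new transactions, and the current transaction's level
SHOW default_transaction_isolation;
SHOW transaction_isolation;

-- MySQL 8.0+: session and global defaults
SELECT @@transaction_isolation, @@global.transaction_isolation;
```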
When Non-Repeatable Reads Are Actually Fine
It is tempting to read about anomalies and conclude "I should just use SERIALIZABLE for everything." Do not do that. READ COMMITTED is the default for a reason: it is fast, it never blocks readers on writers, and it never throws serialization errors that force you to retry. For most workloads it is exactly the right tradeoff.
Non-repeatable reads are a real problem when:
- You are doing multi-statement business logic that assumes a value is stable (read balance, decide whether to allow withdrawal, write new balance).
- You are generating a report that needs to be internally consistent (sum of categories must equal the grand total).
- You are running an integrity check that compares two queries against each other.
Non-repeatable reads are basically harmless when:
- You only run a single query per transaction (no second read, no anomaly possible).
- You are reading immutable data (an audit log, an event stream, a user's signup row).
- You are okay seeing the freshest committed value at all times (a dashboard widget, a cache refresh).
The decision is not "what isolation level is safest" but "what does my transaction actually need to be correct." If you only need atomicity for a single write, READ COMMITTED is great. If you need a stable view of the world across multiple statements, upgrade to REPEATABLE READ for that one transaction.
Common Mistakes
1. Assuming Postgres defaults to REPEATABLE READ because MySQL does. This is the number one porting bug. Engineers who learned transactions on MySQL assume that reads inside a transaction are stable. On Postgres they are not -- each statement gets its own snapshot. The same code that was "correct" on MySQL silently develops drift bugs the day it runs on Postgres. The fix is to read your database's documentation, never assume defaults transfer between vendors, and explicitly set the isolation level when correctness depends on it.
2. Using SERIALIZABLE everywhere "just to be safe." Stricter isolation levels are not free. SERIALIZABLE adds tracking overhead, increases contention, and forces your application to handle serialization failures by retrying. For workloads with high write contention, "just upgrade everything to SERIALIZABLE" can drop throughput by an order of magnitude and turn every retry into a user-visible error. The right approach is to use READ COMMITTED by default and upgrade individual transactions that have multi-statement consistency requirements.
3. Reading a value, computing on it, and writing it back without locking. This is the classic lost-update bug. Session A reads balance = 1000, decides the new balance should be 900, and writes 900. Meanwhile Session B reads the same 1000, decides the new balance should be 800, and writes 800. One of the two updates is silently lost. The fix is SELECT ... FOR UPDATE to lock the row while you compute, or upgrade to REPEATABLE READ which will throw a serialization error and let you retry.
4. Confusing non-repeatable reads with dirty reads. A dirty read shows you data that was never committed -- it might be rolled back, leaving you holding a value that never officially existed. A non-repeatable read only shows you data that was fully committed by another transaction. Both are anomalies, but they have very different consequences and very different fixes. READ COMMITTED prevents dirty reads but allows non-repeatable reads. Knowing the difference is a common interview gotcha.
5. Forgetting that Postgres REPEATABLE READ is actually snapshot isolation. The SQL standard says REPEATABLE READ must prevent non-repeatable reads but may allow phantoms. Postgres's REPEATABLE READ implementation is actually snapshot isolation, which prevents phantoms too -- it is stricter than the standard requires. This means upgrading to REPEATABLE READ on Postgres often gives you most of what you would get from SERIALIZABLE, without the serialization-failure overhead. Do not assume the standard's wording matches what your engine actually does.
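The lost-update scenario in mistake 3 often has an even simpler fix than locking: push the arithmetic into a single conditional UPDATE and check the affected row count. A sketch:

```sql
-- Sketch: atomic conditional debit -- no read-modify-write window at all.
-- The WHERE clause both finds the row and enforces the invariant.
UPDATE accounts
SET balance = balance - 100
WHERE id = 1
  AND balance >= 100;
-- Affected rows = 1 -> debit succeeded.
-- Affected rows = 0 -> insufficient funds (or no such account).
```

Because a single statement is atomic and always reads the latest committed row version when it writes, two concurrent debits serialize correctly even at READ COMMITTED.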
Interview Questions
1. "What is a non-repeatable read, and which isolation level prevents it?"
A non-repeatable read happens when you read the same row twice inside a single transaction and get different values, because another transaction committed an UPDATE or DELETE to that row in between your two reads. The data you see is real and committed -- it is not a dirty read -- but it changed under you. READ COMMITTED permits non-repeatable reads, because it takes a fresh snapshot at the start of every statement, not at the start of the transaction. To prevent non-repeatable reads, you need to use REPEATABLE READ (or stricter), which takes one snapshot at the start of the transaction and reuses it for every subsequent read. In Postgres you can do this per-transaction with BEGIN ISOLATION LEVEL REPEATABLE READ, without changing the global default.
2. "What is the default isolation level in Postgres, and how does it differ from MySQL InnoDB?"
Postgres defaults to READ COMMITTED, while MySQL InnoDB defaults to REPEATABLE READ. This is the single most surprising difference between the two engines for engineers who switch between them. Under Postgres READ COMMITTED, each statement gets its own snapshot, so non-repeatable reads and phantom reads are both possible. Under MySQL REPEATABLE READ, one snapshot is taken at the first read and reused for the entire transaction, which prevents non-repeatable reads and -- thanks to InnoDB's gap locks -- mostly prevents phantoms too. The practical consequence is that multi-statement business logic that "just works" on MySQL can silently produce drifting reads on Postgres unless you explicitly upgrade the isolation level or use row-level locks.
3. "Walk me through how you would prevent a lost update when two users try to debit the same account at the same time."
The naive approach -- read the balance, compute the new balance in application code, write it back -- is vulnerable to a lost update under READ COMMITTED. Two sessions can both read 1000, both compute 900, and the second write silently overwrites the first, losing one debit entirely. There are three good fixes. First, use SELECT ... FOR UPDATE to acquire a row-level lock when you read the balance; the second session will block until the first commits, then read the updated value. Second, upgrade the transaction to REPEATABLE READ; Postgres will detect the conflict and abort the second transaction with a serialization failure, which the application retries. Third, do the math in SQL with a single statement: UPDATE accounts SET balance = balance - 100 WHERE id = 1 AND balance >= 100, which is atomic at the statement level and never reads stale data. The single-statement fix is the simplest and the one I prefer when the logic fits.
4. "Why does Postgres default to READ COMMITTED instead of a stricter level?"
Because READ COMMITTED is the right tradeoff for the vast majority of OLTP workloads. It prevents dirty reads (you never see uncommitted data), it never blocks readers on writers, it never throws serialization errors that force application retries, and per-statement snapshots are very cheap to maintain. Stricter levels add real costs: REPEATABLE READ holds a snapshot for the whole transaction (more bloat, longer-lived snapshots delay vacuum), and SERIALIZABLE adds predicate-tracking machinery and forces the application to handle serialization failures. For most applications -- where individual transactions are short and most logic is single-statement updates -- READ COMMITTED gives you good enough correctness with the highest possible throughput. The Postgres philosophy is "default to fast and safe-ish, let the developer opt in to stricter when correctness demands it."
5. "If REPEATABLE READ prevents non-repeatable reads on Postgres, why would anyone ever use SERIALIZABLE?"
REPEATABLE READ on Postgres prevents non-repeatable reads and (because the implementation is snapshot isolation) also prevents phantom reads in the simple sense -- you cannot see new rows appearing inside your snapshot. But snapshot isolation does not prevent write skew, which is a more subtle anomaly where two transactions each read a consistent snapshot, each make a decision based on it, and the combined result violates an invariant that neither transaction alone could violate. The classic example is two doctors in an on-call system: both read "there are two doctors on call, I am allowed to take myself off," both write themselves off, and the system ends up with zero on-call doctors even though every individual transaction was internally consistent. SERIALIZABLE on Postgres uses Serializable Snapshot Isolation (SSI) to detect these dangerous read-write patterns and abort one of the transactions with a serialization failure. So the rule is: use REPEATABLE READ when you need stable reads, and upgrade to SERIALIZABLE when you have invariants that span multiple rows and multiple transactions.
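The on-call example can be sketched as two interleaved sessions. The doctors table and its columns here are illustrative, not part of this lesson's demo schema:

```sql
-- Sketch: write skew under REPEATABLE READ (snapshot isolation).
-- Assumed schema: doctors(id INTEGER, on_call BOOLEAN).
-- Invariant: at least one doctor on call. Start: doctors 1 and 2 both on call.

-- Session A
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM doctors WHERE on_call;       -- sees 2: "safe to leave"
-- Session B: BEGIN ISOLATION LEVEL REPEATABLE READ;
-- Session B: SELECT count(*) FROM doctors WHERE on_call;  -- also sees 2
UPDATE doctors SET on_call = false WHERE id = 1;
COMMIT;
-- Session B: UPDATE doctors SET on_call = false WHERE id = 2;
--            (different row, so no write conflict is detected)
-- Session B: COMMIT;  -- succeeds: now ZERO doctors on call
-- Under SERIALIZABLE, Postgres's SSI would abort one of the two instead.
```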
Quick Reference -- Cheat Sheet
+---------------------------------------------------------------+
| NON-REPEATABLE READ CHEAT SHEET |
+---------------------------------------------------------------+
| |
| DEFINITION |
| Same row read twice in one transaction returns |
| different values because another transaction committed |
| an UPDATE or DELETE in between. |
| |
| POSTGRES DEFAULT |
| READ COMMITTED (one snapshot per statement) |
| -> non-repeatable reads ARE allowed |
| |
| MYSQL INNODB DEFAULT |
| REPEATABLE READ (one snapshot per transaction) |
| -> non-repeatable reads are NOT allowed |
| |
| PREVENTS NON-REPEATABLE READ |
| - REPEATABLE READ |
| - SERIALIZABLE |
| - SELECT ... FOR UPDATE / FOR SHARE |
| |
| POSTGRES SYNTAX |
| SHOW default_transaction_isolation; |
| BEGIN ISOLATION LEVEL REPEATABLE READ; |
| SET TRANSACTION ISOLATION LEVEL REPEATABLE READ; |
| SELECT ... FOR UPDATE; |
| |
+---------------------------------------------------------------+
+---------------------------------------------------------------+
| ANOMALY MATRIX |
+---------------------------------------------------------------+
| |
| Level | Dirty | Non-Rep | Phantom |
| ------------------|-------|---------|--------- |
| READ UNCOMMITTED | YES | YES | YES |
| READ COMMITTED | no | YES | YES |
| REPEATABLE READ | no | no | YES (std) / no (PG) |
| SERIALIZABLE | no | no | no |
| |
| Postgres REPEATABLE READ is snapshot isolation, which |
| also prevents phantoms (stricter than the SQL standard). |
| |
+---------------------------------------------------------------+
| Concern | Wrong Way | Right Way |
|---|---|---|
| Default isolation | Assume same as MySQL | SHOW default_transaction_isolation |
| Multi-statement consistency | Trust READ COMMITTED | BEGIN ISOLATION LEVEL REPEATABLE READ |
| Lost updates | Read, compute, write back | SELECT ... FOR UPDATE or single-statement UPDATE |
| Stable balance read | SELECT twice and hope | SELECT ... FOR UPDATE |
| Reports | Many SELECTs at READ COMMITTED | One transaction at REPEATABLE READ |
| Money-handling code | App-side arithmetic | UPDATE ... SET balance = balance - 100 WHERE balance >= 100 |
| Cross-vendor portability | Assume defaults transfer | Set isolation explicitly per transaction |
| Picking SERIALIZABLE | Use it everywhere | Only when invariants span multiple rows |
Prev: Lesson 11.3 -- Dirty Read and READ UNCOMMITTED Next: Lesson 11.5 -- Phantom Read and REPEATABLE READ
This is Lesson 11.4 of the Database Interview Prep Course -- 12 chapters, 58 lessons.