The SQL
standard defines four levels of transaction
isolation in terms of three phenomena that must be prevented
between concurrent transactions.
These undesirable phenomena are:
- dirty read: A transaction reads data written by a concurrent uncommitted transaction.
- nonrepeatable read: A transaction re-reads data it has previously read and finds that data has been modified by another transaction (that committed since the initial read).
- phantom read: A transaction re-executes a query returning a set of rows that satisfy a search condition and finds that the set of rows satisfying the condition has changed due to another recently-committed transaction.
The four transaction isolation levels and the corresponding
behaviors are described in Table 9-1.
Table 9-1. SQL Transaction Isolation Levels
| Isolation Level  | Dirty Read   | Nonrepeatable Read | Phantom Read |
|------------------|--------------|--------------------|--------------|
| Read uncommitted | Possible     | Possible           | Possible     |
| Read committed   | Not possible | Possible           | Possible     |
| Repeatable read  | Not possible | Not possible       | Possible     |
| Serializable     | Not possible | Not possible       | Not possible |
PostgreSQL offers the Read Committed and Serializable isolation levels;
Read Committed is the default.
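The level for an individual transaction can be chosen with the SET
TRANSACTION command, issued before the transaction's first query. A
minimal sketch, reusing the accounts table from the example further
below:
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- all queries in this transaction now run at the Serializable level
SELECT balance FROM accounts WHERE acctnum = 12345;
COMMIT;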
When a transaction runs at the Read Committed level,
a SELECT query sees only data committed before the
query began; it never sees either uncommitted data or changes committed
during query execution by concurrent transactions. (However, the
SELECT does see the effects of previous updates
executed within its own transaction, even though they are not yet
committed.) In effect, a SELECT query
sees a snapshot of the database as of the instant the query
begins to run. Notice that two successive SELECTs can
see different data, even though they are within a single transaction, if
other transactions
commit changes during execution of the first SELECT.
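A minimal sketch of this effect, again using the accounts table from
the example further below (the starting balance of 500.00 is only
illustrative):
-- Session 1, Read Committed (the default)
BEGIN;
SELECT balance FROM accounts WHERE acctnum = 12345;   -- returns 500.00

-- Session 2, concurrently
BEGIN;
UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 12345;
COMMIT;

-- Session 1, still inside the same transaction
SELECT balance FROM accounts WHERE acctnum = 12345;   -- now returns 600.00
COMMIT;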
UPDATE, DELETE, and SELECT
FOR UPDATE commands behave the same as SELECT
in terms of searching for target rows: they will only find target rows
that were committed as of the query start time. However, such a target
row may have already been updated (or deleted or marked for update) by
another concurrent transaction by the time it is found. In this case, the
would-be updater will wait for the first updating transaction to commit or
roll back (if it is still in progress). If the first updater rolls back,
then its effects are negated and the second updater can proceed with
updating the originally found row. If the first updater commits, the
second updater will ignore the row if the first updater deleted it;
otherwise, it will attempt to apply its operation to the updated version of
the row. The query search condition (WHERE clause) is
re-evaluated to see if the updated version of the row still matches the
search condition. If so, the second updater proceeds with its operation,
starting from the updated version of the row.
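A sketch of this waiting behavior, using the same accounts table:
-- Session 1
BEGIN;
UPDATE accounts SET balance = balance - 50.00 WHERE acctnum = 12345;
-- row is now locked but the change is not yet committed

-- Session 2
BEGIN;
UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 12345;
-- blocks here, waiting for session 1 to commit or roll back

-- Session 1
COMMIT;

-- Session 2 now re-checks the WHERE clause against the updated row,
-- finds it still matches, and applies its update to that new version
COMMIT;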
Because of the above rule, it is possible for updating queries to see
inconsistent snapshots --- they can see the effects of concurrent updating
queries that affected the same rows they are trying to update, but they
do not see effects of those queries on other rows in the database.
This behavior makes Read Committed mode unsuitable for queries that
involve complex search conditions. However, it is just right for simpler
cases. For example, consider updating bank balances with transactions
like
BEGIN;
UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 12345;
UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 7534;
COMMIT;
If two such transactions concurrently try to change the balance of account
12345, we clearly want the second transaction to start from the updated
version of the account's row. Because each query is affecting only a
predetermined row, letting it see the updated version of the row does
not create any troublesome inconsistency.
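For contrast, here is a sketch of an interaction where the same rule
does cause a surprise; the website table and its hits column are
hypothetical:
-- Session 1: suppose website contains rows with hits = 9 and hits = 10
BEGIN;
UPDATE website SET hits = hits + 1;

-- Session 2, Read Committed, concurrently
DELETE FROM website WHERE hits = 10;
-- blocks on the old hits = 10 row, which session 1 has locked

-- Session 1
COMMIT;

-- Session 2: the locked row is re-checked, now has hits = 11, and is
-- skipped; the row that now has hits = 10 (formerly 9) did not match the
-- DELETE's snapshot, so no row is deleted even though a hits = 10 row
-- exists both before and after the UPDATE.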
Since in Read Committed mode each new query starts with a new snapshot
that includes all transactions committed up to that instant, subsequent
queries in the same transaction will see the effects of the committed
concurrent transaction in any case. The point at issue here is whether
or not within a single query we see an absolutely consistent
view of the database.
The partial transaction isolation provided by Read Committed mode is
adequate for many applications, and this mode is fast and simple to use.
However, for applications that do complex queries and updates, it may
be necessary to guarantee a more rigorously consistent view of the
database than the Read Committed mode provides.
Serializable provides the strictest transaction
isolation. This level emulates serial transaction execution,
as if transactions had been executed one after another, serially,
rather than concurrently. However, applications using this level must
be prepared to retry transactions due to serialization failures.
When a transaction runs at the Serializable level,
a SELECT query sees only data committed before the
transaction began; it never sees either uncommitted data or changes
committed
during transaction execution by concurrent transactions. (However, the
SELECT does see the effects of previous updates
executed within its own transaction, even though they are not yet
committed.) This is different from Read Committed in that the
SELECT
sees a snapshot as of the start of the transaction, not as of the start
of the current query within the transaction. Thus, successive
SELECTs within a single transaction always see the same
data.
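The earlier two-session sketch, repeated with session 1 at the
Serializable level, shows the difference (the starting balance of
500.00 is again only illustrative):
-- Session 1
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT balance FROM accounts WHERE acctnum = 12345;   -- returns 500.00

-- Session 2, concurrently
BEGIN;
UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 12345;
COMMIT;

-- Session 1, still inside the same transaction
SELECT balance FROM accounts WHERE acctnum = 12345;   -- still returns 500.00
COMMIT;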
UPDATE, DELETE, and SELECT
FOR UPDATE commands behave the same as SELECT
in terms of searching for target rows: they will only find target rows
that were committed as of the transaction start time. However, such a
target
row may have already been updated (or deleted or marked for update) by
another concurrent transaction by the time it is found. In this case, the
serializable transaction will wait for the first updating transaction to commit or
roll back (if it is still in progress). If the first updater rolls back,
then its effects are negated and the serializable transaction can proceed
with updating the originally found row. But if the first updater commits
(and actually updated or deleted the row, not just selected it for update)
then the serializable transaction will be rolled back with the message
ERROR: Can't serialize access due to concurrent update
because a serializable transaction cannot modify rows changed by
other transactions after the serializable transaction began.
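A sketch of how this failure arises, once more with the accounts
table:
-- Session 1
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT balance FROM accounts WHERE acctnum = 12345;   -- reads under the transaction's snapshot

-- Session 2, concurrently
BEGIN;
UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 12345;
COMMIT;

-- Session 1 now tries to modify the row that session 2 changed after
-- session 1's transaction began
UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 12345;
-- ERROR: Can't serialize access due to concurrent update
ROLLBACK;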
When the application receives this error message, it should abort
the current transaction and then retry the whole transaction from
the beginning. The second time through, the transaction sees the
previously-committed change as part of its initial view of the database,
so there is no logical conflict in using the new version of the row
as the starting point for the new transaction's update.
Note that only updating transactions may need to be retried --- read-only
transactions will never have serialization conflicts.
The Serializable mode provides a rigorous guarantee that each
transaction sees a wholly consistent view of the database. However,
the application has to be prepared to retry transactions when concurrent
updates make it impossible to sustain the illusion of serial execution.
Since the cost of redoing complex transactions may be significant,
this mode is recommended only when updating transactions contain logic
sufficiently complex that they may give wrong answers in Read
Committed mode. Most commonly, Serializable mode is necessary when
a transaction performs several successive queries that must see
identical views of the database.