
DATABASE MANAGEMENT SYSTEMS
UNIT-IV
Dr. Nuthanakanti Bhaskar

-------------------------------------------------------------------------------------------------------------------------------

1. Transaction concept & State

2. Implementation of atomicity and durability

3. Serializability

4. Recoverability

5. Implementation of isolation

6. Lock based protocols

7. Graph based protocols

8. Timestamp based protocols

9. Validation based protocol

10. Recovery and Atomicity

11. Log based recovery

12. Recovery with concurrent transactions

13. Buffer management

14. Failure with loss of Nonvolatile storage

15. Advanced recovery techniques

16. ARIES

17. Remote backup systems


Transaction Concept

• A transaction is a unit of program execution that accesses and possibly updates various data
items.
• E.g. transaction to transfer $50 from account A to account B:
1. read(A)
2. A := A – 50
3. write(A)
4. read(B)
5. B := B + 50
6. write(B)
• Two main issues to deal with:
– Failures of various kinds, such as hardware failures and system crashes
– Concurrent execution of multiple transactions

ACID Properties

• Atomicity. Either all operations of the transaction are properly reflected in the database or none
are.
• Consistency. Execution of a transaction in isolation preserves the consistency of the database.
• Isolation. Although multiple transactions may execute concurrently, each transaction must be
unaware of other concurrently executing transactions. Intermediate transaction results must be
hidden from other concurrently executed transactions.
– That is, for every pair of transactions Ti and Tj, it appears to Ti that either Tj finished
execution before Ti started, or Tj started execution after Ti finished.
• Durability. After a transaction completes successfully, the changes it has made to the database
persist, even if there are system failures.
• Example of Fund Transfer
• Transaction to transfer $50 from account A to account B:
1. read(A)
2. A := A – 50
3. write(A)
4. read(B)
5. B := B + 50
6. write(B)

Atomicity requirement

– if the transaction fails after step 3 and before step 6, money will be “lost” leading to an
inconsistent database state
• Failure could be due to software or hardware
– the system should ensure that updates of a partially executed transaction are not reflected
in the database
Durability requirement

Once the user has been notified that the transaction has completed (i.e., the transfer of the $50 has taken
place), the updates to the database by the transaction must persist even if there are software or hardware
failures.
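
• To make the atomicity and durability requirements concrete, here is a minimal sketch in Python using the standard sqlite3 module (illustrative only; a table account(name, balance) is assumed): the transfer either commits as a whole or is rolled back, and once commit() returns the change survives later failures.

import sqlite3

def transfer(db_path, from_acct, to_acct, amount):
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.cursor()
        cur.execute("UPDATE account SET balance = balance - ? WHERE name = ?",
                    (amount, from_acct))        # A := A - 50; write(A)
        cur.execute("UPDATE account SET balance = balance + ? WHERE name = ?",
                    (amount, to_acct))          # B := B + 50; write(B)
        conn.commit()    # durability: updates persist once commit returns
    except Exception:
        conn.rollback()  # atomicity: a failure midway undoes the partial update
        raise
    finally:
        conn.close()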
• Example of Fund Transfer (Cont.)


• Transaction to transfer $50 from account A to account B:


1. read(A)
2. A := A – 50
3. write(A)
4. read(B)
5. B := B + 50
6. write(B)

Consistency requirement

in above example:
– the sum of A and B is unchanged by the execution of the transaction
• In general, consistency requirements include
• Explicitly specified integrity constraints such as primary keys and foreign keys
• Implicit integrity constraints
– e.g. sum of balances of all accounts, minus sum of loan amounts must
equal value of cash-in-hand
– A transaction must see a consistent database.
– During transaction execution the database may be temporarily inconsistent.
– When the transaction completes successfully the database must be consistent
• Erroneous transaction logic can lead to inconsistency

Isolation requirement

If between steps 3 and 6, another transaction T2 is allowed to access the partially updated database, it will
see an inconsistent database (the sum A + B will be less than it should be).
T1                                  T2
1. read(A)
2. A := A – 50
3. write(A)
                                    read(A), read(B), print(A+B)
4. read(B)
5. B := B + 50
6. write(B)
• Isolation can be ensured trivially by running transactions serially
– that is, one after the other.
• However, executing multiple transactions concurrently has significant benefits, as we will see
later.

Transaction State

• Active – the initial state; the transaction stays in this state while it is executing
• Partially committed – after the final statement has been executed.
• Failed -- after the discovery that normal execution can no longer proceed.
• Aborted – after the transaction has been rolled back and the database restored to its state prior to
the start of the transaction. Two options after it has been aborted:



– restart the transaction
• can be done only if no internal logical error
– kill the transaction
• Committed – after successful completion.

Implementation of Atomicity and Durability


• The recovery-management component of a database system implements the support for
atomicity and durability.
• E.g. the shadow-database scheme:
– all updates are made on a shadow copy of the database
• db_pointer is made to point to the updated shadow copy after
– the transaction reaches partial commit and
– all updated pages have been flushed to disk.
• db_pointer always points to the current consistent copy of the database.
– In case transaction fails, old consistent copy pointed to by db_pointer can be used, and
the shadow copy can be deleted.

The shadow-database scheme:


– Assumes that only one transaction is active at a time.
– Assumes disks do not fail
– Useful for text editors, but
• extremely inefficient for large databases (why?)
– Variant called shadow paging reduces copying of data, but is still not
practical for large databases
– Does not handle concurrent transactions
• Will study better schemes in Chapter 17.
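
• A minimal sketch of the shadow-database idea, assuming the whole database lives in one file and only one transaction runs at a time (file and function names here are illustrative): all updates go to a shadow copy, and only after the copy is safely on disk is db_pointer switched atomically.

import os, shutil

def shadow_update(db_dir, apply_updates):
    pointer = os.path.join(db_dir, "db_pointer")      # names the current consistent copy
    with open(pointer) as f:
        current = os.path.join(db_dir, f.read().strip())
    shadow = current + ".shadow"
    shutil.copyfile(current, shadow)                  # transaction works on the shadow copy only
    apply_updates(shadow)                             # all updates applied to the shadow
    with open(shadow, "r+b") as f:
        os.fsync(f.fileno())                          # flush updated pages to disk
    tmp = pointer + ".tmp"
    with open(tmp, "w") as f:
        f.write(os.path.basename(shadow))
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, pointer)                          # atomic switch of db_pointer; on failure
                                                      # before this point the old copy is still current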

Concurrent Executions
• Multiple transactions are allowed to run concurrently in the system. Advantages are:
– increased processor and disk utilization, leading to better transaction throughput
• E.g. one transaction can be using the CPU while another is reading from or
writing to the disk
– reduced average response time for transactions: short transactions need not wait behind
long ones.

Concurrency control schemes – mechanisms to achieve isolation


– that is, to control the interaction among the concurrent transactions in order to prevent
them from destroying the consistency of the database
• Will study in Chapter 16, after studying notion of correctness of concurrent
executions.
Schedules
• Schedule – a sequence of instructions that specifies the chronological order in which instructions
of concurrent transactions are executed
– a schedule for a set of transactions must consist of all instructions of those transactions


– must preserve the order in which the instructions appear in each individual transaction.
• A transaction that successfully completes its execution will have a commit instruction as the last
statement
– by default transaction assumed to execute commit instruction as its last step
• A transaction that fails to successfully complete its execution will have an abort instruction as the
last statement
• Schedule 1
• Let T1 transfer $50 from A to B, and T2 transfer 10% of the balance from A to B.
• A serial schedule in which T1 is followed by T2 :
• Schedule 2
• A serial schedule in which T2 is followed by T1 :
• Schedule 3
• Let T1 and T2 be the transactions defined previously. The following schedule is not a serial
schedule, but it is equivalent to Schedule 1.
• Schedule 4
• The following concurrent schedule does not preserve the value of (A + B ).

Serializability
• Basic Assumption – Each transaction preserves database consistency.
• Thus serial execution of a set of transactions preserves database consistency.
• A (possibly concurrent) schedule is serializable if it is equivalent to a serial schedule. Different
forms of schedule equivalence give rise to the notions of:
1. conflict serializability
2. view serializability
• Simplified view of transactions
– We ignore operations other than read and write instructions
– We assume that transactions may perform arbitrary computations on data in local buffers
in between reads and writes.
– Our simplified schedules consist of only read and write instructions.

Conflicting Instructions
• Instructions li and lj of transactions Ti and Tj respectively, conflict if and only if there exists some
item Q accessed by both li and lj, and at least one of these instructions wrote Q.

1. li = read(Q), lj = read(Q). li and lj don’t conflict.
2. li = read(Q), lj = write(Q). They conflict.
3. li = write(Q), lj = read(Q). They conflict.
4. li = write(Q), lj = write(Q). They conflict.
• Intuitively, a conflict between li and lj forces a (logical) temporal order between them.
– If li and lj are consecutive in a schedule and they do not conflict, their results would
remain the same even if they had been interchanged in the schedule.
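
• The conflict rules above can be captured directly; a small illustrative helper, assuming each instruction is a (transaction, action, item) tuple:

def conflicts(inst_i, inst_j):
    """Two instructions conflict iff they access the same item and at least one writes it."""
    ti, act_i, q_i = inst_i
    tj, act_j, q_j = inst_j
    return q_i == q_j and 'write' in (act_i, act_j)

print(conflicts(('T1', 'read', 'Q'), ('T2', 'read', 'Q')))   # False: two reads never conflict
print(conflicts(('T1', 'read', 'Q'), ('T2', 'write', 'Q')))  # True: read/write on the same item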
Conflict Serializability

• If a schedule S can be transformed into a schedule S´ by a series of swaps of non-conflicting


instructions, we say that S and S´ are conflict equivalent.


• We say that a schedule S is conflict serializable if it is conflict equivalent to a serial schedule


• Schedule 3 can be transformed into Schedule 6, a serial schedule where T2 follows T1, by a series of
swaps of non-conflicting instructions.
– Therefore Schedule 3 is conflict serializable.
• Example of a schedule that is not conflict serializable:
• We are unable to swap instructions in the above schedule to obtain either the serial schedule < T3,
T4 >, or the serial schedule < T4, T3 >.

View Serializability
• Let S and S´ be two schedules with the same set of transactions. S and S´ are view equivalent if
the following three conditions are met, for each data item Q,
– If in schedule S, transaction Ti reads the initial value of Q, then in schedule S’ also
transaction Ti must read the initial value of Q.
– If in schedule S transaction Ti executes read(Q), and that value was produced by
transaction Tj (if any), then in schedule S’ also transaction Ti must read the value of Q
that was produced by the same write(Q) operation of transaction Tj .
– The transaction (if any) that performs the final write(Q) operation in schedule S must
also perform the final write(Q) operation in schedule S’.
As can be seen, view equivalence is also based purely on reads and writes alone.
• A schedule S is view serializable if it is view equivalent to a serial schedule.
• Every conflict serializable schedule is also view serializable.
• Below is a schedule which is view-serializable but not conflict serializable.
• What serial schedule is the above schedule equivalent to?
• Every view serializable schedule that is not conflict serializable has blind writes.
• Other Notions of Serializability
• The schedule below produces same outcome as the serial schedule < T1, T5 >, yet is not conflict
equivalent or view equivalent to it.
• Determining such equivalence requires analysis of operations other than read and write.

Recoverable Schedules
• Recoverable schedule — if a transaction Tj reads a data item previously written by a transaction
Ti , then the commit operation of Ti appears before the commit operation of Tj.
• The following schedule (Schedule 11) is not recoverable if T9 commits immediately after the read
• If T8 should abort, T9 would have read (and possibly shown to the user) an inconsistent database
state. Hence, database must ensure that schedules are recoverable.

Cascading Rollbacks
• Cascading rollback – a single transaction failure leads to a series of transaction rollbacks.
Consider the following schedule where none of the transactions has yet committed (so the
schedule is recoverable)
• If T10 fails, T11 and T12 must also be rolled back.
• Can lead to the undoing of a significant amount of work

Cascadeless Schedules
Cascadeless schedules — cascading rollbacks cannot occur; for each pair of transactions Ti and Tj
such that Tj reads a data item previously written by Ti, the commit operation of Ti appears before the
read operation of Tj.
• Every cascadeless schedule is also recoverable
• It is desirable to restrict the schedules to those that are cascadeless
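
• As an illustration, both properties can be checked on a schedule given as a list of (transaction, action, item) steps, where action is 'read', 'write', or 'commit' (item is None for commits); this sketch is not part of the prescribed material.

def check_schedule(schedule):
    """Return (recoverable, cascadeless) for a schedule of (txn, action, item) steps."""
    last_writer = {}      # item -> (writer txn, position of that write)
    commit_pos = {}       # txn -> position of its commit
    reads_from = []       # (reader, writer, position of the read)
    for pos, (txn, action, item) in enumerate(schedule):
        if action == 'commit':
            commit_pos[txn] = pos
        elif action == 'write':
            last_writer[item] = (txn, pos)
        elif action == 'read' and item in last_writer:
            writer, _ = last_writer[item]
            if writer != txn:
                reads_from.append((txn, writer, pos))
    # recoverable: if Tj reads from Ti and Tj commits, Ti's commit must come first
    recoverable = all(
        reader not in commit_pos or
        (writer in commit_pos and commit_pos[writer] < commit_pos[reader])
        for reader, writer, _ in reads_from)
    # cascadeless: Ti must have committed before Tj reads the item Ti wrote
    cascadeless = all(
        writer in commit_pos and commit_pos[writer] < read_pos
        for reader, writer, read_pos in reads_from)
    return recoverable, cascadeless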


Concurrency Control
• A database must provide a mechanism that will ensure that all possible schedules are
– either conflict or view serializable, and
– are recoverable and preferably cascadeless
• A policy in which only one transaction can execute at a time generates serial schedules, but
provides a poor degree of concurrency
– Are serial schedules recoverable/cascadeless?
• Testing a schedule for serializability after it has executed is a little too late!
• Goal – to develop concurrency control protocols that will assure serializability.

Concurrency Control vs. Serializability Tests


• Concurrency-control protocols allow concurrent schedules, but ensure that the schedules are
conflict/view serializable, and are recoverable and cascadeless .
• Concurrency control protocols generally do not examine the precedence graph as it is being
created
– Instead a protocol imposes a discipline that avoids nonseralizable schedules.
– We study such protocols in Chapter 16.
• Different concurrency control protocols provide different tradeoffs between the amount of
concurrency they allow and the amount of overhead that they incur.
• Tests for serializability help us understand why a concurrency control protocol is correct.

Weak Levels of Consistency


• Some applications are willing to live with weak levels of consistency, allowing schedules that are
not serializable
– E.g. a read-only transaction that wants to get an approximate total balance of all accounts
– E.g. database statistics computed for query optimization can be approximate (why?)
– Such transactions need not be serializable with respect to other transactions
• Tradeoff accuracy for performance

Levels of Consistency in SQL-92


• Serializable — default
• Repeatable read — only committed records to be read, repeated reads of same record must
return same value. However, a transaction may not be serializable – it may find some records
inserted by a transaction but not find others.
• Read committed — only committed records can be read, but successive reads of record may
return different (but committed) values.
• Read uncommitted — even uncommitted records may be read.

Transaction Definition in SQL


• Data manipulation language must include a construct for specifying the set of actions that
comprise a transaction.
• In SQL, a transaction begins implicitly.
• A transaction in SQL ends by:
– Commit work commits current transaction and begins a new one.
– Rollback work causes current transaction to abort.
• In almost all database systems, by default, every SQL statement also commits implicitly if it
executes successfully
– Implicit commit can be turned off by a database directive
• E.g. in JDBC, connection.setAutoCommit(false);


Implementation of Isolation
• Schedules must be conflict or view serializable, and recoverable, for the sake of database
consistency, and preferably cascadeless.
• A policy in which only one transaction can execute at a time generates serial schedules, but
provides a poor degree of concurrency.
• Concurrency-control schemes tradeoff between the amount of concurrency they allow and the
amount of overhead that they incur.
• Some schemes allow only conflict-serializable schedules to be generated, while others allow
view-serializable schedules that are not conflict-serializable.

Testing for Serializability


• Consider some schedule of a set of transactions T1, T2, ..., Tn
Precedence graph — a directed graph where the vertices are the transactions (names).
• We draw an arc from Ti to Tj if the two transactions conflict, and Ti accessed the data item on
which the conflict arose earlier.
• We may label the arc by the item that was accessed.
• Example 1
• Example Schedule (Schedule A) + Precedence Graph
T1 T2 T3 T4 T5
read(X)
read(Y)
read(Z)
read(V)
read(W)
read(W)
read(Y)
write(Y)
write(Z)
read(U)
read(Y)
write(Y)
read(Z)
write(Z)

read(U)
write(U)

Test for Conflict Serializability


• A schedule is conflict serializable if and only if its precedence graph is acyclic.
• Cycle-detection algorithms exist which take order n² time, where n is the number of vertices in
the graph.
– (Better algorithms take order n + e where e is the number of edges.)
• If precedence graph is acyclic, the serializability order can be obtained by a topological sorting of
the graph.
– This is a linear order consistent with the partial order of the graph.
– For example, a serializability order for Schedule A would be
T5 → T1 → T3 → T2 → T4
• Are there others?
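
• A sketch of this test, assuming the schedule is a list of (transaction, 'read'/'write', item) steps: build the precedence graph from conflicting pairs, then report whether it is acyclic (a topological sort of an acyclic graph gives an equivalent serial order).

from collections import defaultdict

def conflict_serializable(schedule):
    """schedule: list of (txn, 'read'|'write', item). Returns (is_serializable, edges)."""
    edges = defaultdict(set)
    txns = {t for t, _, _ in schedule}
    for i, (ti, ai, qi) in enumerate(schedule):
        for tj, aj, qj in schedule[i + 1:]:
            if ti != tj and qi == qj and 'write' in (ai, aj):
                edges[ti].add(tj)          # Ti accessed the conflicting item first
    # cycle detection by depth-first search
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {t: WHITE for t in txns}
    def has_cycle(t):
        colour[t] = GREY
        for u in edges[t]:
            if colour[u] == GREY or (colour[u] == WHITE and has_cycle(u)):
                return True
        colour[t] = BLACK
        return False
    acyclic = not any(colour[t] == WHITE and has_cycle(t) for t in txns)
    return acyclic, edges

# Schedule 3-style interleaving of T1 and T2, still conflict serializable:
s = [('T1','read','A'), ('T1','write','A'), ('T2','read','A'), ('T2','write','A'),
     ('T1','read','B'), ('T1','write','B'), ('T2','read','B'), ('T2','write','B')]
print(conflict_serializable(s)[0])   # True (equivalent to T1 followed by T2)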


Test for View Serializability


• The precedence graph test for conflict serializability cannot be used directly to test for view
serializability.
– Extension to test for view serializability has cost exponential in the size of the precedence
graph.
• The problem of checking if a schedule is view serializable falls in the class of NP-complete
problems.
– Thus existence of an efficient algorithm is extremely unlikely.
• However practical algorithms that just check some sufficient conditions for view serializability
can still be used.

Lock-Based Protocols

• A lock is a mechanism to control concurrent access to a data item


• Data items can be locked in two modes :
1. exclusive (X) mode. Data item can be both read as well as
written. X-lock is requested using lock-X instruction.
2. shared (S) mode. Data item can only be read. S-lock is
requested using lock-S instruction.
• Lock requests are made to concurrency-control manager. Transaction can proceed only after
request is granted.

Lock-compatibility matrix
• A transaction may be granted a lock on an item if the requested lock is compatible with locks
already held on the item by other transactions
• Any number of transactions can hold shared locks on an item,
– but if any transaction holds an exclusive lock on the item, no other transaction may hold any
lock on the item.
• If a lock cannot be granted, the requesting transaction is made to wait till all incompatible locks
held by other transactions have been released. The lock is then granted.
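
• The matrix and the wait rule can be expressed compactly; a small sketch assuming only S and X modes:

COMPATIBLE = {('S', 'S'): True, ('S', 'X'): False,
              ('X', 'S'): False, ('X', 'X'): False}

def can_grant(requested_mode, held_modes):
    """Grant iff the requested lock is compatible with every lock already held
    on the item by other transactions; otherwise the requester must wait."""
    return all(COMPATIBLE[(held, requested_mode)] for held in held_modes)

print(can_grant('S', ['S', 'S']))   # True: any number of shared locks may coexist
print(can_grant('X', ['S']))        # False: must wait until the S lock is released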
• Example of a transaction performing locking:
T2: lock-S(A);
read (A);
unlock(A);
lock-S(B);
read (B);
unlock(B);
display(A+B)
• Locking as above is not sufficient to guarantee serializability — if A and B get updated in between
the read of A and the read of B, the displayed sum would be wrong.
• A locking protocol is a set of rules followed by all transactions while requesting and releasing
locks. Locking protocols restrict the set of possible schedules.

Pitfalls of Lock-Based Protocols


• Consider the partial schedule
• Neither T3 nor T4 can make progress — executing lock-S(B) causes T4 to wait for T3 to release its
lock on B, while executing lock-X(A) causes T3 to wait for T4 to release its lock on A.
• Such a situation is called a deadlock.


– To handle a deadlock one of T3 or T4 must be rolled back


and its locks released.
• The potential for deadlock exists in most locking protocols. Deadlocks are a necessary evil.
• Starvation is also possible if concurrency control manager is badly designed. For example:
– A transaction may be waiting for an X-lock on an item, while a sequence of other
transactions request and are granted an S-lock on the same item.
– The same transaction is repeatedly rolled back due to deadlocks.
• Concurrency control manager can be designed to prevent starvation.

The Two-Phase Locking Protocol

• This is a protocol which ensures conflict-serializable schedules.


• Phase 1: Growing Phase
– transaction may obtain locks
– transaction may not release locks
• Phase 2: Shrinking Phase
– transaction may release locks
– transaction may not obtain locks
• The protocol assures serializability. It can be proved that the transactions can be serialized in the
order of their lock points (i.e. the point where a transaction acquired its final lock).
• Two-phase locking does not ensure freedom from deadlocks
• Cascading roll-back is possible under two-phase locking. To avoid this, follow a modified
protocol called strict two-phase locking. Here a transaction must hold all its exclusive locks till
it commits/aborts.
• Rigorous two-phase locking is even stricter: here all locks are held till commit/abort. In this
protocol transactions can be serialized in the order in which they commit.
• There can be conflict serializable schedules that cannot be obtained if two-phase locking is used.
• However, in the absence of extra information (e.g., ordering of access to data), two-phase
locking is needed for conflict serializability in the following sense:
Given a transaction Ti that does not follow two-phase locking, we can find a transaction Tj that uses
two-phase locking, and a schedule for Ti and Tj that is not conflict serializable.
• Lock Conversions
• Two-phase locking with lock conversions:
– First Phase:
– can acquire a lock-S on item
– can acquire a lock-X on item
– can convert a lock-S to a lock-X (upgrade)
– Second Phase:
– can release a lock-S
– can release a lock-X
– can convert a lock-X to a lock-S (downgrade)
• This protocol assures serializability. But still relies on the programmer to insert the various
locking instructions.

Automatic Acquisition of Locks

• A transaction Ti issues the standard read/write instruction, without explicit locking calls.
• The operation read(D) is processed as:
if Ti has a lock on D


then
read(D)
else begin
if necessary wait until no other
transaction has a lock-X on D
grant Ti a lock-S on D;
read(D)
end

• write(D) is processed as:


if Ti has a lock-X on D
then
write(D)
else begin
if necessary wait until no other trans. has any lock on D,
if Ti has a lock-S on D
then
upgrade lock on D to lock-X
else
grant Ti a lock-X on D
write(D)
end;
• All locks are released after commit or abort
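
• The read/write processing above can be turned into a small sketch; the class and helper names here are illustrative, and a real lock manager would queue requests and handle deadlocks rather than simply block on a condition variable.

import threading

class SimpleLockManager:
    """Acquires S/X locks on behalf of read/write, as in the pseudocode above."""
    def __init__(self):
        self.cond = threading.Condition()
        self.locks = {}              # item -> {txn: 'S' or 'X'}

    def _compatible(self, txn, item, mode):
        others = {t: m for t, m in self.locks.get(item, {}).items() if t != txn}
        if mode == 'S':
            return all(m == 'S' for m in others.values())
        return not others            # X is compatible only with no other locks

    def lock(self, txn, item, mode):
        with self.cond:
            while not self._compatible(txn, item, mode):
                self.cond.wait()     # wait until incompatible locks are released
            current = self.locks.setdefault(item, {}).get(txn)
            if current != 'X':       # grant, upgrading S to X when requested
                self.locks[item][txn] = 'X' if mode == 'X' else (current or 'S')

    def release_all(self, txn):      # strict 2PL: release everything at commit/abort
        with self.cond:
            for held in self.locks.values():
                held.pop(txn, None)
            self.cond.notify_all()

def read(lm, txn, item, db):
    lm.lock(txn, item, 'S')          # grant lock-S unless a lock is already held
    return db[item]

def write(lm, txn, item, value, db):
    lm.lock(txn, item, 'X')          # grant lock-X, upgrading an S lock if necessary
    db[item] = value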

Implementation of Locking

• A lock manager can be implemented as a separate process to which transactions send lock and
unlock requests
• The lock manager replies to a lock request by sending a lock grant messages (or a message asking
the transaction to roll back, in case of a deadlock)
• The requesting transaction waits until its request is answered
• The lock manager maintains a data-structure called a lock table to record granted locks and
pending requests
• The lock table is usually implemented as an in-memory hash table indexed on the name of the
data item being locked

Lock Table

• Black rectangles indicate granted locks, white ones indicate waiting requests
• Lock table also records the type of lock granted or requested
• New request is added to the end of the queue of requests for the data item, and granted if it is
compatible with all earlier locks
• Unlock requests result in the request being deleted, and later requests are checked to see if they
can now be granted
• If transaction aborts, all waiting or granted requests of the transaction are deleted
– lock manager may keep a list of locks held by each transaction, to implement this
efficiently


Graph-Based Protocols

• Graph-based protocols are an alternative to two-phase locking


• Impose a partial ordering → on the set D = {d1, d2 ,..., dh} of all data items.
– If di → dj then any transaction accessing both di and dj must access di before accessing dj.
– Implies that the set D may now be viewed as a directed acyclic graph, called a database
graph.
• The tree-protocol is a simple kind of graph protocol.

Tree Protocol

1. Only exclusive locks are allowed.


2. The first lock by Ti may be on any data item. Subsequently, a data item Q can be locked by Ti only if
the parent of Q is currently locked by Ti.
3. Data items may be unlocked at any time.
4. A data item that has been locked and unlocked by Ti cannot subsequently be relocked by Ti.

Timestamp-Based Protocols

• Each transaction is issued a timestamp when it enters the system. If an old transaction Ti has time-
stamp TS(Ti), a new transaction Tj is assigned time-stamp TS(Tj) such that TS(Ti) <TS(Tj).
• The protocol manages concurrent execution such that the time-stamps determine the
serializability order.
• In order to assure such behavior, the protocol maintains for each data Q two timestamp values:
– W-timestamp(Q) is the largest time-stamp of any transaction that executed write(Q)
successfully.
– R-timestamp(Q) is the largest time-stamp of any transaction that executed read(Q)
successfully.
• The timestamp ordering protocol ensures that any conflicting read and write operations are
executed in timestamp order.
• Suppose a transaction Ti issues a read(Q)
– If TS(Ti) < W-timestamp(Q), then Ti needs to read a value of Q that was already
overwritten.
• Hence, the read operation is rejected, and Ti is rolled back.
– If TS(Ti) ≥ W-timestamp(Q), then the read operation is executed, and R-timestamp(Q) is
set to max(R-timestamp(Q), TS(Ti)).
• Suppose that transaction Ti issues write(Q).
– If TS(Ti) < R-timestamp(Q), then the value of Q that Ti is producing was needed
previously, and the system assumed that that value would never be produced.
• Hence, the write operation is rejected, and Ti is rolled back.
– If TS(Ti) < W-timestamp(Q), then Ti is attempting to write an obsolete value of Q.
• Hence, this write operation is rejected, and Ti is rolled back.
– Otherwise, the write operation is executed, and W-timestamp(Q) is set to TS(Ti).
• Example Use of the Protocol
A partial schedule for several data items for transactions with
timestamps 1, 2, 3, 4, 5
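
• A sketch of the two rules, keeping an R- and a W-timestamp per item; a rejected operation raises an exception to signal that the transaction must be rolled back (and restarted with a new timestamp). The names here are illustrative.

class Rollback(Exception):
    pass

R_ts, W_ts = {}, {}     # per-item read/write timestamps (0 if never accessed)

def to_read(ts_ti, item):
    if ts_ti < W_ts.get(item, 0):
        raise Rollback("read rejected: value of %s already overwritten" % item)
    R_ts[item] = max(R_ts.get(item, 0), ts_ti)   # read executes

def to_write(ts_ti, item):
    if ts_ti < R_ts.get(item, 0):
        raise Rollback("write rejected: a later transaction already read %s" % item)
    if ts_ti < W_ts.get(item, 0):
        raise Rollback("write rejected: obsolete value of %s" % item)
    W_ts[item] = ts_ti                            # write executes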


Correctness of Timestamp-Ordering Protocol

• The timestamp-ordering protocol guarantees serializability since all the arcs in the precedence
graph are of the form:
transaction with smaller timestamp → transaction with larger timestamp
Thus, there will be no cycles in the precedence graph.


• Timestamp protocol ensures freedom from deadlock as no transaction ever waits.
• But the schedule may not be cascade-free, and may not even be recoverable.

Thomas’ Write Rule

• Modified version of the timestamp-ordering protocol in which obsolete write operations may be
ignored under certain circumstances.
• When Ti attempts to write data item Q, if TS(Ti) < W-timestamp(Q), then Ti is attempting to write
an obsolete value of Q.
– Rather than rolling back Ti as the timestamp ordering protocol would have done, this
write operation can be ignored.
• Otherwise this protocol is the same as the timestamp ordering protocol.
• Thomas' Write Rule allows greater potential concurrency.
– Allows some view-serializable schedules that are not conflict-serializable.
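
• Under Thomas’ write rule only the write branch changes; continuing the sketch above, the obsolete write is silently skipped instead of rolling the transaction back.

def thomas_write(ts_ti, item):
    if ts_ti < R_ts.get(item, 0):
        raise Rollback("write rejected: a later transaction already read %s" % item)
    if ts_ti < W_ts.get(item, 0):
        return                       # obsolete write: ignore it rather than roll back
    W_ts[item] = ts_ti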

Validation-Based Protocol

• Execution of transaction Ti is done in three phases.


1. Read and execution phase: Transaction Ti writes only to
temporary local variables
2. Validation phase: Transaction Ti performs a “validation test” to determine if local variables can be
written without violating serializability.
3. Write phase: If Ti is validated, the updates are applied to the
database; otherwise, Ti is rolled back.
• The three phases of concurrently executing transactions can be interleaved, but each transaction
must go through the three phases in that order.
– Assume for simplicity that the validation and write phase occur together, atomically and
serially
• I.e., only one transaction executes validation/write at a time.
• Also called optimistic concurrency control since the transaction executes fully in the hope that
all will go well during validation
• Each transaction Ti has 3 timestamps
– Start(Ti) : the time when Ti started its execution
– Validation(Ti): the time when Ti entered its validation phase
– Finish(Ti) : the time when Ti finished its write phase
• Serializability order is determined by timestamp given at validation time, to increase
concurrency.
– Thus TS(Ti) is given the value of Validation(Ti).
• This protocol is useful and gives greater degree of concurrency if probability of conflicts is low.
– because the serializability order is not pre-decided, and
– relatively few transactions will have to be rolled back.
• Validation Test for Transaction Tj
• If for all Ti with TS(Ti) < TS(Tj) either one of the following conditions holds:


– finish(Ti) < start(Tj)


– start(Tj) < finish(Ti) < validation(Tj) and the set of data items written by Ti does not
intersect with the set of data items read by Tj.
then validation succeeds and Tj can be committed. Otherwise, validation fails and Tj is aborted.
• Justification: Either the first condition is satisfied, and there is no overlapped execution, or the
second condition is satisfied and
• the writes of Tj do not affect reads of Ti since they occur after Ti has finished its reads.
• the writes of Ti do not affect reads of Tj since Tj does not read any item written by Ti.
• Schedule Produced by Validation
• Example of schedule produced using validation
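
• The validation test can be written down directly; a sketch assuming each transaction records its start/validation/finish times and its read and write sets (field names are illustrative):

def validate(tj, earlier):
    """tj, and each Ti in `earlier` (all with TS(Ti) < TS(Tj)), are dicts with keys
    'start', 'validation', 'finish', 'read_set', 'write_set'."""
    for ti in earlier:
        if ti['finish'] < tj['start']:
            continue                                # no overlapped execution at all
        if (tj['start'] < ti['finish'] < tj['validation']
                and not (ti['write_set'] & tj['read_set'])):
            continue                                # Ti's writes do not affect Tj's reads
        return False                                # validation fails: Tj is aborted
    return True                                     # Tj may enter its write phase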

Multiple Granularity

• Allow data items to be of various sizes and define a hierarchy of data granularities, where the
small granularities are nested within larger ones
• Can be represented graphically as a tree (but don't confuse with tree-locking protocol)
• When a transaction locks a node in the tree explicitly, it implicitly locks all the node's descendents
in the same mode.
• Granularity of locking (level in tree where locking is done):
• fine granularity (lower in tree): high concurrency, high locking overhead
• coarse granularity (higher in tree): low locking overhead, low concurrency
• Example of Granularity Hierarchy
The levels, starting from the coarsest (top) level are
– database
– area
– file
– record

Intention Lock Modes

• In addition to S and X lock modes, there are three additional lock modes with multiple
granularity:
– intention-shared (IS): indicates explicit locking at a lower level of the tree but only with
shared locks.
– intention-exclusive (IX): indicates explicit locking at a lower level with exclusive or
shared locks
– shared and intention-exclusive (SIX): the subtree rooted by that node is locked explicitly
in shared mode and explicit locking is being done at a lower level with exclusive-mode
locks.
• intention locks allow a higher level node to be locked in S or X mode without having to check all
descendent nodes.

Compatibility Matrix with Intention Lock Modes

• The compatibility matrix for all lock modes is:


• Multiple Granularity Locking Scheme
• Transaction Ti can lock a node Q, using the following rules:
– The lock compatibility matrix must be observed.


– The root of the tree must be locked first, and may be locked in any mode.
– A node Q can be locked by Ti in S or IS mode only if the parent of Q is currently locked
by Ti in either IX or IS mode.
– A node Q can be locked by Ti in X, SIX, or IX mode only if the parent of Q is currently
locked by Ti in either IX or SIX mode.
– Ti can lock a node only if it has not previously unlocked any node (that is, Ti is two-
phase).
– Ti can unlock a node Q only if none of the children of Q are currently locked by Ti.
• Observe that locks are acquired in root-to-leaf order, whereas they are released in leaf-to-root
order.
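
• As an illustration, the compatibility matrix and the parent-mode rules can be written down directly (a sketch; the matrix values follow the standard IS/IX/S/SIX/X table):

# Compatibility of a requested mode against a mode already held by another transaction.
COMPAT = {
    'IS':  {'IS': True,  'IX': True,  'S': True,  'SIX': True,  'X': False},
    'IX':  {'IS': True,  'IX': True,  'S': False, 'SIX': False, 'X': False},
    'S':   {'IS': True,  'IX': False, 'S': True,  'SIX': False, 'X': False},
    'SIX': {'IS': True,  'IX': False, 'S': False, 'SIX': False, 'X': False},
    'X':   {'IS': False, 'IX': False, 'S': False, 'SIX': False, 'X': False},
}

def compatible(requested, held_by_others):
    return all(COMPAT[requested][m] for m in held_by_others)

def parent_mode_ok(requested, parent_mode):
    # S or IS on a node requires the parent held in IS or IX;
    # X, SIX or IX requires the parent held in IX or SIX.
    if requested in ('S', 'IS'):
        return parent_mode in ('IS', 'IX')
    return parent_mode in ('IX', 'SIX')

print(compatible('IX', ['IS', 'IX']))   # True: intention modes coexist
print(compatible('S', ['IX']))          # False: S conflicts with IX held below
print(parent_mode_ok('X', 'IX'))        # True: X needs an IX or SIX parent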

Recovery and Atomicity

• Modifying the database without ensuring that the transaction will commit may leave the database
in an inconsistent state.
• Consider transaction Ti that transfers $50 from account A to account B; goal is either to perform
all database modifications made by Ti or none at all.
• Several output operations may be required for Ti (to output A and B). A failure may occur after
one of these modifications have been made but before all of them are made.
• To ensure atomicity despite failures, we first output information describing the modifications to
stable storage without modifying the database itself.
• We study two approaches:
– log-based recovery, and
– shadow-paging
• We assume (initially) that transactions run serially, that is, one after the other.

Recovery Algorithms

• Recovery algorithms are techniques to ensure database consistency and transaction atomicity and
durability despite of failures
– Focus of this chapter
• Recovery algorithms have two parts
– Actions taken during normal transaction processing to ensure enough information exists
to recover from failures
– Actions taken after a failure to recover the database contents to a state that ensures
atomicity, consistency and durability

Log-Based Recovery

• A log is kept on stable storage.


– The log is a sequence of log records, and maintains a record of update activities on the
database.
• When transaction Ti starts, it registers itself by writing a
<Ti start> log record
• Before Ti executes write(X), a log record <Ti, X, V1, V2> is written, where V1 is the value of X
before the write, and V2 is the value to be written to X.
– The log record notes that Ti has performed a write on data item Xj; Xj had value V1 before the
write, and will have value V2 after the write.


• When Ti finishes its last statement, the log record <Ti commit> is written.
• We assume for now that log records are written directly to stable storage (that is, they are not
buffered)
• Two approaches using logs
– Deferred database modification
– Immediate database modification

Deferred Database Modification

• The deferred database modification scheme records all modifications to the log, but defers all
the writes to after partial commit.
• Assume that transactions execute serially
• Transaction starts by writing <Ti start> record to log.
• A write(X) operation results in a log record <Ti, X, V> being written, where V is the new value
for X
– Note: old value is not needed for this scheme
• The write is not performed on X at this time, but is deferred.
• When Ti partially commits, <Ti commit> is written to the log
• Finally, the log records are read and used to actually execute the previously deferred writes.
• During recovery after a crash, a transaction needs to be redone if and only if both <Ti start>
and <Ti commit> are there in the log.
• Redoing a transaction Ti ( redoTi) sets the value of all data items updated by the transaction to the
new values.
• Crashes can occur while
– the transaction is executing the original updates, or
– while recovery action is being taken
• example transactions T0 and T1 (T0 executes before T1):
T0: read(A)                    T1: read(C)
    A := A – 50                    C := C – 100
    write(A)                       write(C)
    read(B)
    B := B + 50
    write(B)
• Below we show the log as it appears at three instances of time.
• If log on stable storage at time of crash is as in case:
(a) No redo actions need to be taken
(b) redo(T0) must be performed since <T0 commit> is present
(c) redo(T0) must be performed followed by redo(T1) since
<T0 commit> and <T1 commit> are present
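
• A minimal sketch of recovery under deferred modification, with the log as a list of tuples: a transaction is redone iff its commit record is present (the commit record implies the start record is also in the log).

def deferred_recover(log, db):
    """log entries: ('start', T), ('write', T, X, new_value), ('commit', T)."""
    committed = {rec[1] for rec in log if rec[0] == 'commit'}
    for rec in log:                  # redo in log order, committed transactions only
        if rec[0] == 'write' and rec[1] in committed:
            _, txn, item, new = rec
            db[item] = new           # redo(Ti): set item to its new value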

Immediate Database Modification

• The immediate database modification scheme allows database updates of an uncommitted


transaction to be made as the writes are issued
– since undoing may be needed, update logs must have both old value and new value
• Update log record must be written before database item is written
– We assume that the log record is output directly to stable storage


– Can be extended to postpone log record output, so long as prior to execution of an


output(B) operation for a data block B, all log records corresponding to items B must be
flushed to stable storage
• Output of updated blocks can take place at any time before or after transaction commit
• Order in which blocks are output can be different from the order in which they are written.
• Recovery procedure has two operations instead of one:
– undo(Ti) restores the value of all data items updated by Ti to their old values, going
backwards from the last log record for Ti
– redo(Ti) sets the value of all data items updated by Ti to the new values, going forward
from the first log record for Ti
• Both operations must be idempotent
– That is, even if the operation is executed multiple times the effect is the same as if it is
executed once
• Needed since operations may get re-executed during recovery
• When recovering after failure:
– Transaction Ti needs to be undone if the log contains the record
<Ti start>, but does not contain the record <Ti commit>.
– Transaction Ti needs to be redone if the log contains both the record <Ti start> and the
record <Ti commit>.
• Undo operations are performed first, then redo operations.
• Immediate Database Modification Example
Log                        Write               Output
<T0 start>
<T0, A, 1000, 950>
<T0, B, 2000, 2050>
                           A = 950
                           B = 2050
<T0 commit>
<T1 start>
<T1, C, 700, 600>
                           C = 600
                                               BB, BC
<T1 commit>
                                               BA
• Note: BX denotes block containing X.
• Immediate DB Modification Recovery Example
Below we show the log as it appears at three instances of time.
Recovery actions in each case above are:
(a) undo (T0): B is restored to 2000 and A to 1000.
(b) undo (T1) and redo (T0): C is restored to 700, and then A and B are
set to 950 and 2050 respectively.
(c) redo (T0) and redo (T1): A and B are set to 950 and 2050
respectively. Then C is set to 600
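
• The same idea for immediate modification, where log records carry both old and new values: undo of incomplete transactions runs backwards first, then redo of committed ones runs forwards. A sketch, replaying case (b) above:

def immediate_recover(log, db):
    """log entries: ('start', T), ('write', T, X, old, new), ('commit', T)."""
    started   = {r[1] for r in log if r[0] == 'start'}
    committed = {r[1] for r in log if r[0] == 'commit'}
    # undo: backwards over the log, restoring old values of uncommitted transactions
    for rec in reversed(log):
        if rec[0] == 'write' and rec[1] in started - committed:
            _, txn, item, old, new = rec
            db[item] = old
    # redo: forwards over the log, reapplying new values of committed transactions
    for rec in log:
        if rec[0] == 'write' and rec[1] in committed:
            _, txn, item, old, new = rec
            db[item] = new

# case (b): T0 committed, T1 incomplete at the time of the crash
log = [('start', 'T0'), ('write', 'T0', 'A', 1000, 950),
       ('write', 'T0', 'B', 2000, 2050), ('commit', 'T0'),
       ('start', 'T1'), ('write', 'T1', 'C', 700, 600)]
db = {'A': 950, 'B': 2050, 'C': 600}
immediate_recover(log, db)
print(db)   # C restored to 700, A and B set to 950 and 2050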

Checkpoints

• Problems in recovery procedure as discussed earlier :


1. searching the entire log is time-consuming
2. we might unnecessarily redo transactions which have already output their updates to the
database.


• Streamline recovery procedure by periodically performing checkpointing


1. Output all log records currently residing in main memory onto stable storage.
2. Output all modified buffer blocks to the disk.
3. Write a log record < checkpoint> onto stable storage.
• During recovery we need to consider only the most recent transaction Ti that started before the
checkpoint, and transactions that started after Ti.
1. Scan backwards from end of log to find the most recent <checkpoint> record
2. Continue scanning backwards till a record <Ti start> is found.
3. Need only consider the part of log following above start record. Earlier part of log can be
ignored during recovery, and can be erased whenever desired.
4. For all transactions (starting from Ti or later) with no <Ti commit>, execute undo(Ti).
(Done only in case of immediate modification.)
5. Scanning forward in the log, for all transactions starting from Ti or later with a <Ti
commit>, execute redo(Ti).
• Example of Checkpoints
• T1 can be ignored (updates already output to disk due to checkpoint)
• T2 and T3 redone.
• T4 undone

Recovery With Concurrent Transactions

• We modify the log-based recovery schemes to allow multiple transactions to execute


concurrently.
1. All transactions share a single disk buffer and a single log
2. A buffer block can have data items updated by one or more transactions
• We assume concurrency control using strict two-phase locking;
1. i.e. the updates of uncommitted transactions should not be visible to other transactions
• Otherwise how to perform undo if T1 updates A, then T2 updates A and
commits, and finally T1 has to abort?
• Logging is done as described earlier.
1. Log records of different transactions may be interspersed in the log.
• The checkpointing technique and actions taken on recovery have to be changed
1. since several transactions may be active when a checkpoint is performed.
• Checkpoints are performed as before, except that the checkpoint log record is now of the form
< checkpoint L>
where L is the list of transactions active at the time of the checkpoint
1. We assume no updates are in progress while the checkpoint is carried out (will relax this
later)
• When the system recovers from a crash, it first does the following:
1. Initialize undo-list and redo-list to empty
2. Scan the log backwards from the end, stopping when the first <checkpoint L> record is
found.
For each record found during the backward scan:
• if the record is <Ti commit>, add Ti to redo-list
• if the record is <Ti start>, then if Ti is not in redo-list, add Ti to undo-list
3. For every Ti in L, if Ti is not in redo-list, add Ti to undo-list
• At this point undo-list consists of incomplete transactions which must be undone, and redo-list
consists of finished transactions that must be redone.
• Recovery now continues as follows:
1. Scan log backwards from most recent record, stopping when
<Ti start> records have been encountered for every Ti in undo-list.


• During the scan, perform undo for each log record that belongs to a transaction
in undo-list.
2. Locate the most recent <checkpoint L> record.
3. Scan log forwards from the <checkpoint L> record till the end of the log.
• During the scan, perform redo for each log record that belongs to a transaction
on redo-list
• Example of Recovery
• Go over the steps of the recovery algorithm on the following log:
<T0 start>
<T0, A, 0, 10>
<T0 commit>
<T1 start> /* Scan at step 1 comes up to here */
<T1, B, 0, 10>
<T2 start>
<T2, C, 0, 10>
<T2, C, 10, 20>
<checkpoint {T1, T2}>
<T3 start>
<T3, A, 10, 20>
<T3, D, 0, 10>
<T3 commit>
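
• A sketch of the list-building step on a log like the one above (update records are ignored here); the checkpoint record carries L, the transactions active when the checkpoint was taken.

def build_lists(log):
    """log entries: ('start', T), ('commit', T), ('checkpoint', [active txns]).
    Returns (undo_list, redo_list)."""
    undo_list, redo_list = set(), set()
    # scan backwards until the most recent <checkpoint L> record
    for rec in reversed(log):
        kind = rec[0]
        if kind == 'commit':
            redo_list.add(rec[1])
        elif kind == 'start' and rec[1] not in redo_list:
            undo_list.add(rec[1])
        elif kind == 'checkpoint':
            for txn in rec[1]:               # every Ti in L not already on redo-list
                if txn not in redo_list:
                    undo_list.add(txn)
            break
    return undo_list, redo_list

log = [('start', 'T0'), ('commit', 'T0'), ('start', 'T1'),
       ('start', 'T2'), ('checkpoint', ['T1', 'T2']),
       ('start', 'T3'), ('commit', 'T3')]
print(build_lists(log))   # undo-list: {T1, T2}; redo-list: {T3}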

Log Record Buffering

• Log record buffering: log records are buffered in main memory, instead of being output
directly to stable storage.
– Log records are output to stable storage when a block of log records in the buffer is full,
or a log force operation is executed.
• Log force is performed to commit a transaction by forcing all its log records (including the
commit record) to stable storage.
• Several log records can thus be output using a single output operation, reducing the I/O cost.
• The rules below must be followed if log records are buffered:
– Log records are output to stable storage in the order in which they are created.
– Transaction Ti enters the commit state only when the log record
<Ti commit> has been output to stable storage.
– Before a block of data in main memory is output to the database, all log records
pertaining to data in that block must have been output to stable storage.
• This rule is called the write-ahead logging or WAL rule
– Strictly speaking WAL only requires undo information to be output
Database Buffering

• Database maintains an in-memory buffer of data blocks


– When a new block is needed, if buffer is full an existing block needs to be removed from
buffer
– If the block chosen for removal has been updated, it must be output to disk
• If a block with uncommitted updates is output to disk, log records with undo information for the
updates are output to the log on stable storage first
– (Write ahead logging)
• No updates should be in progress on a block when it is output to disk. Can be ensured as follows.


– Before writing a data item, transaction acquires exclusive lock on block containing the
data item
– Lock can be released once the write is completed.
• Such locks held for short duration are called latches.
– Before a block is output to disk, the system acquires an exclusive latch on the block
• Ensures no update can be in progress on the block
• Database buffer can be implemented either
– in an area of real main-memory reserved for the database, or
– in virtual memory
• Implementing buffer in reserved main-memory has drawbacks:
– Memory is partitioned before-hand between database buffer and applications, limiting
flexibility.
– Needs may change, and although operating system knows best how memory should be
divided up at any time, it cannot change the partitioning of memory.
• Database buffers are generally implemented in virtual memory in spite of some drawbacks:
– When operating system needs to evict a page that has been modified, the page is written
to swap space on disk.
– When database decides to write buffer page to disk, buffer page may be in swap space,
and may have to be read from swap space on disk and output to the database on disk,
resulting in extra I/O!
• Known as dual paging problem.
– Ideally when OS needs to evict a page from the buffer, it should pass control to database,
which in turn should
• Output the page to database instead of to swap space (making sure to output log
records first), if it is modified
• Release the page from the buffer, for the OS to use
Dual paging can thus be avoided, but common operating systems do not support such functionality.

Failure with Loss of Nonvolatile Storage


• So far we assumed no loss of non-volatile storage
• Technique similar to checkpointing used to deal with loss of non-volatile storage
– Periodically dump the entire content of the database to stable storage
– No transaction may be active during the dump procedure; a procedure similar to
checkpointing must take place
• Output all log records currently residing in main memory onto stable storage.
• Output all buffer blocks onto the disk.
• Copy the contents of the database to stable storage.
• Output a record <dump> to log on stable storage.

Recovering from Failure of Non-Volatile Storage

• To recover from disk failure


– restore database from most recent dump.
– Consult the log and redo all transactions that committed after the dump
• Can be extended to allow transactions to be active during dump;
known as fuzzy dump or online dump
– Will study fuzzy checkpointing later


Advanced Recovery: Key Features

• Support for high-concurrency locking techniques, such as those used for B+-tree concurrency
control, which release locks early
– Supports “logical undo”
• Recovery based on “repeating history”, whereby recovery executes exactly the same actions as
normal processing
– including redo of log records of incomplete transactions, followed by subsequent undo
– Key benefits
• supports logical undo
• easier to understand/show correctness

Advanced Recovery: Logical Undo Logging


• Operations like B+-tree insertions and deletions release locks early.
– They cannot be undone by restoring old values (physical undo), since once a lock is
released, other transactions may have updated the B+-tree.
– Instead, insertions (resp. deletions) are undone by executing a deletion (resp. insertion)
operation (known as logical undo).
• For such operations, undo log records should contain the undo operation to be executed
– Such logging is called logical undo logging, in contrast to physical undo logging
• Operations are called logical operations

Advanced Recovery: Physical Redo


• Redo information is logged physically (that is, new value for each write) even for operations with
logical undo
– Logical redo is very complicated since database state on disk may not be “operation
consistent” when recovery starts
– Physical redo logging does not conflict with early lock release

Advanced Recovery: Operation Logging


• Operation logging is done as follows:
– When operation starts, log <Ti, Oj, operation-begin>. Here Oj is a unique identifier of
the operation instance.
– While operation is executing, normal log records with physical redo and physical undo
information are logged.
– When operation completes, <Ti, Oj, operation-end, U> is logged, where U contains
information needed to perform a logical undo information.
Example: insert of (key, record-id) pair (K5, RID7) into index I9
• If crash/rollback occurs before operation completes:
– the operation-end log record is not found, and
– the physical undo information is used to undo operation.
• If crash/rollback occurs after the operation completes:
– the operation-end log record is found, and in this case
– logical undo is performed using U; the physical undo information for the operation is
ignored.
• Redo of operation (after crash) still uses physical redo information.

Advanced Recovery: Txn Rollback


Rollback of transaction Ti is done as follows:
• Scan the log backwards


1. If a log record <Ti, X, V1, V2> is found, perform the undo and log a special redo-only log
record <Ti, X, V1>.
2. If a <Ti, Oj, operation-end, U> record is found
• Rollback the operation logically using the undo information U.
– Updates performed during rollback are logged just like during normal
operation execution.
– At the end of the operation rollback, instead of logging an operation-
end record, generate a record
<Ti, Oj, operation-abort>.
• Skip all preceding log records for Ti until the record
<Ti, Oj, operation-begin> is found.
3. If a redo-only record is found, ignore it.
4. If a <Ti, Oj, operation-abort> record is found:
• skip all preceding log records for Ti until the record
<Ti, Oj, operation-begin> is found.
5. Stop the scan when the record <Ti start> is found.
6. Add a <Ti abort> record to the log.
Some points to note:
• Cases 3 and 4 above can occur only if the database crashes while a transaction is being rolled
back.
• Skipping of log records as in case 4 is important to prevent multiple rollback of the same
operation.
• Advanced Recovery: Txn Rollback Example
• Example with a complete and an incomplete operation

Advanced Recovery: Crash Recovery


The following actions are taken when recovering from system crash
1. (Redo phase): Scan log forward from last < checkpoint L> record till end of log
1. Repeat history by physically redoing all updates of all transactions,
2. Create an undo-list during the scan as follows
• undo-list is set to L initially
• Whenever <Ti start> is found Ti is added to undo-list
• Whenever <Ti commit> or <Ti abort> is found, Ti is deleted from undo-list
This brings database to state as of crash, with committed as well as uncommitted transactions
having been redone.
Now undo-list contains transactions that are incomplete, that is, have neither committed nor
been fully rolled back.
2. (Undo phase): Scan log backwards, performing undo on log records of transactions found in
undo-list.
– Log records of transactions being rolled back are processed as described earlier, as they
are found
• Single shared scan for all transactions being undone
– When <Ti start> is found for a transaction Ti in undo-list, write a <Ti abort> log record.
– Stop scan when <Ti start> records have been found for all Ti in undo-list
• This undoes the effects of incomplete transactions (those with neither commit nor abort log
records). Recovery is now complete.

Advanced Recovery: Checkpointing


• Checkpointing is done as follows:
1. Output all log records in memory to stable storage
2. Output to disk all modified buffer blocks


3. Output to log on stable storage a < checkpoint L> record.


Transactions are not allowed to perform any actions while checkpointing is in progress.
• Fuzzy checkpointing allows transactions to progress while the most time consuming parts of
checkpointing are in progress
– Performed as described below

Advanced Recovery: Fuzzy Checkpointing


• Fuzzy checkpointing is done as follows:
– Temporarily stop all updates by transactions
– Write a <checkpoint L> log record and force log to stable storage
– Note list M of modified buffer blocks
– Now permit transactions to proceed with their actions
– Output to disk all modified buffer blocks in list M
• blocks should not be updated while being output
• Follow WAL: all log records pertaining to a block must be output before the
block is output
– Store a pointer to the checkpoint record in a fixed position last_checkpoint on disk
• When recovering using a fuzzy checkpoint, start scan from the checkpoint record pointed to by
last_checkpoint
– Log records before last_checkpoint have their updates reflected in database on disk, and
need not be redone.
– Incomplete checkpoints, where system had crashed while performing checkpoint, are
handled safely

ARIES

• ARIES is a state of the art recovery method


– Incorporates numerous optimizations to reduce overheads during normal processing and
to speed up recovery
– The “advanced recovery algorithm” we studied earlier is modeled after ARIES, but
greatly simplified by removing optimizations
• Unlike the advanced recovery algorithm, ARIES
– Uses log sequence number (LSN) to identify log records
• Stores LSNs in pages to identify what updates have already been applied to a
database page
– Physiological redo
– Dirty page table to avoid unnecessary redos during recovery
– Fuzzy checkpointing that only records information about dirty pages, and does not
require dirty pages to be written out at checkpoint time
Each of these is described below.

ARIES Optimizations

• Physiological redo
– Affected page is physically identified, action within page can be logical
• Used to reduce logging overheads
– e.g. when a record is deleted and all other records have to be moved to
fill hole
» Physiological redo can log just the record deletion


» Physical redo would require logging of old and new values for
much of the page
• Requires page to be output to disk atomically
– Easy to achieve with hardware RAID, also supported by some disk
systems
– Incomplete page output can be detected by checksum techniques,
» But extra actions are required for recovery
» Treated as a media failure
ARIES Data Structures

• ARIES uses several data structures


– Log sequence number (LSN) identifies each log record
• Must be sequentially increasing
• Typically an offset from beginning of log file to allow fast access
– Easily extended to handle multiple log files
– Page LSN
– Log records of several different types
– Dirty page table
ARIES Data Structures: Page LSN
• Each page contains a PageLSN which is the LSN of the last log record whose effects are
reflected on the page
– To update a page:
• X-latch the page, and write the log record
• Update the page
• Record the LSN of the log record in PageLSN
• Unlock the page
– To flush a page to disk, must first S-latch the page
• Thus page state on disk is operation consistent
– Required to support physiological redo
– PageLSN is used during recovery to prevent repeated redo
• Thus ensuring idempotence

ARIES Data Structures: Log Record

• Each log record contains LSN of previous log record of the same transaction
– LSN in log record may be implicit
• Special redo-only log record called compensation log record (CLR) used to log actions taken
during recovery that never need to be undone
– Serves the role of operation-abort log records used in advanced recovery algorithm
– Has a field UndoNextLSN to note next (earlier) record to be undone
• Records in between would have already been undone
• Required to avoid repeated undo of already undone actions

ARIES Data Structures: DirtyPage Table

• DirtyPageTable
– List of pages in the buffer that have been updated
– Contains, for each such page
• PageLSN of the page


• RecLSN is an LSN such that log records before this LSN have already been
applied to the page version on disk
– Set to current end of log when a page is inserted into dirty page table
(just before being updated)
– Recorded in checkpoints, helps to minimize redo work

ARIES Data Structures: Checkpoint Log

• Checkpoint log record


– Contains:
• DirtyPageTable and list of active transactions
• For each active transaction, LastLSN, the LSN of the last log record written by
the transaction
– Fixed position on disk notes LSN of last completed
checkpoint log record
• Dirty pages are not written out at checkpoint time
• Instead, they are flushed out continuously, in the background
• Checkpoint is thus very low overhead
– can be done frequently

ARIES Recovery Algorithm

ARIES recovery involves three passes


• Analysis pass: Determines
– Which transactions to undo
– Which pages were dirty (disk version not up to date) at time of crash
– RedoLSN: LSN from which redo should start
• Redo pass:
– Repeats history, redoing all actions from RedoLSN
• RecLSN and PageLSNs are used to avoid redoing actions already reflected on
page
• Undo pass:
– Rolls back all incomplete transactions
• Transactions whose abort was complete earlier are not undone
– Key idea: no need to undo these transactions: earlier undo actions were
logged, and are redone as required
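
• A tiny sketch of how PageLSN and RecLSN prevent repeated redo during the redo pass; all names, record fields and the page layout here are purely illustrative.

def redo_pass(log, pages, dirty_page_table, redo_lsn):
    """Each log record: (lsn, page_id, apply_fn). Each page is a dict with 'page_lsn'."""
    for lsn, page_id, apply_fn in log:
        if lsn < redo_lsn:
            continue                       # before RedoLSN: nothing to redo
        rec_lsn = dirty_page_table.get(page_id)
        if rec_lsn is None or lsn < rec_lsn:
            continue                       # page was not dirty for this update
        page = pages[page_id]
        if page['page_lsn'] >= lsn:
            continue                       # update already on disk: skip (idempotence)
        apply_fn(page)                     # repeat history: redo the update
        page['page_lsn'] = lsn             # record that this LSN is now applied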

Remote Backup Systems

• Remote backup systems provide high availability by allowing transaction processing to continue
even if the primary site is destroyed.

Detection of failure: Backup site must detect when primary site has failed
– to distinguish primary site failure from link failure, maintain several communication links
between the primary and the remote backup.
– Heart-beat messages


Transfer of control:
– To take over control, the backup site first performs recovery using its copy of the database and
all the log records it has received from the primary.
• Thus, completed transactions are redone and incomplete transactions are rolled
back.
– When the backup site takes over processing it becomes the new primary
– To transfer control back to old primary when it recovers, old primary must receive redo
logs from the old backup and apply all updates locally.

Time to recover: To reduce delay in takeover, the backup site periodically processes the redo log records (in
effect, performing recovery from previous database state), performs a checkpoint, and can then delete
earlier parts of the log.

Hot-Spare configuration permits very fast takeover:


– Backup continually processes redo log record as they arrive, applying the updates locally.
– When failure of the primary is detected the backup rolls back incomplete transactions,
and is ready to process new transactions.
• Alternative to remote backup: distributed database with replicated data
– Remote backup is faster and cheaper, but less tolerant to failure
• more on this in Chapter 19
• Ensure durability of updates by delaying transaction commit until update is logged at backup;
avoid this delay by permitting lower degrees of durability.

One-safe: commit as soon as transaction’s commit log record is written at primary


– Problem: updates may not arrive at backup before it takes over.

Two-very-safe: commit when transaction’s commit log record is written at primary and backup
– Reduces availability since transactions cannot commit if either site fails.

Two-safe: proceed as in two-very-safe if both primary and backup are active. If only the primary is
active, the transaction commits as soon as its commit log record is written at the primary.
– Better availability than two-very-safe; avoids problem of lost transactions in one-safe.

