executed, instead of reexecuting the entire transaction.
15.16
Answer:
Consider two transactions T1 and T2 shown below.

    T1              T2
    write(p)
                    read(p)
                    read(q)
    write(q)
Chapter 15 Concurrency Control
Let TS(T1) < TS(T2), and let the timestamp test at each operation except write(q) be successful. When transaction T1 does the timestamp test for write(q), it finds that TS(T1) < R-timestamp(q), since TS(T1) < TS(T2) and R-timestamp(q) = TS(T2). Hence the write operation fails and transaction T1 rolls back. The cascading rollback results in transaction T2 also being rolled back, since T2 uses the value of item p written by transaction T1.
If this scenario is repeated exactly every time the transactions are restarted, it could result in starvation of both transactions.
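The schedule above can be replayed with a minimal sketch of the timestamp-ordering tests; the `read`/`write` helpers, the `Rollback` exception, and the timestamp bookkeeping are illustrative assumptions, not code from the text.

```python
# Minimal replay of the schedule under timestamp ordering, showing why
# T1's write(q) is rejected. Names and data structures are assumed.

class Rollback(Exception):
    pass

r_ts = {"p": 0, "q": 0}        # R-timestamp of each data item
w_ts = {"p": 0, "q": 0}        # W-timestamp of each data item

def read(item, ts):
    if ts < w_ts[item]:
        raise Rollback(f"read({item}) by TS={ts} rejected")
    r_ts[item] = max(r_ts[item], ts)

def write(item, ts):
    if ts < r_ts[item] or ts < w_ts[item]:
        raise Rollback(f"write({item}) by TS={ts} rejected")
    w_ts[item] = ts

TS_T1, TS_T2 = 1, 2            # TS(T1) < TS(T2)

write("p", TS_T1)              # T1: write(p) succeeds
read("p", TS_T2)               # T2: read(p) reads T1's value
read("q", TS_T2)               # T2: read(q) sets R-timestamp(q) = TS(T2)
try:
    write("q", TS_T1)          # TS(T1) < R-timestamp(q): test fails
except Rollback:
    t1_rolled_back = True      # T1 restarts; T2 must cascade (it read p)
```

Because T2 read the value of p written by the rolled-back T1, aborting T1 forces T2 to abort as well, which is the cascading rollback described above.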
15.17
Answer:
In the text, we considered two approaches to dealing with the phantom phenomenon by means of locking. The coarser-granularity approach obviously works for timestamps as well. The B+-tree index-based approach can be adapted to timestamping by treating index buckets as data items with timestamps associated with them, and requiring that all read accesses use an index. We now show that this simple method works.
Suppose a transaction Ti wants to access all tuples with a particular range of search-key values, using a B+-tree index on that search key. Ti will need to read all the buckets in that index which have key values in that range. It can be seen that any delete or insert of a tuple with a key value in the same range will need to write one of the index buckets read by Ti. Thus the logical conflict is converted to a conflict on an index bucket, and the phantom phenomenon is avoided.
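A toy illustration of why this works: a range scan and a conflicting insert necessarily touch a common bucket, where the ordinary timestamp test then applies. The bucket size and function names below are assumptions for the sketch, not details from the text.

```python
# Model index leaf buckets as fixed-width key ranges; a range scan reads
# every bucket covering its range, and an insert writes the bucket for
# its key, so a would-be phantom becomes a conflict on a shared bucket.

BUCKET_SIZE = 10                          # keys per leaf bucket (assumed)

def bucket_of(key):
    return key // BUCKET_SIZE             # leaf bucket holding this key

def buckets_for_range(lo, hi):
    # every bucket a range scan over [lo, hi] must read
    return set(range(bucket_of(lo), bucket_of(hi) + 1))

# Ti scans keys 15..34 (buckets 1..3); Tj inserts key 20 (bucket 2):
# the insert writes a bucket that Ti read, so the timestamp test on
# bucket 2 detects the conflict instead of missing the phantom tuple.
scanned = buckets_for_range(15, 34)
assert bucket_of(20) in scanned
```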
15.18
Answer:
Note: The tree protocol of Section 15.1.5, which is referred to in this question, is different from the multiple-granularity protocol of Section 15.3 and the B+-tree concurrency protocol of Section 15.10.
One strategy for early lock releasing is given here. Going down the tree from the root, if the currently visited node's child is not full, release locks held on all nodes except the current node, request an X-lock on the child node, release the lock on the current node after getting it, and then descend to the child. On the other hand, if the child is full, retain all locks held, request an X-lock on the child, and descend to it after getting the lock. On reaching the leaf node, start the insertion procedure. This strategy results in holding locks only on the full index tree nodes from the leaf upwards, up to and including the first non-full node.
An optimization of the above strategy is possible. Even if the current node's child is full, we can still release the locks on all nodes but the current one. But after getting the X-lock on the child node, we split it right away. Releasing the lock on the current node and retaining just the lock on the appropriate split child, we descend into it, making it the current node. With this optimization, at any time at most two locks are held, of a parent and a child node.
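The optimized descent can be sketched as follows. The node layout, the `lock`/`unlock` helpers, and the leaf-only `split` are illustrative assumptions; the point of the sketch is the invariant that at most two X-locks are ever held.

```python
import bisect

class Node:
    def __init__(self, capacity=4, keys=None, children=None):
        self.capacity = capacity
        self.keys = keys if keys is not None else []
        self.children = children if children is not None else []  # [] => leaf

    def is_full(self):
        return len(self.keys) >= self.capacity

held = []                       # nodes currently X-locked by the inserter

def lock(n):
    held.append(n)
    assert len(held) <= 2       # invariant: at most two locks at any time

def unlock(n):
    held.remove(n)

def split(parent, child, key):
    # For brevity this sketch splits only leaves; internal splits are analogous.
    mid = len(child.keys) // 2
    right = Node(child.capacity, child.keys[mid:])
    child.keys = child.keys[:mid]
    i = parent.children.index(child)
    parent.keys.insert(i, right.keys[0])      # separator key goes to parent
    parent.children.insert(i + 1, right)
    if key >= right.keys[0]:
        unlock(child)
        lock(right)             # hand the lock to the half we descend into
        return right
    return child

def descend_for_insert(root, key):
    current = root
    lock(current)
    while current.children:                     # stop at a leaf
        i = bisect.bisect_right(current.keys, key)
        child = current.children[i]
        lock(child)                             # two locks held here
        if child.is_full():
            child = split(current, child, key)  # split the full child right away
        unlock(current)                         # release the parent early
        current = child
    return current                              # X-locked leaf, ready to insert

# Demo: a two-level tree whose left leaf is full; descending for key 25
# splits that leaf on the way down, never holding more than two locks.
full_leaf = Node(capacity=2, keys=[10, 20])
root = Node(capacity=2, keys=[30],
            children=[full_leaf, Node(capacity=2, keys=[30, 40])])
leaf = descend_for_insert(root, 25)
```

Splitting on the way down is what lets the parent's lock be released immediately: since the child is guaranteed to have room after the split, no split can propagate back up past the current node.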
15.19
Answer:
a. Validation test for the first-committer-wins scheme: Let Start(Ti) and Commit(Ti) be the timestamps associated with a transaction Ti, and let the update set for Ti be update set(Ti). Then for all transactions Tk with
Commit(Tk) < Commit(Ti), one of the following two conditions must hold:
• Commit(Tk) < Start(Ti)
• update set(Tk) ∩ update set(Ti) = ∅
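The first-committer-wins test can be sketched as follows: Ti may commit only if no transaction that committed between Start(Ti) and Commit(Ti) has an update set overlapping Ti's. The `committed` log and function names below are illustrative assumptions, not code from the text.

```python
# Sketch of first-committer-wins validation over a log of committed
# transactions, each recorded as (start_ts, commit_ts, update_set).

committed = []

def validate(start_ti, update_set_ti):
    for start_tk, commit_tk, update_set_tk in committed:
        # Tk committed after Ti started and wrote an item Ti also wrote: fail.
        if commit_tk > start_ti and update_set_tk & update_set_ti:
            return False
    return True

def try_commit(start_ti, update_set_ti, commit_ts):
    if validate(start_ti, update_set_ti):
        committed.append((start_ti, commit_ts, set(update_set_ti)))
        return True
    return False   # first committer wins: Ti must roll back
```

For example, if T1 (started at 0) commits writes to p at time 1, a concurrent T2 (also started at 0) that wrote p fails validation, while a later T3 that started after T1 committed passes.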