Thus the transaction avoids spending time in disk I/O while holding locks. The technique may even increase disk throughput, since disk I/O is not stalled for want of a lock. Consider the following scenario under the strict two-phase locking protocol: a transaction is waiting for a lock, the disk is idle, and there are items waiting to be read from disk. In such a situation disk bandwidth is wasted. With the proposed technique, the transaction reads all the required items from disk without acquiring any locks, so the disk bandwidth can be properly utilized. Note that the technique is most useful when the transactions involve little computation and most of the time is spent on disk I/O and waiting for locks, as is usually the case in disk-resident databases. If the transaction is computation intensive, there may be wasted work. An optimization is to save the updates of the transaction in a temporary buffer and, instead of reexecuting the transaction, to compare the values of the data items, once they are locked, with the values read earlier. If the two values are the same for all items, the buffered updates are applied, instead of reexecuting the entire transaction.
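
A minimal, self-contained sketch of this optimization follows; the in-memory dictionary, the per-item locks, and the names execute_with_prefetch and transaction_body are assumptions made for the illustration, not the book's code. The first pass runs without locks, recording the values read and buffering the updates; the second pass locks the items touched, validates the recorded values, and either applies the buffered updates or reexecutes the transaction under locks.

import threading

db = {"p": 10, "q": 20}                        # in-memory stand-in for disk-resident items
locks = {item: threading.Lock() for item in db}

def transaction_body(read):
    # Hypothetical transaction: reads p and q, writes q.
    return {"q": read("p") + read("q")}        # returns {item: new value}

def execute_with_prefetch(body):
    # Pass 1: run without any locks, purely to prefetch items into the buffer
    # and to record the values read and the updates the transaction would make.
    values_seen = {}
    def recording_read(item):
        values_seen[item] = db[item]
        return db[item]
    buffered_updates = body(recording_read)

    # Pass 2: acquire locks on every item touched, then validate.
    touched = sorted(set(values_seen) | set(buffered_updates))
    for item in touched:
        locks[item].acquire()
    try:
        if all(db[item] == v for item, v in values_seen.items()):
            db.update(buffered_updates)                  # nothing changed: apply buffered updates
        else:
            db.update(body(lambda item: db[item]))       # something changed: reexecute under locks
    finally:
        for item in reversed(touched):
            locks[item].release()

execute_with_prefetch(transaction_body)                  # db becomes {'p': 10, 'q': 30}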

15.16 Answer: Consider the two transactions T1 and T2 and the interleaving shown below.

    T1              T2
    write(p)
                    read(p)
                    read(q)
    write(q)

Let TS(T1) < TS(T2) and let the timestamp test at every operation except write(q) be successful. When transaction T1 performs the timestamp test for write(q), it finds that TS(T1) < R-timestamp(q), since TS(T1) < TS(T2) and R-timestamp(q) = TS(T2). Hence the write operation fails and transaction T1 is rolled back. The cascade causes transaction T2 to be rolled back as well, since it uses the value of item p written by transaction T1. If this scenario is repeated exactly every time the transactions are restarted, it results in starvation of both transactions.
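
The failing test can be traced with a small sketch of the basic timestamp-ordering checks; the dictionaries and function names below are assumed for this illustration and are not the book's code. Replaying the four operations in the order shown in the schedule makes T1's write(q) fail, after which T2 must be rolled back because it read the value of p written by the rolled-back T1.

R_ts = {"p": 0, "q": 0}        # R-timestamp of each data item
W_ts = {"p": 0, "q": 0}        # W-timestamp of each data item

def ts_read(txn_ts, item):
    if txn_ts < W_ts[item]:                     # item already written by a younger transaction
        return "rollback"
    R_ts[item] = max(R_ts[item], txn_ts)
    return "ok"

def ts_write(txn_ts, item):
    if txn_ts < R_ts[item] or txn_ts < W_ts[item]:
        return "rollback"                       # a younger transaction already read or wrote the item
    W_ts[item] = txn_ts
    return "ok"

TS_T1, TS_T2 = 1, 2                             # TS(T1) < TS(T2)
print(ts_write(TS_T1, "p"))                     # ok
print(ts_read(TS_T2, "p"))                      # ok; T2 reads T1's uncommitted write (cascading dependency)
print(ts_read(TS_T2, "q"))                      # ok; R-timestamp(q) becomes TS(T2)
print(ts_write(TS_T1, "q"))                     # rollback: TS(T1) < R-timestamp(q)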

15.17 Answer: In the text, we considered two approaches to dealing with the phantom phenomenon by means of locking. The coarser-granularity approach obviously works for timestamps as well. The B+-tree index-based approach can be adapted to timestamping by treating index buckets as data items with timestamps associated with them, and by requiring that all read accesses use an index. We now show that this simple method works. Suppose a transaction Ti wants to access all tuples with a particular range of search-key values, using a B+-tree index on that search key. Ti will need to read all the buckets in that index whose key values lie in that range. It can be seen that any delete or insert of a tuple with a key value in the same range will need to write one of the index buckets read by Ti. Thus the logical conflict is converted into a conflict on an index bucket, and the phantom phenomenon is avoided.
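
To make the argument concrete, here is a small sketch under a simplified, assumed model: leaf buckets of the B+-tree cover disjoint key ranges and carry R- and W-timestamps, a range read must stamp every bucket whose range overlaps the query range, and an insert must write the bucket that covers its key. The bucket names, ranges, and function names are illustrative only.

buckets = {"b1": range(0, 100), "b2": range(100, 200)}   # leaf buckets, by covered key range
R_ts = {b: 0 for b in buckets}
W_ts = {b: 0 for b in buckets}

def range_read(txn_ts, lo, hi):
    # Ti must read every bucket overlapping [lo, hi), stamping each with its R-timestamp.
    for b, keys in buckets.items():
        if keys.start < hi and lo < keys.stop:
            if txn_ts < W_ts[b]:
                return "rollback"
            R_ts[b] = max(R_ts[b], txn_ts)
    return "ok"

def insert(txn_ts, key):
    # An insert must write the bucket covering `key`, so it conflicts with
    # any younger transaction that has read that bucket.
    b = next(name for name, keys in buckets.items() if key in keys)
    if txn_ts < R_ts[b] or txn_ts < W_ts[b]:
        return "rollback"
    W_ts[b] = txn_ts
    return "ok"

print(range_read(txn_ts=2, lo=50, hi=150))   # ok: the reader stamps buckets b1 and b2
print(insert(txn_ts=1, key=120))             # rollback: the conflict is caught on bucket b2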
