


(b) For every approximately concave function $f$, cache size $k \ge 2$, and page request sequence that conforms to $f$, the page fault rate of the LRU policy is at most $\alpha_f(k)$ plus an additive term that goes to 0 with the sequence length.

(c) There exists a choice of an approximately concave function $f$, a cache size $k \ge 2$, and an arbitrarily long page request sequence that conforms to $f$, such that the page fault rate of the FIFO policy is bounded away from $\alpha_f(k)$.

Parts (a) and (b) prove the worst-case optimality of the LRU policy in a strong and fine-grained sense, $f$-by-$f$ and $k$-by-$k$. Part (c) differentiates LRU from FIFO, as the latter is suboptimal for some (in fact, many) choices of $f$ and $k$.

The guarantees in Theorem 1.1 are so good that they are meaningful even when taken at face value: for strongly sublinear $f$'s, $\alpha_f(k)$ goes to 0 reasonably quickly with $k$. The precise definition of $\alpha_f(k)$ for $k \ge 2$ is

$$\alpha_f(k) \;=\; \frac{k-1}{f^{-1}(k+1) - 2}, \tag{1.1}$$

where we abuse notation and interpret $f^{-1}(y)$ as the smallest value of $x$ such that $f(x) = y$. That is, $f^{-1}(y)$ denotes the smallest window length in which page requests for $y$ distinct pages might appear. As expected, for the function $f(n) = n$ we have $\alpha_f(k) = 1$ for all $k$. (With no restriction on the input sequence, an adversary can force a 100% fault rate.) If $f(n) = \lceil \sqrt{n} \rceil$, however, then $\alpha_f(k)$ scales with $1/k$. Thus with a cache size of 10,000, the page fault rate is always at most 0.01%. If $f(n) = \lceil 1 + \log_2 n \rceil$, then $\alpha_f(k)$ goes to 0 even faster with $k$, roughly as $k/2^k$.

## 1.3.2 Proof of Theorem 1.1

This section proves the first two parts of Theorem 1.1; part (c) is left as Exercise 1.4.

**Part (a).** To prove the lower bound in part (a), fix an approximately concave function $f$ and a cache size $k \ge 2$. Fix a deterministic cache replacement policy $A$. We construct a page sequence $\sigma$ that uses only $k+1$ distinct pages, so at any given time step there is exactly one page missing from the algorithm's cache. (Assume that the algorithm begins with the first $k$ pages in its cache.)
The sequence comprises $k-1$ blocks, where the $j$th block consists of $m_{j+1}$ consecutive requests for the same page $p_j$, where $p_j$ is the unique page missing from the algorithm $A$'s cache at the start of the block. (Recall that $m_y$ denotes the number of values of $x$ such that $f(x) = y$.) This sequence conforms to $f$ (Exercise 1.3).

*Figure 1.4: Blocks of $k-1$ faults, for $k = 3$.*

By the choice of the $p_j$'s, $A$ incurs a page fault on the first request of a block, and not on any of the other (duplicate) requests of that block. Thus, algorithm $A$ suffers exactly $k-1$ page faults. The length of the page request sequence is $m_2 + m_3 + \cdots + m_k$. Because $m_1 = 1$, this sum equals $(\sum_{j=1}^{k} m_j) - 1$ which, using the definition of the $m_j$'s, equals $(f^{-1}(k+1) - 1) - 1 = f^{-1}(k+1) - 2$. (The values $x$ with $f(x) \le k$ are exactly $1, 2, \ldots, f^{-1}(k+1) - 1$.) The algorithm's page fault rate on this sequence matches the definition (1.1) of $\alpha_f(k)$, as required. More generally, repeating the construction over and over again produces arbitrarily long page request sequences for which the algorithm has page fault rate $\alpha_f(k)$.
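The formula (1.1) and the lower-bound construction above are concrete enough to check numerically. The following sketch (the helper names `f_inverse`, `alpha`, and `lower_bound_demo` are illustrative, not from the text) computes $\alpha_f(k)$ for the example functions and replays the $k-1$ block adversarial sequence against an LRU cache, under the conventions above: $f(1) = 1$, pages numbered $0, \dots, k$, and the cache initialized with pages $0, \dots, k-1$.

```python
import math

def f_inverse(f, y):
    """Smallest window length x with f(x) = y (f nondecreasing, f(1) = 1)."""
    x = 1
    while f(x) < y:
        x += 1
    return x

def alpha(f, k):
    """The worst-case fault rate alpha_f(k) = (k-1) / (f^{-1}(k+1) - 2) of (1.1)."""
    return (k - 1) / (f_inverse(f, k + 1) - 2)

def lower_bound_demo(f, k):
    """Replay the (k-1)-block construction from part (a) against an LRU cache.

    Block j consists of m_{j+1} requests for the page currently missing from
    the cache, where m_y = f_inverse(f, y+1) - f_inverse(f, y) counts the
    values x with f(x) = y.  Returns (faults, sequence_length).
    """
    pages = set(range(k + 1))       # only k + 1 distinct pages are used
    cache = list(range(k))          # LRU order: least recently used first
    faults = length = 0
    for j in range(1, k):           # blocks j = 1, ..., k-1
        m_next = f_inverse(f, j + 2) - f_inverse(f, j + 1)   # m_{j+1}
        missing = (pages - set(cache)).pop()
        for _ in range(m_next):
            length += 1
            if missing in cache:
                cache.remove(missing)   # hit: just refresh recency
            else:
                faults += 1             # fault only on the block's first request
                cache.pop(0)            # evict the least recently used page
            cache.append(missing)
    return faults, length

# f(n) = n: an unconstrained adversary forces a 100% fault rate.
assert alpha(lambda n: n, 10) == 1.0

# f(n) = ceil(sqrt(n)) with k = 5: 4 faults over f^{-1}(6) - 2 = 24 requests.
f_sqrt = lambda n: math.ceil(math.sqrt(n))
faults, length = lower_bound_demo(f_sqrt, 5)
assert (faults, length) == (4, 24)
assert faults / length == alpha(f_sqrt, 5)   # fault rate matches (1.1) exactly
```

Since every policy faults exactly once per block (the remaining requests in a block are duplicates), swapping LRU for any other deterministic eviction rule in `lower_bound_demo` leaves the fault count at $k-1$, which is the point of the lower bound.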
