Chapter 6 Complete Problems
6.1 TM Transducers

[Figure 6.1: Model of a Transducer — a two-way read-only input tape ¢ x $, a finite control M, two-way read/write worktapes, and a one-way write-only output tape.]
Definition 28. Let L0 and L be languages. We say that L0 is effectively reducible to L (written L0 ≤_eff L) if there exists a DTM transducer computing a function g such that x ∈ L0 ⟺ g(x) ∈ L.

L0 ≤_R L if L0 is R-reducible to L, where R is the complexity requirement on the transducer. So, e.g., R can be: (1) constant-space reduction (also called finite-space transduction or reduction), (2) log n-space reduction (which also implies polynomial-time reduction), (3) polynomial-time reduction, (4) polylog-space reduction, (5) poly-space reduction, etc.

Observations:
1. If L0 ≤_ptime L then
   (a) L ∈ NP ⟹ L0 ∈ NP
   (b) L ∈ P ⟹ L0 ∈ P

2. If L0 ≤_logspace L then
   (a) L ∈ P ⟹ L0 ∈ P
   (b) L ∈ NP ⟹ L0 ∈ NP
   (c) L ∈ DSPACE(log n) ⟹ L0 ∈ DSPACE(log n). The proof is not obvious; it will be shown in class.
Definition 29. Let C be a complexity class (e.g. NP, P, PSPACE, etc.). Let L be a language. We say that L is C-hard under R-reduction if L0 ≤_R L for all L0 in C. (So, e.g., if R is ptime and C = NP, then L is called NP-hard under polynomial-time reduction.)

Definition 30. L is C-complete under R-reduction if
1. L is C-hard under R-reduction, and
2. L is in C. (So, e.g., L is NP-complete under polynomial-time reduction.)

If C = polynomial space and R is polynomial time, L is called PSPACE-complete. If R is not specified, it is understood to be polynomial time.
Examples:

Lsat = {F | F is a Boolean formula, F is satisfiable}

Lsat is in NP ("just guess the assignment and verify").

L3sat = {F | F is a Boolean formula in CNF with at most 3 literals per clause} (a literal is a variable or its negation).

Lpartition = {x1# ⋯ #xk | k ≥ 1, the xi's are positive integers, and there is a partition P1, P2 of the integers such that the sum of P1 equals the sum of P2}.

Lpartition is in NP.
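The guess-and-verify argument for Lpartition can be made concrete. The sketch below (my own illustration, not part of the notes; the function name is invented) replaces the nondeterministic guess of P1 by brute force over all subsets.

```python
from itertools import combinations

def in_L_partition(xs):
    """Return True iff the positive integers xs split into two parts of equal sum.

    Brute force over subsets plays the role of the NP 'guess'."""
    total = sum(xs)
    if total % 2 != 0:
        return False
    target = total // 2
    # Try every subset as candidate P1; P2 is the complement.
    for r in range(len(xs) + 1):
        for subset in combinations(range(len(xs)), r):
            if sum(xs[i] for i in subset) == target:
                return True
    return False
```

Verifying one guessed subset takes polynomial time; only the exhaustive search over guesses is exponential, which is exactly the NP picture.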
We will show that Lsat is NP-hard, i.e., Lsat ∈ P ⟹ P = NP. Formally, L ≤_logspace Lsat for every L in NP.
Consider straight-line programs using only the constructs:

x ← 0,  x ← 1,  x ← y,  x ← y + z,  x ← y ∗ z,  x ← ¬y

Each program is of the form:

P: input(x1, …, xk)
   ⋮            (the body uses only the constructs defined above)
   output(y)

Assume that the input domain is {0, 1}, i.e., each xi can only assume a 0/1 value. The length of P = |P| = the number of characters in P.

L_NZ = {P | P ≢ 0, i.e., P outputs a nonzero value for some x1, …, xk in {0, 1}}.

Claim: L_NZ is NP-complete.

Proof:

(a) L_NZ is in NP: Given P, guess the values of x1, …, xk in {0, 1} and evaluate P.

(b) L_NZ is NP-hard: Given a Boolean formula F, we construct a program PF such that F is satisfiable ⟺ PF ≢ 0. Assume F is in 3CNF. Let F = C1 ∧ ⋯ ∧ Cm, where each clause Ci is a disjunction of at most 3 literals. For example, C1 = x1 ∨ ¬x2 ∨ x3. The program PF is defined as follows (where x1, …, xk represent the variables of F):
input(x1, …, xk)
x̄1 ← ¬x1
  ⋮                 (x̄1, …, x̄k are new variables)
x̄k ← ¬xk
C1 ← 0
C1 ← C1 + x1
C1 ← C1 + x̄2       (C1 computes the value of the clause C1 from the example)
C1 ← C1 + x3
C2 ← 0
  ⋮                 (C2 computes the value of the clause C2)
  ⋮
Cm ← 0
  ⋮                 (Cm computes the value of the clause Cm; C1, …, Cm are new variables)
w ← 1
w ← w ∗ C1
  ⋮
w ← w ∗ Cm
y ← w
output(y)
y Clearly, PF 6 0 , F is satis able. Also, PF can be constructed from F by a TM transducer in
2
log n space. Hence L3sat logspace LNZ .
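The construction of PF can be checked by direct simulation over the 0/1 domain. The following sketch (illustrative code, not part of the notes; encoding a literal as a (variable index, negated) pair is my choice) evaluates each clause as a sum of literal values and the output as the product of the clause values, so the "nonzero for some input" test coincides with satisfiability.

```python
from itertools import product

def pf_output(clauses, assignment):
    """Evaluate P_F on one 0/1 assignment: each clause value is the sum of
    its literal values; the output is the product of the clause values."""
    w = 1
    for clause in clauses:
        c = 0
        for (i, neg) in clause:
            c += (1 - assignment[i]) if neg else assignment[i]
        w *= c
    return w

def pf_nonzero(clauses, k):
    """P_F is in L_NZ iff it outputs a nonzero value for some 0/1 input."""
    return any(pf_output(clauses, a) for a in product([0, 1], repeat=k))

def satisfiable(clauses, k):
    """Brute-force satisfiability of the 3CNF, for comparison."""
    return any(all(any((a[i] == 0) if neg else (a[i] == 1) for (i, neg) in cl)
                   for cl in clauses)
               for a in product([0, 1], repeat=k))
```

On every formula, `pf_nonzero` and `satisfiable` agree, which is the correctness claim of the reduction.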
Next, we show that the zero-equivalence problem for straight-line programs is NP-hard when we restrict the input domain to be a finite set of integers.

Theorem 19. Let D be any finite set of integers with at least two elements. Then the zero-equivalence problem for {z ← 1, z ← x + y, z ← x − y, z ← x ∗ y}-programs over D is NP-hard.
Proof: We shall reduce the satisfiability problem for Boolean formulas in conjunctive normal form (CNF) to our problem. So let F = C1 ∧ ⋯ ∧ Cm be a Boolean formula in CNF over variables x1, …, xn. Assume that each Ci contains exactly three literals. (A literal is a variable or its negation.) We shall construct a program P such that P(a1, …, an) = 0 for all ai in D if and only if F is not satisfiable. P has input variables X1, …, Xn, output variable z, and auxiliary variables which include X̄1, …, X̄n, z1, …, zm, etc. Assume that D = {d1, …, dk}, where k ≥ 2 and d1 < ⋯ < dk. The relationship between the logical formula F and the program P constructed below is as follows: for each truth assignment to the variables x1, …, xn there corresponds one or more input assignments to the variables X1, …, Xn such that the truth assignment satisfies F if and only if each of the corresponding input assignments produces a nonzero output. The correspondence between the assignments is the following: if xi is assigned 'false,' then Xi is assigned d1; if xi is assigned 'true,' then Xi is assigned dj for some j > 1. The program P is the following:
Segment 1. For i = 1, 2, …, n, write the code to perform the following task:

Xi ← Xi − d1

(If the input is (a1, …, an) in D^n, then after segment 1, Xi = 0 if ai = d1; otherwise, Xi > 0.)

Segment 2. For i = 1, 2, …, n, write the code to perform the following task:

X̄i ← ((d2 − d1) − Xi) ∗ ((d3 − d1) − Xi) ∗ ⋯ ∗ ((dk − d1) − Xi)

(If Xi = 0, then X̄i > 0; if Xi > 0, then X̄i = 0.)

Segment 3. For i = 1, 2, …, m, write the code to perform the following task:

zi ← sum of the variables representing the literals of Ci

(If Ci is satisfied, then zi > 0; if Ci is not satisfied, then zi = 0.)
Segment 4.

z ← 1
z ← z ∗ z1
  ⋮
z ← z ∗ zm

Clearly, we can construct the program P in time polynomial in the size of F. (Note that an instruction of the form z ← c, where c > 1, can be coded over {z ← 1, z ← x + y, z ← x − y, z ← x ∗ y} using at most O(log c) instructions.) Moreover, P computes the zero function if and only if F is not satisfiable. The result now follows, since the satisfiability problem is NP-hard. □
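Segments 1–4 can be exercised directly. The sketch below (my own code, not part of the notes; the clause encoding as (variable index, negated) pairs is a hypothetical convention) evaluates the program P of Theorem 19 on every input tuple over D and checks zero-equivalence against satisfiability.

```python
from itertools import product

def segments_output(clauses, D, inputs):
    """Evaluate the program P of Theorem 19 on one input tuple over D."""
    ds = sorted(D)
    d1 = ds[0]
    X = [a - d1 for a in inputs]               # Segment 1: Xi = 0 iff ai = d1
    Xbar = []
    for xi in X:                               # Segment 2: Xbar_i > 0 iff Xi = 0
        p = 1
        for dj in ds[1:]:
            p *= (dj - d1) - xi
        Xbar.append(p)
    z = 1                                      # Segments 3 and 4
    for clause in clauses:
        zi = sum(Xbar[i] if neg else X[i] for (i, neg) in clause)
        z *= zi
    return z

def zero_equivalent(clauses, n, D):
    """P computes the zero function over D iff F is not satisfiable."""
    return all(segments_output(clauses, D, a) == 0
               for a in product(sorted(D), repeat=n))
```

Note that the product in Segment 2 is zero exactly when some factor (dj − d1) − Xi vanishes, which happens for every input value other than d1; this is why the construction works over any finite D.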
Note. The proof of Theorem 19 above implies that the inequivalence problem for polynomial expressions over a finite set of integers is NP-complete. When the domain is the set of all integers, the problem is probabilistically decidable in polynomial time.

Lsat = {F | F is a Boolean formula, F is satisfiable}.

e.g., F = (x1 + x2) · (x3 · x4 + x5)

(+ is sometimes written as ∨; · is sometimes written as ∧.)
Theorem 20.
1. Lsat is in NP.
2. Lsat is NP-hard under log-space reduction. (i.e., for all L′ ∈ NP, L′ ≤_logspace Lsat.)

Proof:

1. Guess and verify.

2. Let L′ be in NP. Let M be a p(n)-time single-tape NTM accepting L′, p(n) a polynomial. Assume there are exactly 2 choices per move. Given x = a1 a2 ⋯ an, a log-space transducer constructs a formula Fx such that

x is accepted by M ⟺ Fx is satisfiable.
Accepting computation:

Ax = #α0#α1# ⋯ #αp(n)

where each αi is an ID of length p(n), α0 is the initial ID, and αp(n) is an accepting ID.

Observe:

1. α0 = [q0, a1, 0] a2 ⋯ an λ ⋯ λ or [q0, a1, 1] a2 ⋯ an λ ⋯ λ, where the number of λ's is p(n) − n, 0 denotes the 1st choice, and 1 denotes the 2nd choice.

2. αp(n) = [f, −, −] λ ⋯ λ, of length p(n).

3. αi ⊢ αi+1.

4. In Ax = #α0#α1# ⋯ #αp(n), the first # is at position 0, the second # is at position p(n)+1, etc., and the last # is at position p(n)(p(n)+1). The last symbol is at position (p(n)+1)² − 1.
x pn Let = f#g f q; X; 0]; q; X; 1] j q in Q, X in g.
How do we de ne F ?
x For every 0 i (p(n) + 1)2 1 and every X in , create a new Boolean variable C .
iX We will construct the formula F over the variables C 's \representing" A s.t. F (with the C
assigned 1 () the i position in A is X) is true () M accepts x.
x th iX x e:g:; in A = # 0 # 1 # # ( )
x pn C0# corresponds to position 0, and
C1 0 1 0] or C1 0 1 1] corresponds to position 1,
q ;a ; ..
Fx = F1 ∧ F2 ∧ F3 ∧ F4

F1 = ∧_{0 ≤ i ≤ (p(n)+1)²−1} [ (∨_{X ∈ ∆} C_{iX}) ∧ (∧_{X ≠ Y} ¬(C_{iX} ∧ C_{iY})) ]

(The first term inside the [ ] means "at least one C_{iX} is true," and the second term means "no more than one C_{iX} is true.")

F2 = C_{0,#} ∧ C_{p(n)+1,#} ∧ (C_{1,[q0,a1,0]} ∨ C_{1,[q0,a1,1]}) ∧ C_{2,a2} ∧ C_{3,a3} ∧ ⋯ ∧ C_{n,an} ∧ (∧_{n < i ≤ p(n)} C_{i,λ})

(F2 represents the initial ID.)

F3 = ∨_{p(n)(p(n)+1)+1 ≤ i ≤ (p(n)+1)²−1} ( ∨_{X an accepting composite symbol} C_{iX} )

(F3 means the last ID is accepting.)

Dependency:

Ax = # ⋯ # [ ⋯ W X Y ⋯ ] # [ ⋯ Z ⋯ ] # ⋯
               (ID)           (next ID)

where W is at position j − p(n) − 2, X is at j − p(n) − 1, Y is at j − p(n), and Z is at j.

Define the relation G = {<W, X, Y, Z> | Z can occur as shown above}. Note that G is finite and can be computed independent of the input.

F4 = ∧_{p(n)+1 ≤ j ≤ (p(n)+1)²−1} [ ∨_{(W,X,Y,Z) ∈ G} ( C_{j−p(n)−2,W} ∧ C_{j−p(n)−1,X} ∧ C_{j−p(n),Y} ∧ C_{j,Z} ) ]
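The per-cell constraint of F1 is just an "exactly one symbol per cell" condition, which can be illustrated in a few lines (a hypothetical helper of my own, not from the notes; the assignment maps each symbol X to the value of C_{iX} for one fixed cell i):

```python
from itertools import combinations

def f1_cell(assignment):
    """F1 restricted to one cell i:
    (OR over X of C_iX) AND (AND over X != Y of NOT(C_iX AND C_iY))."""
    at_least_one = any(assignment.values())
    at_most_one = all(not (assignment[x] and assignment[y])
                      for x, y in combinations(assignment, 2))
    return at_least_one and at_most_one
```

F1 is the conjunction of this condition over all (p(n)+1)² cells; a satisfying assignment of F1 therefore decodes to a unique string over ∆, which F2, F3, and F4 then constrain to be an accepting computation.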
L_{1DFA≠∅} = {M | M is a 1DFA, L(M) ≠ ∅}
L_{2DFA≠∅} = {M | M is a 2DFA, L(M) ≠ ∅}

Similarly, we have L_{1NFA≠∅} and L_{2NFA≠∅}.

L_{1DFA≠Σ*} = {M | M is a 1DFA with input alphabet Σ, L(M) ≠ Σ*}
L_{2DFA≠Σ*} = {M | M is a 2DFA with input alphabet Σ, L(M) ≠ Σ*}

Claim:

1. L_{1NFA≠∅} is in P.

Proof: Given a 1NFA M, L(M) ≠ ∅ ⟺ there is a directed path from the start state to an accepting state. Use a shortest-path algorithm.

2. L_{1DFA≠Σ*} is in P.

Proof: Given a 1DFA M, construct a 1DFA M̄ accepting the complement of L(M). Then L(M) ≠ Σ* ⟺ L(M̄) ≠ ∅. Note that M̄ is just M with accepting states becoming rejecting states and rejecting states becoming accepting states.
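Claim 1 is exactly graph reachability. A minimal sketch (my own encoding of a 1NFA as a transition dictionary; names are invented):

```python
from collections import deque

def nfa_nonempty(n_states, transitions, start, accepting):
    """L(M) != emptyset iff an accepting state is reachable from the start.
    transitions: dict mapping (state, symbol) -> set of next states."""
    # Collapse the NFA to a directed graph on states and do a BFS.
    succ = {q: set() for q in range(n_states)}
    for (q, _sym), targets in transitions.items():
        succ[q] |= set(targets)
    seen, queue = {start}, deque([start])
    while queue:
        q = queue.popleft()
        if q in accepting:
            return True
        for p in succ[q] - seen:
            seen.add(p)
            queue.append(p)
    return False
```

BFS runs in time linear in the size of the transition table, so this is comfortably in P.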
3. L_{1NFA≠0*} = {M | M is a 1NFA with input alphabet {0} (i.e., a unary alphabet), L(M) ≠ 0*} is NP-complete.

Proof:

(a) NP-hardness: We show how, given a 3CNF formula F, we can construct a 1NFA MF over the input alphabet {0} such that F is not satisfiable ⟺ L(MF) = 0*, or, equivalently, F is satisfiable ⟺ L(MF) ≠ 0*.

Let F = C1 ∧ ⋯ ∧ Cm. Let x1, …, xk be the variables in F.

i. Pick the first k prime numbers p1, …, pk.

ii. Construct a 1NFA MF with Σ_{i=1}^{k} pi states to do the following, given input 0^l:

- choose "nondeterministically" a clause Ci;
- verify that Ci is not satisfied for the assignment represented by the input 0^l:

  xi = 1 if l mod pi = 0, and xi = 0 otherwise.

If Ci is not satisfied, accept; else reject.

Hence, L(MF) ≠ 0* ⟺ F is satisfiable. It can be shown that the construction of MF from F can be carried out by a log-space transducer. □
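The prime-number trick can be tested on small formulas: by the Chinese Remainder Theorem, every 0/1 assignment is represented by some input length l below the product of the primes. The code below (illustrative, not from the notes; the clause encoding as (variable index, negated) pairs is my own) checks L(MF) ≠ 0* against satisfiability by scanning that range.

```python
from itertools import product
from math import prod

def assignment_from_length(l, primes):
    """The input 0^l represents the assignment x_i = 1 iff l mod p_i == 0."""
    return [1 if l % p == 0 else 0 for p in primes]

def mf_language_is_not_all(clauses, primes):
    """L(M_F) != 0* iff M_F rejects some 0^l, i.e. the assignment
    represented by some l satisfies every clause of F.  Every 0/1
    assignment occurs for some l < p_1 * ... * p_k (CRT), so scanning
    that finite range is exhaustive."""
    def satisfies(a):
        return all(any((a[i] == 0) if neg else (a[i] == 1) for (i, neg) in cl)
                   for cl in clauses)
    return any(satisfies(assignment_from_length(l, primes))
               for l in range(prod(primes)))
```

So `mf_language_is_not_all` agrees with satisfiability of F, which is the correctness claim of the reduction (the real construction never scans this range, of course; MF checks the residues with its finite states).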
(b) L_{1NFA≠0*} is in NP.

Proof: Let M be a 1NFA with n states. (Then we know that there is a 1DFA with 2^n states equivalent to M.) Hence L(M) ≠ 0* ⟺ M does not accept some string of length r ≤ 2^n. We can represent M by a Boolean matrix A_M: A_M is n × n, where n is the number of states, and

A_M(i, j) = 1 if there is a transition from state i to state j, and 0 otherwise.

To determine if there is a string that is not accepted by M, we "guess" r ≤ 2^n and compute A_M^r and check that in A_M^r, the (1, t) entry is 0 for each accepting state t. Since A_M^r can be computed in O(log r) Boolean matrix multiplications, the computation can be done in polynomial time. □
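The O(log r) bound comes from repeated squaring. A sketch over Boolean matrices (helper names are my own):

```python
def bool_mat_mult(A, B):
    """Boolean matrix product: C[i][j] = OR over k of (A[i][k] AND B[k][j])."""
    n = len(A)
    return [[any(A[i][k] and B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bool_mat_power(A, r):
    """Compute A^r with O(log r) Boolean matrix multiplications
    (repeated squaring), as in the NP membership argument."""
    n = len(A)
    result = [[i == j for j in range(n)] for i in range(n)]  # identity
    while r > 0:
        if r & 1:
            result = bool_mat_mult(result, A)
        A = bool_mat_mult(A, A)
        r >>= 1
    return result
```

Entry (i, j) of A^r is true exactly when state j is reachable from state i in r steps, so A^r exposes which states are reachable on inputs of length r even for r as large as 2^n.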
Definition 31. L is PSPACE-complete if
1. L is in PSPACE, and
2. L is PSPACE-hard, i.e., L0 ≤_ptime L for all L0 ∈ PSPACE.

Quantified Boolean Formulas

A Boolean formula where each variable is quantified by ∃ or ∀ is a quantified Boolean formula (qbf). Note that every qbf is either true or false.

Example: F = ∃x1 ∀x2 ∃x3 [(x1 + x2) · (x2 + x3) · (x̄2 + x̄3)] is true.

A qbf F is in (prenex) normal form if it is of the form F = Q1x1 Q2x2 ⋯ Qnxn ψ, where each Qi is either ∃ or ∀, and ψ is a Boolean formula without quantifiers over the variables x1, …, xn. Here we generalize the definition of Boolean formulas by allowing constants 1 (for true) and 0 (for false) as operands. It can be shown that any qbf can be converted to normal form in polynomial time.
Theorem 21. QBF = {F | F is a qbf, F is true} is PSPACE-complete.

Proof: To evaluate a given qbf F, we execute the following recursive procedure, EVAL(F), which returns 1 (for true) or 0 (for false):

1. If F contains no quantifiers (i.e., it only contains constants), evaluate it and output its value (which is either 1 for true or 0 for false).

2. If F is of the form ∃xG, then output EVAL(G1) + EVAL(G0), where for s = 1, 0, Gs is G with s substituted for x.

3. If F is of the form ∀xG, then output EVAL(G1) · EVAL(G0).

Clearly EVAL(F) can be converted to a nonrecursive procedure that runs in polynomial space. Hence, QBF is in PSPACE.
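EVAL translates almost verbatim into code. In the sketch below (my own interface, not from the notes: the quantifier-free matrix is passed as a function from an assignment to 0/1), + becomes `or` and · becomes `and`:

```python
def qbf_eval(prefix, matrix):
    """EVAL for a prenex qbf.
    prefix: list of ('E'|'A', variable_name) pairs, outermost first.
    matrix: function from an assignment dict to 0/1."""
    def eval_rec(i, assignment):
        if i == len(prefix):
            return matrix(assignment)                 # step 1: no quantifiers left
        q, x = prefix[i]
        v1 = eval_rec(i + 1, {**assignment, x: 1})    # G with 1 substituted for x
        v0 = eval_rec(i + 1, {**assignment, x: 0})    # G with 0 substituted for x
        # step 2 ('E'): OR of the two branches; step 3 ('A'): AND of them
        return (v1 or v0) if q == 'E' else (v1 and v0)
    return eval_rec(0, {})
```

The recursion depth is the number of quantifiers, and each frame stores one variable binding, so the space used is polynomial even though the running time is exponential.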
To show that QBF is PSPACE-hard, let M be a p(n) space-bounded single-tape TM, where p(n) is a polynomial. We have to show that there is a polynomial-time transducer that maps any string x = a1 ⋯ an to a qbf Fx such that x is accepted by M if and only if Fx is true. The construction is similar to the proof that Lsat is NP-hard, but uses the quantifiers ∃ and ∀ to make the representation of the accepting computation Ax of M on input x "succinct". We omit the proof.

Two-player Games
QBF can be viewed as a game. We have 2 players, E and A. Given a qbf F = Q1x1 Q2x2 ⋯ Qnxn ψ, E and A take turns assigning values to the xi's (starting with x1), with E assigning values to the variables bound by ∃ and A assigning values to the variables bound by ∀. If, with the assigned values, ψ is true, then E "wins"; otherwise A "wins". If E wins no matter what values A selects, then we say that E has a winning strategy.

Example: Let F = ∃x1 ∀x2 ∃x3 [(x1 + x2) · (x2 + x3) · (x̄2 + x̄3)].

E selecting x1 = 1 and x3 to be the negation of what A selects for x2 will always result in ψ being true. Hence E has a winning strategy for the qbf F.

Claim: GAME = {F | F is a qbf, player E has a winning strategy on F} is PSPACE-complete.
Here, we are given a directed graph G, with nodes 1; ; n (representing cities) and a node v. There
are two players, P1 and P2. Starting with node v, the two players alternately take turns in moving
(via a directed edge) from a node where the previous player just visited. P1 starts the game at
node v. The rule is that no node can be visited more than once. The player who gets \stuck"
loses the game, and the other wins the game. We say that P1 has a winning strategy if P1 wins no
matter what moves P2 makes. c 82 O.H.Ibarra Claim: GEOGRAPHY = f< G; v > jG is a directed graph and v a node, player P1 has a winning
strategy g is PSPACEcomplete.
Proof: Given a graph G and vertex v, define a recursive procedure EVAL(G, v):

1. Construct a new graph G1 by deleting v and all its incoming and outgoing edges. Let v1, …, vm be the nodes pointed at by v originally.

2. If EVAL(G1, vi) accepts for some vi, then reject; else accept.

Clearly EVAL(G, v) can be converted to a nonrecursive procedure that runs in polynomial space. Hence, GEOGRAPHY is in PSPACE.
to GEOGRAPHY. We omit the proof. Claim: L2DFA= is PSPACE complete.
Proof: First we show that L2DFA= is in PSPACE . Let M be a 2DFA with n states. Consider
6 the computation of M on input /a1 ak $. We may assume that M has no stationary moves, i.e.,
c
if (q; a) = (p; d), then d = +1 or d = 1. Also, assume that the \change" in state from q to p
occurs while M is moving its head to +d (i.e. to the left or right of the current position). The
computation of M has the form shown in Figure 6.2. Clearly, M crosses the boundary between any
6 Imaginary Symbol
c
/ a1 a2 a3 ak $ q0 f Figure 6.2: Computation of M
2 cells (i.e. symbols) an odd number of times. The sequence of states the boundary is crossed is
called the crossing sequence at that boundary. Clearly the length of the crossing sequence at any
boundary for an accepting computation is at most (2n 1); otherwise M is in an in nite loop.
The nondeterministic TM N when given the speci cation of M does the following:
1. Let C0 = initial crossing sequence between the imaginary symbol cell and / be equal to
c
< q0 >.
2. \Guess" a1 and \guess" the crossing sequence C1 at the boundary between / and a1 and write
c
it on a worktape. (This needs O(n log n) space.) c 83 O.H.Ibarra 3. Check that C1 is \compatible" with C0 w.r.t. the rules of M .
4. \Guess" a2 and \guess" the crossing sequence C2 at the boundary between a1 and a2 and
write it on a worktape (C2 takes the place C0 ).
5. Check that C2 is \compatible" with C1 w.r.t. the rules of M . The process is repeated until the
symbol $ is guessed. Then the nal crossing sequence between $ and the imaginary symbol is
< f > which should be checked for compatibility with the crossing sequence between ak and
$.
Clearly, N uses polynomial space. By Savitch's theorem, N can be converted to a deterministic
TM which also uses polynomial space.
2
Corollary 11
Proof: L NFA6 is in PSPACE .
2 = Same as above. states can be converted to an equivalent 1NFA M 0 with Corollary 12 Every 2NFA M with n
(n + 1)(2n 1) states = O(2n log n ) states. The states of the 1NFA are crossing sequences of lenght at most 2n 1. The start state is
< q0 > and the accepting state is < f >. There are (n + 1)(2n 1) distinct crossing sequences. 2
We now show that L2DFA6= is PSPACE hard.
Proof: Let M be a polyspace TM. We may assume that M is singletape. Assume that M is nr
space bounded for some r, and f is the only accepting state. Let x = a1 an be the input to M .
If M accepts x, the accepting computation of M on x can be described by a sequence of ID's:
Proof: ID #ID # #IDk
1 where 2 ID = q  ]
a
1 0 an
{z 1 IDk = f ; X ]X nr 1 2 } Xnr IDi is the direct successor of IDi
The 2DFA Nx when given input /y$, does the following:
c
1. Check that the input is wellformed. (This requires O(1) states.)
+1 2. Check that ID1 and IDk are the initial ID and accepting ID, respectively. (This requires
O(nr ) states.)
3. Check that IDi+1 is the direct successor of IDi w.r.t. the rules of M . (This requires O(nr )
states.) c 84 O.H.Ibarra Thus, Nx has O(nr ) states. Clearly, L(Nx ) =
M accepts x.
2 Claim: L1NFA6= is
PSPACE complete.
Proof: To show that L1NFA6= is PSPACE hard, let M be a polyspace TM and x = a1 an
be an input. De ne the language:
L = x x = ID1 #ID2# #IDk , where ID1 # #IDk is the accepting computation of M on
input a1 an .
(See previous proof.)
We can construct a 1NFA, Nx , which when given an input y nondeterministically does one of
the following:
6 f j , 6 g 1. Checks that the input is not of the form ID1 # #IDk , i.e., not well formed or 2. checks that either ID1 is not the start ID or IDk is not the accepting ID, or
3. checks that IDi+1 is not a successor of IDi (by nondeterministically choosing the position of
discrepancy).
Clearly, Nx has a polynomial (in n) number of states, and L(Nx ) =
M accepts x.
To see that L1NFA6= is in PSPACE , consider a 1NFA M with n states. We know that by the
\subset construction" that we can construct a 1DFA equivalent to M which has at most 2n states.
Rather than building the 1DFA, a nondeterministic TM, N can be constructed which \guesses"
the inputs to M and uses a worktape to keep track of all \possible states" M can enter on the
\guessed" inputs, like in the subset construction. N need only record n states. N rejects if after
processing some string of symbols, all the states reached are nonaccepting. Clearly, N accepts
M L(M ) = :
2
Let L∩ = {A1 # ⋯ # Ak | k ≥ 1, each Ai is a 1NFA, L(A1) ∩ L(A2) ∩ ⋯ ∩ L(Ak) ≠ ∅}.

Note that to determine whether L(A1) ∩ L(A2) ∩ ⋯ ∩ L(Ak) ≠ ∅, we can construct a 1NFA A accepting L(A1) ∩ L(A2) ∩ ⋯ ∩ L(Ak) and check whether L(A) ≠ ∅. But A will have n1 ⋯ nk states, where ni is the number of states of Ai. Since k is not fixed (i.e., it is a parameter), the space needed to construct A is exponential in the size of A1 # ⋯ # Ak.

We can show that L∩ is in PSPACE; in fact, it is PSPACE-complete.

Claim: L∩ is in PSPACE.

Proof: We describe a nondeterministic single-tape (basic) TM M which accepts A1 # ⋯ # Ak ⟺ L(A1) ∩ L(A2) ∩ ⋯ ∩ L(Ak) ≠ ∅. The tape of M initially contains the input, i.e., the specifications of A1, …, Ak. M operates as follows:

1. Mark the start states q01, …, q0k of A1, …, Ak, respectively.

2. "Guess" the next input symbol and update the marked states of A1, …, Ak.

3. Repeat step 2 until all the marked states are accepting.

[Figure: the R/W tape of M holds the specifications A1 # A2 # ⋯ # Ak, with the current state of each Ai marked.]

Clearly, M is n space-bounded, where n = the length of A1 # ⋯ # Ak. M can then be converted to an n² space-bounded deterministic TM. □
Proof: Let M be a p(n) spacebounded singletape DTM, where p(n) is a polynomial. Let x =
a1 a be an input to M . Like before, if M accepts x, the accepting computation of M on x can
be represented by a string ID1 # #ID where,
k n k ID = q ; a ]a
1 0 1 2 {z a n } p(n) ID = f ; x ]x x
is the direct successor of ID w:r:t: the rules of M
1 k ID i+1 2 p(n) i f is the only accepting state.
We can construct p(n) + 1 1DFA's A ; A ;
0 1 ;A p(n) , where 1. A0 is a 1DFA which when given input y checks that the input is wellformed, and ID1 and
ID2 are the start and accepting ID's. Clearly A needs O(p(n)) states.
2. For r = 1; 2; ; p(n), 1DFA A checks that the r 1; r; r + 1 bits of ID1 ; ID2 ;
compatible w.r.t. the rules of M . Clearly, A needs O(p(n)) states.
r ; ID are
k r It is easy to see that L(A0 ) \ \ L(A ) 6= , M accepts x:
p(n) 2 The Complexity of Program Evaluation
The evaluation problem for a class of programs is, given a program in the class along with corresponding inputs for the program, to determine the output of the program when executed. Here c 86 O.H.Ibarra we examine a simple variation of the problem for which the input values are always assumed to
be zero. We call this the 0evaluation problem. More formally, let be a class of programs over
the nonnegative integers. Assume the programs have only 1 output variable. We are interested
in the problem of deciding given an arbitrary program in whether outputs 0 when all its
variables are initially 0. More precisely, let ( ) = f j in and outputs 0 when all its
variables are set to 0 g. It follows from the undecidability of the halting problem for programs that,
in general, ( ) is not recursive. However, when the programs always halt (as they do, e.g., for
 programs and  programs), ( ) is recursive, and it is interesting to determine its complexity.
C P EC C PP P C P EC L Q EC Consider the class f
1,
+,
,
g of programs without nested loops
(i.e., the 1 programs). We will show that ( 1 ) has lower and upper time complexities ( ) and
( ), respectively, for some nonnegative rational constants
( is the length of the program
being evaluated), where ( ) is de ned by: (1) = 2, ( + 1) = 2 ( ) .
: x x x y x x Q y do x end EQ g cn g dn c<d P gx g n gx gx In the proof, we will use the language, . Recall that is the class of programs using only constructs
0,
+1
1,
=0
,
,
( is unrestricted, , it can be a
\forward" or \backward" label.) Note that the construct
0 can be deleted from . As we have
seen earlier, programs compute exactly the partial recursive functions. Let be a program in
with one input variable. Assume halts on all inputs. The set of nonnegative integers accepted
by is the set = f 0 j on input 0 outputs 0 g. We say that has time complexity (or runs
in time) ( ) if for all inputs 0 such that j 0 j = length of the binary representation of 0 = ,
halts after at most ( ) instructions. The length of the binary representation of is denoted by j j.
G G : x x x x x if x then goto r goto r halt r i:e: x G G P G P P L x P x Tn P x x x Tn P n P P The intermediate step is given by the following lemma.
2 Lemma 7 Let ( ) = 2 gnlevels. There is a set of nonnegative integers with the following
2 gn L properties:
1. L cannot be accepted by a g(n)time bounded TM.
2. There is a positive integer k such that L can be accepted by a program P in G in time g(n + k). Proof: The rst statement follows from the time (or space) hierarchy theorem for TM's. The second statement follows from the fact that every ( ) timebounded TM can be simulated by a
Tn
for some constant . We can construct from
( ) timebounded program, where ( )
program simulating 1 , running in time ( + 1 + 2 ) = ( + ).
1, a
Tn T Z 0 n G T G Z 0 n c c () gn c k k gn k Theorem 22 There are positive rational constants and such that the time complexity of recogc d nizing E (Q1 ) has lower and upper bounds of g(cn) and g(dn), respectively. Proof: Let , , and ( + ) be as de ned in the Lemma 7. We shall describe an algorithm
L P gn k which for any nonegative integer x0 constructs a program Px0 in Q1 such that: c 87 O.H.Ibarra (i) jPx0 j = O(jP j2 + jx0 j).
(ii) Px0 outputs 0 if and only if P on input x0 outputs 0.
Assuming for now that (i) is true. Let n = jx0 j and m be a positive integer such that jPx0 j < mn.
Now suppose that for every positive rational constant c, there is a g(cn)time bounded TM Z
accepting E (Q1 ). Then we can construct from Z a TM Z2 accepting E as follows: Z2 when given
a nonnegative integer x0 does the following:
(1) Z2 constructs Px0 .
(2) Z2 employs Z to determine if Px0 outputs 0. Z2 accepts x0 if and only if Px0 outputs 0.
We shall see that the time needed for step (1) is less than the time needed for step (2). Hence
Z2 has time complexity less than 2g(cmn), which is less than g(n) for a small enough c. This
contradicts Lemma 7.
We now describe the construction of Px0 . Px0 will have the following form:
(1) Generate x0 .
(2) Generate the number l = g(n + k) where n = jx0 j.
(3) Simulate P on input x0 .
The section of code to generate x0 places the nonnegative integer x0 in the variable x. This
takes O(jx0 j) statements using the constructs x 1 and x x + y. The section of code to generate
the number l = g(n + k) is given below: w n+k ..
. This can be coded using the constructs
x 1 and x x + y.
There are n + k code segments . where is the code:
do w
w w+w
end Clearly, at the end of the program segment above, we will have the value l = g(n + k).
Now P is a Gprogam. We may assume without loss of generality that P has exactly one halt
instruction which appears at the end of the program, and this is the last instruction executed. We
may also assume that every instruction in P is labeled and the instructions are sequentially labeled.
Thus P has the form
..
. 1 c 88 O.H.Ibarra r where halt and for 1 i r
i: if x = 0 then goto l. The section of
r do is r: 1; is of the form i : x x + 1; i : x x 1; i : goto l, or
code to simulate P on input x0 will have the following form:
: i w
1 ..
. 2 r end where is a sequence of instructions that simulates the instruction . The construction of
been done before.
i i i has The upper bound follows from the observation that there is a rational constant h such that any Q1
program P of length n when started with all its variables equal to 0 can have at most g(hn) value
in any of its variables during the execution of the program. Hence there is rational constant d such
that evaluating the program takes no more than g(dn) time.
Next, consider the fx x + 1, if x = 0 then y y 1, do x end gprograms without nested
loops. Call these R1 programs. It can be shown that E (R1 ) is PSPACEcomplete.
: Clearly, E (R1 ) is in PSPACE. (Why?)
To show that it is PSPACEhard, let M be a p(n) spacebounded singletape TM, where p(n) is
a polynomial. We have to show that there is a polynomialtime transducer that maps any string
x = a1 a to a program P in R1 such that x is accepted by M if and only if P evaluates to 0.
We omit the actual construction of P .
n x x Finally, consider the fx 0, x x + 1 x x y, x y, if x = 0 then goto r, goto r, do
x end gprograms with nested loops, and r denotes a \forward" label not in the scope of any
doloop. (dostatements can be labeled.) Call these S1 programs. It can be shown that E (S1 ) is in
PTIME. Note that one cannot simply do a stepbystep simulation of the program. Why?
: ...
These notes are from the course CS 220, Winter '08, taught by Professor O. Ibarra at UCSB.