Implicit State Enumeration of Finite State Machines using BDD's*

Hervé J. Touati    Hamid Savoj    Bill Lin    Robert K. Brayton    Alberto Sangiovanni-Vincentelli

Electrical Engineering and Computer Sciences Department
University of California, Berkeley, CA 94720, USA

* This project is supported in part by Defense Advanced Research Projects Agency under contract number N00039-87-C-0182 and NSF/DARPA contract MIP-8719546.

Abstract

Coudert et al. have proposed in [4] an efficient method to compute
the set of reachable states of a sequential finite state machine using BDD's. This technique can handle larger finite state machines than previously possible and has a wide range of applications in sequential synthesis, testing, and verification. At the heart of this method is an algorithm that computes the range of a set of Boolean functions under a restricted domain. Coudert et al. originally proposed a simpler and more general algorithm for range computation that was based on relations, but dismissed it as impractical for all but the simplest examples. We propose a new approach based on relations that outperforms Coudert's algorithm, with the additional advantages of simplicity and wider applicability.

1 Introduction

Efficient algorithms to perform state enumeration for finite state machines can be applied to a wide range of problems: implementation verification (checking that an implementation conforms to its specification [4, 3]), design verification (e.g. checking that an implementation or a specification satisfies fairness and liveness properties [3]), sequential testing, and sequential synthesis.
Traditional state enumeration methods cannot handle machines with more than a few million states. To break this limitation, Ghosh et al. [6] proposed an implicit enumeration technique based on cubes. Unfortunately, in their approach, sets of states can be manipulated as a unit only if they can be represented as a cube in some binary encoding. Coudert et al. [4] were the first to recognize the advantage of representing sets of states with reduced ordered binary decision diagrams (BDD's) [2]. Their technique was initially applied to checking finite state machine equivalence and was later extended by Burch et al. to the computation of temporal logic formulas [3]. Sequential testing of non-scan designs and extraction of sequential don't cares are other immediate applications of this technique.
In all these applications, the solution can be obtained by iterative use of one of two operations: the computation, for a given Boolean function, of the image of a subset of its domain, and the computation of the inverse image of a subset of its codomain. A conceptually simple and elegant method to perform these operations, originally introduced in [4], consists of forming a BDD representation of the state transition relation of a finite state machine (i.e. the single output function f(x, i, y) which is equal to 1 if y is the state reached by the machine in state x upon receiving input i). The image and reverse image of a set of states by the transition function can be obtained from the transition relation in one BDD operation.

Unfortunately, the BDD for the transition relation can often grow too large to be practical. Coudert et al. in [4, 5] proposed a recursive method for image computation that only requires the ability to compute the BDD's of the state transition functions f(x, i) = (f_1, ..., f_n). We propose a new method based on transition relations that only requires the ability to compute the BDD for f, and outperforms Coudert's algorithm for most examples.
The main contributions of this paper are: in section 2, a simple notational framework to express the basic operations used in BDD-based state enumeration algorithms in a unified way; in section 3, a set of techniques that can speed up range computation dramatically, including a new variable ordering heuristic and a new method based on transition relations. We present and discuss our experimental results in section 4.

2 Terminology and Previous Work

2.1 Image and Inverse Image Computations

In what follows B designates the set {0, 1}.
Definition 1: Let f : B^n → B^m be a Boolean function and A a subset of B^n. The image of A by f is the set f(A) = {y ∈ B^m | y = f(x), x ∈ A}. If A = B^n, the image of A by f is also called the range of f.

Definition 2: Let f : B^n → B^m be a Boolean function and A a subset of B^m. The inverse image of A by f is the set f^{-1}(A) = {x ∈ B^n | f(x) = y, y ∈ A}.
Example: Let f(x, i) : B^n × B^k → B^n be the next state function of a finite state machine, where n is the number of state variables and k the number of input variables. Let c_∞ be the set of states reachable from a set of initial states c_0. c_∞ can be obtained by repeated computation of an image as follows:

    c_{t+1} = c_t ∪ f(c_t × B^k);    c_∞ = c_t if c_{t+1} = c_t

The sequence is guaranteed to converge after a finite number of iterations.
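As an illustration (not from the paper), the fixed point can be prototyped on explicit state sets in a few lines of Python; the paper performs the same iteration on BDD's:

    from itertools import product

    def reachable(next_state, c0, k):
        """Least fixed point of c_{t+1} = c_t U f(c_t x B^k).
        next_state(s, i): successor of state s under input i (0/1 tuples);
        c0: set of initial states; k: number of input variables."""
        inputs = list(product((0, 1), repeat=k))
        c = set(c0)
        while True:
            nxt = c | {next_state(s, i) for s in c for i in inputs}
            if nxt == c:          # c_{t+1} = c_t: fixed point reached
                return c
            c = nxt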
2.2 Sets, BDD's, and Characteristic Functions

Definition 3: Let E be a set and A ⊆ E. The characteristic function of A is the function χ_A : E → {0, 1} defined by χ_A(x) = 1 if x ∈ A, χ_A(x) = 0 otherwise.

Characteristic functions are nothing but a functional representation of a subset of a set. In the rest of this paper we will not make any distinction between the BDD representing a set of states, the characteristic function of the set, and the set itself.
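The identification of sets with predicates is easy to see in executable form (a minimal illustration of Definition 3, not from the paper):

    def chi(A):
        """Characteristic function of the set A: chi_A(x) = 1 iff x is in A."""
        return lambda x: 1 if x in A else 0

    # Boolean operations on characteristic functions mirror set operations:
    # AND is intersection, OR is union, NOT is complement.
    even = chi({0, 2, 4, 6})
    small = chi({0, 1, 2, 3})
    assert [x for x in range(8) if even(x) and small(x)] == [0, 2]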
2.3 The Smoothing Operator

Definition 4: Let f : B^n → B be a Boolean function, and x = (x_{i1}, ..., x_{ik}) a set of input variables of f. The smoothing of f by x is defined as in [8], with f_a denoting the cofactor of f by literal a:

    S_x f = S_{x_{i1}} ... S_{x_{ik}} f
    S_{x_{ij}} f = f_{x_{ij}} + f_{x̄_{ij}}

If f is interpreted as a logical predicate, the smoothing operator computes the existential quantification of f relative to the x variables. If f is interpreted as the characteristic function of a set, the smoothing operator computes the projection of f onto the subspace of B^n orthogonal to the domain of the x variables. We will make use of the following simple property of the smoothing operator (the Boolean and is denoted by a dot):

Lemma 2.1: Let f : B^n × B^m → B and g : B^m → B be two Boolean functions. Then:

    S_x(f(x, y) · g(y)) = S_x(f(x, y)) · g(y)    (1)
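A minimal sketch of smoothing on an explicit onset representation (my own illustration; BDD packages implement S_x recursively on the graph), together with a small check of Lemma 2.1:

    def smooth(on, i):
        """S_{x_i} f = f_{x_i} + f_{x̄_i}: existentially quantify variable i.
        on: onset of f, a set of minterms given as 0/1 tuples."""
        return {m[:i] + (b,) + m[i + 1:] for m in on for b in (0, 1)}

    # Lemma 2.1 in miniature: smoothing x of f(x,y)·g(y) leaves g untouched,
    # since g does not depend on x.
    f_on = {(0, 1), (1, 0)}        # onset of f over (x, y)
    g_on = {(1,)}                  # onset of g over y
    lhs = smooth({m for m in f_on if m[1:] in g_on}, 0)
    rhs = {m for m in smooth(f_on, 0) if m[1:] in g_on}
    assert lhs == rhs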
2.4 The Transition Relation Method

Definition 5: Let f : B^n → B^m be a Boolean function. The transition relation associated with f, F : B^n × B^m → B, is defined as F(x, y) = {(x, y) ∈ B^n × B^m | y = f(x)}. Equivalently, in terms of Boolean operations:

    F(x, y) = ∏_{1≤i≤m} (y_i ≡ f_i(x))    (2)

We can use F to obtain the image by f of a subset A of B^n, by computing the projection on B^m of the set F ∩ (A × B^m). In terms of BDD operations, this is achieved by a Boolean and and a smooth. The smoothing and the Boolean and can be done in one pass on the BDD's to further reduce the need for intermediate storage [3]:

    f(A)(y) = S_x(F(x, y) · A(x))    (3)

The inverse image by f of a subset A of B^m can be computed as easily:

    f^{-1}(A)(x) = S_y(F(x, y) · A(y))    (4)
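The following explicit-set Python sketch (an illustration of equations (2)-(4), not the paper's BDD implementation) makes the relation, image, and inverse image concrete:

    from itertools import product

    def transition_relation(f, n):
        """F(x,y) = prod_i (y_i = f_i(x)), enumerated explicitly: the set of
        all pairs (x, f(x)), x in B^n. f maps 0/1 tuples to 0/1 tuples."""
        return {(x, f(x)) for x in product((0, 1), repeat=n)}

    def image(F, A):
        """Equation (3): AND the relation with A(x), then smooth away x."""
        return {y for (x, y) in F if x in A}

    def inverse_image(F, A):
        """Equation (4): AND with A(y), then smooth away y."""
        return {x for (x, y) in F if y in A}

    # Example: the next state function of a 2-bit counter.
    inc = lambda x: divmod((2 * x[0] + x[1] + 1) % 4, 2)
    F = transition_relation(inc, 2)
    assert image(F, {(0, 0)}) == {(0, 1)}
    assert inverse_image(F, {(0, 0)}) == {(1, 1)}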
2.5 The Generalized Cofactor

The generalized cofactor is an important new operator that can be used to reduce an image computation to a range computation. This operator was initially proposed by Coudert et al. in [4] and called the constraint operator. Given a Boolean function f = (f_1, ..., f_m) : B^n → B^m and a subset of B^n represented by its characteristic function c, the generalized cofactor f_c = ((f_1)_c, ..., (f_m)_c) is a function from B^n to B^m whose range is equal to the image of c by f. In addition, in most cases, the BDD representation of f_c is smaller than the BDD representation of f. For a single output function f : B^n → B, the pair (f, c) can be interpreted as an incompletely specified function whose onset is f · c and whose don't care set is c̄. Under this interpretation, the generalized cofactor f_c can be seen as a heuristic to select a representative of the incompletely specified function (f, c) that has a small BDD representation. The generalized cofactor f_c depends in general on the variable ordering used in the BDD representation. If c is a cube, the generalized cofactor f_c is equal to the usual cofactor of a Boolean function, and is, in that case, independent of the variable ordering.
in that case, independent of the variable ordering. (4) 131 function cofactor(f,c) {
assert (c 76 0) ;
if (c = 1 or is_constant(f))
else if (cﬁ = 0) return f;
return cofactor(f,,17 n,x );
else if (cI1 = 0) return cofactor(fﬁ7 cﬁ);
else return xl cofactor(fzl , cm) + Ty cofactor(f51—, cﬁ); Figure 1: Generalized Cofactor Algorithm Deﬁnition 6 Let c : B" —» B be a nonnull Boolean function and
Definition 6: Let c : B^n → B be a non-null Boolean function and x_1 < x_2 < ... < x_n an ordering of its input variables. We define the mapping π_c : B^n → B^n as follows:

    if c(x) = 1, π_c(x) = x
    if c(x) = 0, π_c(x) = argmin_{y, c(y)=1} d(x, y)

where d(x, y) = Σ_{1≤i≤n} |x_i − y_i| 2^{n−i}.

Lemma 2.2: π_c is the projection that maps a minterm x to the minterm y in the onset of c which has the closest distance to x according to the distance d. The particular form of the distance guarantees the uniqueness of y in this definition, for any given variable ordering.

Proof: Let y and y' be two minterms in the onset of c such that d(x, y) = d(x, y'). Each of the expressions d(x, y) and d(x, y') can be interpreted as the binary representation of some integer. Since the binary representation is unique, |x_i − y_i| = |x_i − y'_i| for 1 ≤ i ≤ n, and thus y = y'. ∎
Definition 7: Let f : B^n → B and c : B^n → B, with c ≠ 0. The generalized cofactor of f with respect to c, denoted by f_c, is the function f_c = f ∘ π_c (i.e. f_c(x) = f(π_c(x))). If f : B^n → B^m, then f_c : B^n → B^m is the function whose components are the cofactors by c of the components of f.

The generalized cofactor can be computed very efficiently in a single bottom-up traversal of the BDD representations of f and c by the algorithm given in Figure 1.
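For illustration, here is a runnable Python rendering of the Figure 1 recursion on explicit onsets (my own sketch; the paper's algorithm recurses on shared BDD nodes with memoization, which is what makes it efficient):

    def restrict(on, v):
        """Onset of f with its first variable fixed to v (minterms: 0/1 tuples)."""
        return {m[1:] for m in on if m[0] == v}

    def gcf(f_on, c_on, n):
        """Generalized cofactor f_c, following the recursion of Figure 1.
        f_on, c_on: onsets over n variables, ordered x1 < ... < xn."""
        assert c_on, "c must be non-null"
        if len(c_on) == 2 ** n or not f_on or len(f_on) == 2 ** n:
            return set(f_on)                     # c = 1, or f is constant
        f0, f1 = restrict(f_on, 0), restrict(f_on, 1)
        c0, c1 = restrict(c_on, 0), restrict(c_on, 1)
        if not c0:   # c_{x̄1} = 0: result is cofactor(f_x1, c_x1), free of x1
            return {(b,) + m for b in (0, 1) for m in gcf(f1, c1, n - 1)}
        if not c1:   # c_{x1} = 0
            return {(b,) + m for b in (0, 1) for m in gcf(f0, c0, n - 1)}
        return ({(1,) + m for m in gcf(f1, c1, n - 1)} |
                {(0,) + m for m in gcf(f0, c0, n - 1)})

    # Corollary 2.4 in action: f_c agrees with f on the onset of c.
    f = {(0, 1), (1, 0), (1, 1)}
    c = {(1, 0), (1, 1)}
    fc = gcf(f, c, 2)
    assert all((m in fc) == (m in f) for m in c)
    # Here c is contained in f, so f_c is a tautology (proposition 2.3).
    assert len(fc) == 4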
Lemma 2.3: If c is a cube (i.e. c = c_1 c_2 ... c_n where c_i ∈ {0, 1, *}), π_c is independent of the variable ordering. More precisely, if y satisfies

    y_i = 0 if c_i = 0
    y_i = 1 if c_i = 1
    y_i = x_i if c_i = *

then y = π_c(x), and f_c = f ∘ π_c is the usual cofactor of a Boolean function by a cube.

Proof: Any minterm y' in B^n such that c(y') = 1 is orthogonal to x in at least the same variables as y. Thus y minimizes d(x, y) over c. ∎

In addition, the generalized cofactor preserves the following important properties of cofactors:
Proposition 2.1: Let g : B^m → B and f : B^n → B^m. Then (g ∘ f)_c = g ∘ f_c. In particular, the cofactor of a sum of functions is the sum of the cofactors, and the cofactor of an inverse is the inverse of the cofactor.

Proof: (g ∘ f)_c = (g ∘ f) ∘ π_c and g ∘ f_c = g ∘ (f ∘ π_c). ∎
Proposition 2.2: Let f : B^n × B^m → B and c : B^n → B be two Boolean functions, with c ≠ 0. Then:

    S_x(f(x, y) · c(x)) = S_x(f_c(x, y))    (5)

Proof: If c(x) = 1 then f_c(x, y) = f(x, y). Thus f(x, y) · c(x) ⊆ f_c(x, y), and S_x(f(x, y) · c(x)) ⊆ S_x(f_c(x, y)). Conversely, if y is such that S_x(f_c(x, y)) = 1, there exists an x such that f_c(x, y) = 1. Thus f(π_c(x), y) · c(π_c(x)) = f(π_c(x), y) = 1, which implies that S_x(f(x, y) · c(x)) = 1. ∎
Proposition 2.3: Let f be a Boolean function, and c a non-null Boolean function. Then c is contained in f if and only if f_c is a tautology.

Proof: Suppose that c is contained in f. Let x be an arbitrary minterm. y = π_c(x) is such that c(y) = 1. Thus f_c(x) = f(y) = 1, which proves that f_c is a tautology. Conversely, suppose that f_c is a tautology. Let x be such that c(x) = 1. Then π_c(x) = x and f(x) = f(π_c(x)) = f_c(x) = 1, which proves that c is contained in f. ∎
Corollary 2.4: Let f be a Boolean function, and c a non-null Boolean function. Then c(x) = 1 implies that f(x) = f_c(x).

Lemma 2.5: If c is the characteristic function of a set A, then f_c(B^n) = f(A); that is, the image of A by f is equal to the range of the cofactor f_c.

Proof: π_c(B^n) is equal to the onset of c, which is A. Thus f_c(B^n) = f ∘ π_c(B^n) = f(A). ∎
2.6 The Recursive Image Computation Method

Coudert et al. [4, 5] introduced an alternate procedure to compute the image of a set by a Boolean function that does not require building the BDD for the transition relation. This procedure relies on lemma 2.5 to reduce the image computation to a range computation, and proceeds recursively by cofactoring by a variable of the input space or the output space. We use the abbreviation rg(f) to denote the range of a multiple output function f = [f_1, ..., f_m]:

    rg(f) = y_1 · rg([f_2, ..., f_m]_{f_1}) + ȳ_1 · rg([f_2, ..., f_m]_{f̄_1})
    rg(f) = rg([f_1, ..., f_m]_{x_1}) + rg([f_1, ..., f_m]_{x̄_1})
The procedure can be sped up dramatically by caching intermediate range computations, and by detecting the case where, at any step in the recursion, the functions [f_1, ..., f_m] can be grouped into two or more sets with disjoint support. The range computation can then proceed independently on each group with disjoint support. This reduces the worst case complexity from 2^m to 2^{s_1} + ... + 2^{s_k}, where (s_1, ..., s_k) are the sizes of the groups (s_1 + ... + s_k = m).
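The output cofactoring recursion is easy to prototype on explicit sets (my own illustration; the restriction of the domain stands in for the generalized cofactor, whose range equals the image of the restricted domain by lemma 2.5):

    from itertools import product

    def rng(fs, dom):
        """Range of [f1, ..., fm] restricted to dom, by the recursion
        rg(f) = y1·rg(rest|f1) + ȳ1·rg(rest|f̄1).
        fs: list of onsets (sets of input minterms); dom: set of minterms."""
        if not fs:
            return {()}
        first, rest = fs[0], fs[1:]
        on, off = dom & first, dom - first
        out = set()
        if on:                       # y1 = 1 branch
            out |= {(1,) + y for y in rng(rest, on)}
        if off:                      # y1 = 0 branch
            out |= {(0,) + y for y in rng(rest, off)}
        return out

    # Usage: the range of f(x1, x2) = (x1 AND x2, x1 OR x2) is {00, 01, 11}.
    dom = set(product((0, 1), repeat=2))
    f_and = {(1, 1)}
    f_or = {(0, 1), (1, 0), (1, 1)}
    assert rng([f_and, f_or], dom) == {(0, 0), (0, 1), (1, 1)}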
3 Heuristics

3.1 Variable Ordering Heuristics

Variable ordering heuristics are known to have a dramatic effect on BDD sizes. Good variable ordering heuristics have been developed for BDD representations of combinational circuits [7]. For sequential circuits, the variable ordering influences not only the size of the BDD representation of the transition function but also the size of the BDD representation of the set of reachable states. In addition, both for the transition relation method and the recursive image computation method, we usually need to use an ordering that interleaves input and output variables.

Our variable ordering heuristics are extensions of the heuristics described by Malik et al. in [7]. We first determine a good ordering of the next state variables (y_1, ..., y_n), or equivalently the corresponding next state functions (f_1, ..., f_n). We then use Malik's heuristics to order the supports of the functions f_i, supp(f_i), individually. Finally, we interleave the input and output variables as follows: supp(f_1), y_1, supp(f_2) − supp(f_1), y_2, ..., supp(f_n) − ∪_{1≤i≤n−1} supp(f_i), y_n.
SUPP'Zer/r. . . ..suppifn) , Uigtgniisnpptﬁ). in. To order the output functions, we want to use some permutation a of
the output functions that minimizes the following cost function, where
Al denotes the number of elements of set A: Z l U surp(fa,)l 15.3.. 1379 .)= cost(t7 Unfortunately. ﬁnding the optimal permutation is difﬁcult in general.
The best algorithm we could find is based on dynamic programming
and has complexity 0(2"). To ﬁnd an approximate solution to this
problem, we use a simple greedy algorithm with bounded lookahead
k. This algorithm computes all possible choices for the ﬁrst 1:. nine
tions, and for each choice, completes the ordering by selecting for fa,»
i Z k + 1, the function that minimizes ( Jlsjsi supplifojﬂ. Exper—
imentally, a lookahead of 2 may yield signiﬁcantly better orderings
than a lookrahead of 0, and lookaheads or 3 or more are not practi
cal for large examples We use this variable heuristics in all exam
ples reported in this paper except the MlNMAX examples, for which
a manual ordering yields signiﬁcantly better results. 3.2 Partial Product Heuristics Computing the transition relation may require too much memory to be
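A sketch of this bounded-lookahead greedy ordering (illustrative Python; the function names and the representation of supports as plain sets are my own):

    from itertools import permutations

    def order_outputs(supports, lookahead=2):
        """Try every choice of the first k functions, complete each choice
        greedily, and keep the ordering with the lowest
        cost(sigma) = sum_i |union_{j<=i} supp(f_sigma_j)|.
        supports: dict mapping function name -> set of support variables."""
        names = list(supports)
        k = min(lookahead, len(names))

        def complete(prefix):
            order = list(prefix)
            seen = set().union(*(supports[n] for n in order)) if order else set()
            rest = [n for n in names if n not in order]
            while rest:
                # greedy step: smallest growth of the accumulated support
                pick = min(rest, key=lambda n: len(seen | supports[n]))
                order.append(pick)
                seen |= supports[pick]
                rest.remove(pick)
            return order

        def cost(order):
            total, seen = 0, set()
            for n in order:
                seen |= supports[n]
                total += len(seen)
            return total

        return min((complete(p) for p in permutations(names, k)), key=cost)

    # Example: order_outputs({'f1': {'a', 'b'}, 'f2': {'b'}, 'f3': {'c'}})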
3.2 Partial Product Heuristics

Computing the transition relation may require too much memory to be feasible in some examples. However, to perform an image computation as in equation 3, we do not need to compute the transition relation explicitly. Using propositions 2.1 and 2.2, we can rewrite equation 3 as follows:

    S_x(F(x, y) · A(x)) = S_x(∏_{1≤i≤m} (y_i ≡ (f_i)_A(x)))

One efficient way to compute the product is to decompose the Boolean and of the m functions g_i(x, y) = (y_i ≡ (f_i)_A(x)) into a balanced binary tree of Boolean ands. Moreover, after computing every binary and p of two partial products p_1 and p_2, we can smooth the x variables that only appear in p. As for equation 3, the smoothing and the and computations can be done in one pass on the BDD's to reduce storage requirements. This algorithm strictly dominates all the other range computation algorithms presented in this paper, in the sense that it can handle all the examples the other techniques can handle, and a few more.
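The pairing and early smoothing can be sketched as a schedule (illustrative Python; in the actual method each and and smoothing is a BDD operation, and supports holds the x-support of each g_i):

    def and_smooth_schedule(supports):
        """Pair the partial products g_i into a balanced AND tree; after each
        binary AND, smooth the x variables that appear in no other pending
        product. supports: list of sets of x-variable names, one per g_i."""
        pending = [frozenset(s) for s in supports]
        steps = []
        while len(pending) > 1:
            merged = [pending[i] | (pending[i + 1] if i + 1 < len(pending)
                                    else frozenset())
                      for i in range(0, len(pending), 2)]
            nxt = []
            for i, m in enumerate(merged):
                others = frozenset().union(*merged[:i], *merged[i + 1:])
                now = m - others        # x vars private to this product
                steps.append((sorted(m), sorted(now)))
                nxt.append(m - now)
            pending = nxt
        return steps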
3.3 Heuristics for Iterative Image Computation

In most applications we need to iterate image computations until we reach a fixed point. We use two techniques to speed up this iterative computation. The first technique was suggested to us by R. Rudell [9]; the second was originally introduced by Coudert et al. in [4].
Reordering the Variables of the Image. In the transition relation method, the image is obtained in terms of the next state variables. To use the image as the initial set for the next iteration, we need to express it in terms of the present state variables. An efficient way to perform this computation is to order the present state variables and the next state variables in pairs, and perform the substitution in one pass over the BDD representing the image.
Next State Heuristics. Let C_i be the set of states reachable in i steps or less from the initial state. To compute C_{i+1}, we do not need to compute the image of C_i. We simply need to compute the image of any set c_i such that C_i − C_{i−1} ⊆ c_i ⊆ C_i. C_{i+1} is then obtained by computing the union of C_i and the image of c_i. We only care about the value of c_i outside C_{i−1}. Thus we can choose for c_i any representative of the incompletely specified function (C_i, C̄_{i−1}). This is an ideal case of application of the generalized cofactor. By choosing (C_i)_{C̄_{i−1}} as the set of states to consider for the next image computation, we can reduce, in practice often quite dramatically, the time required to enumerate all reachable states.
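The resulting traversal loop looks as follows (explicit sets for illustration; with BDD's, c_i = (C_i)_{C̄_{i−1}} would be obtained with the generalized cofactor and the image with equation 3):

    def traverse(image, c0):
        """Breadth-first reachability where each step only computes the image
        of a frontier c_i with C_i − C_{i−1} ⊆ c_i ⊆ C_i (here, exactly the
        new states). image(S): set of successors of the states in S."""
        reached = set(c0)
        frontier = set(c0)
        while frontier:
            new = image(frontier) - reached
            reached |= new
            frontier = new
        return reached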
4 Results and Discussion

We present in this section some results comparing the recursive method proposed by Coudert et al. [4, 5], the transition relation method proposed by Burch et al. [3], and the new method described in section 3.2. All methods use the variable ordering heuristics presented in section 3.1.

For each of these three methods, we measured the time required to perform the verification of two identical copies of a finite state machine using the breadth-first traversal technique described in [4]. The two copies keep their state variables independent, but share the external input wires. The verification starts from an initial state, and implicitly enumerates all reachable states by doing a breadth-first traversal of the state transition diagram of the product machine. At each step in the verification, the outputs of the two machines are checked for equality. We report the time to perform the entire computation, including parsing the input files and computing the product machine. Run times were measured on a DEC 5400. The program was implemented as an extension of misII [1] and the reported time was obtained using the misII time command.
We use several ISCAS sequential benchmarks (sand, scf, s344, s444, s526, s713, s953, s1238), an accumulator made out of a 32-bit carry-bypass adder (cbp.32.4), a circuit computing the minimum and the maximum of a sequence of 32-bit integers (minmax32), a circuit computing the encryption key used in a VLSI implementation of the data encryption standard (key), and sbc, the snooping bus controller for the SPUR multiprocessor. Except for the minmax32 example, the variable ordering was performed automatically. The partial product method was the only one able to complete the verification of key and sbc; it is a clear winner for the large examples, and performs adequately well for the smaller ones. As can be observed from these examples, the computation times are very weakly related to the number of states visited. We also compared our results with the method introduced by Ghosh et al. [6]. All three methods in the table outperform the one in [6] when applied to most large finite state machines.
    circuit   ...              trans   recur   prod
    sand      6  1310  32  4   60.8    16.5    16.9
    scf       8  2208  115 16  64.2    18.8    18.7
    s344      15 26...

References

[1] R. Brayton, R. Rudell, A. Sangiovanni-Vincentelli, and A. Wang. MIS: A multiple-level logic optimization system. IEEE Transactions on Computer-Aided Design, CAD-6(6):1062-1081, November 1987.