-DP. We tested the algorithms on different orderings of variables: input ordering as used by the random problem generator, min-width ordering, and min-diversity ordering. Given an interaction graph, min-width ordering selects a variable with the smallest degree and puts it last in the ordering; the node is then eliminated from the graph and the ordering continues recursively. Min-diversity ordering was described earlier.

In order to test the algorithms on problems with different structures, several random generators were used. Uniform k-cnfs were obtained using the generator proposed in [17]. It takes as input the number of variables n, the number of clauses m, and the number of literals per clause k, and obtains each clause by choosing k variables randomly from the set of n variables and determining the polarity of each literal with probability p = 0.5. Different values of p were also used. We did not check clause uniqueness, since for a large number of variables it is unlikely that identical clauses will be generated. Our second generator, called mixed cnfs, produces theories containing clauses of length k1 or k2. The third generator, called chains, obtains a sequence of n independent uniform k-cnf theories (called subtheories) and connects them in a chain by (n - 1) 2-cnf clauses. Each 2-cnf clause contains variables from two consecutive subtheories in the chain (see Figure 7). Similarly, we connected sequences of independent theories into a tree structure. Both chains and the tree structures described above belong to a class of random embeddings in k-trees [1]. We implemented a generator, called (k, m)-trees, which generalizes the idea of k-trees. A (k, m)-tree is a tree of cliques, each having (k + m) nodes, where k is the size of the intersection between each two neighbouring cliques. Therefore, conventional k-trees are (k, 1)-trees according to our definition.
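As an illustration, the min-width ordering and the uniform k-cnf generator described above can be sketched in Python. The function names (`min_width_ordering`, `uniform_kcnf`) and the adjacency-dict representation of the interaction graph are our own assumptions for this sketch, not the authors' implementation.

```python
import random

def min_width_ordering(adj):
    """Min-width ordering sketch.

    adj: dict mapping each variable to the set of its neighbours in
    the interaction graph. Repeatedly pick a node of smallest degree,
    place it LAST in the ordering, eliminate it, and recurse.
    """
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    picked = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # smallest degree
        picked.append(v)
        for u in adj[v]:
            adj[u].discard(v)                    # eliminate v from the graph
        del adj[v]
    picked.reverse()  # first-picked node ends up last in the ordering
    return picked

def uniform_kcnf(n, m, k, p=0.5, rng=random):
    """Uniform k-cnf generator in the style of [17] (sketch).

    Each of the m clauses takes k distinct variables chosen at random
    from 1..n; each literal is positive with probability p. Clause
    uniqueness is not checked, as in the text.
    """
    clauses = []
    for _ in range(m):
        variables = rng.sample(range(1, n + 1), k)
        clauses.append(tuple(v if rng.random() < p else -v
                             for v in variables))
    return clauses
```

Literals are encoded as signed integers (DIMACS-style), so clause (x1 or not x2) is `(1, -2)`; this encoding is also an assumption of the sketch.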
The (k, m)-tree generator takes as input k, m, the number of cliques in a (k, m)-tree, Ncliques, and the number of clauses to be added per each new clique, Nclauses. It generates the first clique of size k + m with Nclauses clauses in it; then, until Ncliques cliques are generated, it randomly chooses a previously generated clique, randomly selects k variables out of this clique, adds m new variables, and generates Nclauses clauses on the new clique. Note that Nclauses is not the total number of clauses per clique, because each new clique shares some variables with previously (and, possibly, subsequently) generated cliques. The induced width of a (k, m)-tree is bounded by k + m - 1.

6.1 Results for Problems with Uniform Structure

We recorded CPU time for all algorithms and the number of deadends for DP-backtracking. We also recorded the number of new clauses generated by DR, the maximal size of the generated clauses, and the induced width. The number of experiments per data point is reported in the figures. We compared DP-backtracking with DR on uniform k-cnfs for k = 3, 4, 5 and on mixed the...
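The (k, m)-tree generator described above (before the results subsection) might be sketched as follows; the name `km_tree_cnf`, the fixed clause length, and the integer-variable encoding are illustrative assumptions, not the authors' code.

```python
import random

def km_tree_cnf(k, m, ncliques, nclauses, clause_len=3, rng=random):
    """Sketch of the (k, m)-tree generator.

    Builds a tree of cliques of size k + m in which each new clique
    shares exactly k variables with a randomly chosen existing clique,
    and generates `nclauses` random clauses over each new clique.
    """
    def add_clauses(clique):
        for _ in range(nclauses):
            vs = rng.sample(clique, min(clause_len, len(clique)))
            clauses.append(tuple(v if rng.random() < 0.5 else -v
                                 for v in vs))

    clauses = []
    next_var = 1
    first = list(range(next_var, next_var + k + m))  # first clique: k+m vars
    next_var += k + m
    cliques = [first]
    add_clauses(first)
    while len(cliques) < ncliques:
        parent = rng.choice(cliques)      # randomly pick an existing clique
        shared = rng.sample(parent, k)    # k variables shared with the parent
        fresh = list(range(next_var, next_var + m))  # m new variables
        next_var += m
        clique = shared + fresh
        cliques.append(clique)
        add_clauses(clique)
    return clauses, cliques
```

Because each later clique introduces only m fresh variables, the total variable count is (k + m) + (Ncliques - 1) * m, matching the structure whose induced width is bounded by k + m - 1.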
