The Apriori Algorithm: Example
Consider a database, D,
consisting of 9 transactions.
Suppose min. support count
required is 2 (i.e., min_sup = 2/9 ≈ 22%).
Let the minimum confidence required be 70%.
We have to first find the frequent itemsets using the Apriori algorithm.

The Apriori Algorithm: Basics
The Apriori Algorithm is an influential algorithm for
mining frequent itemsets for boolean association rules.
Key Concepts :
- Frequent Itemsets: The sets of items which have minimum support (denoted by Li for the i-th itemset).
- Apriori Property: Any subset of a frequent itemset must also be frequent.

The Apriori Algorithm : Pseudo
code
- Join Step: Ck is generated by joining Lk-1 with itself.
- Prune Step: Any (k-1)-itemset that is not frequent cannot be a subset of a frequent k-itemset.
- Pseudo-code:
Ck: Candidate itemset of size k
Lk : frequent itemset of size k
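The join and prune steps above can be sketched in Python (a minimal illustration, not the slide's pseudo-code; itemsets are represented as sorted tuples):

```python
from itertools import combinations

def apriori_gen(L_prev, k):
    """Generate candidate k-itemsets C_k from frequent (k-1)-itemsets.

    Join step: union pairs of (k-1)-itemsets that share their first
    k-2 items. Prune step: drop any candidate that has an infrequent
    (k-1)-subset (the Apriori property).
    """
    L_prev = [tuple(sorted(s)) for s in L_prev]
    prev_set = set(L_prev)
    candidates = set()
    for a in L_prev:
        for b in L_prev:
            # Join only when the first k-2 items match and a < b on the last item
            if a[:k - 2] == b[:k - 2] and a[k - 2] < b[k - 2]:
                cand = a + (b[k - 2],)
                # Prune: every (k-1)-subset of the candidate must be frequent
                if all(tuple(sorted(sub)) in prev_set
                       for sub in combinations(cand, k - 1)):
                    candidates.add(cand)
    return candidates
```

With L3 = {{I1, I2, I3}, {I1, I2, I5}} this reproduces the Step 4 result: the joined candidate {I1, I2, I3, I5} is pruned because {I1, I3, I5} is not in L3, so C4 is empty.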

Step 5: Generating Association Rules from
Frequent ltemsets
- Let the minimum confidence threshold be, say, 70%.
- The resulting association rules are shown below,
each listed with its confidence.
- R1: I1 ^ I2 → I5
  - Confidence = sc{I1,I2,I5} / sc{I1,I2} = 2/4 = 50%
  - R1 is rejected, since its confidence is below the 70% minimum.

Step 2: Generating 2-itemset Frequent Pattern
To discover the set of frequent 2-itemsets, L2, the
algorithm uses L1 Join L1 to generate a candidate set of
2-itemsets, C2.
Next, the transactions in D are scanned and the support
count for each candidate itemset in C2 is accumulated.
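The counting scan can be sketched as follows (a minimal Python illustration; the toy transactions in the usage note are assumptions, not the slide's database):

```python
def count_supports(transactions, candidates):
    """Scan the transactions once and accumulate each candidate's
    support count (the number of transactions containing it)."""
    counts = {c: 0 for c in candidates}
    for t in transactions:
        t = set(t)
        for c in candidates:
            if set(c) <= t:  # candidate is a subset of the transaction
                counts[c] += 1
    return counts
```

The frequent set is then the candidates meeting the minimum support count, e.g. `L2 = {c for c, n in counts.items() if n >= min_sup_count}`.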

Step 5: Generating Association Rules from Frequent
ltemsets
- Procedure:
- For each frequent itemset I, generate all nonempty subsets of I.
- For every nonempty subset s of I, output the rule "s → (I − s)" if
support_count(I) / support_count(s) >= min_conf
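The rule-generation procedure above can be sketched in Python (the support counts in the usage note are illustrative assumptions, not the slide's data):

```python
from itertools import combinations

def generate_rules(frequent_itemsets, support_count, min_conf):
    """For each frequent itemset I, emit the rule s -> (I - s) for every
    nonempty proper subset s whose confidence meets min_conf, where
    confidence = support_count(I) / support_count(s)."""
    rules = []
    for I in frequent_itemsets:
        items = tuple(sorted(I))
        for r in range(1, len(items)):          # all nonempty proper subsets
            for s in combinations(items, r):
                rest = tuple(x for x in items if x not in s)
                conf = support_count[items] / support_count[s]
                if conf >= min_conf:
                    rules.append((s, rest, conf))
    return rules
```

For example, with support_count({A}) = 4, support_count({B}) = 3, and support_count({A, B}) = 3, the rule A → B has confidence 3/4 = 75% and B → A has confidence 3/3 = 100%; both pass a 70% threshold.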

Step 4: Generating 4-itemset Frequent Pattern
- The algorithm uses L3 Join L3 to generate a candidate
set of 4-itemsets, C4. Although the join results in {I1, I2,
I3, I5}, this itemset is pruned since its subset {I2, I3, I5}
is not frequent.
- Thus, C4 = φ, and the algorithm terminates.

Step 3: Generating 3-itemset Frequent Pattern
Based on the Apriori property that all subsets of a frequent itemset must
also be frequent, we can determine that the four latter candidates cannot
possibly be frequent. How?
For example, let's take {I1, I2, I3}.

Step 1: Generating 1-itemset Frequent Pattern
[Figure: Scan D for the count of each candidate to obtain C1 (itemsets with their support counts); compare each candidate's support count with the minimum support count to obtain L1.]
The set of frequent 1-itemsets, L1, consists of the candidate 1-itemsets satisfying minimum support.

Methods to Improve Apriori’s Efficiency
Hash-based itemset counting: A k-itemset whose corresponding
hashing bucket count is below the threshold cannot be frequent.
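Hash-based counting can be sketched as below (in the spirit of the DHP technique; the bucket count and use of Python's built-in `hash` are illustrative assumptions). While scanning for 1-itemsets, every 2-itemset in each transaction is also hashed into a bucket; a 2-itemset whose bucket count falls below the minimum support cannot be frequent, so it can be excluded from C2:

```python
from itertools import combinations

def hash_bucket_counts(transactions, num_buckets):
    """During a scan, hash every 2-itemset occurring in each transaction
    into a bucket and count bucket hits. A 2-itemset's true support is
    at most its bucket's count, so a low bucket count rules it out."""
    buckets = [0] * num_buckets
    for t in transactions:
        for pair in combinations(sorted(t), 2):
            buckets[hash(pair) % num_buckets] += 1
    return buckets
```

A candidate pair is kept only if `buckets[hash(pair) % num_buckets] >= min_sup_count`; collisions can only over-count, never under-count, so no frequent itemset is lost.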
Transaction reduction: A transaction that does not contain any
frequent k-itemset is useless in subsequent scans.

Mining Frequent Patterns Without Candidate
Generation
- Compress a large database into a compact, Frequent-
Pattern tree (FP-tree) structure
— highly condensed, but complete for frequent pattern
mining
— avoid costly database scans
- Develop an efficient, FP-tree-based frequent pattern mining method

FP-Growth Method: Construction of FP-Tree
First, create the root of the tree, labeled with “null”.
Scan the database D a second time. (The first time we scanned it to
create the 1-itemsets and then L.)
The items in each transaction are processed in L order (i.e., sorted in descending order of support count).
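The construction described above can be sketched in Python (a minimal illustration; the node layout and tie-breaking on item name are assumptions of this sketch):

```python
from collections import Counter

class FPNode:
    """One node of the FP-tree: an item label, a count, a parent link,
    and children keyed by item."""
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}

def build_fp_tree(transactions, min_sup_count):
    """First scan: find frequent items and their counts. Second scan:
    insert each transaction's frequent items, sorted in L order
    (descending support count), under a root labeled "null" (None)."""
    freq = Counter(item for t in transactions for item in t)
    freq = {i: c for i, c in freq.items() if c >= min_sup_count}
    root = FPNode(None, None)  # the "null" root
    for t in transactions:
        # keep only frequent items, in L order (ties broken by name)
        items = sorted((i for i in t if i in freq),
                       key=lambda i: (-freq[i], i))
        node = root
        for item in items:
            child = node.children.get(item)
            if child is None:
                child = FPNode(item, node)
                node.children[item] = child
            child.count += 1  # shared prefixes share nodes
            node = child
    return root, freq
```

Transactions with a common L-order prefix share a path, which is what makes the tree highly condensed.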

Why Is Frequent Pattern Growth Fast?
- Performance study shows
— FP-growth is an order of magnitude faster than Apriori,
and is also faster than tree-projection
- Reasoning
— No candidate generation, no candidate test
— Use compact data structure
— Eliminate repeated database scans

Mining the FP-Tree by Creating Conditional (sub)
pattern bases
Steps:
1. Start from each frequent length-1 pattern (as an initial suffix
pattern).
2. Construct its conditional pattern base, which consists of the
set of prefix paths in the FP-Tree co-occurring with the suffix pattern.
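Collecting a conditional pattern base can be sketched directly from the L-ordered transactions, since each transaction's frequent-item prefix before the suffix corresponds to a prefix path in the FP-tree (a simplified illustration; a real implementation would walk node-links in the tree instead):

```python
from collections import Counter

def conditional_pattern_base(transactions, l_order, suffix):
    """For each transaction containing `suffix`, collect the L-ordered
    frequent items that precede it; these prefixes, with their counts,
    form the suffix's conditional pattern base."""
    rank = {item: i for i, item in enumerate(l_order)}
    base = Counter()
    for t in transactions:
        if suffix not in t:
            continue
        prefix = tuple(sorted((i for i in t
                               if i in rank and rank[i] < rank[suffix]),
                              key=rank.get))
        if prefix:
            base[prefix] += 1
    return base
```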

FP-Tree Example Continued
Out of these, only I1 and I2 are selected in the conditional FP-Tree,
because I3 does not satisfy the minimum support count.
For I1, support count in conditional pattern base = 1 + 1 = 2
For I2, support count in conditional pattern base = 1 + 1 = 2

How to Count Supports of
Candidates?
Why is counting supports of candidates a problem?
- The total number of candidates can be very huge
- One transaction may contain many candidates
Method:
- Candidate itemsets are stored in a hash-tree
- Leaf node of a hash-tree contains a list of itemsets and counts

FP-Growth Method : An Example
Consider the same previous
example of a database, D,
consisting of 9 transactions.
Suppose min. support count
required is 2 (i.e., min_sup =
2/9 ≈ 22%).
The first scan of database is
same as Apriori, which derives
the set of frequent items (1-itemsets) and their support counts.

The Apriori Algorithm in a
Nutshell
- Find the frequent itemsets: the sets of items that have
minimum support
— A subset of a frequent itemset must also be a
frequent itemset
- i.e., if {A, B} is a frequent itemset, both {A} and {B}
should be frequent itemsets

Selective and Authentic
Third-Party distribution of
XML Documents
- Yashaswini Harsha Kumar
- Netaji Mandava
(Oct 16th 2006)
Contents
Terminology
Security
Properties
XML Overview
Merkle Hash function
Access Control Model
Architecture
Actor Interact