cs345-lsh-3

cs345-lsh-3 - Finding Similar Pairs: Divide-Compute-Merge, Locality-Sensitive Hashing, Applications


Finding Similar Pairs
- Suppose we have in main memory data representing a large number of objects.
  - May be the objects themselves (e.g., summaries of faces).
  - May be signatures, as in minhashing.
- We want to compare each to each, finding those pairs that are sufficiently similar.

Candidate Generation From Minhash Signatures
- Pick a similarity threshold s, a fraction < 1.
- A pair of columns c and d is a candidate pair if their signatures agree in at least fraction s of the rows: i.e., M(i, c) = M(i, d) for at least fraction s of the values of i.

Other Notions of "Sufficiently Similar"
- For images, a pair of vectors is a candidate if they differ by at most a small amount t in at least s% of the components.
- For entity records, a pair is a candidate if the sum of the similarity scores of corresponding components exceeds a threshold.

Checking All Pairs Is Hard
- While the signatures of all columns may fit in main memory, comparing the signatures of all pairs of columns is quadratic in the number of columns.
- Example: 10^6 columns implies about 5 x 10^11 comparisons.
- At 1 microsecond per comparison: about 6 days.

Solutions
1. Divide-Compute-Merge (DCM) uses external sorting and merging.
2. Locality-Sensitive Hashing (LSH) can be carried out in main memory, but it admits some false negatives.

Divide-Compute-Merge
- Designed for "shingles" and docs, or other problems where the data is presented by column.
- At each stage, divide the data into batches that fit in main memory.
- Operate on individual batches and write partial results out to disk.
- Merge the partial results from disk.

DCM Steps
[Diagram: the DCM pipeline]
    doc1: s11, s12, ..., s1k
    doc2: s21, s22, ..., s2k
    ...
      --Invert-->                   (s11, doc1), (s12, doc1), ..., (s1k, doc1), (s21, doc2), ...
      --sort on shingleId-->        (t1, doc11), (t1, doc12), ..., (t2, doc21), (t2, doc22), ...
      --Invert and pair-->          (doc11, doc12, 1), (doc11, doc13, 1), ..., (doc21, doc22, 1), ...
      --sort on <docId1, docId2>--> (doc11, doc12, 1), (doc11, doc12, 1), ..., (doc11, doc13, 1), ...
      --Merge-->                    (doc11, doc12, 2), (doc11, doc13, 10), ...

DCM Summary
1. Start with the pairs <shingleId, docId>.
2. Sort by shingleId.
3. In a sequential scan, generate triples <docId1, docId2, 1> for each pair of docs that share a shingle.
4. Sort on <docId1, docId2>.
5. Merge triples with common docIds to generate triples of the form <docId1, docId2, count>.
6. Output the document pairs with count > threshold.

Some Optimizations
- "Invert and pair" is the most expensive step.
- Speed it up by eliminating very common shingles ("the", "404 not found", "<A HREF", etc.).
- Also, eliminate exact-duplicate docs first.

Locality-Sensitive Hashing
- Big idea: hash columns of the signature matrix M several times.
- Arrange that (only) similar columns are likely to hash to the same bucket.
- Candidate pairs are those that hash to the same bucket at least once.

Partition Into Bands
[Figure: matrix M divided into b bands of r rows each.]

Partition Into Bands (2)
- Divide matrix M into b bands of r rows.
- For each band, hash its portion of each column to a hash table with k buckets.
- Candidate column pairs are those that hash to the same bucket for at least one band.
- Tune b and r to catch most similar pairs, but few non-similar pairs.

Buckets
[Figure: each band of r rows of matrix M is hashed to its own set of buckets.]

Simplifying Assumption
- There are enough buckets that columns are unlikely to hash to the same bucket unless they are identical in a particular band.
- Hereafter, we assume that "same bucket" means "identical."

Example
- Suppose 100,000 columns and signatures of 100 integers.
- Therefore, the signatures take 40 MB.
- But 5,000,000,000 pairs of signatures can take a while to compare.
- Choose 20 bands of 5 integers/band.
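The banding scheme just described ("Partition Into Bands" through the 20-bands-of-5 example) is easy to prototype. Below is a minimal sketch, assuming Python with NumPy, a signature matrix with one column per object, and Python's built-in hash in place of a real k-bucket hash table; the function name and the toy data are illustrative, not from the slides.

    import numpy as np
    from collections import defaultdict
    from itertools import combinations

    def lsh_candidate_pairs(sigs, b, r):
        """Return candidate column pairs from a minhash signature matrix.

        sigs has shape (b*r, n_cols); column j is the signature of object j.
        Each band of r rows of each column is hashed to a bucket; two columns
        become a candidate pair if they share a bucket in at least one band.
        """
        n_rows, n_cols = sigs.shape
        assert n_rows == b * r, "signature length must equal b * r"
        candidates = set()
        for band in range(b):
            buckets = defaultdict(list)            # bucket -> columns hashed to it
            chunk = sigs[band * r:(band + 1) * r]  # this band's r rows
            for col in range(n_cols):
                buckets[hash(tuple(chunk[:, col]))].append(col)
            for cols in buckets.values():          # every pair sharing a bucket
                candidates.update(combinations(cols, 2))
        return candidates

    # Toy usage: 3 columns, 4 bands of 2 rows.
    sigs = np.array([[1, 1, 3], [2, 2, 4],    # band 0: columns 0 and 1 agree
                     [5, 5, 6], [7, 8, 6],    # band 1: no two columns agree
                     [9, 9, 9], [0, 0, 1],    # band 2: columns 0 and 1 agree
                     [2, 3, 6], [4, 5, 7]])   # band 3: no two columns agree
    print(lsh_candidate_pairs(sigs, b=4, r=2))   # {(0, 1)}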
Suppose C1, C2 Are 80% Similar
- Probability C1, C2 identical in one particular band: (0.8)^5 ≈ 0.328.
- Probability C1, C2 not identical in any of the 20 bands: (1 - 0.328)^20 ≈ 0.00035.
  - I.e., we miss only about 1/3000 of the 80%-similar column pairs.

Suppose C1, C2 Only 40% Similar
- Probability C1, C2 identical in any one particular band: (0.4)^5 ≈ 0.01.
- Probability C1, C2 identical in at least 1 of the 20 bands: about 20 x 0.01 = 0.2.
- But false positives are much rarer for similarities well below 40%.

LSH Involves a Tradeoff
- Pick the number of minhashes, the number of bands, and the number of rows per band to balance false positives and false negatives.
- Example: if we had fewer than 20 bands, the number of false positives would go down, but the number of false negatives would go up.

Analysis of LSH: What We Want
[Figure: the ideal curve of the probability of sharing a bucket vs. the similarity s of two columns — probability 1 if s > t, no chance if s < t.]

What One Row Gives You
[Figure: with a single row, the probability of sharing a bucket rises linearly with s. Remember: the probability of equal minhash values equals the similarity.]

What b Bands of r Rows Give You
[Figure: the S-curve of the probability of sharing a bucket vs. similarity s.]
- A band is identical when all r of its rows are equal; it fails when some row is unequal.
- Probability that at least one band is identical (i.e., the columns become candidates): 1 - (1 - s^r)^b.
- The threshold t, where this S-curve rises most steeply, is roughly (1/b)^(1/r).

LSH Summary
- Tune to get almost all pairs with similar signatures, but eliminate most pairs that do not have similar signatures.
- Check in main memory that the candidate pairs really do have similar signatures.
- Optional: in another pass through the data, check that the remaining candidate pairs really are similar columns.

LSH for Other Applications
1. Face recognition from 1000 measurements per face.
2. Entity resolution from name-address-phone records.
- General principle: find many hash functions for the elements; candidate pairs share a bucket for at least one hash.

Face-Recognition Hash Functions
1. Pick a set of r of the 1000 measurements.
2. Each bucket corresponds to a range of values for each of the r measurements.
3. Hash a vector to the bucket such that each of its r components is in range.
4. Optional: if a component is near the edge of its range, also hash to the adjacent bucket.

Example: r = 2
[Figure: a 2-D grid of buckets. One axis is divided into ranges 10-16, 17-23, 24-30, 31-37, 38-44; the other into 0-4, 5-9, 10-14, 15-19. One bucket is for (x, y) with 10 < x < 16 and 0 < y < 4. The point (27, 9) goes into the bucket for 24-30 by 5-9; since 9 is at the edge of its range, maybe put a copy in the adjacent bucket, too.]

Many-One Face Lookup
- As for boolean matrices, use many different hash functions, each based on a different set of the 1000 measurements.
- Each bucket of each hash function points to the images that hash to that bucket.

Face Lookup (2)
- Given a new image (the probe), hash it according to all the hash functions.
- Any member of any one of its buckets is a candidate.
- For each candidate, count the number of components in which the candidate and the probe are close.
- Declare a match if the number of close components exceeds a threshold.

Hashing the Probe
[Figure: the probe is hashed by each of h1, ..., h5; look in all of the resulting buckets.]

Many-Many Problem
- Make each pair of images that land in the same bucket, according to any hash function, a candidate pair.
- Score each candidate pair as for the many-one problem.

Entity Resolution
- Here you don't have the convenient multidimensional view of the data that you have for face recognition or "similar columns."
- We actually used an LSH-inspired simplification.
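The face-recognition hash functions above ("Face-Recognition Hash Functions" through "Many-One Face Lookup") can be sketched roughly as follows. This is a minimal illustration assuming Python, r = 2 randomly chosen measurements per hash function, and a fixed range width; the class name, the width, and the toy data are illustrative, and the optional copy-to-the-adjacent-bucket refinement is omitted.

    import random
    from collections import defaultdict

    class GridHash:
        """One hash function for real-valued vectors: pick r of the measurements
        and identify a bucket by the fixed-width range of each picked component."""

        def __init__(self, dim, r=2, width=7.0, seed=0):
            rng = random.Random(seed)
            self.coords = rng.sample(range(dim), r)   # which r measurements to use
            self.width = width                        # size of each value range
            self.buckets = defaultdict(list)          # bucket key -> stored image ids

        def key(self, vector):
            return tuple(int(vector[c] // self.width) for c in self.coords)

        def add(self, image_id, vector):
            self.buckets[self.key(vector)].append(image_id)

        def candidates(self, probe):
            return self.buckets.get(self.key(probe), [])

    # Many-one lookup: several hash functions, each on a different set of measurements.
    dim = 1000
    faces = {i: [random.random() * 50 for _ in range(dim)] for i in range(100)}
    hashes = [GridHash(dim, r=2, width=7.0, seed=s) for s in range(10)]
    for image_id, vector in faces.items():
        for h in hashes:
            h.add(image_id, vector)

    probe = faces[42]                     # the probe: here, a stored face itself
    candidate_ids = set()
    for h in hashes:                      # look in all of the probe's buckets
        candidate_ids.update(h.candidates(probe))
    print(42 in candidate_ids)            # True; next, score candidates component by component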
Matching Customer Records
- I once took a consulting job solving the following problem:
  - Company A agreed to solicit customers for Company B, for a fee.
  - They then had a parting of the ways and argued over how many customers A had signed up for B.
  - Neither had recorded exactly which customers were involved.

Customer Records (2)
- Company B had about 1 million records of all its customers.
- Company A had about 1 million records describing customers, some of whom it had signed up for B.
- Records had name, address, and phone, but for various reasons these could differ for the same person.

Customer Records (3)
- Step 1: design a measure of how similar two records are; e.g., deduct points for small misspellings ("Jeffrey" vs. "Geoffery") or for the same phone with a different area code.
- Step 2: score all pairs of records; report very similar pairs as matches.

Customer Records (4)
- Problem: (1 million)^2 is too many pairs of records to score.
- Solution: a simple LSH.
  - Three hash functions: the exact values of name, of address, and of phone.
  - Compare two records iff they are identical in at least one of the three fields.
  - This misses similar records that have a small difference in all three fields.

Customer Records Aside
- We were able to tell which values of the scoring function were reliable in an interesting way:
  - Identical records had an average creation-date difference of 10 days.
  - We only looked for matches between records created within 90 days of each other, so bogus matches had an average difference of 45 days.

Aside (2)
- By looking at the pool of matches with a fixed score, we could compute their average time difference, say x, and deduce what fraction of them were valid matches: since valid matches average a 10-day difference and bogus matches average 45 days, a pool with average x is a fraction (45 - x)/35 valid.
- Alas, the lawyers didn't think the jury would understand it.
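The record-matching LSH of "Customer Records (4)" can be sketched as follows: a minimal illustration assuming Python, with toy records and a placeholder score() function standing in for the real similarity measure (the field layout, data, and threshold are illustrative, not from the actual engagement).

    from collections import defaultdict
    from itertools import combinations

    # Toy records: (name, address, phone). The real problem had ~2 million of these.
    records = [
        ("Jeffrey Smith",  "12 Oak St",   "650-555-0101"),
        ("Geoffery Smith", "12 Oak St",   "650-555-0101"),   # misspelled name
        ("Ann Lee",        "99 Pine Ave", "650-555-0199"),
    ]

    def score(r1, r2):
        """Placeholder similarity score: the number of identical fields.
        The real measure deducted points for misspellings, area-code changes, etc."""
        return sum(a == b for a, b in zip(r1, r2))

    # Three "hash functions": the exact value of name, of address, and of phone.
    candidates = set()
    for field in range(3):
        buckets = defaultdict(list)
        for i, rec in enumerate(records):
            buckets[rec[field]].append(i)
        for ids in buckets.values():                  # identical in this field
            candidates.update(combinations(ids, 2))

    # Score only the candidate pairs instead of all ~N^2/2 pairs.
    matches = [(i, j) for i, j in candidates if score(records[i], records[j]) >= 2]
    print(matches)   # [(0, 1)]: records 0 and 1 agree on address and phone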