06FPBasic.ppt
by idouba
Published Feb 16, 2014

Presentation Slides & Transcript

Slide 1: Data Mining: Concepts and Techniques (3rd ed.) — Chapter 6
Jiawei Han, Micheline Kamber, and Jian Pei
University of Illinois at Urbana-Champaign & Simon Fraser University
©2011 Han, Kamber & Pei. All rights reserved.


Slide 3: Chapter 5: Mining Frequent Patterns, Association and Correlations: Basic Concepts and Methods
- Basic Concepts
- Frequent Itemset Mining Methods
- Which Patterns Are Interesting? — Pattern Evaluation Methods
- Summary

Slide 4: What Is Frequent Pattern Analysis?
- Frequent pattern: a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set
- First proposed by Agrawal, Imielinski, and Swami [AIS93] in the context of frequent itemsets and association rule mining
- Motivation: finding inherent regularities in data
  - What products were often purchased together? — Beer and diapers?!
  - What are the subsequent purchases after buying a PC?
  - What kinds of DNA are sensitive to this new drug?
  - Can we automatically classify web documents?
- Applications: basket data analysis, cross-marketing, catalog design, sales campaign analysis, Web log (click stream) analysis, and DNA sequence analysis

Slide 5: Why Is Frequent Pattern Mining Important?
- Frequent pattern: an intrinsic and important property of data sets
- Foundation for many essential data mining tasks:
  - Association, correlation, and causality analysis
  - Sequential and structural (e.g., sub-graph) patterns
  - Pattern analysis in spatiotemporal, multimedia, time-series, and stream data
  - Classification: discriminative frequent pattern analysis
  - Cluster analysis: frequent pattern-based clustering
  - Data warehousing: iceberg cube and cube-gradient
  - Semantic data compression: fascicles
- Broad applications

Slide 6: Basic Concepts: Frequent Patterns
- Itemset: a set of one or more items
- k-itemset: X = {x1, …, xk}
- (Absolute) support, or support count, of X: the frequency (number of occurrences) of itemset X
- (Relative) support, s: the fraction of transactions that contain X (i.e., the probability that a transaction contains X)
- An itemset X is frequent if X's support is no less than a minsup threshold

Slide 7: Basic Concepts: Association Rules
- Find all the rules X → Y with minimum support and confidence
  - support, s: probability that a transaction contains X ∪ Y
  - confidence, c: conditional probability that a transaction having X also contains Y
- Let minsup = 50%, minconf = 50%

Tid | Items bought
10  | Beer, Nuts, Diaper
20  | Beer, Coffee, Diaper
30  | Beer, Diaper, Eggs
40  | Nuts, Eggs, Milk
50  | Nuts, Coffee, Diaper, Eggs, Milk

(Venn diagram: customers who buy beer, who buy diapers, and who buy both)

- Frequent patterns: Beer:3, Nuts:3, Diaper:4, Eggs:3, {Beer, Diaper}:3
- Association rules (many more exist):
  - Beer → Diaper (support 60%, confidence 100%)
  - Diaper → Beer (support 60%, confidence 75%)

These numbers are easy to recompute; see the sketch below.
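A minimal Python sketch (the representation and function names are mine, not from the slides) that recomputes the supports and confidences above from the five-transaction table:

```python
transactions = [
    {"Beer", "Nuts", "Diaper"},                    # Tid 10
    {"Beer", "Coffee", "Diaper"},                  # Tid 20
    {"Beer", "Diaper", "Eggs"},                    # Tid 30
    {"Nuts", "Eggs", "Milk"},                      # Tid 40
    {"Nuts", "Coffee", "Diaper", "Eggs", "Milk"},  # Tid 50
]

def support_count(itemset, db):
    """Absolute support: number of transactions containing the itemset."""
    return sum(1 for t in db if itemset <= t)

def support(itemset, db):
    """Relative support: fraction of transactions containing the itemset."""
    return support_count(itemset, db) / len(db)

def confidence(x, y, db):
    """conf(X -> Y) = support(X ∪ Y) / support(X)."""
    return support_count(x | y, db) / support_count(x, db)

print(support({"Beer", "Diaper"}, transactions))       # 0.6  (60%)
print(confidence({"Beer"}, {"Diaper"}, transactions))  # 1.0  (100%)
print(confidence({"Diaper"}, {"Beer"}, transactions))  # 0.75 (75%)
```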

Slide 8: Closed Patterns and Max-Patterns
- A long pattern contains a combinatorial number of sub-patterns; e.g., {a1, …, a100} contains (100 choose 1) + (100 choose 2) + … + (100 choose 100) = 2^100 − 1 ≈ 1.27 × 10^30 sub-patterns!
- Solution: mine closed patterns and max-patterns instead
- An itemset X is closed if X is frequent and there exists no super-pattern Y ⊃ X with the same support as X (proposed by Pasquier et al. @ ICDT'99)
- An itemset X is a max-pattern if X is frequent and there exists no frequent super-pattern Y ⊃ X (proposed by Bayardo @ SIGMOD'98)
- Closed patterns are a lossless compression of frequent patterns, reducing the number of patterns and rules

Slide 9: Closed Patterns and Max-Patterns
- Exercise. DB = {<a1, …, a100>, <a1, …, a50>}, min_sup = 1
- What is the set of closed itemsets?
  - <a1, …, a100>: 1
  - <a1, …, a50>: 2
- What is the set of max-patterns?
  - <a1, …, a100>: 1
- What is the set of all patterns? All 2^100 − 1 non-empty subsets of {a1, …, a100} — far too many to enumerate!

Slide 10: Computational Complexity of Frequent Itemset Mining
- How many itemsets may potentially be generated in the worst case?
  - The number of frequent itemsets to be generated is sensitive to the minsup threshold
  - When minsup is low, there can be an exponential number of frequent itemsets
  - The worst case: M^N, where M = # distinct items and N = max transaction length
- The worst-case complexity vs. the expected probability:
  - Ex. Suppose Walmart sells 10^4 kinds of products
  - The chance of picking up one particular product: 10^-4
  - The chance of picking up a particular set of 10 products: ~10^-40
  - What is the chance that this particular set of 10 products is frequent, i.e., occurs 10^3 times in 10^9 transactions? (Under these assumptions its expected count is 10^9 × 10^-40 = 10^-31, so the chance is vanishingly small.)

Slide 11: Chapter 5: Mining Frequent Patterns, Association and Correlations: Basic Concepts and Methods
- Basic Concepts
- Frequent Itemset Mining Methods
- Which Patterns Are Interesting? — Pattern Evaluation Methods
- Summary

Slide 12: Scalable Frequent Itemset Mining Methods
- Apriori: A Candidate Generation-and-Test Approach
- Improving the Efficiency of Apriori
- FPGrowth: A Frequent Pattern-Growth Approach
- ECLAT: Frequent Pattern Mining with Vertical Data Format

Slide 13: The Downward Closure Property and Scalable Mining Methods
- The downward closure property of frequent patterns: any subset of a frequent itemset must be frequent
  - If {beer, diaper, nuts} is frequent, so is {beer, diaper}
  - i.e., every transaction having {beer, diaper, nuts} also contains {beer, diaper}
- Scalable mining methods: three major approaches
  - Apriori (Agrawal & Srikant @ VLDB'94)
  - Frequent pattern growth (FPgrowth — Han, Pei & Yin @ SIGMOD'00)
  - Vertical data format approach (CHARM — Zaki & Hsiao @ SDM'02)

Slide 14: Apriori: A Candidate Generation & Test Approach
- Apriori pruning principle: if any itemset is infrequent, its supersets should not be generated or tested! (Agrawal & Srikant @ VLDB'94; Mannila et al. @ KDD'94)
- Method:
  - Initially, scan the DB once to get the frequent 1-itemsets
  - Generate length-(k+1) candidate itemsets from the length-k frequent itemsets
  - Test the candidates against the DB
  - Terminate when no frequent or candidate set can be generated

Slide 15: The Apriori Algorithm — An Example
(Worked example with min_sup = 2: the 1st scan of database TDB derives C1 and L1; C2 is generated from L1 and counted in the 2nd scan to give L2; C3 is generated and counted in the 3rd scan to give L3.)

Slide 16: The Apriori Algorithm (Pseudo-Code)
Ck: candidate itemsets of size k
Lk: frequent itemsets of size k

L1 = {frequent items};
for (k = 1; Lk != ∅; k++) do begin
    Ck+1 = candidates generated from Lk;
    for each transaction t in the database do
        increment the count of all candidates in Ck+1 that are contained in t;
    Lk+1 = candidates in Ck+1 with min_support;
end
return ∪k Lk;
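A runnable Python rendering of this pseudo-code: a minimal sketch in which the names are mine, itemsets are frozensets, and candidates come from a simple union-join (the ordered self-join of the next slide is an optimization of the same step):

```python
from itertools import combinations

def apriori(db, min_sup):
    """Minimal Apriori: db is a list of sets, min_sup an absolute count.
    Returns a dict mapping each frequent itemset (frozenset) to its support."""
    # L1: frequent 1-itemsets
    items = {i for t in db for i in t}
    freq = {frozenset([i]): c
            for i in items
            if (c := sum(1 for t in db if i in t)) >= min_sup}
    result = dict(freq)
    k = 1
    while freq:
        # Candidate generation: join Lk with itself, keep the (k+1)-sets,
        # then prune any candidate with an infrequent k-subset.
        prev = set(freq)
        cands = {a | b for a in prev for b in prev if len(a | b) == k + 1}
        cands = {c for c in cands
                 if all(frozenset(s) in prev for s in combinations(c, k))}
        # Support counting by one full DB scan per level
        freq = {c: n for c in cands
                if (n := sum(1 for t in db if c <= t)) >= min_sup}
        result.update(freq)
        k += 1
    return result

db = [{"Beer", "Nuts", "Diaper"}, {"Beer", "Coffee", "Diaper"},
      {"Beer", "Diaper", "Eggs"}, {"Nuts", "Eggs", "Milk"},
      {"Nuts", "Coffee", "Diaper", "Eggs", "Milk"}]
print(apriori(db, min_sup=3))
# {Beer}:3, {Nuts}:3, {Diaper}:4, {Eggs}:3, {Beer, Diaper}:3
```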

Slide 17: Implementation of Apriori
- How to generate candidates?
  - Step 1: self-joining Lk
  - Step 2: pruning
- Example of candidate generation (see the sketch below):
  - L3 = {abc, abd, acd, ace, bcd}
  - Self-joining: L3 * L3
    - abcd from abc and abd
    - acde from acd and ace
  - Pruning: acde is removed because ade is not in L3
  - C4 = {abcd}
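The ordered self-join and prune steps, sketched in Python (itemsets as sorted tuples; the function name apriori_gen is my own), reproducing the L3 example above:

```python
from itertools import combinations

def apriori_gen(Lk):
    """Generate C(k+1) from Lk by ordered self-join, then prune.
    Itemsets are represented as sorted tuples of items."""
    k = len(next(iter(Lk)))
    Lk = set(Lk)
    candidates = set()
    for p in Lk:
        for q in Lk:
            # Join step: same first k-1 items, p's last item < q's last item
            if p[:-1] == q[:-1] and p[-1] < q[-1]:
                candidates.add(p + (q[-1],))
    # Prune step: every k-subset of a candidate must itself be in Lk
    return {c for c in candidates
            if all(s in Lk for s in combinations(c, k))}

L3 = {("a", "b", "c"), ("a", "b", "d"), ("a", "c", "d"),
      ("a", "c", "e"), ("b", "c", "d")}
print(apriori_gen(L3))
# {('a', 'b', 'c', 'd')} — acde is pruned because ade is not in L3
```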

Slide 18: How to Count Supports of Candidates?
- Why is counting supports of candidates a problem?
  - The total number of candidates can be very large
  - One transaction may contain many candidates
- Method:
  - Candidate itemsets are stored in a hash tree
  - A leaf node of the hash tree contains a list of itemsets and counts
  - An interior node contains a hash table
  - Subset function: finds all the candidates contained in a transaction (a simplified sketch follows)
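For illustration only, a flat stand-in for the subset function (names are mine): it hashes every k-subset of each transaction against the candidate set, whereas the hash tree on the slide avoids materializing all of those subsets:

```python
from itertools import combinations
from collections import Counter

def count_supports(db, candidates, k):
    """Simplified support counting: probe every k-subset of each transaction
    against the candidate set. (The hash tree prunes most of these probes;
    this flat version shows only the subset-function idea.)"""
    candidates = set(candidates)
    counts = Counter()
    for t in db:
        for s in combinations(sorted(t), k):
            if s in candidates:
                counts[s] += 1
    return counts

db = [{"Beer", "Nuts", "Diaper"}, {"Beer", "Coffee", "Diaper"}]
print(count_supports(db, {("Beer", "Diaper")}, k=2))
# Counter({('Beer', 'Diaper'): 2})
```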

Slide 19: Counting Supports of Candidates Using Hash Tree
(Figure: a hash tree probed with transaction {1, 2, 3, 5, 6}; the subset function recursively splits it as 1 + {2 3 5 6}, 1 2 + {3 5 6}, 1 3 + {5 6}, and so on.)

Slide 20: Candidate Generation: An SQL Implementation
- Suppose the items in L(k-1) are listed in an order
- Step 1: self-joining L(k-1)

insert into Ck
select p.item1, p.item2, …, p.item(k-1), q.item(k-1)
from L(k-1) p, L(k-1) q
where p.item1 = q.item1, …, p.item(k-2) = q.item(k-2), p.item(k-1) < q.item(k-1)

- Step 2: pruning

forall itemsets c in Ck do
    forall (k-1)-subsets s of c do
        if (s is not in L(k-1)) then delete c from Ck

- Use object-relational extensions like UDFs, BLOBs, and table functions for efficient implementation [S. Sarawagi, S. Thomas, and R. Agrawal. Integrating association rule mining with relational database systems: Alternatives and implications. SIGMOD'98]

Slide 21: Scalable Frequent Itemset Mining Methods
- Apriori: A Candidate Generation-and-Test Approach
- Improving the Efficiency of Apriori
- FPGrowth: A Frequent Pattern-Growth Approach
- ECLAT: Frequent Pattern Mining with Vertical Data Format
- Mining Closed Frequent Patterns and Max-Patterns

Slide 22: Further Improvement of the Apriori Method
- Major computational challenges:
  - Multiple scans of the transaction database
  - Huge number of candidates
  - Tedious workload of support counting for candidates
- Improving Apriori: general ideas
  - Reduce the number of transaction-database scans
  - Shrink the number of candidates
  - Facilitate support counting of candidates

Slide 23: Partition: Scan Database Only Twice
- Any itemset that is potentially frequent in DB must be frequent in at least one of the partitions of DB
- Scan 1: partition the database and find the local frequent patterns
- Scan 2: consolidate the global frequent patterns (see the two-scan sketch below)
- A. Savasere, E. Omiecinski and S. Navathe, VLDB'95
- (Figure: DB = DB1 + DB2 + … + DBk; if sup_i(X) < σ·|DBi| in every partition DBi, then sup(X) < σ·|DB| in the whole database.)
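A two-scan sketch under stated assumptions: it reuses the apriori() function from the Slide 16 snippet as the local miner, and the names partition_mine and rel_minsup are my own:

```python
def partition_mine(db, rel_minsup, num_parts=2):
    """Scan 1: mine each partition locally (any locally frequent itemset is
    a global candidate). Scan 2: verify all candidates against the full DB.
    Assumes apriori() from the earlier sketch is in scope."""
    n = len(db)
    size = -(-n // num_parts)  # ceiling division
    candidates = set()
    for start in range(0, n, size):
        part = db[start:start + size]
        # Flooring the local threshold only adds extra candidates,
        # so no globally frequent itemset can be missed.
        local_minsup = max(1, int(rel_minsup * len(part)))
        candidates |= set(apriori(part, local_minsup))
    return {c for c in candidates
            if sum(1 for t in db if c <= t) >= rel_minsup * n}
```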

Slide 24: DHP: Reduce the Number of Candidates
- A k-itemset whose corresponding hash-bucket count is below the threshold cannot be frequent
  - Candidates: a, b, c, d, e
  - Hash entries: {ab, ad, ae}, {bd, be, de}, …
  - Frequent 1-itemsets: a, b, d, e
  - ab is not a candidate 2-itemset if the count sum of the bucket {ab, ad, ae} is below the support threshold
- J. Park, M. Chen, and P. Yu. An effective hash-based algorithm for mining association rules. SIGMOD'95

Slide 25: Sampling for Frequent Patterns
- Select a sample of the original database; mine frequent patterns within the sample using Apriori
- Scan the database once to verify the frequent itemsets found in the sample; only the borders of the closure of the frequent patterns are checked
  - Example: check abcd instead of ab, ac, …, etc.
- Scan the database again to find the missed frequent patterns
- H. Toivonen. Sampling large databases for association rules. VLDB'96

Slide 26: DIC: Reduce the Number of Scans
- (Figure: the itemset lattice over {A, B, C, D}, from {} up to ABCD)
- Once both A and D are determined frequent, the counting of AD begins
- Once all length-2 subsets of BCD are determined frequent, the counting of BCD begins
- (Figure: while Apriori finishes one full pass of the transactions per level of 1-itemsets, 2-itemsets, …, DIC starts counting 2- and 3-itemsets partway through earlier scans.)
- S. Brin, R. Motwani, J. Ullman, and S. Tsur. Dynamic itemset counting and implication rules for market basket data. SIGMOD'97

Slide 27: Scalable Frequent Itemset Mining Methods
- Apriori: A Candidate Generation-and-Test Approach
- Improving the Efficiency of Apriori
- FPGrowth: A Frequent Pattern-Growth Approach
- ECLAT: Frequent Pattern Mining with Vertical Data Format
- Mining Closed Frequent Patterns and Max-Patterns

Slide 28: Pattern-Growth Approach: Mining Frequent Patterns Without Candidate Generation
- Bottlenecks of the Apriori approach:
  - Breadth-first (i.e., level-wise) search
  - Candidate generation and test: often generates a huge number of candidates
- The FPGrowth approach (J. Han, J. Pei, and Y. Yin, SIGMOD'00):
  - Depth-first search
  - Avoids explicit candidate generation
- Major philosophy: grow long patterns from short ones using local frequent items only
  - "abc" is a frequent pattern
  - Get all transactions having "abc", i.e., project the DB on abc: DB|abc
  - "d" is a local frequent item in DB|abc → abcd is a frequent pattern

Slide 29: Construct FP-tree from a Transaction Database
min_support = 3

TID | Items bought             | (ordered) frequent items
100 | {f, a, c, d, g, i, m, p} | {f, c, a, m, p}
200 | {a, b, c, f, l, m, o}    | {f, c, a, b, m}
300 | {b, f, h, j, o, w}       | {f, b}
400 | {b, c, k, s, p}          | {c, b, p}
500 | {a, f, c, e, l, p, m, n} | {f, c, a, m, p}

1. Scan the DB once, find the frequent 1-itemsets (single-item patterns)
2. Sort frequent items in frequency-descending order: the f-list
3. Scan the DB again, construct the FP-tree (a construction sketch follows)

F-list = f-c-a-b-m-p
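A minimal FP-tree construction sketch in Python (the class and function names are mine). Ties in item frequency can be ordered arbitrarily, so the slide's f-list is passed in explicitly to reproduce its exact tree:

```python
from collections import Counter

class Node:
    """FP-tree node; header-table node-links are kept in a separate dict."""
    def __init__(self, item, parent):
        self.item, self.count, self.parent = item, 1, parent
        self.children = {}

def build_fptree(db, min_sup, flist=None):
    # Scan 1: count items, keep those meeting min_sup in descending frequency.
    counts = Counter(i for t in db for i in t)
    if flist is None:
        flist = [i for i, c in counts.most_common() if c >= min_sup]
    rank = {item: r for r, item in enumerate(flist)}
    root, header = Node(None, None), {i: [] for i in flist}
    # Scan 2: insert each transaction's frequent items in f-list order;
    # shared prefixes share nodes, and counts accumulate along the path.
    for t in db:
        node = root
        for item in sorted((i for i in t if i in rank), key=rank.get):
            if item in node.children:
                node = node.children[item]
                node.count += 1
            else:
                node.children[item] = child = Node(item, node)
                header[item].append(child)
                node = child
    return root, header

db = [set("facdgimp"), set("abcflmo"), set("bfhjow"),
      set("bcksp"), set("afcelpmn")]
root, header = build_fptree(db, 3, flist=list("fcabmp"))
print(root.children["f"].count)        # 4: the f:4 node below the root
print([n.count for n in header["p"]])  # [2, 1]: p sits on two branches
```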

Slide 30: Partition Patterns and Databases
- Frequent patterns can be partitioned into subsets according to the f-list
  - F-list = f-c-a-b-m-p
  - Patterns containing p
  - Patterns having m but no p
  - …
  - Patterns having c but none of a, b, m, p
  - Pattern f
- Completeness and non-redundancy

Slide 31: Find Patterns Having p From p's Conditional Database
- Starting at the frequent-item header table in the FP-tree
- Traverse the FP-tree by following the node-links of each frequent item p
- Accumulate all of the transformed prefix paths of item p to form p's conditional pattern base (extraction sketched below)

Conditional pattern bases:
item | conditional pattern base
c    | f:3
a    | fc:3
b    | fca:1, f:1, c:1
m    | fca:2, fcab:1
p    | fcam:2, cb:1
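A sketch of the prefix-path walk, reusing Node and the header table from the construction snippet above (the function name is mine):

```python
def conditional_pattern_base(item, header):
    """For each node on item's node-link, climb the parent pointers to
    collect its prefix path, weighted by that node's count."""
    base = []
    for node in header[item]:
        path, p = [], node.parent
        while p is not None and p.item is not None:  # stop at the root
            path.append(p.item)
            p = p.parent
        if path:
            base.append((list(reversed(path)), node.count))
    return base

print(conditional_pattern_base("m", header))
# [(['f', 'c', 'a'], 2), (['f', 'c', 'a', 'b'], 1)]  ->  fca:2, fcab:1
```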

Slide 32: From Conditional Pattern Bases to Conditional FP-trees
- For each pattern base:
  - Accumulate the count for each item in the base
  - Construct the FP-tree for the frequent items of the pattern base
- m's conditional pattern base: fca:2, fcab:1
- (Figure: the global FP-tree: {} → f:4 → c:3 → a:3 with branches m:2 → p:2 and b:1 → m:1; f:4 → b:1; and {} → c:1 → b:1 → p:1. Header table: f 4, c 4, a 3, b 3, m 3, p 3. The m-conditional FP-tree is {} → f:3 → c:3 → a:3.)
- All frequent patterns relating to m: m, fm, cm, am, fcm, fam, cam, fcam

Slide 33: Recursion: Mining Each Conditional FP-tree
- Conditional pattern base of "am": (fc:3)
- Conditional pattern base of "cm": (f:3); cm-conditional FP-tree: {} → f:3
- Conditional pattern base of "cam": (f:3); cam-conditional FP-tree: {} → f:3

Slide 34: A Special Case: Single Prefix Path in FP-tree
- Suppose a (conditional) FP-tree T has a shared single prefix path P
- Mining can be decomposed into two parts:
  - Reduction of the single prefix path into one node
  - Concatenation of the mining results of the two parts

Slide 35: Benefits of the FP-tree Structure
- Completeness:
  - Preserves complete information for frequent pattern mining
  - Never breaks a long pattern of any transaction
- Compactness:
  - Reduces irrelevant information: infrequent items are gone
  - Items in frequency-descending order: the more frequently occurring, the more likely to be shared
  - Never larger than the original database (not counting node-links and the count fields)

Slide 36: The Frequent Pattern Growth Mining Method
- Idea: frequent pattern growth; recursively grow frequent patterns by pattern and database partition
- Method (a recursive sketch follows):
  - For each frequent item, construct its conditional pattern base, and then its conditional FP-tree
  - Repeat the process on each newly created conditional FP-tree
  - Until the resulting FP-tree is empty, or it contains only one path; a single path generates all the combinations of its sub-paths, each of which is a frequent pattern
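Putting the pieces together: a compact recursive FP-growth sketch that reuses build_fptree and conditional_pattern_base from the previous snippets. For brevity each weighted prefix path is expanded into count copies (a real implementation carries the counts through), and patterns are keyed by frozenset:

```python
def fpgrowth(db, min_sup, suffix=frozenset()):
    """Return {pattern: support} for all frequent patterns in db."""
    root, header = build_fptree(db, min_sup)
    patterns = {}
    for item, links in header.items():
        support = sum(n.count for n in links)
        pattern = suffix | {item}
        patterns[pattern] = support
        # Project: the conditional DB holds only items above `item`
        # in f-list order, so the recursion always shrinks.
        cond_db = [set(path)
                   for path, cnt in conditional_pattern_base(item, header)
                   for _ in range(cnt)]
        if cond_db:
            patterns.update(fpgrowth(cond_db, min_sup, pattern))
    return patterns

result = fpgrowth(db, 3)
print(result[frozenset("fcam")])  # 3: fcam is frequent, as on Slide 32
print(len(result))                # 18 frequent patterns in total
```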

Slide 37: Scaling FP-growth by Database Projection
- What if the FP-tree cannot fit in memory? → DB projection
- First partition the database into a set of projected DBs
- Then construct and mine an FP-tree for each projected DB
- Parallel projection vs. partition projection techniques:
  - Parallel projection: project the DB in parallel for each frequent item; space-costly, but all the partitions can be processed in parallel
  - Partition projection: partition the DB based on the ordered frequent items, passing the unprocessed parts on to subsequent partitions

Slide 38: Partition-Based Projection
- Parallel projection needs a lot of disk space
- Partition projection saves it

Slide 39: FP-Growth vs. Apriori: Scalability With the Support Threshold
(Figure: run time vs. support threshold on data set T25I20D10K)

Slide 40: FP-Growth vs. Tree-Projection: Scalability with the Support Threshold
(Figure: run time vs. support threshold on data set T25I20D100K)

Slide 41: Advantages of the Pattern Growth Approach
- Divide-and-conquer: decompose both the mining task and the DB according to the frequent patterns obtained so far; leads to focused searches of smaller databases
- Other factors:
  - No candidate generation, no candidate test
  - Compressed database: the FP-tree structure
  - No repeated scan of the entire database
  - Basic operations: counting local frequent items and building sub-FP-trees; no pattern search and matching
- A good open-source implementation and refinement of FPGrowth: FPGrowth+ (Grahne and J. Zhu, FIMI'03)

Slide 42: Further Improvements of Mining Methods
- AFOPT (Liu et al. @ KDD'03): a "push-right" method for mining condensed frequent pattern (CFP) trees
- CARPENTER (Pan et al. @ KDD'03): mines data sets with few rows but numerous columns; constructs a row-enumeration tree for efficient mining
- FPgrowth+ (Grahne and Zhu, FIMI'03): Efficiently Using Prefix-Trees in Mining Frequent Itemsets, Proc. ICDM'03 Int. Workshop on Frequent Itemset Mining Implementations (FIMI'03), Melbourne, FL, Nov. 2003
- TD-Close (Liu et al., SDM'06)

Slide 43: Extension of Pattern Growth Mining Methodology
- Mining closed frequent itemsets and max-patterns: CLOSET (DMKD'00), FPclose, and FPMax (Grahne & Zhu, FIMI'03)
- Mining sequential patterns: PrefixSpan (ICDE'01), CloSpan (SDM'03), BIDE (ICDE'04)
- Mining graph patterns: gSpan (ICDM'02), CloseGraph (KDD'03)
- Constraint-based mining of frequent patterns: convertible constraints (ICDE'01), gPrune (PAKDD'03)
- Computing iceberg data cubes with complex measures: H-tree, H-cubing, and Star-cubing (SIGMOD'01, VLDB'03)
- Pattern-growth-based clustering: MaPle (Pei et al., ICDM'03)
- Pattern-growth-based classification: mining frequent and discriminative patterns (Cheng et al., ICDE'07)

Slide 44: Scalable Frequent Itemset Mining Methods
- Apriori: A Candidate Generation-and-Test Approach
- Improving the Efficiency of Apriori
- FPGrowth: A Frequent Pattern-Growth Approach
- ECLAT: Frequent Pattern Mining with Vertical Data Format
- Mining Closed Frequent Patterns and Max-Patterns

Slide 45: ECLAT: Mining by Exploring Vertical Data Format
- Vertical format: t(AB) = {T11, T25, …}
  - tid-list: the list of transaction ids containing an itemset
- Deriving frequent patterns based on vertical intersections (see the sketch below):
  - t(X) = t(Y): X and Y always happen together
  - t(X) ⊆ t(Y): a transaction having X always has Y
- Using diffsets to accelerate mining:
  - Only keep track of the differences of tid-lists
  - t(X) = {T1, T2, T3}, t(XY) = {T1, T3} → Diffset(XY, X) = {T2}
- Eclat (Zaki et al. @ KDD'97)
- Mining closed patterns using the vertical format: CHARM (Zaki & Hsiao @ SDM'02)
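A minimal vertical-format sketch (the names are mine), using the beer/diaper table from Slide 7: supports come from tid-list intersections, and a diffset records what an extension loses rather than what it keeps:

```python
def tidlists(db):
    """Map each item to the set of transaction ids containing it."""
    t = {}
    for tid, trans in enumerate(db):
        for item in trans:
            t.setdefault(item, set()).add(tid)
    return t

db = [{"Beer", "Nuts", "Diaper"}, {"Beer", "Coffee", "Diaper"},
      {"Beer", "Diaper", "Eggs"}, {"Nuts", "Eggs", "Milk"},
      {"Nuts", "Coffee", "Diaper", "Eggs", "Milk"}]
t = tidlists(db)

t_beer_diaper = t["Beer"] & t["Diaper"]  # tid-list of the 2-itemset
print(len(t_beer_diaper))                # 3: support of {Beer, Diaper}
print(t["Beer"] - t_beer_diaper)         # diffset({Beer, Diaper}, {Beer})
# set(): every Beer transaction also has Diaper, i.e., t(Beer) ⊆ t(Diaper)
```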

Slide 46: Scalable Frequent Itemset Mining Methods
- Apriori: A Candidate Generation-and-Test Approach
- Improving the Efficiency of Apriori
- FPGrowth: A Frequent Pattern-Growth Approach
- ECLAT: Frequent Pattern Mining with Vertical Data Format
- Mining Closed Frequent Patterns and Max-Patterns

Slide 47: Mining Frequent Closed Patterns: CLOSET
- F-list: the list of all frequent items in support-ascending order
  - F-list = d-a-f-e-c (min_sup = 2)
- Divide the search space:
  - Patterns having d
  - Patterns having a but no d, etc.
- Find frequent closed patterns recursively:
  - Every transaction having d also has cfa → cfad is a frequent closed pattern
- J. Pei, J. Han & R. Mao. "CLOSET: An Efficient Algorithm for Mining Frequent Closed Itemsets", DMKD'00

Slide 48: CLOSET+: Mining Closed Itemsets by Pattern-Growth
- Itemset merging: if Y appears in every occurrence of X, then Y is merged with X
- Sub-itemset pruning: if Y ⊃ X and sup(X) = sup(Y), then X and all of X's descendants in the set-enumeration tree can be pruned
- Hybrid tree projection:
  - Bottom-up physical tree projection
  - Top-down pseudo tree projection
- Item skipping: if a local frequent item has the same support in several header tables at different levels, it can be pruned from the header tables at the higher levels
- Efficient subset checking

Slide 49: MaxMiner: Mining Max-Patterns
- 1st scan: find the frequent items: A, B, C, D, E
- 2nd scan: find the supports of the potential max-patterns: AB, AC, AD, AE, ABCDE; BC, BD, BE, BCDE; CD, CE, CDE; DE
- Since BCDE is a max-pattern, there is no need to check BCD, BDE, CDE in later scans
- R. Bayardo. Efficiently mining long patterns from databases. SIGMOD'98

Slide 50: CHARM: Mining by Exploring Vertical Data Format
- Vertical format: t(AB) = {T11, T25, …}
  - tid-list: the list of transaction ids containing an itemset
- Deriving closed patterns based on vertical intersections:
  - t(X) = t(Y): X and Y always happen together
  - t(X) ⊆ t(Y): a transaction having X always has Y
- Using diffsets to accelerate mining:
  - Only keep track of the differences of tid-lists
  - t(X) = {T1, T2, T3}, t(XY) = {T1, T3} → Diffset(XY, X) = {T2}
- Eclat/MaxEclat (Zaki et al. @ KDD'97), VIPER (P. Shenoy et al. @ SIGMOD'00), CHARM (Zaki & Hsiao @ SDM'02)

Slide 51: Visualization of Association Rules: Plane Graph

Slide 52: Visualization of Association Rules: Rule Graph

Slide 53: Visualization of Association Rules (SGI/MineSet 3.0)

Slide 54: Chapter 5: Mining Frequent Patterns, Association and Correlations: Basic Concepts and Methods
- Basic Concepts
- Frequent Itemset Mining Methods
- Which Patterns Are Interesting? — Pattern Evaluation Methods
- Summary

Slide 55: Interestingness Measure: Correlations (Lift)
- play basketball ⇒ eat cereal [40%, 66.7%] is misleading: the overall share of students eating cereal is 75%, higher than the rule's 66.7% confidence
- play basketball ⇒ not eat cereal [20%, 33.3%] is more accurate, although with lower support and confidence
- Measure of dependent/correlated events: lift (formula below)
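The lift formula itself did not survive extraction; this is the standard definition, applied to the percentages stated on the slide (following the book's notation, P(A ∪ B) is the probability that a transaction contains both A and B):

```latex
\mathrm{lift}(A,B) \;=\; \frac{P(A \cup B)}{P(A)\,P(B)}
             \;=\; \frac{\mathrm{conf}(A \Rightarrow B)}{\mathrm{sup}(B)}
% basketball => cereal:     lift = 0.667 / 0.75 = 0.89 < 1  (negative correlation)
% basketball => not cereal: lift = 0.333 / 0.25 = 1.33 > 1  (positive correlation)
```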

Slide 56: Are Lift and χ² Good Measures of Correlation?
- "Buy walnuts ⇒ buy milk [1%, 80%]" is misleading if 85% of customers buy milk
- Support and confidence are not good indicators of correlation
- Over 20 interestingness measures have been proposed (see Tan, Kumar, Srivastava @ KDD'02)
- Which are the good ones?

Slide 57: Null-Invariant Measures
(Table: the five null-invariant measures and their definitions: all-confidence, coherence (Jaccard), cosine, Kulczynski, and max-confidence.)

Slide 58: Comparison of Interestingness Measures
- Null-(transaction) invariance is crucial for correlation analysis
- Lift and χ² are not null-invariant
- 5 null-invariant measures
- (Table: data sets D1-D6 with counts of transactions containing m and c, only one of them, or neither; the last column counts the null transactions w.r.t. m and c. The null-invariant measures agree on some data sets but subtly disagree on others.)
- Kulczynski measure (1927)
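The measure definitions did not survive extraction; for reference, two of the five, as defined in the textbook (both depend only on the supports of A, B, and AB, never on the null transactions):

```latex
\mathrm{Kulc}(A,B) \;=\; \tfrac{1}{2}\bigl(P(A \mid B) + P(B \mid A)\bigr)
\qquad
\mathrm{cosine}(A,B) \;=\; \frac{P(A \cup B)}{\sqrt{P(A)\,P(B)}}
```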

Slide 59: Analysis of DBLP Coauthor Relationships
- Advisor-advisee relations: Kulc is high, coherence is low, cosine is in the middle
- Recent DB conferences; removing balanced associations, low support, etc.
- Tianyi Wu, Yuguo Chen, and Jiawei Han, "Association Mining in Large Databases: A Re-Examination of Its Measures", Proc. 2007 Int. Conf. Principles and Practice of Knowledge Discovery in Databases (PKDD'07), Sept. 2007

Slide 60: Which Null-Invariant Measure Is Better?
- IR (Imbalance Ratio): measures the imbalance of the two itemsets A and B in rule implications (formula below)
- Kulczynski and the Imbalance Ratio (IR) together present a clear picture for all three data sets D4 through D6:
  - D4 is balanced and neutral
  - D5 is imbalanced and neutral
  - D6 is very imbalanced and neutral
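The IR formula did not survive extraction; as defined in the textbook:

```latex
\mathrm{IR}(A,B) \;=\; \frac{\lvert \mathrm{sup}(A) - \mathrm{sup}(B) \rvert}
                            {\mathrm{sup}(A) + \mathrm{sup}(B) - \mathrm{sup}(A \cup B)}
% 0 for perfectly balanced rule implications; it approaches 1 as the
% two directions of the rule become more imbalanced.
```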

Slide 61: Chapter 5: Mining Frequent Patterns, Association and Correlations: Basic Concepts and Methods
- Basic Concepts
- Frequent Itemset Mining Methods
- Which Patterns Are Interesting? — Pattern Evaluation Methods
- Summary

Slide 62: Summary
- Basic concepts: association rules, the support-confidence framework, closed and max-patterns
- Scalable frequent pattern mining methods:
  - Apriori (candidate generation & test)
  - Projection-based (FPgrowth, CLOSET+, ...)
  - Vertical format approach (ECLAT, CHARM, ...)
- Which patterns are interesting? Pattern evaluation methods

Slide 63: Ref: Basic Concepts of Frequent Pattern Mining
- (Association rules) R. Agrawal, T. Imielinski, and A. Swami. Mining association rules between sets of items in large databases. SIGMOD'93.
- (Max-pattern) R. J. Bayardo. Efficiently mining long patterns from databases. SIGMOD'98.
- (Closed-pattern) N. Pasquier, Y. Bastide, R. Taouil, and L. Lakhal. Discovering frequent closed itemsets for association rules. ICDT'99.
- (Sequential pattern) R. Agrawal and R. Srikant. Mining sequential patterns. ICDE'95.

Slide 64: Ref: Apriori and Its Improvements
- R. Agrawal and R. Srikant. Fast algorithms for mining association rules. VLDB'94.
- H. Mannila, H. Toivonen, and A. I. Verkamo. Efficient algorithms for discovering association rules. KDD'94.
- A. Savasere, E. Omiecinski, and S. Navathe. An efficient algorithm for mining association rules in large databases. VLDB'95.
- J. S. Park, M. S. Chen, and P. S. Yu. An effective hash-based algorithm for mining association rules. SIGMOD'95.
- H. Toivonen. Sampling large databases for association rules. VLDB'96.
- S. Brin, R. Motwani, J. D. Ullman, and S. Tsur. Dynamic itemset counting and implication rules for market basket analysis. SIGMOD'97.
- S. Sarawagi, S. Thomas, and R. Agrawal. Integrating association rule mining with relational database systems: Alternatives and implications. SIGMOD'98.

Slide 65: Ref: Depth-First, Projection-Based FP Mining
- R. Agarwal, C. Aggarwal, and V. V. V. Prasad. A tree projection algorithm for generation of frequent itemsets. J. Parallel and Distributed Computing:02.
- J. Han, J. Pei, and Y. Yin. Mining frequent patterns without candidate generation. SIGMOD'00.
- J. Liu, Y. Pan, K. Wang, and J. Han. Mining Frequent Item Sets by Opportunistic Projection. KDD'02.
- J. Han, J. Wang, Y. Lu, and P. Tzvetkov. Mining Top-K Frequent Closed Patterns without Minimum Support. ICDM'02.
- J. Wang, J. Han, and J. Pei. CLOSET+: Searching for the Best Strategies for Mining Frequent Closed Itemsets. KDD'03.
- G. Liu, H. Lu, W. Lou, and J. X. Yu. On Computing, Storing and Querying Frequent Patterns. KDD'03.
- G. Grahne and J. Zhu. Efficiently Using Prefix-Trees in Mining Frequent Itemsets. Proc. ICDM'03 Int. Workshop on Frequent Itemset Mining Implementations (FIMI'03), Melbourne, FL, Nov. 2003.

Slide 66: Ref: Vertical Format and Row Enumeration Methods
- M. J. Zaki, S. Parthasarathy, M. Ogihara, and W. Li. Parallel algorithm for discovery of association rules. DAMI:97.
- M. J. Zaki and C.-J. Hsiao. CHARM: An Efficient Algorithm for Closed Itemset Mining. SDM'02.
- C. Bucila, J. Gehrke, D. Kifer, and W. White. DualMiner: A Dual-Pruning Algorithm for Itemsets with Constraints. KDD'02.
- F. Pan, G. Cong, A. K. H. Tung, J. Yang, and M. Zaki. CARPENTER: Finding Closed Patterns in Long Biological Datasets. KDD'03.
- H. Liu, J. Han, D. Xin, and Z. Shao. Mining Interesting Patterns from Very High Dimensional Data: A Top-Down Row Enumeration Approach. SDM'06.

Slide 67: Ref: Mining Correlations and Interesting Rules
- M. Klemettinen, H. Mannila, P. Ronkainen, H. Toivonen, and A. I. Verkamo. Finding interesting rules from large sets of discovered association rules. CIKM'94.
- S. Brin, R. Motwani, and C. Silverstein. Beyond market basket: Generalizing association rules to correlations. SIGMOD'97.
- C. Silverstein, S. Brin, R. Motwani, and J. Ullman. Scalable techniques for mining causal structures. VLDB'98.
- P.-N. Tan, V. Kumar, and J. Srivastava. Selecting the Right Interestingness Measure for Association Patterns. KDD'02.
- E. Omiecinski. Alternative Interest Measures for Mining Associations. TKDE'03.
- T. Wu, Y. Chen, and J. Han. "Association Mining in Large Databases: A Re-Examination of Its Measures". PKDD'07.

Slide 68: Ref: Frequent Pattern Mining Applications
- Y. Huhtala, J. Kärkkäinen, P. Porkka, and H. Toivonen. Efficient Discovery of Functional and Approximate Dependencies Using Partitions. ICDE'98.
- H. V. Jagadish, J. Madar, and R. Ng. Semantic Compression and Pattern Extraction with Fascicles. VLDB'99.
- T. Dasu, T. Johnson, S. Muthukrishnan, and V. Shkapenyuk. Mining Database Structure; or How to Build a Data Quality Browser. SIGMOD'02.
- K. Wang, S. Zhou, and J. Han. Profit Mining: From Patterns to Actions. EDBT'02.


Slide 70: Chapter 5: Mining Frequent Patterns, Association and Correlations: Basic Concepts and Methods (Detailed Outline)
- Basic Concepts
  - Market Basket Analysis: A Motivating Example
  - Frequent Itemsets and Association Rules
- Efficient and Scalable Frequent Itemset Mining Methods
  - The Apriori Algorithm: Finding Frequent Itemsets Using Candidate Generation
  - Generating Association Rules from Frequent Itemsets
  - Improving the Efficiency of Apriori
  - Mining Frequent Itemsets without Candidate Generation
  - Mining Frequent Itemsets Using Vertical Data Format
- Are All the Patterns Interesting? — Pattern Evaluation Methods
  - Strong Rules Are Not Necessarily Interesting
  - From Association Analysis to Correlation Analysis
  - Selection of Good Measures for Pattern Evaluation
- Applications of Frequent Patterns and Associations
  - Web log mining
  - Collaborative Filtering
  - Bioinformatics
- Summary