[SPARK-19791] [ML] Add doc and example for fpgrowth #17130
@@ -0,0 +1,68 @@
---
layout: global
title: Frequent Pattern Mining
displayTitle: Frequent Pattern Mining
---

Mining frequent items, itemsets, subsequences, or other substructures is usually among the
first steps in analyzing a large-scale dataset, and has been an active research topic in
data mining for years.
We refer users to Wikipedia's [association rule learning](http://en.wikipedia.org/wiki/Association_rule_learning)
for more information.

**Table of Contents**

* This will become a table of contents (this text will be scraped).
{:toc}
## FP-Growth

The FP-growth algorithm is described in the paper
[Han et al., Mining frequent patterns without candidate generation](http://dx.doi.org/10.1145/335191.335372),
where "FP" stands for frequent pattern.
Given a dataset of transactions, the first step of FP-growth is to calculate item frequencies and identify frequent items.
Different from [Apriori-like](http://en.wikipedia.org/wiki/Apriori_algorithm) algorithms designed for the same purpose,
the second step of FP-growth uses a suffix tree (FP-tree) structure to encode transactions without explicitly generating
candidate sets, which are usually expensive to generate.
After the second step, the frequent itemsets can be extracted from the FP-tree.
In `spark.mllib`, we implemented a parallel version of FP-growth called PFP,
as described in [Li et al., PFP: Parallel FP-growth for query recommendation](http://dx.doi.org/10.1145/1454008.1454027).
PFP distributes the work of growing FP-trees based on the suffixes of transactions,
and hence is more scalable than a single-machine implementation.
We refer users to the papers for more details.
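To give a flavor of that work distribution, here is a rough, self-contained sketch (plain Scala, no Spark required) of the grouping step: each transaction is re-keyed by its frequency-sorted items so that all of the data needed to grow one item's part of the FP-tree can be sent to the same worker. This is only an illustration of the idea; PFP and `spark.mllib` actually key by item group or partition rather than by individual item, and none of the names below are Spark APIs.

```scala
object PFPShardingSketch {
  def main(args: Array[String]): Unit = {
    // The three transactions used in the examples further down this page.
    val transactions = Seq(Seq("1", "2", "5"), Seq("1", "2", "3", "5"), Seq("1", "2"))

    // Keep only frequent items (minSupport = 0.5 over 3 transactions => at least 2 occurrences).
    val minCount = math.ceil(0.5 * transactions.size).toInt
    val freq = transactions.flatMap(_.distinct)
      .groupBy(identity)
      .map { case (item, occurrences) => item -> occurrences.size }
      .filter { case (_, count) => count >= minCount }

    // Sort each transaction by descending item frequency and emit one conditional
    // (suffix-keyed) transaction per item; the key decides which worker grows the
    // FP-tree fragment for that item.
    val shards = transactions.flatMap { t =>
      val sorted = t.filter(freq.contains).sortBy(item => (-freq(item), item))
      sorted.indices.map(i => sorted(i) -> sorted.take(i + 1))
    }.groupBy { case (item, _) => item }
      .map { case (item, pairs) => item -> pairs.map(_._2) }

    shards.toSeq.sortBy(_._1).foreach { case (item, conditional) =>
      println(s"item $item -> conditional transactions $conditional")
    }
  }
}
```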
`spark.ml`'s FP-growth implementation takes the following (hyper-)parameters:

* `minSupport`: the minimum support for an itemset to be identified as frequent.
  For example, if an item appears in 3 out of 5 transactions, it has a support of 3/5 = 0.6 (see the small sketch after this list).
* `minConfidence`: the minimum confidence for generating association rules. This parameter has no effect during `fit`; it only
  specifies the minimum confidence used when generating association rules from the frequent itemsets.
* `numPartitions`: the number of partitions used to distribute the work.
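To make the support computation concrete, here is a minimal, self-contained sketch (plain Scala, no Spark required) that counts itemset support over the three transactions used in the examples below. The object and helper names are illustrative only.

```scala
object SupportSketch {
  def main(args: Array[String]): Unit = {
    // The same three transactions as in the FPGrowth examples below.
    val transactions = Seq(
      Set("1", "2", "5"),
      Set("1", "2", "3", "5"),
      Set("1", "2"))

    // Support of an itemset = fraction of transactions containing every item of the itemset.
    def support(itemset: Set[String]): Double =
      transactions.count(t => itemset.subsetOf(t)).toDouble / transactions.size

    println(support(Set("1", "2")))      // 3/3 = 1.0  -> frequent at minSupport = 0.5
    println(support(Set("1", "2", "5"))) // 2/3 ~ 0.67 -> frequent
    println(support(Set("3")))           // 1/3 ~ 0.33 -> below minSupport, not reported
  }
}
```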
The `FPGrowthModel` provides:

* `freqItemsets`: the frequent itemsets, in the format of DataFrame("items"[Seq], "freq"[Long]).
* `associationRules`: the association rules generated with confidence above `minConfidence`, in the format of
  DataFrame("antecedent"[Seq], "consequent"[Seq], "confidence"[Double]).
* `transform`: The transform method examines the input items against all the association rules and
  summarizes the consequents as the prediction. The prediction column has the same data type as the
  input column and does not contain items already present in the input column (see the sketch below).
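As an illustration of that prediction logic, the following self-contained sketch (plain Scala, no Spark required) applies a few of the rules that mining the example transactions at `minConfidence = 0.6` would yield. The hard-coded rules and all names are illustrative, not the model's actual internals.

```scala
object TransformSketch {
  def main(args: Array[String]): Unit = {
    // A few (antecedent, consequent, confidence) triples, mirroring rows of `associationRules`.
    val rules = Seq(
      (Seq("1", "2"), Seq("5"), 2.0 / 3.0),  // {1, 2} appears 3 times, {1, 2, 5} twice
      (Seq("5"), Seq("1"), 1.0),
      (Seq("5"), Seq("2"), 1.0))

    // Collect the consequents of every rule whose antecedent is contained in the input items,
    // then drop items that are already present, as described for `transform` above.
    def predict(items: Seq[String]): Seq[String] = {
      val itemset = items.toSet
      rules.collect {
        case (antecedent, consequent, _) if antecedent.forall(itemset.contains) =>
          consequent.filterNot(itemset.contains)
      }.flatten.distinct
    }

    println(predict(Seq("1", "2")))      // List(5): the rule {1, 2} => {5} fires and 5 is new
    println(predict(Seq("1", "2", "5"))) // List():  every matching consequent is already present
  }
}
```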
**Examples**

<div class="codetabs">

<div data-lang="scala" markdown="1">
Refer to the [Scala API docs](api/scala/index.html#org.apache.spark.ml.fpm.FPGrowth) for more details.

{% include_example scala/org/apache/spark/examples/ml/FPGrowthExample.scala %}
</div>

<div data-lang="java" markdown="1">
Refer to the [Java API docs](api/java/org/apache/spark/ml/fpm/FPGrowth.html) for more details.

{% include_example java/org/apache/spark/examples/ml/JavaFPGrowthExample.java %}
</div>

</div>
@@ -0,0 +1,73 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.spark.examples.ml;

// $example on$
import java.util.Arrays;
import java.util.List;

import org.apache.spark.ml.fpm.FPGrowth;
import org.apache.spark.ml.fpm.FPGrowthModel;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.*;
// $example off$
public class JavaFPGrowthExample {
  public static void main(String[] args) {
    SparkSession spark = SparkSession
      .builder()
      .appName("JavaFPGrowthExample")
      .getOrCreate();

    // $example on$
    List<Row> data = Arrays.asList(
      RowFactory.create(Arrays.asList("1 2 5".split(" "))),
      RowFactory.create(Arrays.asList("1 2 3 5".split(" "))),
      RowFactory.create(Arrays.asList("1 2".split(" ")))
    );
    StructType schema = new StructType(new StructField[]{ new StructField(
      "features", new ArrayType(DataTypes.StringType, true), false, Metadata.empty())
    });
    Dataset<Row> itemsDF = spark.createDataFrame(data, schema);

    // Train an FP-growth model on the transactions.
    FPGrowth fpgrowth = new FPGrowth()
      .setMinSupport(0.5)
      .setMinConfidence(0.6);

    FPGrowthModel model = fpgrowth.fit(itemsDF);

    // Display frequent itemsets.
    model.freqItemsets().show();

    // Display generated association rules.
    model.associationRules().show();

    // transform examines the input items against all the association rules and summarizes the
    // consequents as prediction.
    Dataset<Row> result = model.transform(itemsDF);

    result.show();
    // $example off$

    spark.stop();
  }
}
@@ -0,0 +1,71 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.spark.examples.ml

// scalastyle:off println

// $example on$
import org.apache.spark.ml.fpm.FPGrowth
// $example off$
import org.apache.spark.sql.SparkSession
/**
 * An example demonstrating FP-Growth.
 * Run with
 * {{{
 * bin/run-example ml.FPGrowthExample
 * }}}
 */
object FPGrowthExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder
      .appName(s"${this.getClass.getSimpleName}")
      .getOrCreate()
    import spark.implicits._

    // $example on$
    // Loads data.
    val dataset = spark.createDataset(Seq(
      "1 2 5",
      "1 2 3 5",
      "1 2")
    ).map(t => t.split(" ")).toDF("features")

    // Trains an FP-growth model.
    val fpgrowth = new FPGrowth().setMinSupport(0.5).setMinConfidence(0.6)
    val model = fpgrowth.fit(dataset)

    // Display frequent itemsets.
    model.freqItemsets.show()

    // Display generated association rules.
    model.associationRules.show()

    // transform examines the input items against all the association rules and summarizes the
    // consequents as prediction.
    model.transform(dataset).show()
    // $example off$

    spark.stop()
  }
}
// scalastyle:on println
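As a usage note, the fitted model is an ordinary Spark `Transformer`, so it can also score transactions it was not trained on. Below is a small continuation of the Scala example above; it reuses the `spark` session, the `spark.implicits._` import, and the `model` value already in scope, and the new value name is illustrative only.

```scala
    // Score a transaction that was not part of the training data; the prediction column will
    // contain the consequents of matching rules, excluding items already in the transaction.
    val newTransactions = spark.createDataset(Seq("1 3")).map(t => t.split(" ")).toDF("features")
    model.transform(newTransactions).show()
```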
@@ -56,8 +56,8 @@ private[fpm] trait FPGrowthParams extends Params with HasFeaturesCol with HasPre | |
   def getMinSupport: Double = $(minSupport)
 
   /**
-   * Number of partitions (>=1) used by parallel FP-growth. By default the param is not set, and
-   * partition number of the input dataset is used.
+   * Number of partitions (positive) used by parallel FP-growth. By default the param is not set,
+   * and partition number of the input dataset is used.
    * @group expertParam
    */
   @Since("2.2.0")
@@ -240,12 +240,13 @@ class FPGrowthModel private[ml] (
       val predictUDF = udf((items: Seq[_]) => {
         if (items != null) {
           val itemset = items.toSet
-          brRules.value.flatMap(rule =>
-            if (items != null && rule._1.forall(item => itemset.contains(item))) {
+          brRules.value.flatMap { rule =>
+            if (rule._1.forall(item => itemset.contains(item))) {
               rule._2.filter(item => !itemset.contains(item))
             } else {
               Seq.empty
-            })
+            }
+          }
         } else {
           Seq.empty
         }.distinct }, dt)