
Data Mining

◼ Association Rules & Sequential Patterns

PART 4

Prof. Ahmed Sultan Al-Hegami


Professor of AI & Intelligent Systems
Sana’a University
Road map

◼ Basic concepts of Association Rules


◼ Apriori algorithm
◼ Different data formats for mining
◼ Mining with multiple minimum supports
◼ Mining class association rules
◼ Sequential pattern mining
◼ Summary



Association rule mining
◼ Proposed by Agrawal et al. in 1993.
◼ It is an important data mining model studied
extensively by the database and data mining
community.
◼ Assume all data are categorical.
◼ No good algorithm for numeric data.
◼ Initially used for Market Basket Analysis to find
how items purchased by customers are related.

Bread → Milk [sup = 5%, conf = 100%]



Market Analysis

◼ Where does the data come from?


❑ Credit card transactions, loyalty cards, discount coupons, customer complaint calls, plus
(public) lifestyle studies
◼ Target marketing
❑ Find clusters of “model” customers who share the same characteristics: interest, income
level, spending habits, etc.
❑ Determine customer purchasing patterns over time
◼ Cross-market analysis
❑ Associations/co-relations between product sales, & prediction based on such association
◼ Customer profiling
❑ What types of customers buy what products (clustering or classification)
◼ Customer requirement analysis
❑ Identifying the best products for different customers
❑ Predicting what factors will attract new customers



Market Basket Example

? Where should detergents be placed in the store to
maximize their sales?

? Are window cleaning products purchased when
detergents and orange juice are bought together?

? Is soda typically purchased with bananas? Does the
brand of soda make a difference?

? How are the demographics of the neighborhood
affecting what customers are buying?


Association Rules

◼ There has been a considerable amount of research in the area of Market
Basket Analysis. Its appeal comes from the clarity and utility of its results,
which are expressed in the form of association rules.
◼ Given
❑ A database of transactions
❑ Each transaction contains a set of items

◼ Find all rules X → Y that correlate the presence of one set of items X with
another set of items Y
❑ Example: When a customer buys bread and butter, they buy milk 85% of the time

The model: data

◼ I = {i1, i2, …, im}: a set of items.


◼ Transaction t:
❑ t is a set of items, and t ⊆ I.

◼ Transaction database T: a set of transactions
T = {t1, t2, …, tn}.


Transaction data: supermarket data
◼ Market basket transactions:
t1: {bread, cheese, milk}
t2: {apple, eggs, salt, yogurt}
… …
tn: {biscuit, eggs, milk}
◼ Concepts:
❑ An item: an item/article in a basket
❑ I: the set of all items sold in the store
❑ A transaction: items purchased in a basket; it may
have TID (transaction ID)
❑ A transactional dataset: A set of transactions
Transaction data: a set of documents
◼ A text document data set. Each document
is treated as a “bag” of keywords
doc1: Student, Teach, School
doc2: Student, School
doc3: Teach, School, City, Game
doc4: Baseball, Basketball
doc5: Basketball, Player, Spectator
doc6: Baseball, Coach, Game, Team
doc7: Basketball, Team, City, Game



The model: rules
◼ A transaction t contains X, a set of items
(itemset) in I, if X ⊆ t.
◼ An association rule is an implication of the
form:
X → Y, where X, Y ⊂ I, and X ∩ Y = ∅

◼ An itemset is a set of items.


❑ E.g., X = {milk, bread, cereal} is an itemset.
◼ A k-itemset is an itemset with k items.
❑ E.g., {milk, bread, cereal} is a 3-itemset



Rule strength measures
◼ Support: The rule holds with support sup in T
(the transaction data set) if sup% of
transactions contain X ∪ Y.
❑ sup = Pr(X ∪ Y).
◼ Confidence: The rule holds in T with
confidence conf if conf% of transactions that
contain X also contain Y.
❑ conf = Pr(Y | X).
◼ An association rule is a pattern that states that
when X occurs, Y occurs with a certain
probability.
Support and Confidence
◼ Support count: The support count of an
itemset X, denoted by X.count, in a data set
T is the number of transactions in T that
contain X. Assume T has n transactions.
◼ Then,
support = (X ∪ Y).count / n
confidence = (X ∪ Y).count / X.count
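A minimal Python sketch of these two formulas (illustrative only; the three transactions below are made up):

    def support_count(itemset, transactions):
        # number of transactions that contain every item of itemset
        return sum(1 for t in transactions if itemset <= t)

    def support(X, Y, transactions):
        # sup(X -> Y) = (X ∪ Y).count / n
        return support_count(X | Y, transactions) / len(transactions)

    def confidence(X, Y, transactions):
        # conf(X -> Y) = (X ∪ Y).count / X.count
        return support_count(X | Y, transactions) / support_count(X, transactions)

    T = [{"bread", "cheese", "milk"},
         {"apple", "eggs", "salt", "yogurt"},
         {"biscuit", "eggs", "milk"}]
    print(support({"bread"}, {"milk"}, T))     # 0.33...: 1 of 3 baskets has both
    print(confidence({"bread"}, {"milk"}, T))  # 1.0: every bread basket has milk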
Goal and key features

◼ Goal: Find all rules that satisfy the user-specified
minimum support (minsup) and minimum
confidence (minconf).

◼ Key Features
❑ Completeness: find all rules.
❑ No target item(s) on the right-hand-side
❑ Mining with data on hard disk (not in memory)



An example

◼ Transaction data:
t1: Beef, Chicken, Milk
t2: Beef, Cheese
t3: Cheese, Boots
t4: Beef, Chicken, Cheese
t5: Beef, Chicken, Clothes, Cheese, Milk
t6: Chicken, Clothes, Milk
t7: Chicken, Milk, Clothes
◼ Assume:
minsup = 30%
minconf = 80%
◼ An example frequent itemset:
{Chicken, Clothes, Milk} [sup = 3/7]
◼ Association rules from the itemset:
Clothes → Milk, Chicken [sup = 3/7, conf = 3/3]
… …
Clothes, Chicken → Milk [sup = 3/7, conf = 3/3]


The Apriori Algorithm — Example

Database D (minsup = 50%, i.e. support count ≥ 2):
TID   Items
100   1, 3, 4
200   2, 3, 5
300   1, 2, 3, 5
400   2, 5

Scan D ➔ C1: {1}:2, {2}:3, {3}:3, {4}:1, {5}:3
➔ L1: {1}:2, {2}:3, {3}:3, {5}:3

C2: {1 2}, {1 3}, {1 5}, {2 3}, {2 5}, {3 5}
Scan D ➔ C2: {1 2}:1, {1 3}:2, {1 5}:1, {2 3}:2, {2 5}:3, {3 5}:2
➔ L2: {1 3}:2, {2 3}:2, {2 5}:3, {3 5}:2

C3: {2 3 5}
Scan D ➔ C3: {2 3 5}:2 ➔ L3: {2 3 5}:2
Transaction data representation

◼ A simplistic view of shopping baskets.
◼ Some important information is not considered, e.g.,
❑ the quantity of each item purchased, and
❑ the price paid.


Many mining algorithms
◼ There are a large number of them!!
◼ They use different strategies and data structures.
◼ Their resulting sets of rules are all the same.
❑ Given a transaction data set T, a minimum support and
a minimum confidence, the set of association rules existing in
T is uniquely determined.
◼ Any algorithm should find the same set of rules
although their computational efficiencies and
memory requirements may be different.
◼ We study only one: the Apriori Algorithm



Road map

◼ Basic concepts of Association Rules


◼ Apriori algorithm
◼ Different data formats for mining
◼ Mining with multiple minimum supports
◼ Mining class association rules
◼ Sequential pattern mining
◼ Summary



The Apriori algorithm
◼ The best-known algorithm
◼ Two steps:
❑ Find all itemsets that have minimum support
(frequent itemsets, also called large itemsets).
❑ Use frequent itemsets to generate rules.

◼ E.g., a frequent itemset


{Chicken, Clothes, Milk} [sup = 3/7]
and one rule from the frequent itemset
Clothes → Milk, Chicken [sup = 3/7, conf = 3/3]



Step 1: Mining all frequent itemsets
◼ A frequent itemset is an itemset whose support
is ≥ minsup.
◼ Key idea: The apriori property (downward
closure property): every subset of a frequent
itemset is also a frequent itemset
ABC ABD ACD BCD

AB AC AD BC BD CD

A B C D



The Algorithm
◼ Iterative algo. (also called level-wise search):
Find all 1-item frequent itemsets; then all 2-item
frequent itemsets, and so on.
❑ In each iteration k, only consider itemsets that
contain some k-1 frequent itemset.
◼ Find frequent itemsets of size 1: F1
◼ From k = 2
❑ Ck = candidates of size k: those itemsets of size k
that could be frequent, given Fk-1
❑ Fk = those itemsets that are actually frequent,
Fk ⊆ Ck (need to scan the database once).
Example – Finding frequent itemsets (minsup = 0.5)

Dataset T:
TID    Items
T100   1, 3, 4
T200   2, 3, 5
T300   1, 2, 3, 5
T400   2, 5

itemset:count
1. scan T ➔ C1: {1}:2, {2}:3, {3}:3, {4}:1, {5}:3
   ➔ F1: {1}:2, {2}:3, {3}:3, {5}:3
   ➔ C2: {1,2}, {1,3}, {1,5}, {2,3}, {2,5}, {3,5}
2. scan T ➔ C2: {1,2}:1, {1,3}:2, {1,5}:1, {2,3}:2, {2,5}:3, {3,5}:2
   ➔ F2: {1,3}:2, {2,3}:2, {2,5}:3, {3,5}:2
   ➔ C3: {2,3,5}
3. scan T ➔ C3: {2,3,5}:2 ➔ F3: {2,3,5}


Details: ordering of items

◼ The items in I are sorted in lexicographic
order (which is a total order).
◼ The order is used throughout the algorithm in
each itemset.
◼ {w[1], w[2], …, w[k]} represents a k-itemset w
consisting of items w[1], w[2], …, w[k], where
w[1] < w[2] < … < w[k] according to the total
order.



Details: the algorithm
Algorithm Apriori(T)
  C1 ← init-pass(T);
  F1 ← {f | f ∈ C1, f.count/n ≥ minsup};  // n: no. of transactions in T
  for (k = 2; Fk-1 ≠ ∅; k++) do
    Ck ← candidate-gen(Fk-1);
    for each transaction t ∈ T do
      for each candidate c ∈ Ck do
        if c is contained in t then
          c.count++;
      end
    end
    Fk ← {c ∈ Ck | c.count/n ≥ minsup}
  end
  return F ← ∪k Fk;
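For illustration, a compact Python sketch of this level-wise loop (not the course's code; the join is simplified to a pairwise union, with the prune step still enforcing downward closure):

    from itertools import combinations

    def frequent_itemsets(T, minsup):
        # T: list of transactions (sets of items); minsup: a fraction, e.g. 0.5
        n = len(T)

        def count(c):
            # support count of candidate c: one scan over T (illustrative only)
            return sum(1 for t in T if c <= t)

        items = {i for t in T for i in t}
        Fk = {frozenset([i]) for i in items if count(frozenset([i])) / n >= minsup}
        F, k = set(Fk), 2
        while Fk:
            # join: pairwise unions of frequent (k-1)-itemsets that have size k
            Ck = {a | b for a in Fk for b in Fk if len(a | b) == k}
            # prune: every (k-1)-subset of a surviving candidate must be frequent
            Ck = {c for c in Ck
                  if all(frozenset(s) in Fk for s in combinations(c, k - 1))}
            Fk = {c for c in Ck if count(c) / n >= minsup}  # scan to count supports
            F |= Fk
            k += 1
        return F

    T = [{1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5}]
    print(frequent_itemsets(T, 0.5))  # includes frozenset({2, 3, 5}), as in the example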


Apriori candidate generation
◼ The candidate-gen function takes Fk-1 and
returns a superset (called the candidates)
of the set of all frequent k-itemsets. It has
two steps
❑ join step: Generate all possible candidate
itemsets Ck of length k
❑ prune step: Remove those candidates in Ck
that cannot be frequent.



Candidate-gen function
Function candidate-gen(Fk-1)
  Ck ← ∅;
  forall f1, f2 ∈ Fk-1
      with f1 = {i1, …, ik-2, ik-1}
      and f2 = {i1, …, ik-2, i'k-1}
      and ik-1 < i'k-1 do
    c ← {i1, …, ik-1, i'k-1};  // join f1 and f2
    Ck ← Ck ∪ {c};
    for each (k-1)-subset s of c do
      if (s ∉ Fk-1) then
        delete c from Ck;  // prune
    end
  end
  return Ck;
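The same join and prune steps as a Python sketch (assuming items are comparable under the total order, e.g. integers):

    def candidate_gen(F_prev, k):
        # F_prev: frequent (k-1)-itemsets as frozensets
        sorted_sets = [tuple(sorted(f)) for f in F_prev]
        Ck = set()
        for f1 in sorted_sets:
            for f2 in sorted_sets:
                # join: equal first k-2 items, last items in increasing order
                if f1[:-1] == f2[:-1] and f1[-1] < f2[-1]:
                    c = f1 + (f2[-1],)
                    # prune: keep c only if all its (k-1)-subsets are frequent
                    if all(frozenset(c[:i] + c[i+1:]) in F_prev for i in range(k)):
                        Ck.add(frozenset(c))
        return Ck

    F3 = {frozenset(s) for s in [(1,2,3), (1,2,4), (1,3,4), (1,3,5), (2,3,4)]}
    print(candidate_gen(F3, 4))  # {frozenset({1, 2, 3, 4})}, as in the example below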


An example
◼ F3 = {{1, 2, 3}, {1, 2, 4}, {1, 3, 4},
{1, 3, 5}, {2, 3, 4}}

◼ After join
❑ C4 = {{1, 2, 3, 4}, {1, 3, 4, 5}}
◼ After pruning:
❑ C4 = {{1, 2, 3, 4}}
because {1, 4, 5} is not in F3 ({1, 3, 4, 5} is removed)



Another Example:

◼ L3={abc, abd, acd, ace, bcd}


◼ Self-joining: L3*L3
❑ abcd from abc and abd
❑ acde from acd and ace

◼ Pruning:
❑ acde is removed because ade is not in L3

◼ C4={abcd}



Step 2: Generating rules from frequent itemsets
◼ Frequent itemsets ≠ association rules
◼ One more step is needed to generate
association rules
◼ For each frequent itemset X,
For each proper nonempty subset A of X,
❑ Let B = X − A
❑ A → B is an association rule if
◼ confidence(A → B) ≥ minconf

support(A → B) = support(A ∪ B) = support(X)
confidence(A → B) = support(A ∪ B) / support(A)
Generating rules: an example
◼ Suppose {2,3,4} is frequent, with sup=50%
❑ Proper nonempty subsets: {2,3}, {2,4}, {3,4}, {2}, {3}, {4}, with
sup=50%, 50%, 75%, 75%, 75%, 75% respectively
❑ These generate the following association rules:
◼ 2,3 → 4, confidence=100%
◼ 2,4 → 3, confidence=100%
◼ 3,4 → 2, confidence=67%
◼ 2 → 3,4, confidence=67%
◼ 3 → 2,4, confidence=67%
◼ 4 → 2,3, confidence=67%
◼ All rules have support = 50%

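A Python sketch of this step, replayed on the {2,3,4} example (support counts copied from the slide, with n = 4):

    from itertools import combinations

    def gen_rules(X, count, minconf):
        # X: a frequent itemset (frozenset); count: itemset -> support count
        rules = []
        for r in range(1, len(X)):                  # all proper nonempty antecedents A
            for A in map(frozenset, combinations(X, r)):
                conf = count[X] / count[A]          # support(X) / support(A)
                if conf >= minconf:
                    rules.append((set(A), set(X - A), conf))
        return rules

    count = {frozenset(s): c for s, c in [
        ((2, 3, 4), 2), ((2, 3), 2), ((2, 4), 2), ((3, 4), 3),
        ((2,), 3), ((3,), 3), ((4,), 3)]}
    print(gen_rules(frozenset({2, 3, 4}), count, 0.8))
    # the two 100%-confidence rules: 2,3 -> 4 and 2,4 -> 3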


Generating rules: summary
◼ To recap, in order to obtain A → B, we need
to have support(A ∪ B) and support(A).
◼ All the required information for confidence
computation has already been recorded during
itemset generation. There is no need to scan the
data T again.
◼ This step is not as time-consuming as
frequent itemset generation.


Other Association Rule Applications

◼ Quantitative Association Rules
❑ Age[35..40] and Married[Yes] → NumCars[2]

◼ Association Rules with Constraints
❑ Find all association rules where the prices of items are > 100
dollars

◼ Temporal Association Rules
❑ Diaper → Beer (1% support, 80% confidence)
❑ Diaper → Beer (20% support) 7:00–9:00 PM weekdays
On Apriori Algorithm

◼ Seems to be very expensive
◼ Level-wise search
◼ K = the size of the largest itemset
◼ It makes at most K passes over data
◼ In practice, K is small (often bounded by 10)
◼ The algorithm is very fast. Under some conditions,
all rules can be found in linear time
◼ Scales up to large data sets


More on association rule mining
◼ Clearly the space of all association rules is
exponential, O(2^m), where m is the number of
items in I.
◼ The mining exploits sparseness of data, and
high minimum support and high minimum
confidence values.
◼ Still, it always produces a huge number of
rules, thousands, tens of thousands, millions,
...



Road map

◼ Basic concepts of Association Rules


◼ Apriori algorithm
◼ Different data formats for mining
◼ Mining with multiple minimum supports
◼ Mining class association rules
◼ Sequential pattern mining
◼ Summary



Different data formats for mining
◼ The data can be in transaction form or table
form

Transaction form:  a, b
                   a, c, d, e
                   a, d, f

Table form:  Attr1  Attr2  Attr3
             a      b      d
             b      c      e

◼ Table data need to be converted to
transaction form for association mining


From a table to a set of transactions
Table form:  Attr1  Attr2  Attr3
             a      b      d
             b      c      e

⇒ Transaction form:
(Attr1, a), (Attr2, b), (Attr3, d)
(Attr1, b), (Attr2, c), (Attr3, e)

candidate-gen can be slightly improved. Why?
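A sketch of the conversion in Python (the attribute names Attr1–Attr3 are the slide's):

    def table_to_transactions(rows, attrs):
        # each (attribute, value) pair becomes a distinct item
        return [{(a, v) for a, v in zip(attrs, row)} for row in rows]

    rows = [("a", "b", "d"), ("b", "c", "e")]
    for t in table_to_transactions(rows, ("Attr1", "Attr2", "Attr3")):
        print(sorted(t))
    # [('Attr1', 'a'), ('Attr2', 'b'), ('Attr3', 'd')]
    # [('Attr1', 'b'), ('Attr2', 'c'), ('Attr3', 'e')]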


Road map

◼ Basic concepts of Association Rules


◼ Apriori algorithm
◼ Different data formats for mining
◼ Mining with multiple minimum supports
◼ Mining class association rules
◼ Sequential pattern mining
◼ Summary



Problems with the association mining
◼ Single minsup: It assumes that all items in
the data are of the same nature and/or
have similar frequencies.
◼ Not true: In many applications, some items
appear very frequently in the data, while
others rarely appear.
E.g., in a supermarket, people buy food processors
and cooking pans much less frequently than they
buy bread and milk.



Rare Item Problem
◼ If the frequencies of items vary a great deal,
we will encounter two problems
❑ If minsup is set too high, those rules that involve
rare items will not be found.
❑ To find rules that involve both frequent and rare
items, minsup has to be set very low. This may
cause combinatorial explosion because those
frequent items will be associated with one another
in all possible ways.



Multiple minsups model (READING)
◼ The minimum support of a rule is expressed in
terms of minimum item supports (MIS) of the items
that appear in the rule.
◼ Each item can have a minimum item support.
◼ By providing different MIS values for different
items, the user effectively expresses different
support requirements for different rules.
◼ To prevent very frequent items and very rare items
from appearing in the same itemsets, we introduce
a support difference constraint:
max i∈S {sup(i)} − min i∈S {sup(i)} ≤ φ,
where S is the itemset and φ ≥ 0 is a user-specified bound.


Minsup of a rule (READING)

◼ Let MIS(i) be the MIS value of item i. The
minsup of a rule R is the lowest MIS value of
the items in the rule.
◼ I.e., a rule R: a1, a2, …, ak → ak+1, …, ar
satisfies its minimum support if its actual
support is ≥ min(MIS(a1), MIS(a2), …, MIS(ar)).


An Example (READING)

◼ Consider the following items:


bread, shoes, clothes
The user-specified MIS values are as follows:
MIS(bread) = 2% MIS(shoes) = 0.1%
MIS(clothes) = 0.2%
The following rule doesn’t satisfy its minsup:
clothes → bread [sup=0.15%,conf =70%]
The following rule satisfies its minsup:
clothes → shoes [sup=0.15%,conf =70%]

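The check is a one-liner; a Python sketch reproducing the two rules above (the MIS values are the slide's, written as fractions):

    def satisfies_minsup(rule_items, sup, MIS):
        # a rule meets its minsup if its support reaches the lowest
        # MIS value among its items
        return sup >= min(MIS[i] for i in rule_items)

    MIS = {"bread": 0.02, "shoes": 0.001, "clothes": 0.002}
    print(satisfies_minsup({"clothes", "bread"}, 0.0015, MIS))  # False (needs 0.2%)
    print(satisfies_minsup({"clothes", "shoes"}, 0.0015, MIS))  # True (needs 0.1%)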


Downward closure property (READING)

◼ In the new model, the property no longer holds.
E.g., consider four items 1, 2, 3 and 4 in a
database. Their minimum item supports are
MIS(1) = 10%   MIS(2) = 20%
MIS(3) = 5%    MIS(4) = 6%

{1, 2} with support 9% is infrequent, but {1, 2, 3}
and {1, 2, 4} could be frequent.



To deal with the problem (READING)

◼ We sort all items in I according to their MIS
values (making it a total order).
◼ The order is used throughout the algorithm in
each itemset.
◼ Each itemset w is of the following form:
{w[1], w[2], …, w[k]}, consisting of items
w[1], w[2], …, w[k],
where MIS(w[1]) ≤ MIS(w[2]) ≤ … ≤ MIS(w[k]).


The MSapriori algorithm (READING)
Algorithm MSapriori(T, MS, φ)  // φ is for the support difference constraint
  M ← sort(I, MS);
  L ← init-pass(M, T);
  F1 ← {{i} | i ∈ L, i.count/n ≥ MIS(i)};
  for (k = 2; Fk-1 ≠ ∅; k++) do
    if k = 2 then
      Ck ← level2-candidate-gen(L, φ)
    else Ck ← MScandidate-gen(Fk-1, φ);
    end;
    for each transaction t ∈ T do
      for each candidate c ∈ Ck do
        if c is contained in t then
          c.count++;
        if c − {c[1]} is contained in t then
          c.tailCount++
      end
    end
    Fk ← {c ∈ Ck | c.count/n ≥ MIS(c[1])}
  end
  return F ← ∪k Fk;


Candidate itemset generation (READING)

◼ Special treatments needed:


❑ Sorting the items according to their MIS values
❑ First pass over data (the first three lines)
◼ Let us look at this in detail.
❑ Candidate generation at level-2
◼ Read it in the handout.
❑ Pruning step in level-k (k > 2) candidate
generation.
◼ Read it in the handout.



First pass over data (READING)
◼ It makes a pass over the data to record the
support count of each item.
◼ It then follows the sorted order to find the
first item i in M that meets MIS(i).
❑ i is inserted into L.
❑ For each subsequent item j in M after i, if
j.count/n ≥ MIS(i), then j is also inserted into L,
where j.count is the support count of j and n is
the total number of transactions in T. Why?
◼ L is used by function level2-candidate-gen
First pass over data: an example (READING)
◼ Consider the four items 1, 2, 3 and 4 in a data set.
Their minimum item supports are:
MIS(1) = 10% MIS(2) = 20%
MIS(3) = 5% MIS(4) = 6%
◼ Assume our data set has 100 transactions. The first
pass gives us the following support counts:
{3}.count = 6, {4}.count = 3,
{1}.count = 9, {2}.count = 25.
◼ Then L = {3, 1, 2}, and F1 = {{3}, {2}}
◼ Item 4 is not in L because 4.count/n < MIS(3) (= 5%),
◼ {1} is not in F1 because 1.count/n < MIS(1) (= 10%).

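A Python sketch of this first pass, reproducing the example's numbers (the helper name init_pass mirrors the pseudocode; the data layout is assumed):

    def init_pass(M, counts, n, MIS):
        # M: items sorted by ascending MIS value
        L = []
        for i in M:
            if not L:
                if counts[i] / n >= MIS[i]:       # first item meeting its own MIS
                    L.append(i)
            elif counts[i] / n >= MIS[L[0]]:      # later items need only MIS of L's head
                L.append(i)
        return L

    MIS = {1: 0.10, 2: 0.20, 3: 0.05, 4: 0.06}
    counts = {1: 9, 2: 25, 3: 6, 4: 3}
    M = sorted(MIS, key=MIS.get)                  # [3, 4, 1, 2]
    L = init_pass(M, counts, 100, MIS)
    print(L)                                      # [3, 1, 2]
    print([i for i in L if counts[i] / 100 >= MIS[i]])  # F1 items: [3, 2]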


Rule generation (READING)
◼ The following two lines in MSapriori algorithm
are important for rule generation, which are
not needed for the Apriori algorithm
if c – {c[1]} is contained in t then
c.tailCount++
◼ Many rules cannot be generated without
them.
◼ Why?



On multiple minsup rule mining (READING)

◼ The multiple minsup model subsumes the single
support model.
◼ It is a more realistic model for practical
applications.
◼ The model enables us to find rare-item rules
without producing a huge number of
meaningless rules involving frequent items.
◼ By setting the MIS values of some items to 100% (or
more), we effectively instruct the algorithm not to
generate rules involving only these items.


Road map

◼ Basic concepts of Association Rules


◼ Apriori algorithm
◼ Different data formats for mining
◼ Mining with multiple minimum supports
◼ Mining class association rules
◼ Sequential pattern mining
◼ Summary



Mining class association rules (CAR)
◼ Normal association rule mining does not have
any target (class value).
◼ It finds all possible rules that exist in data, i.e.,
any item can appear as a consequent or a
condition of a rule.
◼ However, in some applications, the user is
interested in some targets.
❑ E.g., the user has a set of text documents from
some known topics. He/she wants to find out what
words are associated or correlated with each topic.



Problem definition
◼ Let T be a transaction data set consisting of n
transactions.
◼ Each transaction is also labeled with a class y.
◼ Let I be the set of all items in T, Y be the set of all
class labels, and I ∩ Y = ∅.
◼ A class association rule (CAR) is an implication of
the form
X → y, where X ⊆ I, and y ∈ Y.
◼ The definitions of support and confidence are the
same as those for normal association rules.



An example
◼ A text document data set
doc 1: Student, Teach, School : Education
doc 2: Student, School : Education
doc 3: Teach, School, City, Game : Education
doc 4: Baseball, Basketball : Sport
doc 5: Basketball, Player, Spectator : Sport
doc 6: Baseball, Coach, Game, Team : Sport
doc 7: Basketball, Team, City, Game : Sport

◼ Let minsup = 20% and minconf = 60%. The following are two
examples of class association rules:
Student, School → Education [sup = 2/7, conf = 2/2]
Game → Sport [sup = 2/7, conf = 2/3]



Mining algorithm (READING)
◼ Unlike normal association rules, CARs can be mined
directly in one step.
◼ The key operation is to find all ruleitems that have
support above minsup. A ruleitem is of the form:
(condset, y)
where condset is a set of items from I (i.e., condset
⊆ I), and y ∈ Y is a class label.
◼ Each ruleitem basically represents a rule:
condset → y
◼ The Apriori algorithm can be modified to generate
CARs

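As a sketch, one way to count level-1 ruleitems in Python (the data layout is assumed; higher levels would follow the same Apriori-style candidate generation):

    from collections import Counter

    def frequent_ruleitems_1(data, minsup):
        # data: list of (itemset, class_label) pairs
        n = len(data)
        c = Counter()
        for items, y in data:
            for i in items:
                c[(frozenset([i]), y)] += 1       # count each (condset, y) ruleitem
        return {ri: cnt for ri, cnt in c.items() if cnt / n >= minsup}

    data = [({"Student", "School"}, "Education"),
            ({"Teach", "School"}, "Education"),
            ({"Baseball", "Game"}, "Sport")]
    print(frequent_ruleitems_1(data, 0.2))
    # e.g. (frozenset({'School'}), 'Education'): 2  ->  rule School -> Education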


Multiple minimum class supports (READING)

◼ The multiple minimum support idea can also be


applied here.
◼ The user can specify different minimum supports for
different classes, which effectively assigns a different
minimum support to rules of each class.
◼ For example, we have a data set with two classes,
Yes and No. We may want
❑ rules of class Yes to have the minimum support of 5% and
❑ rules of class No to have the minimum support of 10%.
◼ By setting minimum class supports to 100% (or
more for some classes), we tell the algorithm not to
generate rules of those classes.
❑ This is a very useful trick in applications.



Road map

◼ Basic concepts of Association Rules


◼ Apriori algorithm
◼ Different data formats for mining
◼ Mining with multiple minimum supports
◼ Mining class association rules
◼ Sequential pattern mining
◼ Summary



Sequential pattern mining

◼ Association rule mining does not consider the


order of transactions.
◼ In many applications such orderings are
significant. E.g.,
❑ in market basket analysis, it is interesting to know
whether people buy some items in sequence,
◼ e.g., buying a bed first and then bed sheets some
time later.
❑ In Web usage mining, it is useful to find the
navigational patterns of users in a Web site from
their sequences of page visits



Basic concepts (READING)
◼ Let I = {i1, i2, …, im} be a set of items.
◼ Sequence: An ordered list of itemsets.
◼ Itemset/element: A non-empty set of items X ⊆ I.
We denote a sequence s by ⟨a1 a2 … ar⟩, where ai is
an itemset, which is also called an element of s.
◼ An element (or an itemset) of a sequence is denoted
by {x1, x2, …, xk}, where xj ∈ I is an item.
◼ We assume without loss of generality that items in
an element of a sequence are in lexicographic
order.



Basic concepts (contd) (READING)

◼ Size: The size of a sequence is the number of
elements (or itemsets) in the sequence.
◼ Length: The length of a sequence is the number of
items in the sequence.
❑ A sequence of length k is called a k-sequence.
◼ A sequence s1 = ⟨a1 a2 … ar⟩ is a subsequence of
another sequence s2 = ⟨b1 b2 … bv⟩, or s2 is a
supersequence of s1, if there exist integers 1 ≤ j1 <
j2 < … < jr−1 < jr ≤ v such that a1 ⊆ bj1, a2 ⊆ bj2, …,
ar ⊆ bjr. We also say that s2 contains s1.


An example (READING)

◼ Let I = {1, 2, 3, 4, 5, 6, 7, 8, 9}.


◼ Sequence {3}{4, 5}{8} is contained in (or is a
subsequence of) {6} {3, 7}{9}{4, 5, 8}{3, 8}
❑ because {3}  {3, 7}, {4, 5}  {4, 5, 8}, and {8}  {3,
8}.
❑ However, {3}{8} is not contained in {3, 8} or vice
versa.
❑ The size of the sequence {3}{4, 5}{8} is 3, and the
length of the sequence is 4.

Prof. Ahmed Sultan Al-Hegami
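A Python sketch of this containment test (greedy left-to-right matching is sufficient for this definition):

    def is_subsequence(s1, s2):
        # s1, s2: sequences as lists of sets; True if s2 contains s1
        j = 0
        for e in s1:                    # each element of s1 must be a subset
            while j < len(s2) and not e <= s2[j]:   # of some later element of s2
                j += 1
            if j == len(s2):
                return False
            j += 1                      # the next element must match strictly later
        return True

    print(is_subsequence([{3}, {4, 5}, {8}],
                         [{6}, {3, 7}, {9}, {4, 5, 8}, {3, 8}]))  # True
    print(is_subsequence([{3}, {8}], [{3, 8}]))                   # False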


Objective (READING)

◼ Given a set S of input data sequences (or a
sequence database), the problem of mining
sequential patterns is to find all the
sequences that have a user-specified
minimum support.
◼ Each such sequence is called a frequent
sequence, or a sequential pattern.
◼ The support for a sequence is the fraction of
total data sequences in S that contain this
sequence.
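Given the is_subsequence sketch above, sequence support is a one-liner:

    def seq_support(pattern, S):
        # fraction of data sequences in S that contain the pattern
        return sum(is_subsequence(pattern, s) for s in S) / len(S)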
Example (READING)
[figure-only slide]

Example (contd) (READING)
[figure-only slide]


GSP mining algorithm (READING)
◼ Very similar to the Apriori algorithm



Candidate generation (READING)
[figure-only slide]

An example (READING)
[figure-only slide]


Now it is your turn …

Programming assignment!
◼ Implement two algorithms for sequential pattern
mining considering
❑ multiple minimum supports
❑ support difference constraint
◼ Algorithms: (1) MS-GSP, and (2) MSprefixSpan
◼ Each group implements only 1 algorithm
❑ Deadline: May 27, 2010 (Demo your program on that day)
❑ Test data sequences will be in one file in the same format
as those in the book.



Road map

◼ Basic concepts of Association Rules


◼ Apriori algorithm
◼ Different data formats for mining
◼ Mining with multiple minimum supports
◼ Mining class association rules
◼ Sequential pattern mining
◼ Summary



Summary
◼ Association rule mining has been extensively studied
in the data mining community.
◼ So is sequential pattern mining
◼ There are many efficient algorithms and model
variations.
◼ Other related work includes
❑ Multi-level or generalized rule mining
❑ Constrained rule mining
❑ Incremental rule mining
❑ Maximal frequent itemset mining (READING)
❑ Closed itemset mining (READING)
❑ Rule interestingness and visualization
❑ Parallel algorithms
❑ …
Thank you !!!
Discussion
