Final Weka Lab Tutorial
Table of Contents
About the Tutorial
Audience
Prerequisites
Removing Attributes
Clustering
Associator
Features Extraction
Conclusion
1. WEKA — Introduction
The foundation of any Machine Learning application is data - not just a little data but huge volumes of it, termed Big Data in current terminology. Before the machine can be trained to analyze big data, the data itself needs careful consideration: it must be inspected, cleaned and prepared.
In short, your big data needs a lot of preprocessing before it can be used for Machine Learning. Once the data is ready, you would apply various Machine Learning algorithms such as classification, regression, clustering and so on to solve the problem at hand.
The type of algorithm that you apply is based largely on your domain knowledge. Even within the same type, for example classification, there are several algorithms available. You may want to test different algorithms of the same class to build an efficient machine learning model. While doing so, you would prefer to visualize the processed data, and thus you also require visualization tools.
In the upcoming chapters, you will learn about Weka, a software that accomplishes all the
above with ease and lets you work with big data comfortably.
2. WEKA — What is WEKA?
WEKA, an open-source software, provides tools for data preprocessing, implementations of several Machine Learning algorithms, and visualization tools, so that you can develop machine learning techniques and apply them to real-world data mining problems. What WEKA offers is summarized in the following diagram:
If you observe the beginning of the flow of the image, you will understand that there are
many stages in dealing with Big Data to make it suitable for machine learning:
First, you will start with the raw data collected from the field. This data may contain several
null values and irrelevant fields. You use the data preprocessing tools provided in WEKA
to cleanse the data.
Then, you would save the preprocessed data in your local storage for applying ML
algorithms.
Next, depending on the kind of ML model that you are trying to develop, you would select one of the options such as Classify, Cluster, or Associate. The Attributes Selection option allows the automatic selection of features to create a reduced dataset.
Note that under each category, WEKA provides the implementation of several algorithms.
You would select an algorithm of your choice, set the desired parameters and run it on the
dataset.
Then, WEKA would give you the statistical output of the model processing. It also provides you with a visualization tool to inspect the data.
The various models can be applied on the same dataset. You can then compare the outputs
of different models and select the best that meets your purpose.
Thus, the use of WEKA results in a quicker development of machine learning models on
the whole.
Now that we have seen what WEKA is and what it does, in the next chapter let us learn
how to install WEKA on your local computer.
3. WEKA — Installation
To install WEKA on your machine, visit WEKA’s official website and download the
installation file. WEKA supports installation on Windows, Mac OS X and Linux. You just
need to follow the instructions on this page to install WEKA for your OS.
The WEKA GUI Chooser application will start and you would see the following screen:
The GUI Chooser application allows you to run five different types of applications as listed
here:
Explorer
Experimenter
KnowledgeFlow
Workbench
Simple CLI
4. WEKA — Launching Explorer
In this chapter, let us look into the various functionalities that the Explorer provides for working with big data.
When you click on the Explorer button in the Applications selector, it opens the following
screen:
Preprocess
Classify
Cluster
Associate
Select Attributes
Visualize
Under these tabs, there are several pre-implemented machine learning algorithms. Let us
look into each of them in detail now.
Preprocess Tab
Initially, as you open the Explorer, only the Preprocess tab is enabled. The first step in machine learning is to preprocess the data. Thus, in the Preprocess option, you will select the data file, process it, and make it fit for applying the various machine learning algorithms.
Classify Tab
The Classify tab provides you several machine learning algorithms for the classification
of your data. To list a few, you may apply algorithms such as Linear Regression, Logistic
Regression, Support Vector Machines, Decision Trees, RandomTree, RandomForest,
NaiveBayes, and so on. The list is quite exhaustive, covering a wide range of supervised learning algorithms.
Cluster Tab
Under the Cluster tab, there are several clustering algorithms provided - such as
SimpleKMeans, FilteredClusterer, HierarchicalClusterer, and so on.
Associate Tab
Under the Associate tab, you would find Apriori, FilteredAssociator and FPGrowth.
Visualize Tab
Lastly, the Visualize option allows you to visualize your processed data for analysis.
As you noticed, WEKA provides several ready-to-use algorithms for testing and building
your machine learning applications. To use WEKA effectively, you must have a sound
knowledge of these algorithms, how they work, which one to choose under what
circumstances, what to look for in their processed output, and so on. In short, you must
have a solid foundation in machine learning to use WEKA effectively in building your apps.
In the upcoming chapters, you will study each tab in the explorer in depth.
5. WEKA — Loading Data
In this chapter, we start with the first tab that you use to preprocess the data. This is
common to all algorithms that you would apply to your data for building the model and is
a common step for all subsequent operations in WEKA.
For a machine learning algorithm to give acceptable accuracy, it is important that you cleanse your data first. This is because the raw data collected from the field may contain null values, irrelevant columns and so on.
In this chapter, you will learn how to preprocess the raw data and create a clean,
meaningful dataset for further use.
First, you will learn to load the data file into the WEKA explorer. The data can be loaded
from the following sources:
Open file …
Open URL …
Open DB …
Click on the Open file ... button. A directory navigator window opens as shown in the
following screen:
Now, navigate to the folder where your data files are stored. The WEKA installation comes with many sample databases for you to experiment with. These are available in the data folder of the WEKA installation. For learning purposes, select any data file from this folder. The contents of the file will be loaded into the WEKA environment. We will very soon learn how to inspect and process this loaded data. Before that, let us look at how to load the data file from the Web.
We will open the file from a public URL. Type the following URL in the popup box:
https://storm.cis.fordham.edu/~gweiss/data-mining/weka-data/weather.nominal.arff
You may specify any other URL where your data is stored. The Explorer will load the data
from the remote site into its environment.
Set the connection string to your database, set up the query for data selection, process
the query and load the selected records in WEKA.
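Programmatically, the same database load can be sketched with WEKA's InstanceQuery class. This is a minimal illustration; the connection details below are placeholders, and the appropriate JDBC driver must be on the classpath:

import weka.core.Instances;
import weka.experiment.InstanceQuery;

public class DbLoadDemo {
    public static void main(String[] args) throws Exception {
        InstanceQuery query = new InstanceQuery();
        query.setDatabaseURL("jdbc:mysql://localhost:3306/mydb"); // placeholder connection string
        query.setUsername("user");                                // placeholder credentials
        query.setPassword("secret");
        query.setQuery("SELECT * FROM weather");                  // the data-selection query
        Instances data = query.retrieveInstances();               // run the query, load the records
        System.out.println(data.numInstances() + " rows loaded");
    }
}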
6. WEKA — File Formats
WEKA supports a large number of file formats for the data. Here is the complete list:
arff
arff.gz
bsi
csv
dat
data
json
json.gz
libsvm
m
names
xrff
xrff.gz
The types of files that it supports are listed in the drop-down list box at the bottom of the
screen. This is shown in the screenshot given below.
As you would notice, it supports several formats including CSV and JSON. The default file type is Arff.
Arff Format
An Arff file contains two sections - header and data.
As an example of the Arff format, consider the Weather data file from the WEKA sample databases. Its header declares the attributes. The attributes can take nominal values, as in the case of outlook, and you can also designate a Target or Class variable, here called play. The @data tag then starts the list of data rows, each containing comma-separated fields.
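For reference, here is an abridged sketch of that file (the full version ships as weather.nominal.arff; the relation name inside the file may vary between WEKA versions):

@relation weather.symbolic

@attribute outlook {sunny, overcast, rainy}
@attribute temperature {hot, mild, cool}
@attribute humidity {high, normal}
@attribute windy {TRUE, FALSE}
@attribute play {yes, no}

@data
sunny,hot,high,FALSE,no
overcast,hot,high,FALSE,yes
rainy,mild,high,FALSE,yes
...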
Other Formats
The Explorer can load data in any of the earlier mentioned formats. As arff is the preferred format in WEKA, you may load the data from any format and, after preprocessing, save it to arff format for further analysis.
Now that you have learned how to load data into WEKA, in the next chapter, you will learn
how to preprocess the data.
7. WEKA — Preprocessing the Data
The data collected from the field often contains many unwanted elements that lead to wrong analysis. For example, the data may contain null fields, it may contain columns that are irrelevant to the current analysis, and so on. Thus, the data must be preprocessed to meet the requirements of the type of analysis you are seeking. This is done in the preprocessing module.
To demonstrate the available features in preprocessing, we will use the Weather database
that is provided in the installation.
Using the Open file ... option under the Preprocess tab, select the weather.nominal.arff file.
When you open the file, your screen looks as shown here:
This screen tells us several things about the loaded data, which are discussed further in
this chapter.
Understanding Data
Let us first look at the highlighted Current relation sub window. It shows the name of the database that is currently loaded. You can infer two points from this sub window:
The database contains 14 instances - the rows of the table.
The table contains 5 attributes - the fields, which are discussed in the upcoming sections.
On the left side, notice the Attributes sub window that displays the various fields in the
database.
The weather database contains five fields - outlook, temperature, humidity, windy and
play. When you select an attribute from this list by clicking on it, further details on the
attribute itself are displayed on the right hand side.
Let us select the temperature attribute first. When you click on it, you would see the
following screen:
The table underneath this information shows the nominal values for this field as hot, mild and cool.
It also shows the count and weight in terms of a percentage for each nominal value.
At the bottom of the window, you see the visual representation of the class values.
If you click on the Visualize All button, you will be able to see all features in one single
window as shown here:
Removing Attributes
Many a time, the data that you want to use for model building comes with many irrelevant fields. For example, a customer database may contain the customer's mobile number, which is irrelevant in analyzing their credit rating.
To remove attributes, select them and click on the Remove button at the bottom.
The selected attributes would be removed from the database. After you fully preprocess
the data, you can save it for model building.
Next, you will learn to preprocess the data by applying filters on this data.
Applying Filters
Some machine learning techniques, such as association rule mining, require categorical data. To illustrate the use of filters, we will use the weather.numeric.arff database that contains two numeric attributes - temperature and humidity.
We will convert these to nominal by applying a filter on our raw data. Click on the Choose
button in the Filter subwindow and select the following filter:
weka->filters->supervised->attribute->Discretize
Click on the Apply button and examine the temperature and/or humidity attribute. You
will notice that these have changed from numeric to nominal types.
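The same filtering step can be scripted through WEKA's Java API. Here is a minimal sketch, assuming weather.numeric.arff sits in the working directory:

import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.supervised.attribute.Discretize;

public class DiscretizeDemo {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("weather.numeric.arff");
        data.setClassIndex(data.numAttributes() - 1); // the supervised filter needs a class attribute
        Discretize discretize = new Discretize();
        discretize.setInputFormat(data);
        Instances nominalData = Filter.useFilter(data, discretize);
        System.out.println(nominalData); // temperature and humidity are now nominal
    }
}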
Let us look into another filter now. Suppose you want to select the best attributes for
deciding the play. Select and apply the following filter:
weka->filters->supervised->attribute->AttributeSelection
You will notice that it removes the temperature and humidity attributes from the
database.
After you are satisfied with the preprocessing of your data, save the data by clicking the
Save … button. You will use this saved file for model building.
In the next chapter, we will explore the model building using several predefined ML
algorithms.
8. WEKA — Classifiers
Many machine learning applications are classification related. For example, you may like
to classify a tumor as malignant or benign. You may like to decide whether to play an
outside game depending on the weather conditions. Generally, this decision is dependent
on several features/conditions of the weather. So you may prefer to use a tree classifier
to make your decision of whether to play or not.
In this chapter, we will learn how to build such a tree classifier on weather data to decide
on the playing conditions.
Before you learn about the available classifiers, let us examine the Test options. You will
notice four testing options as listed below:
Training set
Supplied test set
Cross-validation
Percentage split
Unless you have your own training set or a client-supplied test set, you would use the cross-validation or percentage split options. Under cross-validation, you can set the number of folds into which the entire data is split for use during each iteration of training. With percentage split, you split the data between training and testing using the set split percentage.
Now, keep the default play option for the output class:
Selecting Classifier
Click on the Choose button and select the following classifier:
weka->classifiers->trees->J48
Click on the Start button to start the classification process. After a while, the classification
results would be presented on your screen as shown here:
Let us examine the output shown on the right hand side of the screen.
It says the size of the tree is 6. You will very shortly see the visual representation of the tree. In the Summary, it reports 2 correctly classified instances and 3 incorrectly classified instances. It also says that the Relative absolute error is 110%, and it shows the Confusion Matrix. Going into the analysis of these results is beyond the scope of this tutorial. However, you can easily make out from these results that the classification is not acceptable, and that you will need more data for analysis, a refined feature selection, a rebuilt model, and so on until you are satisfied with the model's accuracy. Anyway,
that’s what WEKA is all about. It allows you to test your ideas quickly.
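The same run can be reproduced outside the GUI with WEKA's Java API. The following is a minimal sketch, assuming the weather data has been saved locally as weather.nominal.arff:

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class J48Demo {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("weather.nominal.arff");
        data.setClassIndex(data.numAttributes() - 1);   // play is the class attribute
        J48 tree = new J48();                           // pruned C4.5-style decision tree
        tree.buildClassifier(data);
        Evaluation eval = new Evaluation(data);         // 10-fold cross-validation, as in the Explorer
        eval.crossValidateModel(tree, data, 10, new Random(1));
        System.out.println(tree);                       // the textual tree
        System.out.println(eval.toSummaryString());     // correctly/incorrectly classified, errors
    }
}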
Visualize Results
To see the visual representation of the results, right click on the result in the Result list
box. Several options would pop up on the screen as shown here:
Select Visualize tree to get a visual representation of the traversal tree as seen in the
screenshot below:
Selecting Visualize classifier errors would plot the results of classification as shown
here:
The current plot is outlook versus play. These are indicated by the two drop down list
boxes at the top of the screen.
Now, try a different selection in each of these boxes and notice how the X & Y axes change.
The same can be achieved by using the horizontal strips on the right hand side of the plot.
Each strip represents an attribute. A left click on a strip sets the selected attribute on the X-axis, while a right click sets it on the Y-axis.
There are several other plots provided for your deeper analysis. Use them judiciously to
fine tune your model. One such plot of Cost/Benefit analysis is shown below for your
quick reference.
Explaining the analysis in these charts is beyond the scope of this tutorial. The reader is
encouraged to brush up their knowledge of analysis of machine learning algorithms.
In the next chapter, we will learn the next set of machine learning algorithms, that is
clustering.
9. WEKA — Clustering
A clustering algorithm finds groups of similar instances in the entire dataset. WEKA
supports several clustering algorithms such as EM, FilteredClusterer, HierarchicalClusterer,
SimpleKMeans and so on. You should understand these algorithms completely to fully
exploit the WEKA capabilities.
As in the case of classification, WEKA allows you to visualize the detected clusters
graphically. To demonstrate the clustering, we will use the provided iris database. The
data set contains three classes of 50 instances each. Each class refers to a type of iris
plant.
Loading Data
In the WEKA explorer select the Preprocess tab. Click on the Open file ... option and
select the iris.arff file in the file selection dialog. When you load the data, the screen looks as shown below:
You can observe that there are 150 instances and 5 attributes. The names of attributes
are listed as sepallength, sepalwidth, petallength, petalwidth and class. The first
four attributes are of numeric type while the class is a nominal type with 3 distinct values.
Examine each attribute to understand the features of the database. We will not do any
preprocessing on this data and straight-away proceed to model building.
Clustering
Click on the Cluster tab to apply the clustering algorithms to our loaded data. Click on the Choose button. You will see the following screen:
Now, select EM as the clustering algorithm. In the Cluster mode sub window, select the
Classes to clusters evaluation option as shown in the screenshot below:
Click on the Start button to process the data. After a while, the results will be presented
on the screen.
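Programmatically, the same classes-to-clusters evaluation can be sketched with the Java API as follows, assuming iris.arff is available locally:

import weka.clusterers.ClusterEvaluation;
import weka.clusterers.EM;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Remove;

public class EMDemo {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("iris.arff");
        Remove remove = new Remove();                  // train the clusterer without the class
        remove.setAttributeIndices("last");
        remove.setInputFormat(data);
        Instances train = Filter.useFilter(data, remove);
        EM em = new EM();
        em.buildClusterer(train);
        data.setClassIndex(data.numAttributes() - 1);  // now map detected clusters to known classes
        ClusterEvaluation eval = new ClusterEvaluation();
        eval.setClusterer(em);
        eval.evaluateClusterer(data);
        System.out.println(eval.clusterResultsToString());
    }
}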
Examining Output
The output of the data processing is shown in the screen below:
If you scroll up the output window, you will also see some statistics that give the mean and standard deviation for each of the attributes in the various detected clusters. This is shown in the screenshot given below:
Visualizing Clusters
To visualize the clusters, right click on the EM result in the Result list. You will see the
following options:
Select Visualize cluster assignments. You will see the following output:
As in the case of classification, you will notice the distinction between the correctly and
incorrectly identified instances. You can play around by changing the X and Y axes to
analyze the results. You may use jittering as in the case of classification to find out the
concentration of correctly identified instances. The operations in visualization plot are
similar to the one you studied in the case of classification.
Next, select the HierarchicalClusterer algorithm. Set the Cluster mode selection to Classes to clusters evaluation, and click on the Start button. You will see the following output:
Notice that in the Result list, there are two results listed: the first one is the EM result
and the second one is the current Hierarchical. Likewise, you can apply multiple ML
algorithms to the same dataset and quickly compare their results.
If you examine the tree produced by this algorithm, you will see the following output:
In the next chapter, you will study the Associate type of ML algorithms.
10. WEKA — Association
It was observed that people who buy beer also buy diapers at the same time. That is, there is an association between buying beer and buying diapers together. Though this may not seem very convincing at first, such association rules were mined from huge supermarket databases. Similarly, an association may be found between peanut butter and bread.
Finding such associations becomes vital for supermarkets, as they would stock diapers next to beers so that customers can locate both items easily, resulting in increased sales for the supermarket.
The Apriori algorithm is one such ML algorithm that finds probable associations and creates association rules. WEKA provides an implementation of the Apriori algorithm. You can define the minimum support and an acceptable confidence level while computing these rules. You will apply the Apriori algorithm to the supermarket data provided in the WEKA installation.
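As with the other panels, the Associate tab has a Java counterpart. A minimal sketch of running Apriori from code, assuming supermarket.arff from the WEKA data folder is available locally:

import weka.associations.Apriori;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class AprioriDemo {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("supermarket.arff");
        Apriori apriori = new Apriori();
        apriori.setLowerBoundMinSupport(0.1); // minimum support of 10%
        apriori.setMinMetric(0.9);            // minimum confidence of 0.9
        apriori.setNumRules(10);              // report the ten best rules
        apriori.buildAssociations(data);
        System.out.println(apriori);          // prints the discovered rules
    }
}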
Loading Data
In the WEKA explorer, open the Preprocess tab, click on the Open file ... button and
select supermarket.arff database from the installation folder. After the data is loaded
you will see the following screen:
The database contains 4627 instances and 217 attributes. You can easily understand how difficult it would be to detect associations among such a large number of attributes. Fortunately, this task is automated with the help of the Apriori algorithm.
Associator
Click on the Associate tab and click on the Choose button. Select the Apriori associator as shown in the screenshot:
To set the parameters for the Apriori algorithm, click on its name; a window will pop up, as shown below, that allows you to set the parameters:
After you set the parameters, click the Start button. After a while you will see the results
as shown in the screenshot below:
At the bottom, you will find the best association rules detected. This will help the supermarket stock its products on appropriate shelves.
11. WEKA — Feature Selection
When a database contains a large number of attributes, several of them will not be significant for the analysis that you are currently seeking. Thus, removing unwanted attributes from the dataset becomes an important task in developing a good machine learning model.
You may examine the entire dataset visually and decide on the irrelevant attributes. This
could be a huge task for databases containing a large number of attributes like the
supermarket case that you saw in an earlier lesson. Fortunately, WEKA provides an
automated tool for feature selection.
Loading Data
In the Preprocess tab of the WEKA explorer, select the labor.arff file for loading into the
system. When you load the data, you will see the following screen:
Notice that there are 17 attributes. Our task is to create a reduced dataset by eliminating
some of the attributes which are irrelevant to our analysis.
Features Extraction
Click on the Select attributes TAB. You will see the following screen:
Under the Attribute Evaluator and Search Method, you will find several options. We will just use the defaults here - CfsSubsetEval for the evaluator and BestFirst for the search method. In the Attribute Selection Mode, use the full training set option.
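The same selection can be run from Java. A sketch using the Explorer's default evaluator and search method, assuming labor.arff is available locally:

import weka.attributeSelection.AttributeSelection;
import weka.attributeSelection.BestFirst;
import weka.attributeSelection.CfsSubsetEval;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class SelectAttributesDemo {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("labor.arff");
        data.setClassIndex(data.numAttributes() - 1);
        AttributeSelection selector = new AttributeSelection();
        selector.setEvaluator(new CfsSubsetEval()); // correlation-based subset evaluator (default)
        selector.setSearch(new BestFirst());        // best-first search over attribute subsets (default)
        selector.SelectAttributes(data);            // evaluate on the full training set
        System.out.println(selector.toResultsString());
    }
}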
Click on the Start button to process the dataset. You will see the following output:
At the bottom of the result window, you will get the list of Selected attributes. To get the
visual representation, right click on the result in the Result list.
Clicking on any of the squares will give you the data plot for your further analysis. A typical
data plot is shown below:
This is similar to the ones we have seen in the earlier chapters. Play around with the
different options available to analyze the results.
What’s Next?
You have seen so far the power of WEKA in quickly developing machine learning models.
What we used is a graphical tool called Explorer for developing these models. WEKA also
provides a command line interface that gives you more power than provided in the
explorer.
Clicking the Simple CLI button in the GUI Chooser application starts this command line
interface which is shown in the screenshot below:
Type your commands in the input box at the bottom. You will be able to do all that you
have done so far in the explorer plus much more. Refer to WEKA documentation
(https://www.cs.waikato.ac.nz/ml/weka/documentation.html) for further details.
Lastly, WEKA is developed in Java and provides an interface to its API. So if you are a Java
developer and keen to include WEKA ML implementations in your own Java projects, you
can do so easily.
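For instance, a model trained in code can be persisted and reloaded in another Java application. A small sketch (the file name j48.model is a placeholder):

import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.SerializationHelper;
import weka.core.converters.ConverterUtils.DataSource;

public class ModelPersistenceDemo {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("weather.nominal.arff");
        data.setClassIndex(data.numAttributes() - 1);
        J48 tree = new J48();
        tree.buildClassifier(data);
        SerializationHelper.write("j48.model", tree);               // save the trained model
        J48 restored = (J48) SerializationHelper.read("j48.model"); // reload it later
        System.out.println(restored);
    }
}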
Conclusion
WEKA is a powerful tool for developing machine learning models. It provides implementations of several of the most widely used ML algorithms. Before these algorithms are applied to your dataset, it also allows you to preprocess the data. The supported algorithms are organized under Classify, Cluster, Associate, and Select attributes. The results at various stages of processing can be visualized through beautiful and powerful plots. This makes it easier for a data scientist to quickly apply various machine learning techniques to a dataset, compare the results, and create the best model for final use.
DATA WAREHOUSING AND DATA MINING 2018-2019
WEEK 10: How does the complexity of a Decision Tree relate to the bias of the model?
WEEK 11: One approach is to use Reduced Error Pruning. Explain this idea briefly. Try reduced error pruning for training your Decision Trees using cross-validation and report the Decision Trees you obtain. Also report your accuracy using the pruned model. Does your accuracy increase?
WEEK 12: How can you convert a Decision Tree into "if-then-else" rules? Make up your own small Decision Tree consisting of 2-3 levels and convert it into a set of rules. Report the rules obtained by training a OneR classifier. Rank the performance of J48, PART, and OneR.
WEEK 13: Beyond the Syllabus - Simple Project on Data Preprocessing
WEEK 1
Explore the visualization features of the tool for analysis, like identifying trends, etc.
Ans:
Visualization Features:
WEKA's visualization allows you to visualize a 2-D plot of the current working relation. Visualization is very useful in practice; for example, it helps to determine the difficulty of the learning problem. WEKA can visualize single attributes (1-D) and pairs of attributes (2-D), and rotate 3-D visualizations (Xgobi-style). WEKA has a "Jitter" option to deal with nominal attributes and to detect "hidden" data points.
Access to visualization from the classifier, cluster and attribute selection panels is available from a popup menu. Click the right mouse button over an entry in the result list to bring up the menu. You will be presented with options for viewing or saving the text output and, depending on the scheme, further options for visualizing errors, clusters, trees, etc.
Select a square that corresponds to the attributes you would like to visualize. For example, let's choose 'outlook' for the X-axis and 'play' for the Y-axis. Click anywhere inside the square that corresponds to 'play' on the left and 'outlook' at the top.
In the visualization window, beneath the X-axis selector there is a drop-down list,
‘Colour’, for choosing the color scheme. This allows you to choose the color of points based on
the attribute selected. Below the plot area, there is a legend that describes what values the colors
correspond to. In our example, red represents 'no', while blue represents 'yes'. For better visibility you should change the colour of the label 'yes'. Left-click on 'yes' in the 'Class colour' box and select a lighter colour from the colour palette.
To the right of the plot area there are series of horizontal strips. Each strip represents an
attribute, and the dots within it show the distribution values of the attribute. You can choose
what axes are used in the main graph by clicking on these strips (left-click changes X-axis, right-
click changes Y-axis).
The software sets X - axis to ‘Outlook’ attribute and Y - axis to ‘Play’. The instances are spread
out in the plot area and concentration points are not visible. Keep sliding ‘Jitter’, a random
displacement given to all points in the plot, to the right, until you can spot concentration points.
The results are shown below. On this screen we changed 'Colour' to 'temperature'. Besides 'outlook' and 'play', this allows you to see the 'temperature' corresponding to the 'outlook'. It will affect your result: if you see 'outlook' = 'sunny' and 'play' = 'no', then to explain the result you need to see the 'temperature' - if it is too hot, you do not want to play. Change 'Colour' to 'windy'; you can see that if it is windy, you do not want to play as well.
Selecting Instances
Sometimes it is helpful to select a subset of the data using visualization tool. A special
case is the ‘UserClassifier’, which lets you to build your own classifier by interactively selecting
instances. Below the Y – axis there is a drop-down list that allows you to choose a selection
method. A group of points on the graph can be selected in four ways [2]:
1. Select Instance. Click on an individual data point. It brings up a window listing the attributes of the point. If more than one point appears at the same location, more than one set of attributes is shown.
2. Rectangle. You can create a rectangle by dragging; the points inside the rectangle are selected.
3. Polygon. You can select several points by building a free-form polygon. Left-click on the
graph to add vertices to the polygon and right-click to complete it.
4. Polyline. To distinguish the points on one side from the ones on the other, you can build a polyline. Left-click on the graph to add vertices to the polyline and right-click to finish.
B) Explore WEKA Data Mining/Machine Learning Toolkit.
1. Download the software as your requirements from the below given link.
http://www.cs.waikato.ac.nz/ml/weka/downloading.html
2. Java is mandatory for the installation of WEKA, so if you already have Java on your machine then download only WEKA; otherwise download the software bundled with the JVM.
3. Then open the file location and double click on the file
4. Click Next
5. Click I Agree.
6. As per your requirements, make the necessary changes to the settings and click Next. Full and Associate files are the recommended settings.
8. If you want a shortcut, check the box and click Install.
9. The installation will start; wait a moment and it will finish within a minute.
11. Hurray! That's all - click on Finish, take a shovel, and start mining. Best of luck.
This is the GUI you get when WEKA starts. You have 4 options: Explorer, Experimenter, KnowledgeFlow and Simple CLI.
C.(ii) Understand the features of the WEKA toolkit, such as the Explorer, Knowledge Flow interface, Experimenter, and command-line interface.
Ans: WEKA
The Weka GUI Chooser (class weka.gui.GUIChooser) provides a starting point for
launching Weka’s main GUI applications and supporting tools. If one prefers a MDI (“multiple
document interface”) appearance, then this is provided by an alternative launcher called “Main”
(class weka.gui.Main). The GUI Chooser consists of four buttons—one for each of the four major
Weka applications—and four menus.
Explorer An environment for exploring data with WEKA (the rest of this Documentation
deals with this application in more detail).
Experimenter An environment for performing experiments and conducting statistical tests
between learning schemes.
Knowledge Flow This environment supports essentially the same functions as the Explorer but
with a drag-and-drop interface. One advantage is that it supports incremental learning.
SimpleCLI Provides a simple command-line interface that allows direct execution of WEKA
commands for operating systems that do not provide their own command line interface.
1. Explorer
At the very top of the window, just below the title bar, is a row of tabs. When the Explorer
is first started only the first tab is active; the others are grayed out. This is because it is
necessary to open (and potentially pre-process) a data set before starting to explore the data.
The tabs are: Preprocess, Classify, Cluster, Associate, Select attributes, and Visualize.
Once the tabs are active, clicking on them flicks between different screens, on which the
respective actions can be performed. The bottom area of the window (including the status box, the
log button, and the Weka bird) stays visible regardless of which section you are in. The Explorer
can be easily extended with custom tabs. The Wiki article “Adding tabs in the Explorer”
explains this in detail.
2. Weka Experimenter:-
The Weka Experiment Environment enables the user to create, run, modify, and analyze
experiments in a more convenient manner than is possible when processing the schemes
individually. For example, the user can create an experiment that runs several schemes against a
series of datasets and then analyze the results to determine if one of the schemes is (statistically)
better than the other schemes.
The Experiment Environment can be run from the command line using the Simple CLI. For example, commands could be typed into the CLI to run the OneR scheme on the Iris dataset using a basic train and test process. (Note that the commands would be typed on one line into the CLI.) While commands can be typed directly into the CLI, this technique is not particularly convenient and the experiments are not easy to modify. The Experimenter comes in two flavors: either with a simple interface that provides most of the functionality one needs for experiments, or with an interface with full access to the Experimenter's capabilities. You can choose between the two with the Experiment Configuration Mode radio buttons:
Simple
Advanced
Both setups allow you to setup standard experiments, that are run locally on a single machine,
or remote experiments, which are distributed between several hosts. The distribution of
experiments cuts down the time the experiments will take until completion, but on the other hand
the setup takes more time. The next section covers the standard experiments (both, simple and
advanced), followed by the remote experiments and finally the analyzing of the results.
3. Knowledge Flow
Introduction
The Knowledge Flow provides an alternative to the Explorer as a graphical front end to
WEKA’s core algorithms.
The Knowledge Flow presents a data-flow inspired interface to WEKA. The user can select
WEKA components from a palette, place them on a layout canvas and connect them together in
order to form a knowledge flow for processing and analyzing data. At present, all of WEKA’s
classifiers, filters, clusterers, associators, loaders and savers are available in the Knowledge
Flow along with some extra tools.
The Knowledge Flow can handle data either incrementally or in batches (the Explorer handles batch data only). Of course, learning from data incrementally requires a classifier that can
be updated on an instance by instance basis. Currently in WEKA there are ten classifiers that can
handle data incrementally.
4. Simple CLI
The Simple CLI provides full access to all Weka classes, i.e., classifiers, filters, clusterers, etc., but without the hassle of the CLASSPATH (it facilitates the one with which Weka was started). It offers a simple Weka shell with separated command line and output.
Commands
The following commands are available in the Simple CLI:
java <classname> [<args>]
Invokes a Java class with the given arguments (if any).
break
Stops the current thread, e.g., a running classifier, in a friendly manner.
kill
Stops the current thread in an unfriendly fashion.
cls
Clears the output area.
capabilities <classname> [<args>]
Lists the capabilities of the specified class, e.g., for a classifier with its options.
exit
Exits the Simple CLI.
help [<command>]
Provides an overview of the available commands.
Invocation
In order to invoke a Weka class, one has only to prefix the class with "java". This command tells the Simple CLI to load a class and execute it with any given parameters. E.g., the J48 classifier can be invoked on the iris dataset with the following command:
java weka.classifiers.trees.J48 -t /some/where/iris.arff
Command redirection
Note: the > must be preceded and followed by a space, otherwise it is not recognized as redirection,
but part of another parameter.
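As a sketch of this (file names are placeholders), the output of J48 can be redirected to a file like so:
java weka.classifiers.trees.J48 -t test.arff > j48.txt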
Command completion
Commands starting with java support completion for classnames and filenames via Tab (Alt+Backspace deletes parts of the command again). In case there are several matches, Weka lists all possible matches. For example, completing java weka.cl lists the packages:
weka.classifiers
weka.clusterers
Classname completion
Typing java weka.classifiers.meta.A and hitting Tab, for example, gives the possible matches:
weka.classifiers.meta.AdaBoostM1
weka.classifiers.meta.AdditiveRegression
weka.classifiers.meta.AttributeSelectedClassifier
Filename Completion
In order for Weka to determine whether the string under the cursor is a classname or a filename, filenames need to be absolute (Unix/Linux: /some/path/file; Windows: C:\Some\Path\file) or relative and starting with a dot (Unix/Linux: ./some/other/path/file; Windows: .\Some\Other\Path\file).
An ARFF (= Attribute-Relation File Format) file is an ASCII text file that describes a list of
instances sharing a set of attributes.
ARFF files are not the only format one can load, but all files that can be converted with
Weka’s “core converters”. The following formats are currently supported:
ARFF (+ compressed)
C4.5
CSV
libsvm
binary serialized instances
XRFF (+ compressed)
Overview
ARFF files have two distinct sections. The first section is the Header information, which is followed by the Data information. The Header of the ARFF file contains the name of the relation, a list of the attributes (the columns in the data), and their types.
@RELATION iris
@ATTRIBUTE sepallength NUMERIC
@ATTRIBUTE sepalwidth NUMERIC
@ATTRIBUTE petallength NUMERIC
@ATTRIBUTE petalwidth NUMERIC
@ATTRIBUTE class {Iris-setosa, Iris-versicolor, Iris-virginica}
The Data of the ARFF file looks like the following:
@DATA
5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
4.6,3.1,1.5,0.2,Iris-setosa
5.0,3.6,1.4,0.2,Iris-setosa
5.4,3.9,1.7,0.4,Iris-setosa
4.6,3.4,1.4,0.3,Iris-setosa
5.0,3.4,1.5,0.2,Iris-setosa
4.4,2.9,1.4,0.2,Iris-setosa
4.9,3.1,1.5,0.1,Iris-setosa
The ARFF Header section of the file contains the relation declaration and attribute declarations.
The @relation declaration
The relation name is defined as the first line in the ARFF file. The format is:
@relation <relation-name>
where <relation-name> is a string. The string must be quoted if the name includes spaces.
Attribute declarations take the form of an ordered sequence of @attribute statements. Each attribute in the data set has its own @attribute statement, which uniquely defines the name of that attribute and its data type. The order in which the attributes are declared indicates the column position in the data section of the file. For example, if an attribute is the third one declared, then Weka expects that all of that attribute's values will be found in the third comma-delimited column.
The format for the @attribute statement is:
@attribute <attribute-name> <datatype>
where the <attribute-name> must start with an alphabetic character. If spaces are to be included in the name then the entire name must be quoted.
The <datatype> can be any of the four types supported by Weka, where <nominal-specification> and <date-format> are defined below:
numeric
<nominal-specification>
string
date [<date-format>]
The keywords integer and real are treated as numeric, and relational is reserved for multi-instance data (for future use). The keywords numeric, real, integer, string and date are case insensitive.
Nominal attributes
Nominal values are defined by providing a <nominal-specification> listing the possible values:
{<nominal-name1>, <nominal-name2>, <nominal-name3>, ...}
For example, the class value of the Iris dataset can be defined as follows:
@ATTRIBUTE class {Iris-setosa, Iris-versicolor, Iris-virginica}
Values that contain spaces must be quoted.
String attributes
String attributes allow us to create attributes containing arbitrary textual values. This is very useful in text-mining applications, as we can create datasets with string attributes, then write Weka filters to manipulate strings (like the StringToWordVector filter). String attributes are declared as follows:
@ATTRIBUTE LCC string
Date attributes
Date attribute declarations take the form:
@attribute <name> date [<date-format>]
where <name> is the name for the attribute and <date-format> is an optional string specifying how date values should be parsed and printed (this is the same format used by SimpleDateFormat). The default format string accepts the ISO-8601 combined date and time format: yyyy-MM-dd'T'HH:mm:ss. Dates must be specified in the data section as the corresponding string representations of the date/time (see example below).
Relational attributes
Relational attribute declarations take the form:
@attribute <name> relational
<further attribute definitions>
@end <name>
For the multi-instance dataset MUSK1, the definition would look like this ("..." denotes an omission):
@attribute molecule_name {MUSK-jf78,...,NON-MUSK-199}
@attribute bag relational
@attribute f1 numeric
...
@attribute f166 numeric
@end bag
@attribute class {0,1}
The ARFF Data section of the file contains the data declaration line and the actual instance
lines.
The @data declaration is a single line denoting the start of the data segment in the file. The
format is:
@data
Each instance is represented on a single line, with carriage returns denoting the end of the
instance. A percent sign (%) introduces a comment, which continues to the end of the line.
Attribute values for each instance are delimited by commas. They must appear in the order that
they were declared in the header section (i.e. the data corresponding to the nth @attribute
declaration is always the nth field of the attribute).
Missing values are represented by a single question mark, as in:
@data
4.4,?,1.5,?,Iris-setosa
Values of string and nominal attributes are case sensitive, and any that contain space or the comment-delimiter character % must be quoted. (The code suggests that double-quotes are acceptable and that a backslash will escape individual characters.)
Dates must be specified in the data section using the string representation specified in the attribute
declaration.
For example:
@RELATION Timestamps
@ATTRIBUTE timestamp DATE "yyyy-MM-dd HH:mm:ss"
@DATA
"2001-04-03 12:12:12"
"2001-05-03 12:59:55"
Relational data must be enclosed within double quotes. For example, an instance of the MUSK1 dataset ("..." denotes an omission):
MUSK-188,"42,...,30",1
The WEKA installation ships with sample datasets in its data directory, including:
contact-lens.arff
cpu.arff
cpu.with-vendor.arff
diabetes.arff
glass.arff
ionosphere.arff
iris.arff
labor.arff
ReutersCorn-train.arff
ReutersCorn-test.arff
ReutersGrain-train.arff
ReutersGrain-test.arff
segment-challenge.arff
segment-test.arff
soybean.arff
supermarket.arff
vote.arff
weather.arff
weather.nominal.arff
EXERCISE 1
1. Write the steps to load the Iris dataset.
1. outlook
2. temperature
3. humidity
4. windy
5. play
EXERCISE 2:
List the attribute names and types of the SuperMarket dataset.
1. sunny
2. overcast
3. rainy
Plot Histogram
Steps to plot the histogram:
sunny,hot,high,FALSE,no
sunny,hot,high,TRUE,no
overcast,hot,high,FALSE,yes
rainy,mild,high,FALSE,yes
rainy,cool,normal,FALSE,yes
rainy,cool,normal,TRUE,no
overcast,cool,normal,TRUE,yes
sunny,mild,high,FALSE,no
sunny,cool,normal,FALSE,yes
rainy,mild,normal,FALSE,yes
sunny,mild,normal,TRUE,yes
overcast,mild,high,TRUE,yes
overcast,hot,normal,FALSE,yes
rainy,mild,high,TRUE,no
Visualize the data in various dimensions
2. What is a Data Warehouse?
A data warehouse is an electronic storage of an organization's historical data for the purpose of reporting, analysis and data mining or knowledge discovery.
A data warehouse helps to integrate data and store it historically so that we can analyze different aspects of the business, including performance analysis, trends, prediction, etc., over a given time frame, and use the results of our analysis to improve the efficiency of business processes.
3. What is Fact?
A fact is something that is quantifiable (Or measurable). Facts are typically (but not always) numerical
values that can be aggregated.
SIGNATURE OF FACULTY
WEEK 2
Perform data preprocessing tasks and demonstrate association rule mining on data sets.
A. Explore the various options in Weka for preprocessing data and apply them (like Discretization filters, the Resample filter, etc.) on each dataset.
Ans:
Preprocess Tab
1. Loading Data
The first four buttons at the top of the preprocess section enable you to load data into WEKA:
1. Open file.... Brings up a dialog box allowing you to browse for the data file on the local file system.
2. Open URL .... Asks for a Uniform Resource Locator address for where the data is stored.
3. Open DB ....Reads data from a database. (Note that to make this work you might have to edit the
file in weka/experiment/DatabaseUtils.props.)
4. Generate.... Enables you to generate artificial data from a variety of Data Generators.
Using the Open file ... button you can read files in a variety of formats: WEKA's ARFF format, CSV format, C4.5 format, or serialized Instances format. ARFF files typically have a .arff extension, CSV files a .csv extension, C4.5 files a .data and .names extension, and serialized Instances objects a .bsi extension.
Current Relation: Once some data has been loaded, the Preprocess panel shows a variety of
information. The Current relation box (the “current relation” is the currently loaded data, which
can be interpreted as a single relational table in database terminology) has three entries:
1. Relation. The name of the relation, as given in the file it was loaded from. Filters (described below) modify the name of a relation.
2. Instances. The number of instances (data points/records) in the data.
3. Attributes. The number of attributes (features) in the data.
Below the Current relation box is a box titled Attributes. There are four buttons, and beneath
them is a list of the attributes in the current relation.
1. No.. A number that identifies the attribute in the order they are specified in the data file.
2. Selection tick boxes. These allow you select which attributes are present in the relation.
3. Name. The name of the attribute, as it was declared in the data file.
When you click on different rows in the list of attributes, the fields change in the box to the right titled Selected attribute. This box displays the characteristics of the currently highlighted attribute in the list:
1. Name. The name of the attribute, the same as that given in the attribute list.
2. Type. The type of attribute, most commonly Nominal or Numeric.
3. Missing. The number (and percentage) of instances in the data for which this attribute is missing (unspecified).
4. Distinct. The number of different values that the data contains for this attribute.
5. Unique. The number (and percentage) of instances in the data having a value for this attribute that no
other instances have.
Below these statistics is a list showing more information about the values stored in this attribute,
which differ depending on its type. If the attribute is nominal, the list consists of each possible value for
the attribute along with the number of instances that have that value. If the attribute is numeric, the list
gives four statistics describing the distribution of values in the data— the minimum, maximum, mean
and standard deviation. And below these statistics there is a coloured histogram, colour-coded
according to the attribute chosen as the Class using the box above the histogram. (This box will bring
up a drop-down list of available selections when clicked.) Note that only nominal Class attributes will
result in a colour-coding. Finally, after pressing the Visualize All button, histograms for all the
attributes in the data are shown in a separate window.Returning to the attribute list, to begin with all the
tick boxes are unticked.
They can be toggled on/off by clicking on them individually. The four buttons above can also
be used to change the selection:
1. All. All boxes are ticked.
2. None. All boxes are cleared (unticked).
3. Invert. Boxes that are ticked become unticked and vice versa.
4. Pattern. Enables the user to select attributes based on a Perl 5 regular expression. E.g., .*_id selects all attributes whose names end with _id.
Once the desired attributes have been selected, they can be removed by clicking the Remove button
below the list of attributes. Note that this can be undone by clicking the Undo button, which is located
next to the Edit button in the top-right corner of the Preprocess panel.
The preprocess section allows filters to be defined that transform the data in various
ways. The Filter box is used to set up the filters that are required. At the left of the Filter box is a
Choose button. By clicking this button it is possible to select one of the filters in WEKA. Once a
filter has been selected, its name and options are shown in the field next to the Choose button.
Clicking on this box with the left mouse button brings up a GenericObjectEditor dialog box. A
click with the right mouse button (or Alt+Shift+left click) brings up a menu where you can
choose, either to display the properties in a GenericObjectEditor dialog box, or to copy the
current setup string to the clipboard.
The GenericObjectEditor dialog box lets you configure a filter. The same kind of
dialog box is used to configure other objects, such as classifiers and clusterers
(see below). The fields in the window reflect the available options.
Right-clicking (or Alt+Shift+Left-Click) on such a field will bring up a popup menu, listing the following
options:
1. Show properties... has the same effect as left-clicking on the field, i.e., a dialog appears allowing
you to alter the settings.
2. Copy configuration to clipboard copies the currently displayed configuration string to the system’s
clipboard and therefore can be used anywhere else in WEKA or in the console. This is rather handy if
you have to setup complicated, nested schemes.
3. Enter configuration... is the “receiving” end for configurations that got copied to the clipboard
earlier on. In this dialog you can enter a class name followed by options (if the class supports these).
This also allows you to transfer a filter setting from the Preprocess panel to a Filtered Classifier used in
the Classify panel.
Left-Clicking on any of these gives an opportunity to alter the filters settings. For example, the
setting may take a text string, in which case you type the string into the text field provided. Or it may
give a drop-down box listing several states to choose from. Or it may do something else, depending on
the information required. Information on the options is provided in a tool tip if you let the mouse pointer hover over the corresponding field. More information on the filter and its options can be obtained by clicking on the More button in the About panel at the top of the GenericObjectEditor window.
Applying Filters
Once you have selected and configured a filter, you can apply it to the data by pressing the
Apply button at the right end of the Filter panel in the Preprocess panel. The Preprocess panel will then
show the transformed data. The change can be undone by pressing the Undo button. You can also use
the Edit...button to modify your data manually in a dataset editor. Finally, the Save... button at the top
right of the Preprocess panel saves the current version of the relation in file formats that can represent
the relation, allowing it to be kept for future use.
EXERCISE 4:
Explore the various options in Weka for preprocessing data and apply them on each dataset, e.g., credit-g, Soybean, Vote, Iris, Contact-lenses.
OUTPUT:
VIVA QUESTIONS:
1. What are some applications of data mining?
Agriculture, biological data analysis, call record analysis, DSS, business intelligence systems, etc.
2. What is a virtual data warehouse?
A virtual data warehouse provides a compact view of the data inventory. It contains metadata and uses middleware to establish connections between different data sources.
5. Define KDD.
KDD (Knowledge Discovery in Databases) is the overall process of discovering useful knowledge from data, of which data mining is one step.
6. Define metadata.
A database that describes various aspects of data in the warehouse is called metadata.
B. Load each dataset into Weka and run Apriori algorithm with different support
and confidence values. Study the rules generated.
AIM: To select interesting rules from the set of all possible rules, constraints on various measures of
significance and interest can be used. The best known constraints are minimum thresholds on support
and confidence. The support supp(X) of an itemset X is defined as the proportion of transactions in
the data set which contain the itemset. In the example database, the itemset {milk, bread} has a
support of 2 / 5 = 0.4 since it occurs in 40% of all transactions (2 out of 5 transactions).
THEORY:
Association rule mining is defined as follows: Let I = {i1, i2, ..., in} be a set of n binary attributes called items. Let D = {t1, t2, ..., tm} be a set of transactions called the database. Each transaction in D has a unique transaction ID and contains a subset of the items in I. A rule is defined as an implication of the form X => Y, where X, Y ⊆ I and X ∩ Y = ∅. The sets of items (for short, itemsets) X and Y are called the antecedent (left-hand side or LHS) and consequent (right-hand side or RHS) of the rule, respectively.
To illustrate the concepts, we use a small example from the supermarket domain. The set of items is I = {milk, bread, butter, beer} and a small database containing the items (1 codes presence and 0 absence of an item in a transaction) is shown in the table below. An example rule for the supermarket could be {milk, bread} => {butter}, meaning that if milk and bread are bought, customers also buy butter.
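One example database consistent with the support and confidence figures used below (each row is a transaction; this particular table is an illustration, reconstructed rather than copied from the original manual):

transaction ID | milk | bread | butter | beer
1              |  1   |  1    |  0     |  0
2              |  0   |  0    |  1     |  0
3              |  0   |  0    |  0     |  1
4              |  1   |  1    |  1     |  0
5              |  0   |  1    |  0     |  0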
Note: this example is extremely small. In practical applications, a rule needs a support of several hundred
transactions before it can be considered statistically significant, and datasets often contain thousands or
millions of transactions.
The confidence of a rule is defined as conf(X => Y) = supp(X ∪ Y) / supp(X). For example, the rule {milk, bread} => {butter} has a confidence of 0.2 / 0.4 = 0.5 in the database, which means that for 50% of the transactions containing milk and bread the rule is correct. Confidence can be interpreted as an estimate of the probability P(Y | X), the probability of finding the RHS of the rule in transactions under the condition that these transactions also contain the LHS.
ALGORITHM:
Association rule mining aims to find association rules that satisfy the predefined minimum support and confidence for a given database. The problem is usually decomposed into two subproblems. One is to find those itemsets whose occurrences exceed a predefined threshold in the database; those itemsets are called frequent or large itemsets. The second problem is to generate association rules from those large itemsets under the constraint of minimal confidence.
Suppose one of the large itemsets is Lk = {I1, I2, ..., Ik}. Association rules with this itemset are generated in the following way: the first rule is {I1, I2, ..., Ik-1} => {Ik}; by checking its confidence, this rule can be determined to be interesting or not. Then other rules are generated by deleting the last item in the antecedent and inserting it into the consequent; the confidences of the new rules are checked to determine their interestingness. This process is iterated until the antecedent becomes empty. Since the second subproblem is quite straightforward, most research focuses on the first subproblem.
Apriori(T, ε)
    L1 <- {large 1-itemsets that appear in at least ε transactions}
    k <- 2
    while L(k-1) ≠ ∅
        Ck <- Generate(L(k-1))
        for each transaction t ∈ T
            Ct <- Subset(Ck, t)
            for each candidate c ∈ Ct
                count[c] <- count[c] + 1
        Lk <- {c ∈ Ck | count[c] ≥ ε}
        k <- k + 1
    return the union of all Lk
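The same experiment can also be scripted against Weka's Java API instead of the Explorer GUI. The following is a minimal sketch, not the prescribed lab procedure; the dataset path is an assumption, and supermarket.arff ships with standard Weka distributions:

    import weka.associations.Apriori;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class AprioriDemo {
        public static void main(String[] args) throws Exception {
            // Load a nominal transaction dataset (adjust the path to your copy)
            Instances data = new DataSource("data/supermarket.arff").getDataSet();

            Apriori apriori = new Apriori();
            apriori.setLowerBoundMinSupport(0.1); // minimum support threshold
            apriori.setMinMetric(0.9);            // minimum confidence threshold
            apriori.setNumRules(20);              // report the top 20 rules
            apriori.buildAssociations(data);

            // Print the generated rules for inspection
            System.out.println(apriori);
        }
    }

Rerunning with different setLowerBoundMinSupport and setMinMetric values reproduces the "different support and confidence values" part of the exercise.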
OUTPUT:
Association Rule:
An association rule has two parts, an antecedent (if) and a consequent (then). An antecedent is an item
found in the data. A consequent is an item that is found in combination with the antecedent.
Association rules are created by analyzing data for frequent if/then patterns and using the criteria
support and confidence to identify the most important relationships. Support is an indication of how
frequently the items appear in the database. Confidence indicates the number of times the if/then
statements have been found to be true.
In data mining, association rules are useful for analyzing and predicting customer behavior. They play
an important part in shopping basket data analysis, product clustering, catalog design and store layout.
Support count: The support count of an itemset X, denoted by X.count, in a data set T is the
number of transactions in T that contain X. Assume T has n transactions.
Then,
for a rule X -> Y:
support = (X ∪ Y).count / n
confidence = (X ∪ Y).count / X.count
EXERCISE 5: Apply different discretization filters on numerical attributes and run the
Apriori association rule algorithm. Study the rules generated. Derive interesting insights
and observe the effect of discretization in the rule generation process.
E.g., datasets such as vote, soybean, supermarket, and iris.
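One way this exercise can be scripted is sketched below, assuming iris.arff is available locally; Apriori requires nominal attributes, so the numeric ones are discretized first:

    import weka.associations.Apriori;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;
    import weka.filters.Filter;
    import weka.filters.unsupervised.attribute.Discretize;

    public class DiscretizeThenApriori {
        public static void main(String[] args) throws Exception {
            Instances data = new DataSource("data/iris.arff").getDataSet();

            // Discretize all numeric attributes into 5 equal-width bins
            Discretize disc = new Discretize();
            disc.setBins(5);
            disc.setInputFormat(data);
            Instances nominal = Filter.useFilter(data, disc);

            // Every attribute is now nominal, so Apriori can run
            Apriori apriori = new Apriori();
            apriori.buildAssociations(nominal);
            System.out.println(apriori);
        }
    }

Varying the number of bins (or swapping in PKIDiscretize) changes which rules are generated, which is exactly the effect of discretization that the exercise asks you to observe.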
There is a third type of data mart called hybrid. A hybrid data mart takes its source data both from
operational systems or external files and from the central data warehouse, thus combining the dependent
and independent approaches.
The association algorithm is used for recommendation engines based on market basket analysis. Such an
engine suggests products to customers based on what they bought earlier. The model is built on a dataset
containing identifiers, both for individual cases and for the items that cases contain. A group of items in
a data set is called an itemset. The algorithm traverses the data set to find items that appear together in
a case. The MINIMUM_SUPPORT parameter determines how frequently associated items must appear before they
form an itemset.
The support of a rule R is the fraction of transactions that contain all the items appearing in R. The
confidence of a rule X -> Y is the fraction of transactions containing X that also contain Y.
SIGNATURE OF FACULTY
AIM: To implement decision tree analysis by training a decision tree classifier on the training data in the data set.
THEORY:
Classification is a data mining function that assigns items in a collection to target categories or
classes. The goal of classification is to accurately predict the target class for each case in the data.
For example, a classification model could be used to identify loan applicants as low, medium, or high
credit risks. A classification task begins with a data set in which the class assignments are known.
For example, a classification model that predicts credit risk could be developed based on observed
data for many loan applicants over a period of time.
In addition to the historical credit rating, the data might track employment history, home ownership
or rental, years of residence, number and type of investments, and so on. Credit rating would be the
target, the other attributes would be the predictors, and the data for each customer would constitute a
case.
Classifications are discrete and do not imply order. Continuous, floating point values would indicate
a numerical, rather than a categorical, target. A predictive model with a numerical target uses a
regression algorithm, not a classification algorithm. The simplest type of classification problem is
binary classification. In binary classification, the target attribute has only two possible values: for
example, high credit rating or low credit rating. Multiclass targets have more than two values: for
example, low, medium, high, or unknown credit rating. In the model build (training) process, a
classification algorithm finds relationships between the values of the predictors and the values of the
target. Different classification algorithms use different techniques for finding relationships. These
relationships are summarized in a model, which can then be applied to a different data set in which
the class assignments are unknown.
Different Classification Algorithms: Oracle Data Mining provides the following algorithms for
classification:
Decision Tree - Decision trees automatically generate rules, which are conditional statements
that reveal the logic used to build the tree.
Naive Bayes - Naive Bayes uses Bayes' Theorem, a formula that calculates a probability by
counting the frequency of values and combinations of values in the historical data.
Classification Tab
Selecting a Classifier
At the top of the classify section is the Classifier box. This box has a text field that gives the
name of the currently selected classifier, and its options. Clicking on the text box with the left mouse
button brings up a GenericObjectEditor dialog box, just the same as for filters, that you can use to
configure the options of the current classifier. With a right click (or Alt+Shift+left click) you can once
again copy the setup string to the clipboard or display the properties in a GenericObjectEditor dialog
box. The Choose button allows you to choose one of the classifiers that are available in WEKA.
Test Options
The result of applying the chosen classifier will be tested according to the options that are set by
clicking in the Test options box. There are four test modes:
1. Use training set. The classifier is evaluated on how well it predicts the class of the instances it was
trained on.
2. Supplied test set. The classifier is evaluated on how well it predicts the class of a set of instances
loaded from a file. Clicking the Set... button brings up a dialog allowing you to choose the file to test
on.
3. Cross-validation. The classifier is evaluated by cross-validation, using the number of folds that are
entered in the Folds text field.
4. Percentage split. The classifier is evaluated on how well it predicts a certain percentage of the data
which is held out for testing. The amount of data held out depends on the value entered in the
% field.
Classifier Evaluation Options:
1. Output model. The classification model on the full training set is output so that it can be viewed,
visualized, etc. This option is selected by default.
2. Output per-class stats. The precision/recall and true/false statistics for each class are output. This
option is also selected by default.
3. Output entropy evaluation measures. Entropy evaluation measures are included in the output.
This option is not selected by default.
4. Output confusion matrix. The confusion matrix of the classifier’s predictions is included in the
output. This option is selected by default.
5. Store predictions for visualization. The classifier’s predictions are remembered so that they can
be visualized. This option is selected by default.
6. Output predictions. The predictions on the evaluation data are output. Note that in the case of a
cross-validation the instance numbers do not correspond to the location in the data!
7. Output additional attributes. If additional attributes need to be output alongside the
predictions, e.g., an ID attribute for tracking misclassifications, then the index of this attribute can be
specified here. The usual Weka ranges are supported, “first” and “last” are therefore valid indices
as well (example: “first-3,6,8,12-last”).
8. Cost-sensitive evaluation. The errors are evaluated with respect to a cost matrix. The Set...
button allows you to specify the cost matrix used.
9. Random seed for xval / % Split. This specifies the random seed used when randomizing the data
before it is divided up for evaluation purposes.
10. Preserve order for % Split. This suppresses the randomization of the data before splitting into
train and test set.
11. Output source code. If the classifier can output the built model as Java source code, you can
specify the class name here. The code will be printed in the “Classifier output” area.
The Class Attribute: the classifiers in WEKA are designed to be trained to predict a single “class”
attribute, which is the target for prediction. Some classifiers can only learn nominal classes; others can
only learn numeric classes (regression problems); still others can learn both.
By default, the class is taken to be the last attribute in the data. If you want to train a classifier to
predict a different attribute, click on the box below the Test options box to bring up a drop-down
list of attributes to choose from.
Training a Classifier
Once the classifier, test options and class have all been set, the learning process is started by
clicking on the Start button. While the classifier is busy being trained, the little bird moves around. You
can stop the training process at any time by clicking on the Stop button. When training is complete,
several things happen. The Classifier output area to the right of the display is filled with text describing
the results of training and testing. A new entry appears in the Result list box. We look at the result list
below; but first we investigate the text that has been output.
A. Load each dataset into Weka and run the ID3 and J48 classification algorithms, and study the
classifier output. Compute the entropy values and the Kappa statistic.
Ans:
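Besides the Explorer steps, the same run can be reproduced in code. A sketch, assuming the nominal weather data that ships with Weka (ID3 is available as weka.classifiers.trees.Id3 only in versions or packages that include it, so J48 is shown here):

    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class J48Demo {
        public static void main(String[] args) throws Exception {
            Instances data = new DataSource("data/weather.nominal.arff").getDataSet();
            data.setClassIndex(data.numAttributes() - 1); // class = last attribute

            J48 tree = new J48();
            tree.buildClassifier(data);
            System.out.println(tree); // textual model, as in the Classifier output area

            // Evaluate on the training set and report the Kappa statistic
            Evaluation eval = new Evaluation(data);
            eval.evaluateModel(tree, data);
            System.out.println(eval.toSummaryString());
            System.out.println("Kappa statistic: " + eval.kappa());
        }
    }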
The text in the Classifier output area has scroll bars allowing you to browse the
results. Clicking with the left mouse button into the text area, while holding Alt and
Shift, brings up a dialog that enables you to save the displayed output
in a variety of formats (currently, BMP, EPS, JPEG and PNG). Of course, you can
also resize the Explorer window to get a larger display area.
The output is split into several sections:
1. Run information. A list of information giving the learning scheme options, relation name, instances,
attributes and test mode that were involved in the process.
2. Classifier model (full training set). A textual representation of the classification model that was
produced on the full training data.
3. The results of the chosen test mode are broken down thus.
4. Summary. A list of statistics summarizing how accurately the classifier was able to predict the true
class of the instances under the chosen test mode.
5. Detailed Accuracy By Class. A more detailed per-class break down of the classifier’s
prediction accuracy.
6. Confusion Matrix. Shows how many instances have been assigned to each class. Elements show the
number of test examples whose actual class is the row and whose predicted class is the column.
7. Source code (optional). This section lists the Java source code if one
chose “Output source code” in the “More options” dialog.
B. Extract if-then rules from the decision tree generated by the classifier. Observe the confusion matrix and
derive the Accuracy, F-measure, TP rate, FP rate, Precision and Recall values. Apply a cross-validation
strategy with various fold levels and compare the accuracy results.
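A sketch of how these measures can be read off programmatically with cross-validation at several fold levels (the dataset path, fold counts and random seed are assumptions):

    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class CrossValMetrics {
        public static void main(String[] args) throws Exception {
            Instances data = new DataSource("data/credit-g.arff").getDataSet();
            data.setClassIndex(data.numAttributes() - 1);

            for (int folds : new int[] {2, 5, 10}) {
                Evaluation eval = new Evaluation(data);
                eval.crossValidateModel(new J48(), data, folds, new Random(1));

                int c = 0; // index of the class value of interest
                System.out.println(folds + "-fold accuracy: " + eval.pctCorrect() + "%");
                System.out.println("  TP rate   : " + eval.truePositiveRate(c));
                System.out.println("  FP rate   : " + eval.falsePositiveRate(c));
                System.out.println("  Precision : " + eval.precision(c));
                System.out.println("  Recall    : " + eval.recall(c));
                System.out.println("  F-measure : " + eval.fMeasure(c));
                System.out.println(eval.toMatrixString("  Confusion matrix:"));
            }
        }
    }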
A decision tree is a structure that includes a root node, branches, and leaf nodes. Each internal node
denotes a test on an attribute, each branch denotes the outcome of a test, and each leaf node holds a
class label. The topmost node in the tree is the root node.
The following decision tree is for the concept buys_computer, which indicates whether a customer at a
company is likely to buy a computer or not. Each internal node represents a test on an attribute. Each
leaf node represents a class.
IF-THEN Rules:
A rule-based classifier makes use of a set of IF-THEN rules for classification. We can express a rule in
the following form:
IF condition THEN conclusion
Points to remember:
The antecedent part (the condition) consists of one or more attribute tests, and these tests are
logically ANDed. The consequent part (the conclusion) is the class prediction.
Rule Extraction
Here we will learn how to build a rule-based classifier by extracting IF-THEN rules from a decision
tree.
Points to remember −
One rule is created for each path from the root to the leaf node.
The leaf node holds the class prediction, forming the rule consequent.
Some of the sequential covering algorithms are AQ, CN2, and RIPPER. As per the general strategy,
the rules are learned one at a time. Each time a rule is learned, the tuples covered by the rule are
removed and the process continues for the rest of the tuples.
Note: Decision tree induction can be considered as learning a set of rules simultaneously, because the
path to each leaf in a decision tree corresponds to a rule.
The following is the sequential learning algorithm, where rules are learned for one class at a time.
When learning a rule for a class Ci, we want the rule to cover all the tuples from class Ci only and no
tuple from any other class.
Algorithm: Sequential Covering
Input:
    D, a data set of class-labeled tuples;
    Att_vals, the set of all attributes and their possible values.
Output: a set of IF-THEN rules.
Method:
    Rule_set = { };  // the set of rules learned is initially empty
    for each class c do
        repeat
            Rule = Learn_One_Rule(D, Att_vals, c);
            remove tuples covered by Rule from D;
        until termination condition;
        Rule_set = Rule_set + Rule;  // add the new rule to the rule set
    end for
    return Rule_set;
The assessment of quality is made on the original set of training data. The rule may perform
well on the training data but less well on subsequent data; that is why rule pruning is required.
The rule is pruned by removing a conjunct. The rule R is pruned if the pruned version of R has
greater quality, as assessed on an independent set of tuples.
FOIL is one of the simple and effective methods for rule pruning. For a given rule R,
FOIL_Prune(R) = (pos - neg) / (pos + neg)
where pos and neg are the numbers of positive and negative tuples covered by R, respectively.
Note: this value will increase with the accuracy of R on the pruning set. Hence, if the FOIL_Prune
value is higher for the pruned version of R, then we prune R.
OUTPUT:
EXERCISE 6: Load each dataset into Weka, run the ID3 and J48 classification algorithms, and study
the classifier output for the available datasets.
OUTPUT:
Viva voce questions
1. What is a decision tree algorithm?
A decision tree is a tree in which every node is either a leaf node or a decision node.
The tree takes an object as input and outputs some decision. All paths from the root node to a leaf
node are reached by using AND, OR, or both. The tree is constructed using the regularities
of the data. The decision tree is not affected by automatic data preparation.
2. What are the applications of data mining?
Agriculture, biological data analysis, call record analysis, decision support systems, business
intelligence systems, etc.
SIGNATURE OF FACULTY:
THEORY:
Naive Bayes classifier assumes that the presence (or absence) of a particular feature of a class is
unrelated to the presence (or absence) of any other feature. For example, a fruit may be considered to
be an apple if it is red, round, and about 4" in diameter. Even though these features depend on the
existence of the other features, a naive Bayes classifier considers all of these properties to
independently contribute to the probability that this fruit is an apple.
An advantage of the naive Bayes classifier is that it requires only a small amount of training data to
estimate the parameters (means and variances of the variables) necessary for classification. Because
independent variables are assumed, only the variances of the variables for each class need to be
determined, and not the entire covariance matrix.
The naive Bayes probabilistic model: the model is a conditional model P(C | F1, ..., Fn) over a dependent
class variable C, conditioned on several feature variables F1 through Fn. Using Bayes' theorem, this can
be written as
P(C | F1, ..., Fn) = P(C) P(F1, ..., Fn | C) / P(F1, ..., Fn)
Now the "naive" conditional independence assumptions come into play: assume that each feature
Fi is conditionally independent of every other feature Fj given the class.
This means that, under the above independence assumptions, the conditional distribution over the
class variable C can be expressed as
P(C | F1, ..., Fn) = (1/Z) P(C) P(F1 | C) P(F2 | C) ... P(Fn | C)
where Z is a scaling factor that depends only on F1, ..., Fn.
Steps to run the Naïve Bayes and k-nearest neighbour classification algorithms in WEKA
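In the Explorer these steps mirror the J48 runs above: Classify tab, Choose button, then bayes > NaiveBayes or lazy > IBk (k-nearest neighbour), then Start. A sketch of the same comparison in code, under the usual assumption about the dataset path:

    import java.util.Random;
    import weka.classifiers.Classifier;
    import weka.classifiers.Evaluation;
    import weka.classifiers.bayes.NaiveBayes;
    import weka.classifiers.lazy.IBk;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class BayesVsKnn {
        public static void main(String[] args) throws Exception {
            Instances data = new DataSource("data/iris.arff").getDataSet();
            data.setClassIndex(data.numAttributes() - 1);

            Classifier[] models = { new NaiveBayes(), new IBk(3) }; // k-NN with k = 3
            for (Classifier model : models) {
                Evaluation eval = new Evaluation(data);
                eval.crossValidateModel(model, data, 10, new Random(1));
                System.out.println(model.getClass().getSimpleName()
                        + " accuracy: " + eval.pctCorrect() + "%");
            }
        }
    }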
OUTPUT:
EXERCISE 7:
Compare the classification results of the ID3, J48, Naïve Bayes and k-NN classifiers for each
dataset, deduce which classifier performs best and which worst for each dataset, and
justify your answer.
OUTPUT:
A decision tree is a hierarchical classifier which compares data with a range of properly
selected features.
Multimedia data mining is a subfield of data mining that deals with the extraction of
implicit knowledge, multimedia data relationships, or other patterns not explicitly stored in
multimedia databases.
Text mining is the procedure of synthesizing information by analyzing relations, patterns, and rules
among textual data. These procedures include text summarization, text categorization, and text
clustering.
The Naïve Bayes algorithm is used to generate mining models. These models help to identify
relationships between the input columns and the predictable columns. This algorithm can be used in the
initial stage of exploration. The algorithm calculates the probability of every state of each input
column given each possible state of the predictable column. After the model is built, the results can
be used for exploration and for making predictions.
A distributed data warehouse shares data across multiple data repositories for the purpose of
OLAP operations.
SIGNATURE OF FACULTY:
AIM: To understand selecting and removing attributes, and to reload the ARFF data file to get back
all the attributes in the data set.
Selecting a Clusterer
By now you will be familiar with the process of selecting and configuring objects. Clicking on the
clustering scheme listed in the Clusterer box at the top of the window brings up a GenericObjectEditor
dialog with which to choose a new clustering scheme.
Cluster Modes
The Cluster mode box is used to choose what to cluster and how to evaluate
the results. The first three options are the same as for classification: Use training set, Supplied test set and
Percentage split (Section 5.3.1)—except that now the data is assigned to clusters instead of trying to
predict a specific class. The fourth mode, Classes to clusters evaluation, compares how well the chosen
clusters match up with a pre-assigned class in the data. The drop-down box below this option selects the
class, just as in the Classify panel.
An additional option in the Cluster mode box, the Store clusters for visualization tick box,
determines whether or not it will be possible to visualize the clusters once training is complete. When
dealing with datasets that are so large that memory becomes a problem it may be helpful to disable this
option.
Ignoring Attributes
Often, some attributes in the data should be ignored when clustering. The Ignore attributes button
brings up a small window that allows you to select which attributes are ignored. Clicking on an attribute
in the window highlights it, holding down the SHIFT key selects a range
of consecutive attributes, and holding down CTRL toggles individual attributes on and off. To cancel the
selection, back out with the Cancel button. To activate it, click the Select button. The next time clustering
is invoked, the selected attributes are ignored.
Learning Clusters
The Cluster section, like the Classify section, has Start/Stop buttons, a result text area and a result
list. These all behave just like their classification counterparts. Right-clicking an entry in the result list
brings up a similar menu, except that it shows only two visualization options: Visualize cluster
assignments and Visualize tree. The latter is grayed out when it is not applicable.
A. Load each dataset into Weka and run the simple k-means clustering algorithm with different
values of k (the number of desired clusters). Study the clusters formed. Observe the sum of
squared errors and the centroids, and derive insights.
Ans:
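A minimal sketch of the same experiment through the API (the dataset path and the tried values of k are assumptions; the class attribute is removed because clustering is unsupervised):

    import weka.clusterers.SimpleKMeans;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;
    import weka.filters.Filter;
    import weka.filters.unsupervised.attribute.Remove;

    public class KMeansDemo {
        public static void main(String[] args) throws Exception {
            Instances data = new DataSource("data/iris.arff").getDataSet();

            // Ignore the class attribute (the last one in iris.arff)
            Remove rm = new Remove();
            rm.setAttributeIndices("last");
            rm.setInputFormat(data);
            Instances noClass = Filter.useFilter(data, rm);

            for (int k : new int[] {2, 3, 5}) {
                SimpleKMeans km = new SimpleKMeans();
                km.setNumClusters(k);
                km.buildClusterer(noClass);
                System.out.println("k = " + k
                        + ", within-cluster sum of squared errors = " + km.getSquaredError());
                System.out.println(km.getClusterCentroids());
            }
        }
    }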
OUTPUT:
Visualize Features
WEKA’s visualization allows you to visualize a 2-D plot of the current working relation.
Visualization is very useful in practice; it helps to determine the difficulty of the learning problem.
WEKA can visualize single attributes (1-d) and pairs of attributes (2-d), and rotate 3-d visualizations
(Xgobi-style). WEKA has a “Jitter” option to deal with nominal attributes and to detect “hidden” data
points.
Access to visualization from the classifier, cluster and attribute selection panels is available from a
popup menu. Click the right mouse button over an entry in the result list to bring up the menu.
You will be presented with options for viewing or saving the text output and
--- depending on the scheme --- further options for visualizing errors, clusters, trees etc.
Select a square that corresponds to the attributes you would like to visualize. For example, let’s choose
‘outlook’ for the X axis and ‘play’ for the Y axis. Click anywhere inside the square that corresponds to
‘play’ on the left and ‘outlook’ at the top.
In the visualization window, beneath the X-axis selector there is a drop-down list,
‘Colour’, for choosing the color scheme. This allows you to choose the color of points based on the
attribute selected. Below the plot area, there is a legend that describes what values the colors
correspond to. In our example, red represents ‘no’, while blue represents ‘yes’. For better visibility
you should change the color of the label ‘yes’. Left-click on ‘yes’ in the ‘Class colour’ box and select
a lighter color from the color palette.
Selecting Instances
Sometimes it is helpful to select a subset of the data using the visualization tool. A special case is
the ‘UserClassifier’, which lets you build your own classifier by interactively selecting instances.
Below the Y axis there is a drop-down list that allows you to choose a selection method. A group of
points on the graph can be selected in four ways [2]:
1. Select Instance. Click on an individual data point. It brings up a window listing the
attributes of the point. If more than one point appears at the same location, more than one set
of attributes will be shown.
2. Rectangle. You can create a rectangle, by dragging, and select the points inside it.
3. Polygon. You can select several points by building a free-form polygon. Left-click on the
graph to add vertices to the polygon and right-click to complete it.
4. Polyline. To distinguish the points on one side from the ones on the other, you can build a
polyline. Left-click on the graph to add vertices to the polyline and right-click to finish.
SIGNATURE OF FACULTY:
Description: The business of banks is making loans. Assessing the creditworthiness of an
applicant is of crucial importance. You have to develop a system to help a loan officer decide
whether the credit of a customer is good or bad. A bank’s business rules regarding loans
must consider two opposing factors. On the one hand, a bank wants to make as many loans as
possible: interest on these loans is the bank’s profit source. On the other hand, a bank cannot
afford to make too many bad loans; too many bad loans could lead to the collapse of the bank. The
bank’s loan policy must involve a compromise: not too strict, and not too lenient.
To do the assignment, you first and foremost need some knowledge about the world of credit.
You can acquire such knowledge in a number of ways.
1. Knowledge engineering: Find a loan officer who is willing to talk. Interview her and try to
represent her knowledge in the form of production rules.
2. Books: Find some training manuals for loan officers or perhaps a suitable textbook on finance.
Translate this knowledge from text form to production rule form.
3. Common sense: Imagine yourself as a loan officer and make up reasonable rules which can be
used to judge the creditworthiness of a loan applicant.
4. Case histories: Find records of actual cases where competent loan officers correctly judged
when, and when not, to approve a loan application.
A. List all the categorical (or nominal) attributes and the real-valued attributes separately.
What attributes do you think might be crucial in making the credit assessment? Come up with some
simple rules in plain English using your selected attributes.
EXERCISE 9:
One type of model that you can create is a decision tree. Train a decision tree using the
complete data set as the training data. Report the model obtained after training.
EXERCISE 10:
1) Suppose you use your above model, trained on the complete dataset, and classify
credit as good/bad for each of the examples in the dataset. What % of examples can you
classify correctly? (This is also called testing on the training set.) Why do you think you
cannot get 100% training accuracy?
Ans) Steps followed are:
1. Double click on credit-g.arff file.
2. Click on classify tab.
3. Click on choose button.
4. Expand tree folder and select J48
5. Click on use training set in test options.
6. Click on start button.
7. On the right side we find the confusion matrix.
8. Note the correctly classified instances.
Output:
If we use our above model, trained on the complete dataset, to classify credit as good/bad for each
of the examples in that dataset, we cannot get 100% training accuracy: only 85.5% of the examples
are classified correctly.
2) Is testing on the training set as you did above a good idea? Why or why not?
SIGNATURE OF FACULTY:
WEEK-6
One approach for solving the problem encountered in the previous question is to use
cross-validation. Describe briefly what cross-validation is. Train a decision tree again
using cross-validation and report your results. Does the accuracy increase or decrease? Why?
Ans) steps followed are:
1. Double click on the credit-g.arff file.
2. Click on the classify tab.
3. Click on the choose button.
4. Expand the tree folder and select J48.
5. Click on cross-validation in test options.
6. Select folds as 10.
7. Click on start.
8. Change the folds to 5.
9. Again click on start.
10. Change the folds to 2.
11. Click on start.
12. Right click on the blue bar under the result list and go to visualize tree.
Output:
Cross-Validation Definition: The classifier is evaluated by cross validation using the number of folds that
are entered in the folds text field.
In the Classify tab, select the cross-validation option with a fold size of 2 and press Start; next
change the fold size to 5 and press Start, and then change the fold size to 10 and press Start.
SIGNATURE OF FACULTY:
WEEK:7
Check to see if the data shows a bias against “foreign workers” or “personal-status”.
One way to do this is to remove these attributes from the data set and see if the decision
tree created in those cases is significantly different from the full-dataset case, which you
have already done. Did removing these attributes have any significant effect? Discuss.
We use the Preprocess tab in the Weka GUI Explorer to remove the attributes “foreign_worker” and
“personal_status” one by one. In the Classify tab, select the Use training set option, then
press the Start button. With these attributes removed from the dataset, we can see the change in
accuracy compared to the full data set.
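A sketch of the same comparison in code; the attribute indices are assumptions, so verify the positions of personal_status and foreign_worker in your copy of credit-g.arff before running:

    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;
    import weka.filters.Filter;
    import weka.filters.unsupervised.attribute.Remove;

    public class RemoveAndCompare {
        public static void main(String[] args) throws Exception {
            Instances full = new DataSource("data/credit-g.arff").getDataSet();
            full.setClassIndex(full.numAttributes() - 1);

            // In the standard credit-g.arff, personal_status is attribute 9 and
            // foreign_worker is attribute 20 (1-based) -- check your copy.
            Remove rm = new Remove();
            rm.setAttributeIndices("9,20");
            rm.setInputFormat(full);
            Instances reduced = Filter.useFilter(full, rm);
            reduced.setClassIndex(reduced.numAttributes() - 1);

            for (Instances d : new Instances[] { full, reduced }) {
                J48 tree = new J48();
                tree.buildClassifier(d);
                Evaluation eval = new Evaluation(d);
                eval.evaluateModel(tree, d); // evaluate on the training set
                System.out.println(d.numAttributes() + " attributes -> "
                        + eval.pctCorrect() + "% on the training set");
            }
        }
    }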
SIGNATURE OF FACULTY:
WEEK :8
Another question might be, do you really need to input so many attributes to get good
results? Maybe only a few would do. For example, you could try just having attributes
2, 3, 5, 7, 10, 17 and 21. Try out some combinations. (You removed two attributes in
problem 7. Remember to reload the ARFF data file to get all the attributes back before
you start selecting the ones you want.)
OUTPUT:
We use the Preprocess tab in the Weka GUI Explorer to remove the 2nd attribute (duration). In the
Classify tab, select the Use training set option, then press the Start button. With this attribute
removed from the dataset, we can see the change in accuracy compared to the full data set.
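To keep only attributes 2, 3, 5, 7, 10, 17 and 21 rather than removing attributes one by one, the Remove filter can be inverted, as in this sketch (same assumptions as before):

    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;
    import weka.filters.Filter;
    import weka.filters.unsupervised.attribute.Remove;

    public class KeepSelectedAttributes {
        public static void main(String[] args) throws Exception {
            Instances data = new DataSource("data/credit-g.arff").getDataSet();

            Remove rm = new Remove();
            rm.setAttributeIndices("2,3,5,7,10,17,21"); // attributes to KEEP ...
            rm.setInvertSelection(true);                // ... because the selection is inverted
            rm.setInputFormat(data);
            Instances reduced = Filter.useFilter(data, rm);
            reduced.setClassIndex(reduced.numAttributes() - 1);

            System.out.println(reduced.numAttributes() + " attributes remain");
        }
    }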
SIGNATURE OF FACULTY:
WEEK-9
Sometimes, the cost of rejecting an applicant who actually has good credit might be
higher than the cost of accepting an applicant who has bad credit. Instead of counting the
misclassifications equally in both cases, give a higher cost to the first case (say, cost 5)
and a lower cost to the second case, by using a cost matrix in Weka. Train your decision
tree and report the decision tree and cross-validation results. Are they significantly
different from the results obtained in problem 6?
Ans) steps followed are:
1. Double click on credit-g.arff file.
2. Click on classify tab.
3. Click on choose button.
4. Expand tree folder and select J48
5. Click on start
6. Note down the accuracy values
7. Now click on the credit-g.arff file.
8. Click on attributes 2,3,5,7,10,17,21.
9. Click on invert.
10. Click on classify tab
11. Choose J48 algorithm
12. Select Cross validation fold as 2
13. Click on start and note down the accuracy values.
14. Again make cross validation folds as 10 and note down the accuracy values.
15. Again make cross validation folds as 20 and note down the accuracy values.
OUTPUT:
In the Weka GUI Explorer, select the Classify tab and the Use training set option. Press the Choose
button and select J48 as the decision tree technique. Then press the More options button to get the
classifier evaluation options window; in it, select cost-sensitive evaluation and press the Set...
button to get the Cost Matrix Editor. Change the number of classes to 2 and press the Resize button
to get a 2x2 cost matrix. Change the value at location (0,1) of the cost matrix to 5; the modified
cost matrix is then as follows.
0.0 5.0
1.0 0.0
Then close the cost matrix editor and press the OK button. Then press the Start button.
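The same cost matrix can also be applied programmatically. One way (an alternative to the Explorer's cost-sensitive evaluation option, not the lab's prescribed route) is the CostSensitiveClassifier meta-learner, sketched here:

    import java.util.Random;
    import weka.classifiers.CostMatrix;
    import weka.classifiers.Evaluation;
    import weka.classifiers.meta.CostSensitiveClassifier;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class CostSensitiveDemo {
        public static void main(String[] args) throws Exception {
            Instances data = new DataSource("data/credit-g.arff").getDataSet();
            data.setClassIndex(data.numAttributes() - 1);

            // Rows are the actual class, columns the predicted class
            CostMatrix costs = new CostMatrix(2);
            costs.setElement(0, 1, 5.0); // rejecting a good applicant costs 5
            costs.setElement(1, 0, 1.0); // accepting a bad applicant costs 1

            CostSensitiveClassifier csc = new CostSensitiveClassifier();
            csc.setClassifier(new J48());
            csc.setCostMatrix(costs);
            csc.buildClassifier(data);

            Evaluation eval = new Evaluation(data, costs);
            eval.crossValidateModel(csc, data, 10, new Random(1));
            System.out.println(eval.toSummaryString());
            System.out.println("Total cost: " + eval.totalCost());
        }
    }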
SIGNATURE OF FACULTY:
WEEK:10
Do you think it is a good idea to prefer simple decision trees instead of long,
complex decision trees? How does the complexity of a decision tree relate to the bias
of the model?
Ans)
OUTPUT:
SIGNATURE OF FACULTY:
WEEK : 11
You can make your decision trees simpler by pruning the nodes. One approach is to use
reduced error pruning. Explain this idea briefly. Try reduced error pruning for training
your decision trees using cross-validation and report the decision trees you obtain.
Also report your accuracy using the pruned model. Does your accuracy increase?
Ans)
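In J48, reduced error pruning can be switched on directly. A sketch comparing a reduced-error-pruned tree against an unpruned one (paths, folds and seed are assumptions):

    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class ReducedErrorPruningDemo {
        public static void main(String[] args) throws Exception {
            Instances data = new DataSource("data/credit-g.arff").getDataSet();
            data.setClassIndex(data.numAttributes() - 1);

            J48 pruned = new J48();
            pruned.setReducedErrorPruning(true); // holds out part of the data for pruning

            J48 unpruned = new J48();
            unpruned.setUnpruned(true);

            for (J48 tree : new J48[] { pruned, unpruned }) {
                Evaluation eval = new Evaluation(data);
                eval.crossValidateModel(tree, data, 10, new Random(1));
                System.out.println((tree.getReducedErrorPruning() ? "REP " : "none")
                        + " pruning -> " + eval.pctCorrect() + "% accuracy");
            }
        }
    }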
OUTPUT:
SIGNATURE OF FACULTY:
WEEK :12
How can you convert a decision tree into "if-then-else rules"? Make up your own small
decision tree consisting of 2-3 levels and convert it into a set of rules. There also exist
different classifiers that output the model in the form of rules. One such classifier in
Weka is rules.PART; train this model and report the set of rules obtained. Sometimes
just one attribute can be good enough in making the decision, yes, just one! Can you
predict what attribute that might be in this data set? The OneR classifier uses a single
attribute to make decisions (it chooses the attribute based on minimum error). Report the
rule obtained by training a OneR classifier. Rank the performance of J48, PART and OneR.
Ans)
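A sketch that trains all three classifiers, prints their models (tree or rule set) and ranks them by cross-validated accuracy; the dataset path, fold count and seed are assumptions:

    import java.util.Random;
    import weka.classifiers.Classifier;
    import weka.classifiers.Evaluation;
    import weka.classifiers.rules.OneR;
    import weka.classifiers.rules.PART;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class RankClassifiers {
        public static void main(String[] args) throws Exception {
            Instances data = new DataSource("data/credit-g.arff").getDataSet();
            data.setClassIndex(data.numAttributes() - 1);

            Classifier[] models = { new J48(), new PART(), new OneR() };
            for (Classifier model : models) {
                model.buildClassifier(data);
                System.out.println(model); // the tree or rule set in textual form

                Evaluation eval = new Evaluation(data);
                eval.crossValidateModel(model, data, 10, new Random(1));
                System.out.println(model.getClass().getSimpleName()
                        + ": " + eval.pctCorrect() + "% (10-fold CV)");
            }
        }
    }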
OUTPUT:
SIGNATURE OF FACULTY:
Data Preprocessing
The preprocess section allows filters to be defined that transform the data in various ways. The Filter
box is used to set up filters that are required. At the left of the Filter box is a Choose button. By
clicking this button it is possible to select one of the filters in Weka. Once a filter has been selected,
its name and options are shown in the field next to the Choose button. Clicking on this box brings up
a GenericObjectEditor dialog box, which lets you configure a filter. Once you are happy with the
settings you have chosen, click OK to return to the main Explorer window.
Now you can apply it to the data by pressing the Apply button at the right end of the Filter panel.
The Preprocess panel will then show the transformed data. The change can be undone using the
Undo button. Use the Edit button to view your transformed data in the dataset editor.
• Use the filter AddExpression to add an attribute which is the average of attributes M1 and
M2. Name this attribute AVG (see the sketch after this list).
• Use the attribute filters Discretize and PKIDiscretize to discretize the M1 and M2
attributes into five bins. (NOTE: Open the file afresh to apply the second filter
since there would be no numeric attribute to discretize after you have applied the first filter.)
• Perform Normalize and Standardize on the dataset and identify the difference between
these operations.
• Use the attribute filter FirstOrder to convert the M1 and M2 attributes into a single
attribute representing the first differences between them.
• Add a nominal attribute Grade and use the filter MakeIndicator to convert the attribute into
a Boolean attribute.
• Check whether you can accomplish the task in the previous step using the filter MergeTwoValues.
• Try the following transformation functions and identify the purpose of each
• NumericTransform
• NominalToBinary
• NumericToBinary
• Remove
• RemoveType
• RemoveUseless
• ReplaceMissingValues
• SwapValues
• Perform Randomize on the given dataset and try to correlate the resultant sequence with
the given one.
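As one illustration of these filter tasks, here is a sketch using AddExpression and Normalize. The attribute names M1 and M2 follow the exercise; the file name student.arff and the attribute positions are assumptions:

    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;
    import weka.filters.Filter;
    import weka.filters.unsupervised.attribute.AddExpression;
    import weka.filters.unsupervised.attribute.Normalize;

    public class PreprocessFilters {
        public static void main(String[] args) throws Exception {
            // Assumes a dataset whose 2nd and 3rd attributes are the marks M1 and M2
            Instances data = new DataSource("data/student.arff").getDataSet();

            // AVG = average of attributes a2 and a3 (AddExpression refers to
            // attributes as a1, a2, ... in its expression syntax)
            AddExpression avg = new AddExpression();
            avg.setExpression("(a2+a3)/2");
            avg.setName("AVG");
            avg.setInputFormat(data);
            Instances withAvg = Filter.useFilter(data, avg);

            // Normalize rescales every numeric attribute into [0, 1];
            // Standardize would instead give zero mean and unit variance
            Normalize norm = new Normalize();
            norm.setInputFormat(withAvg);
            Instances normalized = Filter.useFilter(withAvg, norm);

            System.out.println(normalized.toSummaryString());
        }
    }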