
Machine Learning 21AI63

MODULE 2
Introduction: End-to-End Machine Learning Project
In this chapter, an example project is presented end to end, imagining the scenario of being a
recently hired data scientist at a real estate company. Here are the main steps to go through:

1. Look at the big picture.
2. Get the data.
3. Discover and visualize the data to gain insights.
4. Prepare the data for Machine Learning algorithms.
5. Select a model and train it.
6. Fine-tune your model.
7. Present your solution.
8. Launch, monitor, and maintain your system.

Look at the Big Picture


Welcome to Machine Learning Housing Corporation!
• The first task is to build a model of housing prices in California using the California
census data.
• This data includes metrics such as population, median income, median housing price,
and more for each block group in California.
• Block groups are the smallest geographical units for which the US Census Bureau
publishes sample data, typically having a population of 600 to 3,000 people. These
will be referred to as “districts” for simplicity.
• The model should learn from this data and be able to predict the median housing price
in any district based on all the other metrics.

Frame the Problem

Each of the following questions helps in framing and understanding the machine learning project more
effectively.
1. What exactly is the business objective? - This question aims to clarify the ultimate
goal of the project and how the company expects to benefit from the model.

2. How does the company expect to use and benefit from this model? - This is important
because it will determine how to frame the problem, what algorithms to select, what
performance measure to use to evaluate the model, and how much effort to spend
tweaking it.


Here, the model’s output (a prediction of a district’s median housing price) will be fed
to another Machine Learning system (Figure 2-2), along with many other signals. This
downstream system will determine whether it is worth investing in a given area or not.
Getting this right is critical, as it directly affects revenue.

3. What does the current solution look like (if any)? - It will give a reference
performance, as well as insights on how to solve the problem.

With all this information, you are ready to start designing the system.

1. Is it supervised, unsupervised, or reinforcement learning? - This is a supervised
learning task because we have labeled training examples where each instance comes
with the expected output, i.e., the district’s median housing price.

2. Is it a classification task, a regression task, or something else? - It is a regression task
because we are asked to predict a continuous value (the median housing price). More
specifically, it is a multiple regression problem since the system will use multiple
features to make a prediction (such as population, median income, etc.). It is also a
univariate regression problem since we are only trying to predict a single value for
each district. If we were trying to predict multiple values per district, it would be a
multivariate regression problem.

3. Should you use batch learning or online learning techniques? - Batch learning should
be chosen because there is no continuous flow of new data, no immediate need to
adjust to changing data, and the data is small enough to fit in memory.


Select a Performance Measure

1. Root Mean Square Error (RMSE)


A performance measure for regression problems is the Root Mean Square Error (RMSE). It
gives an idea of how much error the system typically makes in its predictions, with a higher
weight for large errors.
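In LaTeX form (reconstructed from the notation defined below, matching the book’s definition):

\mathrm{RMSE}(\mathbf{X}, h) = \sqrt{ \frac{1}{m} \sum_{i=1}^{m} \left( h(\mathbf{x}^{(i)}) - y^{(i)} \right)^{2} }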

This equation introduces several very common Machine Learning notations:


• m is the number of instances in the dataset. For example, if you are evaluating the
RMSE on a validation set of 2,000 districts, then m = 2,000.
• x(i) is a vector of all the feature values (excluding the label) of the ith instance in the
dataset, and y(i) is its label (the desired output value for that instance).
• X is a matrix containing all the feature values (excluding labels) of all instances in the
dataset.
• h is called a hypothesis. When the system is given an instance’s feature vector x(i), it
outputs a predicted value ŷ(i) = h(x(i)) for that instance.
• RMSE(X,h) is the cost function measured on the set of examples using hypothesis h.

2. Mean Absolute Error (Average Absolute Deviation)

This performance measure is preferred when there are many outlier districts, since it gives less weight to large errors than the RMSE.
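In LaTeX form (using the same notation as the RMSE above):

\mathrm{MAE}(\mathbf{X}, h) = \frac{1}{m} \sum_{i=1}^{m} \left| h(\mathbf{x}^{(i)}) - y^{(i)} \right|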

Both the RMSE and the MAE are ways to measure the distance between two vectors: the
vector of predictions and the vector of target values.


Get the Data

Create the Workspace

• First, ensure Python is installed.


• Next, create a workspace directory for your Machine Learning code and datasets.
• Open a terminal and type the following commands:
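For example (following the book’s setup; the workspace path is an arbitrary choice):

$ export ML_PATH="$HOME/ml"    # change the path if you prefer
$ mkdir -p $ML_PATH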

• A number of Python modules are needed: Jupyter, NumPy, Pandas, Matplotlib, and
Scikit-Learn.
• The system’s packaging system (e.g., apt-get on Ubuntu, or MacPorts or Homebrew
on macOS) can be used. Alternatively, install a Scientific Python distribution such as
Anaconda and use its packaging system, or use Python’s own packaging system, pip.
• All the required modules and their dependencies can now be installed using this
simple pip command.
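For example, using pip (package names as published on PyPI):

$ python3 -m pip install -U jupyter matplotlib numpy pandas scipy scikit-learn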

• To check your installation, try to import every module like this:
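A quick sanity check, assuming the packages above were installed (each import should succeed silently):

$ python3 -c "import jupyter, matplotlib, numpy, pandas, scipy, sklearn"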

There should be no output and no error.

• Now you can fire up Jupyter by typing:
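For example:

$ jupyter notebook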

A Jupyter server is now running in your terminal, listening to port 8888


• Now create a new Python notebook by clicking on the New button and selecting the
appropriate Python version

Download the Data

• For this project, just download a single compressed file, housing.tgz, which contains a
comma-separated value (CSV) file called housing.csv with all the data.
• A simple method is to use web browser to download it, decompress the file and extract
the CSV file.
• But it is preferable to create a small function to download the data: this is useful in
particular if the data changes regularly, since the function can be run whenever you
need to fetch the latest data.


Here is the function to fetch the data:
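(A sketch following the book’s implementation; the download URL assumes the book’s GitHub repository.)

import os
import tarfile
import urllib.request

DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml2/master/"
HOUSING_PATH = os.path.join("datasets", "housing")
HOUSING_URL = DOWNLOAD_ROOT + "datasets/housing/housing.tgz"

def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):
    # create the target directory, download the archive, and extract the CSV
    os.makedirs(housing_path, exist_ok=True)
    tgz_path = os.path.join(housing_path, "housing.tgz")
    urllib.request.urlretrieve(housing_url, tgz_path)
    with tarfile.open(tgz_path) as housing_tgz:
        housing_tgz.extractall(path=housing_path)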

Now when you call fetch_housing_data(), it creates a datasets/housing directory in your
workspace, downloads the housing.tgz file, and extracts housing.csv from it into this
directory.

Now let’s load the data using Pandas.
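A sketch, reusing HOUSING_PATH from the fetch function above:

import pandas as pd

def load_housing_data(housing_path=HOUSING_PATH):
    csv_path = os.path.join(housing_path, "housing.csv")
    return pd.read_csv(csv_path)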

This function returns a Pandas DataFrame object containing all the data.

Take a Quick Look at the Data Structure

1. head(): Let’s take a look at the top five rows using the DataFrame’s head() method.
Each row represents one district. There are 10 attributes: longitude, latitude,
housing_median_age, total_rooms, total_bedrooms, population, households,
median_income, median_house_value, and ocean_proximity (see the combined
snippet after this list).


2. info(): The info() method is useful to get a quick description of the data, in particular
the total number of rows, and each attribute’s type and number of non-null values

3. value_counts(): You can find out what categories exist and how many districts belong
to each category by using the value_counts() method:


4. describe(): The describe() method shows a summary of the numerical attributes

• The count, mean, min, and max rows are self-explanatory. Note that the null values are
ignored (so, for example, count of total_bedrooms is 20,433, not 20,640).
• The std row shows the standard deviation, which measures how dispersed the values
are.
• The 25%, 50%, and 75% rows show the corresponding percentiles: a percentile
indicates the value below which a given percentage of observations in a group of
observations falls.
• For example, 25% of the districts have a housing_median_age lower than 18, while
50% are lower than 29 and 75% are lower than 37. These are often called the 25th
percentile (or 1st quartile), the median, and the 75th percentile (or 3rd quartile).
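A minimal snippet illustrating these four methods, assuming the DataFrame is named housing:

housing = load_housing_data()
housing.head()       # top five rows
housing.info()       # row count, attribute types, non-null counts
housing["ocean_proximity"].value_counts()
housing.describe()   # summary of the numerical attributes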

Create a Test Set

When splitting the data into training and test sets, it's important to ensure that the test set
remains consistent across different runs of the program.

The Problem: If the dataset is randomly split into training and test sets each time the
program is run, different test sets will be generated each time. Over time, the model might see
the entire dataset, which defeats the purpose of having a separate test set.

Solution 1: Saving the Test Set

One way to address the issue of different test sets on each run is to save the test set when it is
first created. Then, load this saved test set in future runs. However, this approach has
limitations, especially if there is a need to update the dataset.


Solution 2: Using a Random Seed

Another option is to set the random number generator’s seed (e.g., np.random.seed(42)) so
that it always generates the same shuffled indices.

But both these solutions will break next time you fetch an updated dataset.

A more robust approach is to use each instance's unique identifier to determine whether it
should be in the test set. This way, even if the dataset is refreshed, the split remains consistent.

Here is how it can be done (see the sketch after this list):


• Compute a hash of each instance’s identifier.
• Put the instance in the test set if the hash value is below a certain threshold (e.g., 20%
of the maximum hash value).
This method ensures that the test set contains approximately 20% of the data and remains
consistent across runs, even when the dataset is updated.
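A sketch along the lines of the book’s implementation, using CRC32 as the hash; since the housing data has no identifier column, the row index is used as the ID (an assumption):

from zlib import crc32
import numpy as np

def test_set_check(identifier, test_ratio):
    # an instance goes to the test set if its hash falls in the lowest test_ratio fraction
    return crc32(np.int64(identifier)) & 0xffffffff < test_ratio * 2**32

def split_train_test_by_id(data, test_ratio, id_column):
    ids = data[id_column]
    in_test_set = ids.apply(lambda id_: test_set_check(id_, test_ratio))
    return data.loc[~in_test_set], data.loc[in_test_set]

housing_with_id = housing.reset_index()   # adds an `index` column to use as the ID
train_set, test_set = split_train_test_by_id(housing_with_id, 0.2, "index")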

Discover and Visualize the Data to Gain Insights

Visualizing Geographical Data

Since the dataset has geographical information (latitude and longitude), it is a good idea to
create a scatterplot of all districts to visualize the data.
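For example (a sketch, assuming the DataFrame is named housing):

import matplotlib.pyplot as plt

housing.plot(kind="scatter", x="longitude", y="latitude")             # raw scatterplot
housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.1)  # densities stand out
plt.show()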


The above plot looks like California, but other than that it is hard to see any particular
pattern. Setting the alpha option to 0.1 makes it much easier to visualize the places where
there is a high density of data points.

Now let’s look at the housing prices. The radius of each circle represents the district’s
population (option s), and the color represents the price (option c). We will use a predefined
color map (option cmap) called jet, which ranges from blue (low values) to red (high prices).
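A sketch of the corresponding plotting call, following the book’s example:

housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.4,
             s=housing["population"]/100, label="population", figsize=(10, 7),
             c="median_house_value", cmap=plt.get_cmap("jet"), colorbar=True)
plt.legend()
plt.show()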


Looking for Correlations

To compute the standard correlation coefficient (also called Pearson’s r) between every pair
of attributes use the corr() method:

Now let’s look at how much each attribute correlates with the median house value:
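For example (note that recent pandas versions need numeric_only=True to skip the text attribute):

corr_matrix = housing.corr()   # housing.corr(numeric_only=True) on recent pandas
corr_matrix["median_house_value"].sort_values(ascending=False)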

• The correlation coefficient ranges from –1 to 1.


• When it is close to 1, it means that there is a strong positive correlation; for example,
the median house value tends to go up when the median income goes up.
• When the coefficient is close to –1, it means that there is a strong negative correlation;
for example, there is a small negative correlation between the latitude and the median
house value (i.e., prices have a slight tendency to go down when you go north).
• Coefficients close to zero mean that there is no linear correlation.

The below figure shows various plots along with the correlation coefficient between their
horizontal and vertical axes.

Figure: Standard correlation coefficient of various datasets


Another way to check for correlation between attributes is to use Pandas’ scatter_matrix
function, which plots every numerical attribute against every other numerical attribute
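For example, restricted to a few promising attributes (plotting every attribute against every other would produce too many panels):

from pandas.plotting import scatter_matrix

attributes = ["median_house_value", "median_income", "total_rooms",
              "housing_median_age"]
scatter_matrix(housing[attributes], figsize=(12, 8))
plt.show()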

The main diagonal (top left to bottom right) would be full of straight lines if Pandas plotted
each variable against itself, which would not be very useful. So instead, Pandas displays a
histogram of each attribute.

The most promising attribute to predict the median house value is the median income, so let’s
zoom in on their correlation scatterplot.
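For example:

housing.plot(kind="scatter", x="median_income", y="median_house_value", alpha=0.1)
plt.show()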


Figure: Median income versus median house value

Prepare the Data for Machine Learning Algorithms

Data Cleaning
Start by cleaning the training set. Let’s separate the predictors and the labels, since we don’t want to
apply the same transformations to the predictors and the target values.
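A sketch, assuming the training split from earlier is named train_set:

housing = train_set.drop("median_house_value", axis=1)  # drop() returns a copy
housing_labels = train_set["median_house_value"].copy()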

Missing Features: Most Machine Learning algorithms cannot work with missing features. If
any attribute has some missing values, there are three options to handle them:
• Get rid of the corresponding districts (the rows with missing values).
• Get rid of the whole attribute.
• Set the missing values to some value (zero, the mean, the median, etc.).

These can be accomplished easily by using the DataFrame’s dropna(), drop(), and fillna() methods:
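For example, for the total_bedrooms attribute (a sketch; only one option would be applied in practice):

housing.dropna(subset=["total_bedrooms"])    # option 1: drop the districts
housing.drop("total_bedrooms", axis=1)       # option 2: drop the whole attribute
median = housing["total_bedrooms"].median()  # option 3: fill with the median
housing["total_bedrooms"].fillna(median, inplace=True)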

If option 3 is chosen, compute the median value on the training set and use it to fill the
missing values in the training set. Save the computed median value, as it will be needed later
to replace missing values in the test set for system evaluation, and also to handle missing
values in new data once the system goes live.


Scikit-Learn provides a class to take care of missing values: SimpleImputer

Since the median can only be computed on numerical attributes, we need to create a copy of
the data without the text attribute ocean_proximity:
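For example:

from sklearn.impute import SimpleImputer

imputer = SimpleImputer(strategy="median")
housing_num = housing.drop("ocean_proximity", axis=1)  # numerical attributes only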

Now, fit the imputer instance to the training data using the fit() method:
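For example:

imputer.fit(housing_num)
imputer.statistics_   # the learned medians, one per attribute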

The imputer has simply computed the median of each attribute and stored the result in its
statistics_ instance variable.

Now you can use this “trained” imputer to transform the training set by replacing missing
values by the learned medians:
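For example:

X = imputer.transform(housing_num)   # NumPy array with missing values filled in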

The result is a plain NumPy array containing the transformed features. If you want to put it
back into a Pandas DataFrame, it’s simple:
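For example:

housing_tr = pd.DataFrame(X, columns=housing_num.columns, index=housing_num.index)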

Handling Text and Categorical Attributes

To convert categories from text to numbers, we can use Scikit-Learn’s OrdinalEncoder
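For example:

from sklearn.preprocessing import OrdinalEncoder

housing_cat = housing[["ocean_proximity"]]
ordinal_encoder = OrdinalEncoder()
housing_cat_encoded = ordinal_encoder.fit_transform(housing_cat)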


Another way is to create one binary attribute per category: one attribute equal to 1 when the
category is “<1H OCEAN” (and 0 otherwise), another attribute equal to 1 when the category
is “INLAND” (and 0 otherwise), and so on. This is called one-hot encoding, because only
one attribute will be equal to 1 (hot), while the others will be 0 (cold). The new attributes are
sometimes called dummy attributes. Scikit-Learn provides a OneHotEncoder class to convert
categorical values into one-hot vectors.

By default, the OneHotEncoder class returns a SciPy sparse matrix, but we can convert it to a dense
NumPy array if needed by calling the toarray() method:
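For example:

from sklearn.preprocessing import OneHotEncoder

cat_encoder = OneHotEncoder()
housing_cat_1hot = cat_encoder.fit_transform(housing_cat)  # SciPy sparse matrix
housing_cat_1hot.toarray()                                 # dense NumPy array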

Feature Scaling
Machine Learning algorithms don’t perform well when the input numerical attributes have
very different scales.

There are two common ways to get all attributes to have the same scale:
1. Min-max scaling: In min-max scaling (normalization) the values are shifted and
rescaled so that they end up ranging from 0 to 1.

Scikit-Learn provides a transformer called MinMaxScaler for this.


2. Standardization (Z-score Normalization): Standardization first subtracts the mean
value (so standardized values have zero mean), and then divides by the standard
deviation so that the resulting distribution has unit variance.

Scikit-Learn provides a transformer called StandardScaler for standardization.
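A sketch of both scalers, applied to the imputed numerical data housing_tr from above:

from sklearn.preprocessing import MinMaxScaler, StandardScaler

min_max_scaler = MinMaxScaler()   # rescales each attribute to the 0-1 range
housing_minmax = min_max_scaler.fit_transform(housing_tr)

std_scaler = StandardScaler()     # rescales to zero mean and unit variance
housing_std = std_scaler.fit_transform(housing_tr)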

Transformation Pipelines
• There are many data transformation steps that need to be executed in the right order.
Scikit-Learn provides the Pipeline class to help with such sequences of
transformations.
• Here is a small pipeline for the numerical attributes:
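(Reconstructed from the step-by-step description below; CombinedAttributesAdder is a custom transformer assumed to be defined elsewhere, as the text notes.)

from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

num_pipeline = Pipeline([
    ('imputer', SimpleImputer(strategy="median")),
    ('attribs_adder', CombinedAttributesAdder()),  # custom transformer, defined elsewhere
    ('std_scaler', StandardScaler()),
])

housing_num_tr = num_pipeline.fit_transform(housing_num)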

First line imports the necessary classes from the sklearn library. Pipeline is used to create a
sequence of data processing steps.
StandardScaler is used to standardize features by removing the mean and scaling to unit
variance.
This code defines a pipeline named num_pipeline consisting of three steps:

1. 'imputer': Uses SimpleImputer to handle missing values by replacing them with the
median value of the column. This is specified by strategy="median".
2. 'attribs_adder': Uses a custom transformer CombinedAttributesAdder(), which is
assumed to be defined elsewhere. This step adds new attributes to the dataset based on
existing ones.
3. 'std_scaler': Uses StandardScaler to standardize the numerical attributes.
Standardization is the process of rescaling the features so that they have the properties
of a standard normal distribution with a mean of 0 and a standard deviation of 1.


The last line applies the pipeline to the housing_num data. The fit_transform method first fits
the pipeline to the data i.e., it computes the necessary statistics such as median values for
imputation and mean/standard deviation for scaling and then transforms the data according to
the fitted pipeline.

Select and Train a Model

Training and Evaluating on the Training Set

Let’s first train a Linear Regression model.
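A sketch, assuming housing_prepared is the output of the full preprocessing pipeline (as referenced later in this module):

from sklearn.linear_model import LinearRegression

lin_reg = LinearRegression()
lin_reg.fit(housing_prepared, housing_labels)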

Let’s try it out on a few instances from the training set:

Let’s measure this regression model’s RMSE on the whole training set using Scikit-Learn’s
mean_squared_error function:
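For example:

from sklearn.metrics import mean_squared_error
import numpy as np

housing_predictions = lin_reg.predict(housing_prepared)
lin_mse = mean_squared_error(housing_labels, housing_predictions)
lin_rmse = np.sqrt(lin_mse)
lin_rmse   # about 68,628 on this data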

• This score is better than nothing but clearly not a great score: most districts’
median_housing_values range between $120,000 and $265,000, so a typical prediction
error of $68,628 is not very satisfying.
• This is an example of a model underfitting the training data. When this happens it can
mean that the features do not provide enough information to make good predictions, or
that the model is not powerful enough.
• The main ways to fix underfitting are to select a more powerful model, to feed the
training algorithm with better features, or to reduce the constraints on the model.


Better Evaluation Using Cross-Validation

One way to evaluate the model (here, a Decision Tree) would be to use the train_test_split
function to split the training set into a smaller training set and a validation set, then train
models against the smaller training set and evaluate them against the validation set.

K-Fold Cross-Validation
• A more efficient alternative is using Scikit-Learn’s K-fold cross-validation.
• This method splits the training set into k distinct subsets, called folds.
• The model is trained and evaluated k times, each time using a different fold for
evaluation and the remaining k−1 folds for training.
• This results in an array of k evaluation scores.

Image Source: https://docs.ultralytics.com/guides/kfold-cross-validation/#introduction
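A sketch of K-fold cross-validation for a Decision Tree, under the same assumptions as before. Scikit-Learn’s scoring expects a utility (greater is better), so the negative MSE is flipped back before taking the square root:

from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score
import numpy as np

tree_reg = DecisionTreeRegressor(random_state=42)
scores = cross_val_score(tree_reg, housing_prepared, housing_labels,
                         scoring="neg_mean_squared_error", cv=10)
tree_rmse_scores = np.sqrt(-scores)   # 10 RMSE scores, one per fold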

Insights from Cross-Validation


• The Decision Tree model might not perform as well as expected when using cross-
validation. For example, it may perform worse than the Linear Regression model.
• Cross-validation not only estimates the model’s performance but also gives a measure
of its precision (standard deviation).

Overfitting in Decision Tree Model


• If the Decision Tree performs worse than expected, it might be overfitting.
• Overfitting occurs when the model learns the training data too well, including noise
and outliers, which reduces its performance on new data.

Trying the RandomForestRegressor


• Random Forests train multiple Decision Trees on random subsets of features and
average their predictions. This technique, known as Ensemble Learning, often
enhances the performance of machine learning models.
• Although Random Forests show promising results, they can still overfit: if the error on
the training set is much lower than on the validation sets, the model is overfitting.
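A sketch of training and cross-validating a Random Forest, under the same assumptions:

from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
import numpy as np

forest_reg = RandomForestRegressor(random_state=42)
forest_scores = cross_val_score(forest_reg, housing_prepared, housing_labels,
                                scoring="neg_mean_squared_error", cv=10)
forest_rmse_scores = np.sqrt(-forest_scores)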


To handle overfitting, consider the following:


• Simplify the model.
• Apply regularization techniques to constrain the model.
• Obtain more training data to improve the model's generalization.

Exploring More Models


• Before finalizing on Random Forests or any other model, experiment with a variety of
models.
• Try models from different categories of machine learning algorithms, such as: Support
Vector Machines with different kernels and Neural networks.

The goal is to identify a shortlist of two to five promising models without spending too much
time on hyperparameter tweaking. By following these steps, you can ensure a thorough
evaluation of machine learning models, leading to better performance and reliability in real-
world applications.

Fine-Tune Your Model

Grid Search
• With Scikit-Learn’s GridSearchCV, you tell it which hyperparameters you want it to
experiment with and what values to try out, and it will evaluate all the possible
combinations of hyperparameter values, using cross-validation.
• For example, the following code searches for the best combination of hyperparameter
values for the RandomForestRegressor:
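(Reconstructed from the explanation later in this module; the specific candidate values follow the book’s example and match the 3 × 4 and 2 × 3 grids described below.)

from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestRegressor

param_grid = [
    {'n_estimators': [3, 10, 30], 'max_features': [2, 4, 6, 8]},
    {'bootstrap': [False], 'n_estimators': [3, 10], 'max_features': [2, 3, 4]},
]

forest_reg = RandomForestRegressor()
grid_search = GridSearchCV(forest_reg, param_grid, cv=5,
                           scoring='neg_mean_squared_error',
                           return_train_score=True)
grid_search.fit(housing_prepared, housing_labels)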


This param_grid tells Scikit-Learn to first evaluate all 3 × 4 = 12 combinations of
n_estimators and max_features hyperparameter values specified in the first dict, then try all
2 × 3 = 6 combinations of hyperparameter values in the second dict, but this time with the
bootstrap hyperparameter set to False instead of True.
The grid search will explore 12 + 6 = 18 combinations of RandomForestRegressor
hyperparameter values, and it will train each model five times (cv=5). In other words, there
will be 18 × 5 = 90 rounds of training!

Randomized Search

• When the hyperparameter search space is large, it is often preferable to use
RandomizedSearchCV instead.
• It evaluates a given number of random combinations by selecting a random value for
each hyperparameter at every iteration.

Ensemble Methods

Another way to fine-tune your system is to try to combine the models that perform best. The
group (or “ensemble”) will often perform better than the best individual model, especially if
the individual models make very different types of errors.

Analyze the Best Models and Their Errors


The RandomForestRegressor can indicate the relative importance of each attribute for
making accurate predictions.
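For example:

feature_importances = grid_search.best_estimator_.feature_importances_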

With this information, you may want to try dropping some of the less useful features. You
should also look at the specific errors that your system makes, then try to understand why it
makes them and what could fix the problem.


Evaluate Your System on the Test Set


Get the predictors and the labels from test set, run your full_pipeline to transform the data,
and evaluate the final model on the test set.
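A sketch, assuming the earlier test split is named test_set and full_pipeline is the complete preprocessing pipeline:

from sklearn.metrics import mean_squared_error
import numpy as np

final_model = grid_search.best_estimator_

X_test = test_set.drop("median_house_value", axis=1)
y_test = test_set["median_house_value"].copy()

X_test_prepared = full_pipeline.transform(X_test)   # transform only; never fit on test data
final_predictions = final_model.predict(X_test_prepared)
final_rmse = np.sqrt(mean_squared_error(y_test, final_predictions))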

Launch, Monitor, and Maintain Your System

• Production Readiness: Integrate the production input data sources into your system
and write necessary tests to ensure everything functions correctly.
• Performance Monitoring: Develop code to monitor your system’s live performance
regularly and trigger alerts if there is a performance drop, to catch both sudden
breakage and gradual performance degradation.
• Human Evaluation: Implement a pipeline for human analysis of your system’s
predictions, involving field experts or crowdsourcing platforms, to evaluate and
improve system accuracy.
• Input Data Quality Check: Regularly evaluate the quality of the system’s input data
to detect issues early, preventing minor problems from escalating and affecting system
performance.
• Automated Training: Automate the process of training models with fresh data
regularly to maintain consistent performance and save snapshots of the system's state
for easy rollback in online learning systems.


Explanation of Grid Search (Additional Concept)

GridSearchCV: This is a tool from Scikit-Learn that performs an exhaustive search over
specified parameter values for an estimator. It helps in finding the best combination of
hyperparameters for a given model.

• param_grid: This is a list of dictionaries, where each dictionary defines a set of


hyperparameters to search over.
o n_estimators: This parameter specifies the number of trees in the forest.
o max_features: This parameter specifies the maximum number of features to
consider when looking for the best split.
o The first dictionary searches over different combinations of n_estimators
and max_features with the default setting of bootstrap=True.
o The second dictionary adds an additional setting to search over:
bootstrap=False, with its own combinations of n_estimators and
max_features

• forest_reg: This creates an instance of the RandomForestRegressor, which is the
model we want to tune.

• grid_search: This initializes GridSearchCV with several parameters:


o forest_reg: The estimator (model) to be tuned.
o param_grid: The parameter grid defined earlier, specifying the
hyperparameters to search over.
o cv=5: This sets the cross-validation strategy to 5-fold cross-validation. This
means the data will be split into 5 parts, and the model will be trained and
validated 5 times, each time using a different part of the data for validation and
the remaining parts for training.


o scoring='neg_mean_squared_error': This sets the scoring metric to


negative mean squared error. GridSearchCV will use this metric to evaluate
the performance of each combination of hyperparameters. The negative sign is
used because Scikit-Learn expects higher values to be better, but for mean
squared error, lower values are better.
o return_train_score=True: This ensures that the training scores for each
fold and parameter combination are stored in the results

• fit: This method trains the GridSearchCV object using the prepared housing data
(housing_prepared) and the corresponding labels (housing_labels).


Classification
MNIST
• The MNIST dataset is a set of 70,000 small images of digits handwritten by high
school students and employees of the US Census Bureau.
• Each image is labeled with the digit it represents.
• This set has been studied so much that it is often called the “Hello World” of Machine
Learning: whenever people come up with a new classification algorithm, they are
curious to see how it will perform on MNIST.

The following code fetches the MNIST dataset

from sklearn.datasets import fetch_openml


# Fetch the MNIST dataset from OpenML
mnist = fetch_openml('mnist_784', version=1)
mnist.keys()

Output

dict_keys(['data', 'target', 'frame', 'categories', 'feature_names', 'target_names', 'DESCR', 'details', 'url'])

Datasets loaded by Scikit-Learn generally have a dictionary structure including:


• A DESCR key describing the dataset
• A data key containing an array with one row per instance and one column per feature
• A target key containing an array with the labels

X, y = mnist["data"], mnist["target"]
X.shape
y.shape

Output
(70000, 784)
(70000,)

There are 70,000 images, and each image has 784 features. This is because each image is
28×28 pixels, and each feature simply represents one pixel’s intensity, from 0 (white) to 255
(black).


Let’s look at one digit from the dataset. Fetch an instance’s feature vector, reshape it to a
28×28 array, and display it using Matplotlib’s imshow() function:

import matplotlib.pyplot as plt

# Convert the target to integers
y = y.astype(int)

# Select an instance (e.g., the first instance)
some_digit = X.iloc[0]

# Reshape the feature vector to a 28x28 array
some_digit_image = some_digit.values.reshape(28, 28)

# Display the digit using Matplotlib
plt.imshow(some_digit_image, cmap='gray')
plt.title(f"Label: {y[0]}")
plt.axis('off')
plt.show()

Output: the digit image is displayed (the first instance looks like a 5).

The below figure shows a few more images from the MNIST dataset to give you a feel for the
complexity of the classification task.

The MNIST dataset is actually already split into a training set (the first 60,000 images) and a
test set (the last 10,000 images):

X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]


Training a Binary Classifier

• To simplify the problem, focus on identifying a single digit, such as the number 5.
• This '5-detector' will serve as an example of a binary classifier, distinguishing
between two classes: 5 and not-5.
• Let's create the target vectors for this classification task:

y_train_5 = (y_train == 5) # True for all 5s, False for all other digits.
y_test_5 = (y_test == 5)

• Let’s pick a classifier and train it. Consider the Stochastic Gradient Descent (SGD)
classifier, using Scikit-Learn’s SGDClassifier class.
• This classifier has the advantage of being capable of handling very large datasets
efficiently.

from sklearn.linear_model import SGDClassifier


sgd_clf = SGDClassifier(random_state=42)
sgd_clf.fit(X_train, y_train_5)

Detect images of the number 5:


sgd_clf.predict([some_digit])

Output
array([ True])

The classifier guesses that this image represents a 5 (True)


Performance Measures

Measuring Accuracy Using Cross-Validation

• Let’s use the cross_val_score() function to evaluate your SGDClassifier model using
K-fold cross-validation, with three folds.
• K-fold cross-validation means splitting the training set into K folds (here, three), then
making predictions and evaluating them on each fold using a model trained on the
remaining folds.

from sklearn.model_selection import cross_val_score


cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring="accuracy")

Output
array([0.95035, 0.96035, 0.9604 ])

• Sometimes accuracy is not the preferred performance measure for classifiers,


especially when dealing with skewed datasets, i.e., when some classes are much more
frequent than others.

Confusion Matrix

• It is a table that is used to evaluate the performance of a classification algorithm.


• The general idea is to count the number of times instances of class A are classified as
class B.
• It provides a comprehensive breakdown of the predictions made by the model and
compares them to the actual outcomes. The matrix helps to understand how well the
classifier is performing, especially in distinguishing between different classes.

Components of a Confusion Matrix


A confusion matrix has the following components for a binary classification problem:

1. True Positives (TP): The number of instances correctly predicted as positive.


2. True Negatives (TN): The number of instances correctly predicted as negative.
3. False Positives (FP): The number of instances incorrectly predicted as positive (Type
I error).
4. False Negatives (FN): The number of instances incorrectly predicted as negative
(Type II error).


from sklearn.model_selection import cross_val_predict


y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3)

from sklearn.metrics import confusion_matrix


confusion_matrix(y_train_5, y_train_pred)

Output
array([[53892,   687],
       [ 1891,  3530]])

• Each row in a confusion matrix represents an actual class, while each column
represents a predicted class.
• The first row of this matrix considers non-5 images (the negative class): 53,892 of
them were correctly classified as non-5s (they are called true negatives), 687 were
wrongly classified as 5s (false positives).
• The second row considers the images of 5s (the positive class): 1,891 were wrongly
classified as non-5s (false negatives), while the remaining 3530 were correctly
classified as 5s (true positives).
• A perfect classifier would have only true positives and true negatives, so its confusion
matrix would have nonzero values only on its main diagonal (top left to bottom right)

y_train_perfect_predictions = y_train_5 # pretend we reached perfection


confusion_matrix(y_train_5, y_train_perfect_predictions)

Output
array([[54579,     0],
       [    0,  5421]])


Precision

• Precision is a performance metric that measures the accuracy of positive predictions


made by the model
• Precision is defined as the ratio of true positive predictions to the total number of
positive predictions.
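In formula form:

\mathrm{Precision} = \frac{TP}{TP + FP}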

Where:
• TP (True Positives) is the number of correctly predicted positive instances.
• FP (False Positives) is the number of instances incorrectly predicted as positive.

Recall (Sensitivity or True Positive Rate)


• Recall is defined as the ratio of positive instances that are correctly detected by the
classifier.
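In formula form:

\mathrm{Recall} = \frac{TP}{TP + FN}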

Where:
• TP (True Positives) is the number of correctly predicted positive instances.
• FN (False Negatives) is the number of actual positive instances that were
incorrectly predicted as negative.

from sklearn.metrics import precision_score, recall_score


precision_score(y_train_5, y_train_pred)

Output
0.8370879772350012

recall_score(y_train_5, y_train_pred)

Output
0.6511713705958311

When it claims an image represents a 5, it is correct only 83.7% of the time. Moreover, it
only detects 65.1% of the 5s.


F1 score
• The F1 score is a metric used to evaluate the performance of a binary classification
model.
• It is the harmonic mean of precision and recall, providing a single metric that balances
both the false positives and false negatives.
• The F1 score is useful when you need to take both precision and recall into account
and is helpful when dealing with imbalanced datasets.
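In formula form:

F_1 = 2 \times \frac{\mathrm{precision} \times \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} = \frac{TP}{TP + \frac{FN + FP}{2}}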

from sklearn.metrics import f1_score


f1_score(y_train_5, y_train_pred)

Output
0.7325171197343846

The ROC Curve


• The Receiver operating characteristic (ROC) curve is another common tool used with
binary classifiers.
• The ROC curve plots the true positive rate (recall) against the false positive rate.
• The FPR is the ratio of negative instances that are incorrectly classified as positive.
FPR = 1 - TNR
• The True Negative Rate (TNR), which is the ratio of negative instances that are
correctly classified as negative. The TNR is also called specificity. Hence the ROC
curve plots sensitivity (recall) versus 1 – specificity.


How to Read a ROC Curve


• Diagonal Line: A ROC curve that lies on the diagonal line (from bottom left to top
right) represents a classifier with no discriminative power, equivalent to random
guessing.
• Above the Diagonal: The area above the diagonal represents better-than-random
performance. The closer the ROC curve is to the top-left corner, the better the model is
at distinguishing between the positive and negative classes.
• Below the Diagonal: Curves below the diagonal indicate worse-than-random
performance.

Area Under the Curve (AUC): One way to compare classifiers is to measure the area under
the ROC curve. The AUC value ranges from 0 to 1.

• AUC = 1: Perfect classifier.


• AUC = 0.5: No discriminative power, equivalent to random guessing.
• AUC < 0.5: Indicates a model that is performing worse than random guessing.

Let’s consider a RandomForestClassifier and compare its ROC curve and ROC AUC score to
those of the SGDClassifier.
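A sketch of the comparison; a Random Forest has no decision_function(), so predict_proba() probabilities are used as scores:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

forest_clf = RandomForestClassifier(random_state=42)
y_probas_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3,
                                    method="predict_proba")
y_scores_forest = y_probas_forest[:, 1]   # probability of the positive class
roc_auc_score(y_train_5, y_scores_forest)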

The RandomForestClassifier’s ROC curve looks much better than the SGDClassifier’s. It
comes much closer to the top-left corner.


Multiclass Classification

• Binary classifiers distinguish between two classes. Multiclass classifiers
(multinomial classifiers) can distinguish between more than two classes.
• Some algorithms are capable of handling multiple classes directly; e.g., Random
Forest classifiers and Naive Bayes classifiers.
• Other algorithms are strictly binary classifiers; e.g., Support Vector Machine (SVM)
classifiers and Linear classifiers.

Consider a system that can classify the digit images into 10 classes (from 0 to 9). There are
two main multiclass strategies:
One-versus-All (OvA) Strategy:
• Train 10 binary classifiers, one for each digit (0 to 9).
• Classify an image by selecting the class with the highest score.
• Example: Train a 0-detector, 1-detector, etc.
One-versus-One (OvO) Strategy:
• Train a binary classifier for every pair of digits: one to distinguish 0s and 1s,
another to distinguish 0s and 2s, another for 1s and 2s, and so on.
• If there are N classes, you need N × (N − 1) / 2 classifiers.
• For 10 classes, train 45 classifiers. Classify an image by determining which class
wins the most pairwise duels.

Algorithm Selection
• OvO Preferred: for algorithms like SVM that scale poorly with large training sets.
• OvA Preferred: for most binary classification algorithms.
• Scikit-Learn Default: when a strictly binary classifier is used for a multiclass task,
Scikit-Learn automatically runs OvA (except for SVM classifiers, for which it uses OvO).
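For example, training the SGDClassifier from earlier on the full ten-class problem; Scikit-Learn runs OvA under the hood:

sgd_clf.fit(X_train, y_train)     # y_train, not y_train_5: all ten classes
sgd_clf.predict([some_digit])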


Error Analysis

Assume we have a promising model and aim to improve it by analyzing the errors it makes.
Start by looking at the confusion matrix:

• Use cross_val_predict() to make predictions


• Generate the confusion matrix
• Convert the confusion matrix into an image for better visualization
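A sketch of these three steps:

from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt

y_train_pred = cross_val_predict(sgd_clf, X_train, y_train, cv=3)
conf_mx = confusion_matrix(y_train, y_train_pred)
plt.matshow(conf_mx, cmap=plt.cm.gray)   # view the matrix as an image
plt.show()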

This confusion matrix looks fairly good, since most images are on the main diagonal, which
means that they were classified correctly. The 5s look slightly darker than the other digits,
which could mean that there are fewer images of 5s in the dataset or that the classifier does
not perform as well on 5s as on other digits.

Error Rate Analysis


• Normalize the confusion matrix to compare error rates.
• Focus on the errors by filling the diagonal with zeros
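A sketch, reusing conf_mx from the previous step:

import numpy as np
import matplotlib.pyplot as plt

row_sums = conf_mx.sum(axis=1, keepdims=True)
norm_conf_mx = conf_mx / row_sums        # error rates instead of absolute counts
np.fill_diagonal(norm_conf_mx, 0)        # keep only the errors
plt.matshow(norm_conf_mx, cmap=plt.cm.gray)
plt.show()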


Insights from Error Analysis


• Rows represent actual classes, and columns represent predicted classes.
• Bright columns indicate frequent misclassifications into that class.
• For instance, '8s' are often misclassified, though actual '8s' are correctly identified.

Improving the Classifier


• Gather more training data for digits that look like 8s (but are not).
• Engineer new features that would help the classifier; for example, write an algorithm
to count the number of closed loops in digits (e.g., 8 has two, 6 has one, 5 has none).
• Preprocess the images (e.g., using Scikit-Image, Pillow, or OpenCV) to make some
patterns, such as closed loops, stand out more.

Analyzing individual errors can give insights into what the classifier is doing and why it is
failing, but it is more difficult and time-consuming.
Ex: let’s plot examples of 3s and 5s.

The two 5×5 blocks on the left show digits classified as 3s, and the two 5×5 blocks on the
right show images classified as 5s. Some of the digits that the classifier gets wrong (i.e., in
the bottom-left and top-right blocks) are so badly written that even a human would have
trouble classifying them (e.g., the 5 on the 1st row and 2nd column truly looks like a badly
written 3).

Understanding the Classifier’s Errors


• Misclassifications might be due to badly written digits or similarities between digits.
• The linear model (SGDClassifier) assigns weights to pixels, making it sensitive to
image shifting and rotation.
• Preprocess images to ensure they are well-centered and aligned, to reduce such errors.


Multilabel Classification
• Multilabel Classification is a type of classification where each instance can belong to
multiple classes simultaneously.
• For example, in a face-recognition system, a picture with multiple known faces
should result in multiple outputs. If a classifier can recognize Alice, Bob, and Charlie,
and sees a picture of Alice and Charlie, it should output [1, 0, 1], meaning "Alice yes,
Bob no, Charlie yes".

Example: Multilabel Classification with Digits


• To understand multilabel classification, let's consider an example that classifies digits
based on two labels: the first indicates whether or not the digit is large (7, 8, or 9), and
the second indicates whether or not it is odd.

Steps to Implement Multilabel Classification


1. Create Target Labels:
• y_train_large indicates if a digit is large (7, 8, or 9).
• y_train_odd indicates if a digit is odd.
• Combine these labels into a y_multilabel array.

import numpy as np

y_train_large = (y_train >= 7)
y_train_odd = (y_train % 2 == 1)
y_multilabel = np.c_[y_train_large, y_train_odd]

2. Train the Classifier:


• Use a KNeighborsClassifier which supports multilabel classification

from sklearn.neighbors import KNeighborsClassifier


knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train, y_multilabel)

3. Make Predictions:
• Predict using the trained classifier and output multiple labels.

knn_clf.predict([some_digit])

Output
array([[False, True]])
The digit 5 is indeed not large (False) and odd (True).


Multioutput Classification
• Multioutput Classification (multioutput-multiclass classification) is a generalization of
multilabel classification. In this type of classification, each label can have multiple
values, not just binary options.
• For example, each label can represent different pixel intensities ranging from 0 to 255.

Example: Removing Noise from Images


• Illustrate this with an example where the goal is to remove noise from digit images.
• The input will be a noisy image, and the output will be a clean image of the digit.

Steps to Implement Multioutput Classification


1. Create Training and Test Sets:
• Add noise to the original MNIST digit images using NumPy's randint() function.
• The noisy images are the input, and the original images are the target
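A sketch following the book’s example:

import numpy as np

noise = np.random.randint(0, 100, (len(X_train), 784))
X_train_mod = X_train + noise
noise = np.random.randint(0, 100, (len(X_test), 784))
X_test_mod = X_test + noise
y_train_mod = X_train    # the clean images are the targets
y_test_mod = X_test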

2. Visualize Noisy and Clean Images:


• Before training, visualize a noisy image and its corresponding clean image. This
step helps to understand the task visually

3. Train the Classifier, Make Predictions and Clean the Image:


• Use a KNeighborsClassifier to train on the noisy images and their clean
counterparts.
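A sketch; the choice of the first test image to denoise is arbitrary:

from sklearn.neighbors import KNeighborsClassifier
import matplotlib.pyplot as plt

knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train_mod, y_train_mod)
clean_digit = knn_clf.predict([X_test_mod.iloc[0]])   # denoise the first test image
plt.imshow(clean_digit.reshape(28, 28), cmap='gray')
plt.show()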
