Lecture 2: Supervised Machine Learning

Applied Machine Learning


Volodymyr Kuleshov
Cornell Tech
Recall: Supervised Learning
The most common approach to machine learning is supervised learning.

1. First, we collect a dataset of labeled training examples.


2. We train a model to output accurate predictions on this dataset.
3. When the model sees new, similar data, it will also be accurate.
Part 1: A First Supervised Machine Learning Problem
Let’s start with a simple example of a supervised learning problem: predicting diabetes
risk.

Suppose we have a dataset of diabetes patients.

For each patient we have access to measurements from their medical record and
an estimate of diabetes risk.
We are interested in understanding how the measurements affect an individual's
diabetes risk.
Three Components of A Supervised Machine Learning
Problem
At a high level, a supervised machine learning problem has the following structure:

Dataset + Algorithm → Predictive Model


The predictive model is chosen to model the relationship between inputs and targets. For
instance, it can predict future targets.
A Supervised Learning Dataset
Let's return to our example: predicting diabetes risk. What would a dataset look like?

We will use the UCI Diabetes Dataset; it's a toy dataset that's often used to demonstrate
machine learning algorithms.

For each patient we have access to a measurement of their body mass index (BMI)
and a quantitative diabetes risk score (from 0-400).
We are interested in understanding how BMI affects an individual's diabetes risk.
In [2]: import numpy as np
import pandas as pd
from sklearn import datasets

# Load the diabetes dataset


diabetes_X, diabetes_y = datasets.load_diabetes(return_X_y=True, as_frame=True)

# Use only the BMI feature


diabetes_X = diabetes_X.loc[:, ['bmi']]

# The BMI is zero-centered and normalized; we recenter it for ease of presentation
diabetes_X = diabetes_X * 30 + 25

# Collect 20 data points


diabetes_X_train = diabetes_X.iloc[-20:]
diabetes_y_train = diabetes_y.iloc[-20:]

# Display some of the data points


pd.concat([diabetes_X_train, diabetes_y_train], axis=1).head()

Out[2]: bmi target


422 27.335902 233.0
423 23.811456 91.0
424 25.331171 111.0
425 23.779122 152.0
426 23.973128 120.0
We can also visualize this two-dimensional dataset.

In [3]: %matplotlib inline


import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [12, 4]

plt.scatter(diabetes_X_train, diabetes_y_train, color='black')


plt.xlabel('Body Mass Index (BMI)')
plt.ylabel('Diabetes Risk')

Out[3]: Text(0, 0.5, 'Diabetes Risk')


A Supervised Learning Algorithm (Part 1)
What is the relationship between BMI and diabetes risk?

We could assume that risk is a linear function of BMI. In other words, for some unknown
𝜃0 , 𝜃1 ∈ ℝ , we have
𝑦 = 𝜃1 ⋅ 𝑥 + 𝜃0 ,
where 𝑥 is the BMI (the independent variable), and 𝑦 is the diabetes risk score
(the dependent variable).

Note that 𝜃1 , 𝜃0 are the slope and the intercept of the line that relates 𝑥 to 𝑦 . We call them
parameters.
We can visualize this for a few values of 𝜃1 , 𝜃0 .

In [4]: theta_list = [(1, 2), (2,1), (1,0), (0,1)]


for theta0, theta1 in theta_list:
x = np.arange(10)
y = theta1 * x + theta0
plt.plot(x,y)
A Supervised Learning Algorithm (Part 2)
Assuming that 𝑥, 𝑦 follow the above linear relationship, the goal of the supervised
learning algorithm is to find a good set of parameters consistent with the data.

We will see many algorithms for this task. For now, let's call the
sklearn.linear_model library to find a 𝜃1 , 𝜃0 that fit the data well.
In [6]: from sklearn import linear_model
from sklearn.metrics import mean_squared_error

# Create linear regression object


regr = linear_model.LinearRegression()

# Train the model using the training sets


regr.fit(diabetes_X_train, diabetes_y_train.values)

# Make predictions on the training set


diabetes_y_train_pred = regr.predict(diabetes_X_train)

# The coefficients
print('Slope (theta1): \t', regr.coef_[0])
print('Intercept (theta0): \t', regr.intercept_)

Slope (theta1): 37.378842160517664


Intercept (theta0): -797.0817390342369
A Supervised Learning Model
The supervised learning algorithm gave us a pair of parameters 𝜃1∗ , 𝜃0∗ . These define the
predictive model 𝑓 ∗ , given by
𝑓 ∗ (𝑥) = 𝜃1∗ ⋅ 𝑥 + 𝜃0∗ ,
where again 𝑥 is the BMI, and 𝑦 is the diabetes risk score.
We can visualize the linear model that fits our data.

In [7]: plt.xlabel('Body Mass Index (BMI)')


plt.ylabel('Diabetes Risk')
plt.scatter(diabetes_X_train, diabetes_y_train)
plt.plot(diabetes_X_train, diabetes_y_train_pred, color='black', linewidth=2)

Out[7]: [<matplotlib.lines.Line2D at 0x1253f9240>]


Predictions Using Supervised Learning
Given a new dataset of patients with a known BMI, we can use this model to estimate their
diabetes risk.

Given a new 𝑥′ , we can output a predicted 𝑦′ as

𝑦′ = 𝑓(𝑥′ ) = 𝜃1∗ ⋅ 𝑥′ + 𝜃0∗ .
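As a quick sanity check, we could compute such a prediction by hand from the fitted slope and intercept and compare it with the output of regr.predict. This is a small sketch reusing the regr object fitted above; the BMI value 32.0 is just an illustrative input, not a patient from the dataset.

# Sketch: compute f(x') = theta1* . x' + theta0* manually and compare
# it with regr.predict on a single (hypothetical) BMI value.
x_new = 32.0  # an illustrative BMI value

y_manual = regr.coef_[0] * x_new + regr.intercept_
y_sklearn = regr.predict(pd.DataFrame({'bmi': [x_new]}))[0]

print('Manual prediction:  %.2f' % y_manual)
print('Sklearn prediction: %.2f' % y_sklearn)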
Let's start by loading more data. We will load three new patients (shown in red below) that
we haven't seen before.

In [8]: # Collect 3 data points


diabetes_X_test = diabetes_X.iloc[:3]
diabetes_y_test = diabetes_y.iloc[:3]

plt.scatter(diabetes_X_train, diabetes_y_train)
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.xlabel('Body Mass Index (BMI)')
plt.ylabel('Diabetes Risk')
plt.legend(['Initial patients', 'New patients'])

Out[8]: <matplotlib.legend.Legend at 0x1259cd390>


Our linear model provides an estimate of the diabetes risk for these patients.

In [9]: # generate predictions on the new patients


diabetes_y_test_pred = regr.predict(diabetes_X_test)

# visualize the results


plt.xlabel('Body Mass Index (BMI)')
plt.ylabel('Diabetes Risk')
plt.scatter(diabetes_X_train, diabetes_y_train)
plt.scatter(diabetes_X_test, diabetes_y_test, color='red', marker='o')
plt.plot(diabetes_X_train, diabetes_y_train_pred, color='black', linewidth=1)
plt.plot(diabetes_X_test, diabetes_y_test_pred, 'x', color='red', mew=3, markersize=8)
plt.legend(['Model', 'Prediction', 'Initial patients', 'New patients'])

Out[9]: <matplotlib.legend.Legend at 0x125bfb048>


Why Supervised Learning?
Supervised learning can be useful in many ways.

Making predictions on new data.


Understanding the mechanisms through which input variables affect targets.
Applications of Supervised Learning
Many of the most important applications of machine learning are supervised:

Classifying medical images.


Translating between pairs of languages.
Detecting objects in a self-driving car.
Part 2: Anatomy of a Supervised Learning Problem:
Datasets
We have seen a simple example of a supervised machine learning problem and an
algorithm for solving this problem.

Let's now look at what a general supervised learning problem looks like.
Recall: Three Components of A Supervised Machine
Learning Problem
At a high level, a supervised machine learning problem has the following structure:

Dataset + Algorithm → Predictive Model


The predictive model is chosen to model the relationship between inputs and targets. For
instance, it can predict future targets.
A Supervised Learning Dataset
We are going to dive deeper into what a supervised learning dataset is. As an example,
consider the full version of the UCI Diabetes Dataset seen earlier.

Previously, we only looked at the patients' BMI, but this dataset actually records many
additional measurements.
The UCI dataset contains many additional data columns besides bmi , including age, sex,
and blood pressure. We can ask sklearn to give us more information about this dataset.

In [10]: import numpy as np


import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [12, 4]
from sklearn import datasets

# Load the diabetes dataset


diabetes = datasets.load_diabetes(as_frame=True)
print(diabetes.DESCR)

.. _diabetes_dataset:

Diabetes dataset
----------------

Ten baseline variables, age, sex, body mass index, average blood
pressure, and six blood serum measurements were obtained for each of n =
442 diabetes patients, as well as the response of interest, a
quantitative measure of disease progression one year after baseline.

**Data Set Characteristics:**

:Number of Instances: 442

:Number of Attributes: First 10 columns are numeric predictive values

:Target: Column 11 is a quantitative measure of disease progression one year after baseline

:Attribute Information:
- age age in years
- sex
- bmi body mass index
- bp average blood pressure
- s1 tc, T-Cells (a type of white blood cells)
- s2 ldl, low-density lipoproteins
- s3 hdl, high-density lipoproteins
- s4 tch, thyroid stimulating hormone
- s5 ltg, lamotrigine
- s6 glu, blood sugar level

Note: Each of these 10 feature variables have been mean centered and scaled
by the standard deviation times `n_samples` (i.e. the sum of squares of each
column totals 1).

Source URL:
https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html

For more information see:


Bradley Efron, Trevor Hastie, Iain Johnstone and Robert Tibshirani (2004) "Least Angle Regression," Annals of Statistics (with discussion), 407-499.
(https://web.stanford.edu/~hastie/Papers/LARS/LeastAngle_2002.pdf)
A Supervised Learning Dataset: Notation
We say that a training dataset of size 𝑛 (e.g., 𝑛 patients) is a set
𝒟 = {(𝑥(𝑖) , 𝑦(𝑖) ) ∣ 𝑖 = 1, 2, . . . , 𝑛}
Each 𝑥(𝑖) denotes an input (e.g., the measurements for patient 𝑖 ), and each 𝑦(𝑖) ∈ 𝒴 is a
target (e.g., the diabetes risk).

Together, (𝑥(𝑖) , 𝑦(𝑖) ) form a training example.
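As a small sketch, we can iterate over the Part 1 training set as (𝑥(𝑖) , 𝑦(𝑖) ) pairs (assuming the diabetes_X_train and diabetes_y_train objects defined earlier):

# Sketch: view the training set as a list of (input, target) pairs.
# Each row of diabetes_X_train is an x^(i); the matching entry of
# diabetes_y_train is the corresponding target y^(i).
for i, ((_, x_i), y_i) in enumerate(zip(diabetes_X_train.iterrows(), diabetes_y_train)):
    print('Example %d: x = %s, y = %.1f' % (i, x_i.values, y_i))
    if i == 2:  # only show the first three training examples
        break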


We can look at the diabetes dataset in this form.

In [11]: # Load the diabetes dataset


diabetes_X, diabetes_y = diabetes.data, diabetes.target

# Print part of the dataset


diabetes_X.head()

Out[11]: age sex bmi bp s1 s2 s3 s4 s5 s6


0 0.038076 0.050680 0.061696 0.021872 -0.044223 -0.034821 -0.043401 -0.002592 0.019908 -0.017646
1 -0.001882 -0.044642 -0.051474 -0.026328 -0.008449 -0.019163 0.074412 -0.039493 -0.068330 -0.092204
2 0.085299 0.050680 0.044451 -0.005671 -0.045599 -0.034194 -0.032356 -0.002592 0.002864 -0.025930
3 -0.089063 -0.044642 -0.011595 -0.036656 0.012191 0.024991 -0.036038 0.034309 0.022692 -0.009362
4 0.005383 -0.044642 -0.036385 0.021872 0.003935 0.015596 0.008142 -0.002592 -0.031991 -0.046641
Training Dataset: Inputs
More precisely, an input 𝑥(𝑖) ∈ 𝒳 is a 𝑑 -dimensional vector of the form

$$x^{(i)} = \begin{bmatrix} x^{(i)}_1 \\ x^{(i)}_2 \\ \vdots \\ x^{(i)}_d \end{bmatrix}$$

For example, it could be the values of the 𝑑 features for patient 𝑖 .

The set 𝒳 is called the feature space. Often, we have 𝒳 = ℝ𝑑 .


Let's look at data for one patient.

In [12]: diabetes_X.iloc[0]

Out[12]: age 0.038076


sex 0.050680
bmi 0.061696
bp 0.021872
s1 -0.044223
s2 -0.034821
s3 -0.043401
s4 -0.002592
s5 0.019908
s6 -0.017646
Name: 0, dtype: float64
Training Dataset: Attributes
We refer to the numerical variables describing the patient as attributes. Examples of
attributes include:

The age of a patient.


The patient's gender.
The patient's BMI.

Note that these attributes in the above example have been mean-centered at zero and
rescaled so that the sum of squares of each column is one (as described in the dataset documentation above).
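We can verify this directly. A quick sketch: per the dataset description above, each column should have (approximately) zero mean and a sum of squares of one.

# Sketch: check the normalization of the attribute columns.
print('Column means:\n', diabetes_X.mean().round(6))
print('Column sums of squares:\n', (diabetes_X ** 2).sum().round(6))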
Training Dataset: Features
Often, an input object has many attributes, and we want to use these attributes to define
more complex descriptions of the input.

Is the patient old and a man? (Useful if old men are at risk).
Is the BMI above the obesity threshold?

We call these custom attributes features.


Let's create an "old man" feature.

In [13]: diabetes_X['old_man'] = (diabetes_X['sex'] > 0) & (diabetes_X['age'] > 0.05)


diabetes_X.head()

Out[13]: age sex bmi bp s1 s2 s3 s4 s5 s6 old_man


0 0.038076 0.050680 0.061696 0.021872 -0.044223 -0.034821 -0.043401 -0.002592 0.019908 -0.017646 False
1 -0.001882 -0.044642 -0.051474 -0.026328 -0.008449 -0.019163 0.074412 -0.039493 -0.068330 -0.092204 False
2 0.085299 0.050680 0.044451 -0.005671 -0.045599 -0.034194 -0.032356 -0.002592 0.002864 -0.025930 True
3 -0.089063 -0.044642 -0.011595 -0.036656 0.012191 0.024991 -0.036038 0.034309 0.022692 -0.009362 False
4 0.005383 -0.044642 -0.036385 0.021872 0.003935 0.015596 0.008142 -0.002592 -0.031991 -0.046641 False
Training Dataset: Features
More formally, we can define a function 𝜙 : 𝒳 → ℝ𝑝 that takes an input 𝑥(𝑖) ∈ 𝒳 and
outputs a 𝑝 -dimensional vector

$$\phi(x^{(i)}) = \begin{bmatrix} \phi(x^{(i)})_1 \\ \phi(x^{(i)})_2 \\ \vdots \\ \phi(x^{(i)})_p \end{bmatrix}$$

We say that 𝜙(𝑥(𝑖) ) is a featurized input, and each 𝜙(𝑥(𝑖) )𝑗 is a feature.
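As a concrete sketch, here is one possible feature map 𝜙 implemented in code. The particular features chosen below (BMI, its square, and the "old man" indicator) are illustrative choices, not part of the dataset.

# Sketch: a feature map phi that maps a patient's attribute vector
# (one row of diabetes_X) to a p-dimensional feature vector.
def phi(x):
    return np.array([
        x['bmi'],                                      # raw BMI attribute
        x['bmi'] ** 2,                                 # a derived nonlinear feature
        float((x['sex'] > 0) and (x['age'] > 0.05)),   # the "old man" feature
    ])

# Featurize the first patient.
print(phi(diabetes_X.iloc[0]))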


Features vs Attributes
In practice, the terms attribute and feature are often used interchangeably. Most authors
refer to 𝑥(𝑖) as a vector of features (i.e., they've been precomputed).

We will follow this convention and use attribute only when there is ambiguity between
features and attributes.
Features: Discrete vs. Continuous
Features can be either discrete or continuous. We will see later that they may be handled
differently by ML algorithms.
The BMI feature that we have seen earlier is an example of a continuous feature.

We can visualize its distribution.

In [14]: diabetes_X.loc[:, 'bmi'].hist()

Out[14]: <AxesSubplot:>
Other features take on one of a finite number of discrete values. The sex column is an
example of a categorical feature.

In this example, the dataset has been pre-processed such that the two values happen to be
0.05068012 and -0.04464164 .

In [15]: print(diabetes_X.loc[:, 'sex'].unique())


diabetes_X.loc[:, 'sex'].hist()

[ 0.05068012 -0.04464164]

Out[15]: <AxesSubplot:>
Training Dataset: Targets
For each patient, we are interested in predicting a quantity of interest, the target. In our
example, this is the patient's diabetes risk.

Formally, when (𝑥(𝑖) , 𝑦(𝑖) ) form a training example, each 𝑦(𝑖) ∈ 𝒴 is a target. We call 𝒴 the
target space.
We plot the distribution of risk scores below.

In [16]: plt.xlabel('Diabetes risk score')


plt.ylabel('Number of patients')
diabetes_y.hist()

Out[16]: <AxesSubplot:xlabel='Diabetes risk score', ylabel='Number of patients'>


Targets: Regression vs. Classification
We distinguish between two broad types of supervised learning problems that differ in the
form of the target variable.

1. Regression: The target variable 𝑦 is continuous. We are fitting a curve in a high-dimensional feature space that approximates the shape of the dataset.
2. Classification: The target variable 𝑦 is discrete. Each discrete value corresponds to a class, and we are looking for a hyperplane that separates the different classes.
We can easily turn our earlier regression example into classification by discretizing the
diabetes risk scores into high or low.

In [17]: # Discretize the targets


diabetes_y_train_discr = np.digitize(diabetes_y_train, bins=[150])

# Visualize it
plt.scatter(diabetes_X_train[diabetes_y_train_discr==0], diabetes_y_train[diabetes_y_train_discr==0],
            marker='o', s=80, facecolors='none', edgecolors='g')
plt.scatter(diabetes_X_train[diabetes_y_train_discr==1], diabetes_y_train[diabetes_y_train_discr==1],
            marker='o', s=80, facecolors='none', edgecolors='r')
plt.legend(['Low-Risk Patients', 'High-Risk Patients'])

Out[17]: <matplotlib.legend.Legend at 0x125ffc240>

Let's try to generate predictions for this dataset.

In [18]: # Create logistic regression object (note: this is actually a classification algorithm!)
clf = linear_model.LogisticRegression()

# Train the model using the training sets


clf.fit(diabetes_X_train, diabetes_y_train_discr)

# Make predictions on the training set


diabetes_y_train_pred = clf.predict(diabetes_X_train)

# Visualize it
plt.scatter(diabetes_X_train[diabetes_y_train_discr==0], diabetes_y_train[diabetes_y_train_discr==0],
            marker='o', s=140, facecolors='none', edgecolors='g')
plt.scatter(diabetes_X_train[diabetes_y_train_discr==1], diabetes_y_train[diabetes_y_train_discr==1],
            marker='o', s=140, facecolors='none', edgecolors='r')
plt.scatter(diabetes_X_train[diabetes_y_train_pred==0], diabetes_y_train[diabetes_y_train_pred==0], color='g', s=20)
plt.scatter(diabetes_X_train[diabetes_y_train_pred==1], diabetes_y_train[diabetes_y_train_pred==1], color='r', s=20)
plt.legend(['Low-Risk Patients', 'High-Risk Patients', 'Low-Risk Predictions', 'High-Risk Predictions'])

Out[18]: <matplotlib.legend.Legend at 0x11847d320>


Part 3: Anatomy of a Supervised Learning Problem:
Learning Algorithm
Let's now look at what a general supervised learning algorithm looks like.
Recall: Three Components of A Supervised Machine
Learning Problem
At a high level, a supervised machine learning problem has the following structure:

Dataset + Algorithm → Predictive Model


The predictive model is chosen to model the relationship between inputs and targets. For
instance, it can predict future targets.
The Components of A Supervised Machine Learning
Algorithm
We can also define the high-level structure of a supervised learning algorithm as
consisting of three components:

A model class: the set of possible models we consider.


An objective function, which defines how good a model is.
An optimizer, which finds the best predictive model in the model class according to
the objective function
Let's look again at our diabetes dataset for an example.

In [19]: import numpy as np


import pandas as pd
from sklearn import datasets
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [12, 4]

# Load the diabetes dataset


diabetes = datasets.load_diabetes(as_frame=True)
diabetes_X, diabetes_y = diabetes.data, diabetes.target

# Print part of the dataset


diabetes_X.head()

Out[19]: age sex bmi bp s1 s2 s3 s4 s5 s6


0 0.038076 0.050680 0.061696 0.021872 -0.044223 -0.034821 -0.043401 -0.002592 0.019908 -0.017646
1 -0.001882 -0.044642 -0.051474 -0.026328 -0.008449 -0.019163 0.074412 -0.039493 -0.068330 -0.092204
2 0.085299 0.050680 0.044451 -0.005671 -0.045599 -0.034194 -0.032356 -0.002592 0.002864 -0.025930
3 -0.089063 -0.044642 -0.011595 -0.036656 0.012191 0.024991 -0.036038 0.034309 0.022692 -0.009362
4 0.005383 -0.044642 -0.036385 0.021872 0.003935 0.015596 0.008142 -0.002592 -0.031991 -0.046641
Model: Notation
We'll say that a model is a function
𝑓 : 𝒳 → 𝒴
that maps inputs 𝑥 ∈ 𝒳 to targets 𝑦 ∈ 𝒴 .

Often, models have parameters 𝜃 ∈ Θ living in a set Θ . We will then write the model as
𝑓𝜃 : 𝒳 → 𝒴
to denote that it's parametrized by 𝜃 .
Model Class: Notation
Formally, the model class is a set
ℳ ⊆ {𝑓 ∣ 𝑓 : 𝒳 → 𝒴}
of possible models that map input features to targets.

When the models 𝑓𝜃 are parametrized by parameters 𝜃 ∈ Θ living in some set Θ , we can also write
ℳ = {𝑓𝜃 ∣ 𝑓𝜃 : 𝒳 → 𝒴; 𝜃 ∈ Θ}.
Model Class: Example
One simple approach is to assume that 𝑥 and 𝑦 are related by a linear model of the form
𝑦 = 𝜃0 + 𝜃1 ⋅ 𝑥1 + 𝜃2 ⋅ 𝑥2 + . . . + 𝜃𝑑 ⋅ 𝑥𝑑
where 𝑥 is a featurized input and 𝑦 is the target.

The 𝜃𝑗 are the parameters of the model.
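A minimal sketch of this model class in code: a function 𝑓𝜃 of the feature vector 𝑥, parametrized by a vector theta (the variable names and the particular parameter values below are illustrative):

# Sketch: the linear model class, parametrized by theta = (theta_0, theta_1, ..., theta_d).
def f_theta(x, theta):
    # theta[0] is the intercept theta_0; theta[1:] are the coefficients theta_1, ..., theta_d.
    return theta[0] + np.dot(theta[1:], x)

# Evaluate one (arbitrary) member of the model class on the first patient.
d = diabetes_X.shape[1]
theta = np.zeros(d + 1)
theta[0] = 150.0  # an arbitrary intercept, purely for illustration
print(f_theta(diabetes_X.iloc[0].values, theta))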


Objectives: Notation
To capture this intuition, we define an objective function (also called a loss function)
𝐽(𝑓) : ℳ → [0, ∞),
which describes the extent to which 𝑓 "fits" the data 𝒟 = {(𝑥(𝑖) , 𝑦(𝑖) ) ∣ 𝑖 = 1, 2, . . . , 𝑛} .

When 𝑓 is parametrized by 𝜃 ∈ Θ , the objective becomes a function


𝐽(𝜃) : Θ → [0, ∞).
Objective: Examples
What are some possible objective functions? We will see many, but here are a few
examples:

Mean squared error:

$$J(\theta) = \frac{1}{2n} \sum_{i=1}^{n} \left( f_\theta(x^{(i)}) - y^{(i)} \right)^2$$
Absolute (L1) error:

$$J(\theta) = \frac{1}{n} \sum_{i=1}^{n} \left| f_\theta(x^{(i)}) - y^{(i)} \right|$$

These are defined for a dataset 𝒟 = {(𝑥(𝑖) , 𝑦(𝑖) ) ∣ 𝑖 = 1, 2, . . . , 𝑛} .


In [60]: from sklearn.metrics import mean_squared_error, mean_absolute_error

y1 = np.array([1, 2, 3, 4])
y2 = np.array([-1, 1, 3, 5])

print('Mean squared error: %.2f' % mean_squared_error(y1, y2))


print('Mean absolute error: %.2f' % mean_absolute_error(y1, y2))

Mean squared error: 1.50


Mean absolute error: 1.00
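The same quantities can also be computed directly from the formulas above. This is a small NumPy sketch; note that the mean squared error objective defined above carries a 1/(2𝑛) factor, so it equals half of sklearn's mean_squared_error.

# Sketch: implement the two objectives directly from their definitions.
def mse_objective(predictions, targets):
    # J(theta) = (1 / 2n) * sum of squared errors
    return np.mean((predictions - targets) ** 2) / 2

def l1_objective(predictions, targets):
    # J(theta) = (1 / n) * sum of absolute errors
    return np.mean(np.abs(predictions - targets))

print('MSE objective (with the 1/2n factor): %.2f' % mse_objective(y2, y1))
print('L1 objective: %.2f' % l1_objective(y2, y1))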
Optimizer: Notation
At a high level, an optimizer takes an objective 𝐽 and a model class ℳ and finds a model
𝑓 ∈ ℳ with the smallest value of the objective 𝐽 :

$$\min_{f \in \mathcal{M}} J(f)$$

Intuitively, this is the function that best "fits" the data on the training dataset.

When 𝑓 is parametrized by 𝜃 ∈ Θ , the optimizer minimizes a function 𝐽(𝜃) over all 𝜃 ∈ Θ .
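As an illustration of what an optimizer does (and not the procedure sklearn actually uses), here is a minimal gradient-descent sketch that searches for the 𝜃 minimizing the mean squared error of a one-feature model on the normalized 'bmi' column; the learning rate and number of steps are ad-hoc choices.

# Sketch: gradient descent on J(theta) = (1/2n) sum_i (theta1 * x_i + theta0 - y_i)^2,
# using the normalized 'bmi' column of the full dataset as the single feature.
x = diabetes_X['bmi'].values
y = diabetes_y.values

theta0, theta1 = 0.0, 0.0
learning_rate = 0.5

for step in range(20000):
    residual = theta1 * x + theta0 - y
    theta0 -= learning_rate * np.mean(residual)        # dJ/dtheta0
    theta1 -= learning_rate * np.mean(residual * x)    # dJ/dtheta1

print('Gradient descent estimate: theta1 = %.2f, theta0 = %.2f' % (theta1, theta0))

With a small enough learning rate and enough iterations, this approaches the least-squares solution; sklearn's LinearRegression reaches the same answer directly with a closed-form fit.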
Optimizer: Example
We will see that, behind the scenes, the
sklearn.linear_model.LinearRegression algorithm optimizes the MSE loss.

$$\min_{\theta \in \Theta} \; \frac{1}{2n} \sum_{i=1}^{n} \left( f_\theta(x^{(i)}) - y^{(i)} \right)^2$$

We can easily measure the quality of the fit on the training set and the test set.
Let's run the above algorithm on our diabetes dataset.

In [54]: # Collect 20 data points for training


diabetes_X_train = diabetes_X.iloc[-20:]
diabetes_y_train = diabetes_y.iloc[-20:]

# Create linear regression object


regr = linear_model.LinearRegression()

# Train the model using the training sets


regr.fit(diabetes_X_train, diabetes_y_train.values)

# Make predictions on the training set


diabetes_y_train_pred = regr.predict(diabetes_X_train)

# Collect 3 data points for testing


diabetes_X_test = diabetes_X.iloc[:3]
diabetes_y_test = diabetes_y.iloc[:3]

# generate predictions on the new patients


diabetes_y_test_pred = regr.predict(diabetes_X_test)
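Having fit the model, we can also measure the quality of the fit on the training and test sets, as promised above. A quick sketch using the mean_squared_error helper imported earlier:

# Sketch: evaluate the quality of the fit on the training and test sets.
print('Training set MSE: %.2f' % mean_squared_error(diabetes_y_train, diabetes_y_train_pred))
print('Test set MSE:     %.2f' % mean_squared_error(diabetes_y_test, diabetes_y_test_pred))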

The algorithm returns a predictive model. We can visualize its predictions below.
In [55]: # visualize the results
plt.xlabel('Body Mass Index (BMI)')
plt.ylabel('Diabetes Risk')
plt.scatter(diabetes_X_train.loc[:, ['bmi']], diabetes_y_train)
plt.scatter(diabetes_X_test.loc[:, ['bmi']], diabetes_y_test, color='red', marker='o')
# plt.scatter(diabetes_X_train.loc[:, ['bmi']], diabetes_y_train_pred, color='black', linewidth=1)
plt.plot(diabetes_X_test.loc[:, ['bmi']], diabetes_y_test_pred, 'x', color='red', mew=3, markersize=8)
plt.legend(['Model', 'Prediction', 'Initial patients', 'New patients'])

Out[55]: <matplotlib.legend.Legend at 0x12f6a46a0>


Summary: Components of A Supervised Machine
Learning Problem
At a high level, a supervised machine learning problem has the following structure:

Dataset + Algorithm → Predictive Model

where the Algorithm itself consists of a Model Class + Objective + Optimizer.

The predictive model is chosen to model the relationship between inputs and targets. For
instance, it can predict future targets.
