
A Laboratory Manual for

ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
(3161710)

B.E. Semester 6 (INSTRUMENTATION & CONTROL)

Directorate of Technical Education, Gandhinagar, Gujarat
L. D. COLLEGE OF ENGINEERING, AHMEDABAD
Certificate
This is to certify that Mr./Ms. ___________________________________
________ Enrollment No. _______________ of B.E. Semester _____
Instrumentation & Control Engineering of this Institute (GTU Code: __ )
has satisfactorily completed the Practical / Tutorial work for the subject
Artificial Intelligence and Machine Learning (3161710) for the academic
year ________.

Place: __________
Date: __________

Name and Sign of Faculty member

Head of the Department



Preface

The main purpose of any laboratory/practical/field work is to enhance the required skills and to create the ability amongst students to solve real-world problems by developing the relevant competencies in the psychomotor domain. Keeping this in view, GTU has designed a competency-focused, outcome-based curriculum for its engineering degree programs in which sufficient weightage is given to practical work. This underlines the importance of skill enhancement amongst students, and it encourages students, instructors and faculty members to utilize every moment of the time allotted for practicals to achieve the relevant outcomes by actually performing the experiments, rather than merely studying them. For the effective implementation of a competency-focused, outcome-based curriculum, it is essential that every practical is carefully designed to serve as a tool to develop and enhance the relevant industry-required competency in every student. Such psychomotor skills are very difficult to develop through the traditional chalk-and-board method of content delivery in the classroom. Accordingly, this lab manual is designed to focus on industry-defined, relevant outcomes, rather than the old practice of conducting practicals merely to verify concepts and theory.

By using this lab manual, students can go through the relevant theory and procedure in advance of the actual performance, which creates interest and gives them a basic idea prior to the performance. This, in turn, reinforces the pre-determined outcomes amongst students. Each experiment in this manual begins with the competency, industry-relevant skills, course outcomes and practical outcomes (objectives). The students are also informed of the safety measures and necessary precautions to be taken while performing the practical.

This manual also provides guidelines to faculty members to facilitate student-centric lab activities through each experiment, by arranging and managing the necessary resources so that the students follow the procedures with the required safety and necessary precautions to achieve the outcomes. It also gives an idea of how students will be assessed, by providing rubrics.

Practical – Course Outcome matrix

Course Outcomes (COs):


After learning the course, the students should be able to:
CO1 Understand Artificial Intelligence and its approaches
CO2 Solve problems using Artificial Intelligence
CO3 Understand supervised, unsupervised and semi-supervised machine learning algorithms
CO4 Apply gradient descent and linear algebra for problem solving
CO5 Apply support vector machines in the development of algorithms

Sr. No. | Objective(s) of Experiment | CO1 | CO2 | CO3 | CO4 | CO5

1. To learn the basics of MATLAB.
2. To implement linear regression using least square method/normal equation with single variable.
3. To implement linear regression using least square method/normal equation with multiple variables.
4. Implementation of gradient descent algorithm for single variable using MATLAB Software.
5. Implementation of gradient descent algorithm for multiple variables using MATLAB Software.
6. Implementation of logistic regression for binary classification.
7. Implementation of Single layer ANN.
8. Implementation of Normal Equation using MATLAB Software.
9. To study neural network as sine wave function approximator.
10. To study Neural Network based controller design.

Experiment-1

To learn the basics of MATLAB®

Date:

Competency and Practical Skills: This tutorial is designed for students and all enthusiastic learners who are willing to learn MATLAB® for signals and systems in simple and easy steps. This tutorial will give you an understanding of the basic commands of MATLAB®. After completing this tutorial, you will be at an intermediate level of expertise, from where you can take yourself to a higher level of expertise.

Prerequisites: Before proceeding with this tutorial, you must have a basic understanding of
matrices and adequate knowledge of mathematics.

Relevant CO: CO1

Objectives: (a) Understand the basics of MATLAB®


(b) Understand basic commands related to signals and systems.
(c) Understand operations on scalar, vector and matrix.
(d) Understand MATLAB® Simulink

Equipment/Instruments: MATLAB®

Theory:

1) Introduction:

1.1) MATLAB® (Matrix Laboratory):

MATLAB® is a high-performance software package and language for technical computing.

It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include the following:

⮚ Math and computation

⮚ Algorithm development

⮚ Data acquisition

⮚ Modelling, simulation, and prototyping

⮚ Data analysis, exploration, and visualization

⮚ Scientific and engineering graphics

⮚ Application development, including graphical user interface building

The name MATLAB® stands for "matrix laboratory". In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB® is the tool of choice for high-productivity research, development, and analysis.

At its core, MATLAB® is essentially a set (a "toolbox") of routines (called "m-files" or "mex-files") that are installed on your computer, together with a window that allows you to create new variables with names (e.g. voltage and time) and process those variables with any of those routines (e.g. plot voltage against time, find the largest voltage, etc.).

It also allows you to put a list of your processing requests together in a file and save that
combined list with a name so that you can run all of those commands in the same order at some
later time. Furthermore, it allows you to run such lists of commands such that you pass in data
and/or get data back out (i.e. the list of commands is like a function in most programming
languages). Once you save a function, it becomes part of your toolbox (i.e. it now looks to you as
if it were part of the basic toolbox that you started with). MATLAB® runs as an interpretive
language. It simply reads through each line of the function, executes it, and then goes on to the
next line.

MATLAB® features a family of add-on application-specific solutions called toolboxes.


Very important to most users of MATLAB®, toolboxes allow learning and applying specialized technology. Toolboxes are comprehensive collections of MATLAB® functions (M-files) that extend the MATLAB® environment to solve particular classes of problems. Areas in which toolboxes are available include image processing, signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and many others.

1.2) Main features of MATLAB®

1. Advanced algorithms for high-performance numerical computation, especially in the field of matrix algebra.
2. A large collection of predefined mathematical functions and the ability to define one's own functions.
3. Two- and three-dimensional graphics for plotting and displaying data.
4. A complete online help system.
5. A matrix/vector-oriented high-level programming language for individual applications.
6. Toolboxes available for solving advanced problems in several application areas.

1.3) MATLAB® Windows:


MATLAB® works through three basic windows:

1. Command Window:
This is the main window. It is characterized by the MATLAB® command prompt >>. When you launch the application, MATLAB® puts you in this window. All commands, including those for user-written programs, are typed in this window at the MATLAB® prompt.

2. Graphics window:
The output of all graphics commands typed in the command window is sent to the graphics or figure window, a separate gray window with a white background colour. The user can create as many figure windows as the system memory will allow.

3. Edit window:
This is where you write, edit, create and save your own programs in files called M-files.

1.4) Common instructions:


● To invoke MATLAB®, double-click on the MATLAB® icon on the desktop.

● You will find a Command window where you can type commands and see the output. For example, if you type pwd in the command window, it will print the current working directory.
● If you want to create a directory, type mkdir mydir in the command window; it will create a directory called mydir.
● If you want to delete a directory, type rmdir mydir in the command window.

● To open a new file in MATLAB®, go to File → New → M-File.

● Then type the program in the file and save the file with the extension .m. While naming the file, make sure that the file name is not an existing command.
Note: Save all M-files in the folder 'work' in the current directory; otherwise you will have to locate the file when running it.
● To run a MATLAB® file, go to Debug → Run.

● For clarification regarding built-in functions such as plot, type help followed by the topic.

● Typing quit at the command prompt (>> quit) will close MATLAB®.

● A MATLAB® variable is a region of memory containing an array, which is known by a user-specified name. MATLAB® variable names must begin with a letter, followed by any combination of letters, numbers, and the underscore (_) character.
● Spaces cannot be used in MATLAB® variable names; underscores can be used instead to create meaningful names.
● It is important to include a data dictionary in the header of any program that you write. A data dictionary lists the definition of each variable used in a program. The definition should include both a description of the contents of the item and the units in which it is measured.
● The MATLAB® language is case-sensitive. It is customary to use lower-case letters for ordinary variable names.
● The most common types of MATLAB® variables are double and char.

● Variables need not be declared in a program before they are used.

● The semicolon at the end of each assignment statement suppresses the automatic echoing
of values that normally occurs whenever an expression is evaluated in an assignment
statement.
● All programs and commands can be entered either in a) the Command window, or b) as an M-file using the MATLAB® editor.


● To add comments the "%" symbol is used.

● Help is provided by typing "help" or if you know the topic then "help
function_name" or "doc function_name".
● If you don't know the exact name of the topic, function or command, you are looking for
then type "lookfor keyword" (e.g. "lookfor regression").
● Three (3) dots "…" are used to continue a statement on the next line.

● If ";" is used after a statement, MATLAB® will not display the result of that statement; otherwise the result is displayed.
● Data Type:
The fundamental datatype in MATLAB® is the array. It encompasses several distinct data objects: integers, real numbers, matrices, character strings, structures and cells. An array is a collection of data values organized into rows and columns, and known by a single name. There is no need to declare variables as real or complex; MATLAB® automatically sets the variable to be real.
● Dimensioning:
Dimensioning is automatic in MATLAB®. No dimension statements are required for vectors or arrays. We can find the dimensions of an existing matrix or vector with the size and length commands.
● Use the up-arrow to recall commands without retyping them and the down-arrow to go forward in the command history.
● MATLAB® is case-sensitive, so "India" is not the same as "INDIA".

1.5) Basic commands in MATLAB®:

i. Defining a Scalar:
>> X=1
X=1
ii. Defining a Column Vector:
>> v = [1;2;3]
v=
1
2
3
iii. Defining a Row Vector:
>> w = [ 1 0 1]
w=
1 0 1
iv. Transpose of a Vector:
>> W = w'
W=
1
0
1
v. Define a Range for a Vector:
>> x = 1: 0.5: 5
x=
Columns 1 through 8
1.0000 1.5000 2.0000 2.5000 3.0000 3.5000 4.0000 4.5000
Column 9
5.0000
vi. Empty Vector:
>> Y = []
Y=
[]
vii. Defining a Matrix:
>> M = [1 2 3; 3 2 1]
M=
1 2 3
3 2 1
viii. Zero Matrix:
>> N = zeros(2,3) %1st Parameter is row & 2nd parameter is col.
N=
0 0 0
0 0 0
ix. Ones Matrix:
>> m = ones (3,4)
m=
1 1 1 1
1 1 1 1
1 1 1 1
x. The identity Matrix:
>> I = eye(3)
I=
1 0 0
0 1 0
0 0 1
xi. Defining a Random Matrix or Vector:
>> R = rand(1,3)
R=
0.8147 0.9058 0.1270
xii. Access a Vector or a Matrix:
>> R(3)
ans =
0.1270
[OR]
>> R(1,2)
ans =
0.9058
xiii. Access a Row or a Column of a Matrix:
>> I(2, :) %2nd row
ans =
0 1 0
[OR]
>> I(:, 2) % 2nd col.


ans =
0
1
0
[OR]
>> I(1:2, 1:3)
ans =
1 0 0
0 1 0
xiv. Size and Length:
>> size(I)
ans =
3 3
[OR]
>> length(I)
ans =
3
xv. Variables Already Defined by the User:
>> who
Your variables are:
I M N R W X Y ans m v w x

1.6) Operations on Vectors and Matrices in MATLAB®:

i. Arithmetic Operators:
MATLAB® utilizes the following arithmetic operators:

+ Addition
- Subtraction
* Multiplication
/ Division
^ Power Operator
' Transpose
ii. Some Built-in Functions in MATLAB®:
"abs" magnitude of a number (absolute value for real numbers)
"angle" angle of a complex number (in radians)
"cos" cosine function, assumes arguments in radians
"sin" sine function, assumes arguments in radians
"exp" exponential function
iii. Arithmetic Operations:

>> x = [1 2 3 4 5]
x=
1 2 3 4 5
>> x = 2 * x
x=
2 4 6 8 10
>> x = x / 2
x=
1 2 3 4 5
>> y = [1 2 3 4 5]
y=
1 2 3 4 5
>> z = x + y
z=
2 4 6 8 10

[For element-wise (point-by-point) multiplication and division, use "."]

>> w = x.*y
w=
1 4 9 16 25
[MATLAB® has a large number of built-in functions; some operate on each element of a vector or matrix.]
>> log([1 2 3 4])
ans =
0 0.6931 1.0986 1.3863
>> round([1.5 2; 2.2 3.1])
ans =
2 2
2 3
>> a = [1 4 6 3]
a=
1 4 6 3
>> sum(a)
ans =
14
>> mean(a)
ans =
3.5000
>> std(a)
ans =
2.0817
>> max(a)
ans =
6
>> a = [1 2 3; 4 5 6]
a=
1 2 3
4 5 6
>> mean(a) % Mean of each column
ans =
2.5000 3.5000 4.5000
>> max(a) % Max of each column
ans =
4 5 6
>> max(max([1 2 3; 4 5 6])) % Max of the matrix
ans =
6

iv. Relational Operators in MATLAB®:


< Less than
<= Less than or equal to
> Greater than
>= Greater than or equal to
== Equal to
~= Not equal to

>> a = [1 1 3 4 1]
a=
1 1 3 4 1
>> ind = (a == 1)
ind =
1 1 0 0 1
>> ind = (a < 1)
ind =
0 0 0 0 0
>> ind = (a > 1)
ind =
0 0 1 1 0
>> ind = (a <= 1)
ind =
1 1 0 0 1
>> ind = (a >= 1)
ind =
1 1 1 1 1
>> ind = (a ~= 1)
ind =
0 0 1 1 0

1.7) General MATLAB® Commands:

>> subplot
This function divides the figure window into rows and columns.

subplot(2,2,1) divides the figure window into 2 rows and 2 columns; the final argument, 1, selects the first subplot. subplot(3,1,2) divides the figure window into 3 rows and 1 column; the final argument, 2, selects the second subplot.

>> Conv
Syntax: w = conv(u,v)
Description: w = conv(u,v)
convolves vectors u and v. Algebraically, convolution is the same operation as multiplying
the polynomials whose coefficients are the elements of u and v.

>> Disp
Syntax: disp(X)
Description: disp(X)
Displays an array, without printing the array name. If X contains a text string, the string is
displayed. Another way to display an array on the screen is to type its name, but this prints a
leading "X=," which is not always desirable.
Note: disp does not display empty arrays.

>> xlabel
Syntax: xlabel('string')
Description: xlabel('string')
labels the x-axis of the current axes.
>> ylabel
Syntax : ylabel('string')
Description: ylabel('string')
labels the y-axis of the current axes.
>> Title
Syntax : title('string')
Description: title('string')
Outputs the string at the top and in the center of the current axes.

>> grid on
Syntax : grid on
Description: grid on
adds major grid lines to the current axes.

>> FFT
Discrete Fourier transforms. FFT(X) is the discrete Fourier transform (DFT) of vector X.
For matrices, the FFT operation is applied to each column. For N-D arrays, the FFT
operation operates on the first non-singleton dimension. FFT(X,N) is the N-point FFT,
padded with zeros if X has less than N points and truncated if it has more.

>> ABS
Absolute value.
ABS(X) is the absolute value of the elements of X. When X is complex, ABS(X) is the
complex modulus (magnitude) of the elements of X.

>> ANGLE
Phase angle.
ANGLE (H) returns the phase angles, in radians, of a matrix with complex elements.

>> INTERP
Resample data at a higher rate using lowpass interpolation.
Y = INTERP(X,L) resamples the sequence in vector X at L times the original sample
rate. The resulting resampled vector Y is L times longer, LENGTH(Y) = L*LENGTH(X).

>> DECIMATE
Resample data at a lower rate after low pass filtering.

>> Y = DECIMATE(X, M)
resamples the sequence in vector X at 1/M times the original sample rate. The resulting resampled vector Y is M times shorter, i.e., LENGTH(Y) = CEIL(LENGTH(X)/M). By default, DECIMATE filters the data with an 8th order Chebyshev Type I low pass filter with cutoff frequency 0.8*(Fs/2)/M before resampling.

1.8) MATLAB® Simulink:

SIMULINK is a part of MATLAB® that can be used to simulate dynamic systems. To facilitate model definition, SIMULINK adds a new class of windows called block diagram windows. In these windows, models are created and edited primarily by mouse-driven commands. Part of mastering SIMULINK is becoming familiar with manipulating model components within these windows.
1. Start MATLAB® and then the Simulink environment by typing simulink at the MATLAB® prompt.

2. Open a new Simulink model window from File → New → Model.



3. You can construct your block diagram by dragging and dropping the appropriate blocks from the main Simulink window. Some of the most commonly used blocks:
From the "Continuous" blocks (double-click on the "Continuous" button) you can use the typical blocks to construct dynamic systems (e.g. transfer function, time delay, etc.).

From the "Sinks" blocks, we often use the "Scope" block to plot the results.

From the "Sources" blocks, the "Step" function is used to simulate step changes in the input:

From the “Signal Routing” blocks the “Mux” block is often used to concatenate signals into a
“bus” e.g. for plotting multiple signals in “Scope”.

The “Math Operations” set of blocks provides the usual mathematical operations:

EXERCISE 1. Typical open-loop dynamic responses of second-order systems

E1 – Step 1. Start the Simulink environment by typing "simulink" at the MATLAB® prompt.

E1 – Step 2. Open a new Simulink model from File → New → Model.


E1 – Step 3. Drag and drop the blocks below from the Simulink window into the model window.
E1 – Step 4. Double-click the blocks to set up different second-order transfer functions. Define one overdamped, one critically damped and one underdamped system.

E1 – Step 5. Connect the blocks together as shown below:



E1 – Step 6. Set the simulation time to 30 sec from the menu "Simulation" → "Configuration parameters".

E1 – Step 7. Simulate the process by pressing the “Run” button and then show the results by double
clicking on the “Scope” block:

Quiz:

V1 = [1 2 3 4 5 6 7 8 9 0]
V2 = [0.3 1.2 0.5 2.1 0.1 0.4 3.6 4.2 1.7 0.9]
V3 = [4 4 4 4 3 3 2 2 2 1]

a) Calculate respectively the sum of all the elements in vectors V1, V2 and V3.
b) How to get the value of the 5th element of each vector?
c) What happens if we execute the commands V1(0) and V1(11)? Remember that if a vector has "N" elements, its subscripts run from 1 to N.
d) Generate a new vector V4 from V2 consisting of the first five elements of V2.
e) Generate a new vector V5 from V2 consisting of the last five elements of V2.
f) Generate a new vector V6 from V2 with its 6th element omitted.
g) Generate a new vector V7 from V2 with its 7th element changed to 1.4.
h) Generate a new vector V8 from V2 whose elements are the 1st, 3rd, 7th and 9th elements of
V2.
i) What are the results of:

1. 9 – V1
2. V1 * 5
3. V1 + V2
4. V1 – V3
5. V1 .* V2
6. V1 * V2
7. V1 .^ 2
8. V1 .^ V3
9. V1 == V3
10. V1 > 6
11. V1 > V3
12. V3 – (V1 > 2)
13. (V1 > 2) & (V1 < 6)
14. (V1 > 2) | (V1 < 6)
15. any(V1)
16. all(V1)

Suggested Reference:
https://www.mathworks.com/

References used by the students:

Rubric wise marks obtained:

Rubrics 1 2 3 4 5 Total

Marks

Rubrics:
1. Group work while writing MATLAB® commands on the computer.
2. Understand the concept while performing coding.
3. Individual work done while executing MATLAB® commands.
4. Analysis and comparison with the software and manual steps.
5. Quiz answers and submission in time.

EXPERIMENT-2

To implement linear regression using least square method/normal equation with single variable
Date:

Relevant CO: CO2

Objectives: To understand the linear regression method

Theory:
Linear regression analysis is used to predict the value of a variable based on the value of
another variable. The variable you want to predict is called the dependent variable. The variable
you are using to predict the other variable's value is called the independent variable.

This form of analysis estimates the coefficients of the linear equation, involving one or
more independent variables that best predict the value of the dependent variable. Linear regression
fits a straight line or surface that minimizes the discrepancies between predicted and actual output
values. There are simple linear regression calculators that use a “least squares” method to discover
the best-fit line for a set of paired data.
Types of Regression Analysis Techniques

The choice among the many types of regression analysis techniques depends upon a number of factors. These factors include the type of target variable, the shape of the regression line, and the number of independent variables.

Below are the different regression techniques:


1. Linear Regression
2. Logistic Regression
3. Ridge Regression
4. Lasso Regression
5. Polynomial Regression
6. Bayesian Linear Regression

Linear Regression:

Linear regression is one of the most basic regression techniques. A simple linear regression model consists of a predictor variable and a response variable related linearly to each other. In case the data involve more than one independent variable, linear regression is called a multiple linear regression model.

The equation given below is used to denote the linear regression model, where m is the slope of the line, c is the intercept, and e represents the error in the model:
y = mx + c + e
The best-fit line is determined by varying the values of m and c. The prediction error is the difference between the observed value and the predicted value. The values of m and c are selected in such a way that they give the minimum prediction error. It is important to note that a simple linear regression model is susceptible to outliers; therefore, it should not be used for very large datasets.
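To make the least-squares idea concrete, the best-fit slope and intercept can be computed directly from the closed-form formulas m = Σ(x − x̄)(y − ȳ) / Σ(x − x̄)² and c = ȳ − m·x̄. Below is a minimal Python sketch of this calculation; the area/price numbers are made up purely for illustration and are not the dataset used in the program that follows.

import numpy as np

# Illustrative data only: square-feet area (x) and price (y)
x = np.array([2600, 3000, 3200, 3600, 4000], dtype=float)
y = np.array([550000, 565000, 610000, 680000, 725000], dtype=float)

# Closed-form least-squares estimates for the line y = m*x + c
m = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
c = y.mean() - m * x.mean()

print("slope m:", m)
print("intercept c:", c)
print("predicted price for 3300 sq ft:", m * 3300 + c)

The LinearRegression object used in the program below computes these same least-squares estimates internally.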

Fig:1- Best Fit Line for a Linear Regression Model [1]

Safety and necessary Precautions:


1. Do not start the computer without instructions

Problem Statement: The table below represents current home prices in a township based on square-feet area.

Given above data build a machine learning model that can predict home prices based on square
feet area.

Python program & Results:

import pandas as pd
from matplotlib import pyplot as plt
from sklearn import linear_model

df = pd.read_csv('homeprices.csv')
df

Result:

%matplotlib inline
plt.xlabel('area')
plt.ylabel('price')
plt.scatter(df.area,df.price,color='red',marker='+')

Result:

new_df = df.drop('price',axis='columns')
new_df

Result:

price = df.price
price

Result:

# Create linear regression object


reg = linear_model.LinearRegression()
reg.fit(new_df,price)
Result:

reg.predict([[3300]])
Result:

reg.coef_
Result:

reg.intercept_
Result:

area_df = pd.read_csv("areas.csv")
area_df.head(3)

p = reg.predict(area_df)
p

Result:

area_df['prices']=p
area_df
Result:

area_df.to_csv("prediction.csv")
Result:

Conclusion:

Quiz:

1. What is the least square method in linear regression?


2. What is the difference between linear regression and logistic regression?

Suggested Reference:
https://www.spiceworks.com/tech/artificial-intelligence/articles/what-is-linear-regression/

References used by the students:

Rubric wise marks obtained:

Rubrics 1 2 3 4 5 Total

Marks

Rubrics:
1. Group work while performing experiment on computer.
2. Understand the concept while performing coding.
3. Individual work done while performing coding.
4. Analysis of the method and program.
5. Quiz answers and submission in time

EXPERIMENT-3

To implement linear regression using least square method/normal equation with multiple variables
Date:

Competency and Practical skills: This experiment aims to predict a continuous output variable based on one or more predictor variables using the least square method of linear regression.

Relevant CO: CO2

Objectives: Prediction of the output based on multiple system variables

Theory:

Regression analysis is a statistical method to model the relationship between a dependent (target) variable and one or more independent (predictor) variables. More specifically, regression analysis helps us understand how the value of the dependent variable changes with respect to one independent variable when the other independent variables are held fixed.

Regression is a supervised learning technique which helps in finding the correlation between variables and enables us to predict a continuous output variable based on one or more predictor variables. It is mainly used for prediction, forecasting, time series modeling, and determining the causal-effect relationship between variables.

In regression, we plot a graph between the variables that best fits the given data points. Using this plot, the machine learning model can make predictions about the data. In simple words, "Regression shows a line or curve that passes through all the data points on a target-predictor graph in such a way that the vertical distance between the data points and the regression line is minimum." The distance between the data points and the line tells whether the model has captured a strong relationship or not.

There may be various cases in which the response variable is affected by more than one
predictor variable; for such cases, the Multiple Linear Regression algorithm is used.

Moreover, Multiple Linear Regression is an extension of Simple Linear regression as it


takes more than one predictor variable to predict the response variable.

Some key points about MLR:

● For MLR, the dependent or target variable (Y) must be continuous/real, but the predictor or independent variables may be of continuous or categorical form.

● Each feature variable must model the linear relationship with the dependent variable.

● MLR tries to fit a regression line through a multidimensional space of data-points.

Assumptions for Multiple Linear Regression:

● A linear relationship should exist between the Target and predictor variables.

● The regression residuals must be normally distributed.

● MLR assumes little or no multicollinearity (correlation between the independent variables) in the data.
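In its general form, multiple linear regression extends the single-variable equation of Experiment 2 to several predictors:

y = b0 + b1*x1 + b2*x2 + ... + bn*xn + e

where b0 is the intercept, b1 … bn are the coefficients of the predictor variables x1 … xn, and e is the error term. For the home-price problem below, this takes the form price = b0 + b1*area + b2*bedrooms + b3*age.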

Safety and necessary Precautions:


Do not start the computer without instructions

PROBLEM STATEMENT:

Below is the table containing home prices in X township. Here the price depends on the area (square feet), the number of bedrooms and the age of the home (in years). Given these prices, we have to predict the prices of new homes based on area, bedrooms and age.

Given the above data, build a machine learning model that can predict home prices based on area, bedrooms and age. Use linear regression with multiple variables to fit the model. The price can be calculated using the following equation:

price = b0 + b1*area + b2*bedrooms + b3*age

Procedure:
1. Importing libraries: First we will import the libraries that will help in building the model. Below is the code for it:
# importing libraries
import numpy as nm
import matplotlib.pyplot as mtp
import pandas as pd

2. Importing dataset:
#importing datasets
data_set= pd.read_csv('homeprice.csv')

3. Extracting dependent and independent Variables:


x= data_set.iloc[:, :-1].values
y= data_set.iloc[:, 4].values

4. Encoding Dummy Variables:

#Categorical data
# Note: the categorical_features argument of OneHotEncoder shown here exists
# only in older scikit-learn versions (pre-0.22); in newer versions the same
# step is done with sklearn.compose.ColumnTransformer.
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
labelencoder_x= LabelEncoder()
x[:, 3]= labelencoder_x.fit_transform(x[:,3])
onehotencoder= OneHotEncoder(categorical_features= [3])
x= onehotencoder.fit_transform(x).toarray()
#avoiding the dummy variable trap:
x = x[:, 1:]

5. Splitting the dataset into training and test sets.

from sklearn.model_selection import train_test_split


x_train, x_test, y_train, y_test= train_test_split(x, y, test_size= 0.2, random_state=0)

6. Fitting our MLR model to the Training set:


from sklearn.linear_model import LinearRegression
regressor= LinearRegression()
regressor.fit(x_train, y_train)

7. Prediction of Test set results:


y_pred= regressor.predict(x_test)
By executing the above lines of code, a new vector will be generated under the variable explorer
option.
We can test our model by comparing the predicted values and test set values.
Result:

1. Check and attach the output of all the steps of the program.

2. Check the score for the training dataset and the test dataset.

print('Train Score: ', regressor.score(x_train, y_train))


print('Test Score: ', regressor.score(x_test, y_test))

Conclusion:

Quiz:

1. State any two applications of multiple linear regression.


2. What are the disadvantages of multiple linear regression method?

Suggested Reference:
1. https://www.javatpoint.com/multiple-linear-regression-in-machine-learning

References used by the students:

Rubric wise marks obtained:

Rubrics 1 2 3 4 5 Total

Marks

Rubrics:
1. Group work while performing experiment on computer.
2. Understand the concept while performing coding.
3. Individual work done while performing coding.
4. Analysis of the method and program.
5. Quiz answers and submission in time

Experiment-4

Implementation of gradient descent algorithm for single variable using MATLAB Software

Date:

Competency and Practical Skills: To develop machine learning and deep learning skills.

Prerequisites: Before proceeding with this tutorial, you must have a basic understanding of the gradient descent algorithm and adequate knowledge of mathematics.

Relevant CO: CO4

Objectives: (a) Understand the basics of MATLAB®


(b) Understand basic commands

Equipment/Instruments: MATLAB®

Theory:

1) Introduction:

Gradient Descent is an iterative optimization algorithm that tries to find the optimum value (minimum/maximum) of an objective function. It is one of the most used optimization techniques in machine learning projects for updating the parameters of a model in order to minimize a cost function.
The main aim of gradient descent is to find the parameters of a model which give the highest accuracy on the training as well as the testing dataset. In gradient descent, the gradient is a vector that points in the direction of the steepest increase of the function at a specific point. Moving in the opposite direction of the gradient allows the algorithm to gradually descend towards lower values of the function, eventually reaching the minimum of the function.
A gradient is nothing but a derivative that defines the effect on the output of the function of a small variation in the inputs. Gradient Descent (GD) is a widely used optimization algorithm in deep learning that is used to minimize the cost function of a neural network model during training. It works by iteratively adjusting the weights or parameters of the model in the direction of the negative gradient of the cost function until the minimum of the cost function is reached.
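In equation form, for each model parameter θ with cost function J(θ) and learning rate L, every iteration applies the update

θ := θ − L · ∂J/∂θ

For the single-variable linear model y_pred = m·x + c with a mean-squared-error cost (the case implemented in the MATLAB code at the end of this experiment), the gradients and updates are

D_m = (−2/n) · Σ x·(y − y_pred),  m := m − L·D_m
D_c = (−2/n) · Σ (y − y_pred),   c := c − L·D_c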

2) Different Variants of Gradient Descent:


There are several variants of gradient descent that differ in the way the step size or
learning rate is chosen and the way the updates are made. Here are some popular variants:

● Batch Gradient Descent: In batch gradient descent, the entire training dataset is used to compute the gradient and update the model parameter values, such as weights and biases, at each iteration. It is effective for convex or relatively smooth error manifolds because it moves directly toward an optimal solution by taking a large step in the direction of the negative gradient of the cost function. However, because it computes the gradient over the entire training dataset at each iteration, it can be slow for large datasets, resulting in longer training times and higher computational costs, though it may lead to a more accurate model.

● Stochastic Gradient Descent (SGD): In SGD, only one training example is used to compute

the gradient and update the parameters at each iteration. This can be faster than batch
gradient descent but may lead to more noise in the updates.

● Mini-batch Gradient Descent: In Mini-batch gradient descent a small batch of training

examples is used to compute the gradient and update the parameters at each iteration. This
can be a good compromise between batch gradient descent and Stochastic Gradient Descent,
as it can be faster than batch gradient descent and less noisy than Stochastic Gradient
Descent.

● Momentum-based Gradient Descent: Momentum is a variant of gradient descent that incorporates information from the previous weight updates to help the algorithm converge more quickly to the optimal solution. Momentum adds a term to the weight update that is proportional to the running average of the past gradients, allowing the algorithm to move more quickly in the direction of the optimal solution. The updates to the parameters are based on the current gradient and the previous updates. This can help prevent the optimization process from getting stuck in local minima and reach the global minimum faster.

● Nesterov Accelerated Gradient (NAG): NAG is an extension of Momentum Gradient

Descent. It evaluates the gradient at a hypothetical position ahead of the current position
based on the current momentum vector, instead of evaluating the gradient at the current
position. This can result in faster convergence and better performance.

● Adagrad: In this variant, the learning rate is adaptively adjusted for each parameter based on

the historical gradient information. This allows for larger updates for infrequent parameters
and smaller updates for frequent parameters.

● RMSprop: In this variant, the learning rate is adaptively adjusted for each parameter based

on the moving average of the squared gradient. This helps the algorithm to converge faster
in the presence of noisy gradients.

● Adam: Adam stands for adaptive moment estimation. It combines the benefits of Momentum-based Gradient Descent, Adagrad, and RMSprop: the learning rate is adaptively adjusted for each parameter based on the moving average of the gradient and the squared gradient, which allows for faster convergence and better performance on non-convex optimization problems. It keeps track of two exponentially decaying averages: the first-moment estimate, which is the exponentially decaying average of past gradients, and the second-moment estimate, which is the exponentially decaying average of past squared gradients. The first-moment estimate is used to calculate the momentum, and the second-moment estimate is used to scale the learning rate for each parameter. This is one of the most popular optimization algorithms for deep learning.

3) Advantages of gradient descent and its variants:

1. Widely used: Gradient descent and its variants are widely used in machine learning and
optimization problems because they are effective and easy to implement.

2. Convergence: Gradient descent and its variants can converge to a global minimum or a good
local minimum of the cost function, depending on the problem and the variant used.

3. Scalability: Many variants of gradient descent can be parallelized and are scalable to large
datasets and high-dimensional models.

4. Flexibility: Different variants of gradient descent offer a range of trade-offs between accuracy
and speed, and can be adjusted to optimize the performance of a specific problem.
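The MATLAB session below implements batch gradient descent for the single-variable linear model y_pred = m*x + c on a small dataset, using learning rate L = 0.0001 and 5000 iterations, and then plots the data points together with the fitted line.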

dataSet=[1 2;2 4;2 5;3 6;3 7;4 8;5 10;6 12;6 13;6 13;8 16;10 17]; % [x y] data pairs

>> x = dataSet(:, 1); % input variable

>> y = dataSet(:, 2); % target variable

>> m=0; % initial slope

>> c=0; % initial intercept

>> n=length(x); % number of samples

>> L=0.0001; % learning rate

>> for i=1:5000

y_pred = m*x + c; % predictions with current parameters

D_m = (-2/n) * sum(x .* (y- y_pred)); % gradient of the cost with respect to m

D_c = (-2/n) * sum(y - y_pred); % gradient of the cost with respect to c

m = m - L * D_m; % update the slope

c = c - L * D_c; % update the intercept

end

>> scatter(x,y) % plot the data points

>> hold on

>> plot(x,y_pred) % overlay the fitted line

Conclusion:

Quiz:

a) What is the fastest gradient descent algorithm?

b) What is the gradient descent algorithm (GDA)?
c) Why is gradient descent used?
d) Which is an example of a gradient descent algorithm?

Suggested Reference:

https://www.mathworks.com/

References used by the students: (Sufficient space to be provided)



Rubric wise marks obtained:

Rubrics 1 2 3 4 5 Total

Marks

Rubrics:

1. Group work while writing MATLAB® commands on the computer.

2. Understand the concept while performing coding.
3. Individual work done while executing MATLAB® commands.
4. Analysis and comparison with the software and manual steps.
5. Quiz answers and submission in time.

Experiment-5

Implementation of gradient descent algorithm for multiple variables using MATLAB Software

Date:

Competency and Practical Skills: To develop machine learning and deep learning skills.

Prerequisites: Before proceeding with this tutorial, you must have a basic understanding of the gradient descent algorithm and adequate knowledge of mathematics.

Relevant CO: CO4

Objectives: (a) Understand the basics of MATLAB®


(b) Understand basic commands

Equipment/Instruments: MATLAB®

Theory:

1) Introduction:

Gradient Descent is an iterative optimization algorithm that tries to find the optimum value (minimum/maximum) of an objective function. It is one of the most used optimization techniques in machine learning projects for updating the parameters of a model in order to minimize a cost function.
The main aim of gradient descent is to find the parameters of a model which give the highest accuracy on the training as well as the testing dataset. In gradient descent, the gradient is a vector that points in the direction of the steepest increase of the function at a specific point. Moving in the opposite direction of the gradient allows the algorithm to gradually descend towards lower values of the function, eventually reaching the minimum of the function.
A gradient is nothing but a derivative that defines the effect on the output of the function of a small variation in the inputs. Gradient Descent (GD) is a widely used optimization algorithm in deep learning that is used to minimize the cost function of a neural network model during training. It works by iteratively adjusting the weights or parameters of the model in the direction of the negative gradient of the cost function until the minimum of the cost function is reached.

2) Different Variants of Gradient Descent:


There are several variants of gradient descent that differ in the way the step size or
learning rate is chosen and the way the updates are made. Here are some popular variants:

● Batch Gradient Descent: In batch gradient descent, the entire training dataset is used to compute the gradient and update the model parameter values, such as weights and biases, at each iteration. It is effective for convex or relatively smooth error manifolds because it moves directly toward an optimal solution by taking a large step in the direction of the negative gradient of the cost function. However, because it computes the gradient over the entire training dataset at each iteration, it can be slow for large datasets, resulting in longer training times and higher computational costs, though it may lead to a more accurate model.

● Stochastic Gradient Descent (SGD): In SGD, only one training example is used to compute

the gradient and update the parameters at each iteration. This can be faster than batch
gradient descent but may lead to more noise in the updates.

● Mini-batch Gradient Descent: In Mini-batch gradient descent a small batch of training

examples is used to compute the gradient and update the parameters at each iteration. This
can be a good compromise between batch gradient descent and Stochastic Gradient Descent,
as it can be faster than batch gradient descent and less noisy than Stochastic Gradient
Descent.

● Momentum-based Gradient Descent: Momentum is a variant of gradient descent that incorporates information from the previous weight updates to help the algorithm converge more quickly to the optimal solution. Momentum adds a term to the weight update that is proportional to the running average of the past gradients, allowing the algorithm to move more quickly in the direction of the optimal solution. The updates to the parameters are based on the current gradient and the previous updates. This can help prevent the optimization process from getting stuck in local minima and reach the global minimum faster.

● Nesterov Accelerated Gradient (NAG): NAG is an extension of Momentum Gradient

Descent. It evaluates the gradient at a hypothetical position ahead of the current position
based on the current momentum vector, instead of evaluating the gradient at the current
position. This can result in faster convergence and better performance.

● Adagrad: In this variant, the learning rate is adaptively adjusted for each parameter based on

the historical gradient information. This allows for larger updates for infrequent parameters
and smaller updates for frequent parameters.

● RMSprop: In this variant, the learning rate is adaptively adjusted for each parameter based

on the moving average of the squared gradient. This helps the algorithm to converge faster
in the presence of noisy gradients.

● Adam: Adam stands for adaptive moment estimation. It combines the benefits of Momentum-based Gradient Descent, Adagrad, and RMSprop: the learning rate is adaptively adjusted for each parameter based on the moving average of the gradient and the squared gradient, which allows for faster convergence and better performance on non-convex optimization problems. It keeps track of two exponentially decaying averages: the first-moment estimate, which is the exponentially decaying average of past gradients, and the second-moment estimate, which is the exponentially decaying average of past squared gradients. The first-moment estimate is used to calculate the momentum, and the second-moment estimate is used to scale the learning rate for each parameter. This is one of the most popular optimization algorithms for deep learning.

3) Advantages of gradient descent and its variants:

1. Widely used: Gradient descent and its variants are widely used in machine learning and optimization problems because they are effective and easy to implement.

2. Convergence: Gradient descent and its variants can converge to a global minimum or a good local minimum of the cost function, depending on the problem and the variant used.

3. Scalability: Many variants of gradient descent can be parallelized and are scalable to large datasets and high-dimensional models.

4. Flexibility: Different variants of gradient descent offer a range of trade-offs between accuracy and speed, and can be adjusted to optimize the performance of a specific problem.
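The MATLAB script below applies the same batch gradient descent update to a model with three input variables: a bias column of ones is prepended to x, the coefficient vector c holds four entries (bias plus one coefficient per input), the cost at each iteration is stored in costHistory and plotted, and finally the fitted model is evaluated on the input [1; 2; 3] (with a leading 1 for the bias term).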

clear all;
close all;
clc;
%dataSet = load('TestDataSet.txt');
dataSet=[1 1 1 2;2 2 2 4;3 3 3 6;4 4 4 8;5 5 5 10] % columns: x1 x2 x3 y
x = dataSet(:, 1:3); % input variables
y = dataSet(:, 4); % target variable
L = 0.01; % learning rate
x = [ones(length(x), 1) x]; % prepend a bias column of ones
c=[0 0 0 0]; % initial parameter vector [bias c1 c2 c3]
costHistory = zeros(5000, 1); % cost recorded at each iteration
for i=1:5000
h=c*x'; % predictions with current parameters (1 x n)
c(:,1)=c(:,1)-L/5*sum((h-y')*x(:,1)); % update bias term (division by 5 = number of samples)
c(:,2)=c(:,2)-L/5*sum((h-y')*x(:,2)); % update coefficient of x1
c(:,3)=c(:,3)-L/5*sum((h-y')*x(:,3)); % update coefficient of x2
c(:,4)=c(:,4)-L/5*sum((h-y')*x(:,4)); % update coefficient of x3
costHistory(i) = (x * c' - y)' * (x * c' - y) / (2 * length(y)); % mean-squared-error cost
end

figure;
plot(1:5000, costHistory); % plot cost vs. iteration
inputs = [1;1; 2; 3]; % [bias; x1; x2; x3]
output = c * inputs; % predicted y for the new inputs
disp(output);

Conclusion:

Quiz:

a) What is the range of the output values of a sigmoid function?

b) What are problems where discrete-valued outputs are predicted called?
c) How are the parameters updated during the gradient descent process?
d) What is the advantage of using an iterative algorithm like gradient descent?

Suggested Reference:

https://www.mathworks.com/

References used by the students: (Sufficient space to be provided)

Rubric wise marks obtained:

Rubrics 1 2 3 4 5 Total

Marks

Rubrics:

1. Group work while writing MATLAB® commands on the computer.

2. Understand the concept while performing coding.
3. Individual work done while executing MATLAB® commands.
4. Analysis and comparison with the software and manual steps.
5. Quiz answers and submission in time.

EXPERIMENT-6

Implementation of logistic regression for binary classification

Date:

Relevant CO: CO2

Objectives: To understand the types of regression techniques for machine learning

Theory:

Logistic regression is one of the most popular machine learning algorithms, and it comes under the supervised learning technique. It is used for predicting a categorical dependent variable from a given set of independent variables.
Logistic regression predicts the output of a categorical dependent variable. Therefore the outcome must be a categorical or discrete value. It can be either Yes or No, 0 or 1, True or False, etc.; but instead of giving the exact values 0 and 1, it gives probabilistic values which lie between 0 and 1.
Logistic regression is very similar to linear regression, except in how the two are used. Linear regression is used for solving regression problems, whereas logistic regression is used for solving classification problems.
In logistic regression, instead of fitting a regression line, we fit an "S"-shaped logistic function, which predicts two maximum values (0 or 1).
The curve from the logistic function indicates the likelihood of something, such as whether cells are cancerous or not, or whether a mouse is obese or not based on its weight.
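The "S"-shaped curve mentioned above is the sigmoid (logistic) function, sigmoid(z) = 1 / (1 + e^(−z)), applied to a linear combination of the inputs. A minimal Python sketch of the model form is given below; the coefficient values w and b are placeholders chosen for illustration, not fitted values:

import math

def sigmoid(z):
    # Logistic function: maps any real number z to a probability in (0, 1)
    return 1 / (1 + math.exp(-z))

# p(y = 1 | x) for a single feature x, with illustrative (not fitted) coefficients
w, b = 0.05, -2.0
for x in [20, 40, 60]:
    print(x, sigmoid(w*x + b))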
Steps in Logistic Regression: To implement logistic regression using Python, we will use the same steps as in the previous regression topics. Below are the steps:

● Data Pre-processing step

● Fitting Logistic Regression to the Training set

● Predicting the test result

● Test accuracy of the result(Creation of Confusion matrix)

● Visualizing the test set result.

Problem Statement:

Apply binary logistic regression and predict whether a person buys insurance or not. Consider the insurance data below for training.

Age Bought_insurance

22 0

25 0

47 1

52 0

46 1

56 1

55 0

60 1

62 1

61 1

18 0

28 0

27 0

29 0

49 1

55 1

25 1

58 1

19 0

18 0

21 0

26 0

40 1

45 1

50 1

54 1

23 0

Python Code & Results:

import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline
Result:
df = pd.read_csv("insurance_data.csv")
df.head()
Result:

plt.scatter(df.age,df.bought_insurance,marker='+',color='red')

Result:

from sklearn.model_selection import train_test_split


X_train, X_test, y_train, y_test = train_test_split(df[['age']],df.bought_insurance,train_size=0.8)
X_test
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
Result:

model.fit(X_train, y_train)
Result:

X_test
Result:

y_predicted = model.predict(X_test)
Result:

model.predict_proba(X_test)
Result:

model.score(X_test,y_test)
Result:

y_predicted
Result:

X_test
Result:

model.coef_
Result:

model.intercept_
Result:

import math
def sigmoid(x):
return 1 / (1 + math.exp(-x))
Result:

def prediction_function(age):
z = 0.042 * age - 1.53 # 0.04150133 ~ 0.042 and -1.52726963 ~ -1.53
y = sigmoid(z)
return y
age = 35
prediction_function(age)
Result:

age = 43
prediction_function(age)

Result:


*Attach above stepwise results separately.

Conclusion:

Quiz:

1. What is logistic regression?

2. What are the three types of logistic regression?
3. What is the difference between linear regression and logistic regression?

Suggested Reference:
1. https://www.javatpoint.com/multiple-linear-regression-in-machine-learning

References used by the students:

Rubric wise marks obtained:

Rubrics 1 2 3 4 5 Total

Marks

Rubrics:
1. Group work while performing experiment on computer.
2. Understand the concept while performing coding.
3. Individual work done while performing coding.
4. Analysis of the method and program.
5. Quiz answers and submission in time

EXPERIMENT-7

Implementation of Single layer ANN

Date:

Relevant CO: CO3

Objectives: To study the coding for generation of Artificial Neural Network

Theory:

Neural networks, also known as neural nets, are a type of algorithm in machine learning and artificial intelligence that works in much the same way as the human brain operates. The artificial neurons in a neural network mimic the behaviour of neurons in the human brain. Neural networks are used in business risk analysis, sales forecasting, and many other applications. Neural networks are adaptable to changing inputs, so there is no need to redesign the algorithm whenever the inputs change.

Types of Neural Networks

Neural networks can be classified into multiple types based on their depth, activation filters, structure, neurons used, neuron density, data flow, and so on. The types of neural networks are as follows:

● Perceptron

● Feed Forward Neural Networks

● Convolutional Neural Networks

● Radial Basis Function Neural Networks



● Recurrent Neural Networks

● Sequence to Sequence Model

● Modular Neural Network

Depending upon the number of layers, there are two types of neural networks:

1. Single Layered Neural Network: A single layer neural network contains an input layer and an output layer. The input layer receives the input signals and the output layer generates the output signals accordingly.
2. Multilayer Neural Network: A multilayer neural network contains an input layer, an output layer and one or more hidden layers. The hidden layers perform intermediate computations before passing the input on to the output layer.

Single Layered Neural Network

A single-layered neural network, often called a perceptron, is a type of feed-forward neural network made up of input and output layers. The inputs provided are multi-dimensional. Perceptrons are acyclic in nature. The sum of the products of the weights and the inputs is calculated at each node. The input layer transmits the signals to the output layer, which performs the computation. A perceptron can learn only a linear function and requires relatively little training effort. Its output can be represented by one of two values (0 or 1).

Problem Statement:
Create a single layer neural network with Python as per the following diagram. Assume x1, x2 (inputs), w1, w2 (weights), activation function = sigmoid, target output y0 = 1. Compute the error.

Python Code & Results:

x1 = 0.3
x2 = 0.5
w1 = 0.2
w2 = 0.1
y = 1   # target output
z = w1*x1 + w2*x2   # weighted sum of the inputs
print("z : ", z)
import math
def sigmoid(x):
    return 1/(1 + math.exp(-x))   # sigmoid activation function
a = sigmoid(z)   # actual output of the neuron
print("a : ", a)
y_head = a
error = 0.5*(y - y_head)**2   # squared error between target and actual output
print("error : ", error)

Output:

Z (weighted sum of inputs) =

Y_head (actual output) =

Error=

Conclusion:

Quiz:
1. What is the difference between deep learning and an artificial neural network?
2. What is the difference between forward propagation and backward propagation?
3. How do neural networks get the optimal weight and bias values?

Suggested Reference:
https://www.geeksforgeeks.org/single-layered-neural-networks-in-r-programming/

References used by the students:

Rubric wise marks obtained:

Rubrics 1 2 3 4 5 Total

Marks

Rubrics:
1. Group work while performing experiment on computer.
2. Understand the concept while performing coding.
3. Individual work done while performing coding.
4. Analysis of the method and program.
5. Quiz answers and submission in time

Experiment-8

Implementation of Normal Equation using MATLAB Software

Date:

Competency and Practical Skills: To develop model development and evaluation skills, and problem-solving and critical thinking skills.

Prerequisites: Before proceeding with this tutorial, you must have a basic understanding of the
linear model in regression and adequate knowledge of mathematics.

Relevant CO: CO2

Objectives: (a) Understand the basics of MATLAB®


(b) Understand basic commands

Equipment/Instruments: MATLAB®

Theory:

1) Introduction:

● The Normal Equation is an analytical approach to linear regression with a least square cost function. We can use the normal equation to directly compute the parameters of a model that minimize the sum of the squared differences between the actual and the predicted values.

● This method is quite useful when the dataset is small. However, with a large dataset, it may not be able to give us

the best parameter of the model.


The Normal Equation is as follows:

θ = (X^T X)^(-1) X^T y

where
X = input feature matrix (with a leading column of ones for the intercept term)
y = output value vector
θ = the vector of model parameters
If the term X^T X is non-invertible or singular, then we can use regularization (or the pseudo-inverse, as the pinv call in the code below does).
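For comparison with the MATLAB listing below, the same computation can be written in a few lines of NumPy. This is a minimal sketch; since TestDataSetMV.txt is not reproduced in this manual, the dataset here is made up (it reuses the small table from Experiment 5):

import numpy as np

# Made-up dataset: columns x1, x2, x3 and target y (same table as Experiment 5)
data = np.array([[1, 1, 1, 2],
                 [2, 2, 2, 4],
                 [3, 3, 3, 6],
                 [4, 4, 4, 8],
                 [5, 5, 5, 10]], dtype=float)
X = np.column_stack([np.ones(len(data)), data[:, :3]])  # prepend bias column
y = data[:, 3]

# Normal equation theta = (X^T X)^(-1) X^T y, via the pseudo-inverse
theta = np.linalg.pinv(X.T @ X) @ (X.T @ y)
print("parameters:", theta)
print("prediction for inputs [12, 12, 12]:", np.r_[1, 12, 12, 12] @ theta)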

close all;

clc;

dataSet = load('TestDataSetMV.txt'); % columns: x1 x2 x3 y

x = dataSet(:, 1:3); % input features

y = dataSet(:, 4); % output values

x = [ones(length(x), 1) x]; % prepend a bias column of ones

c=pinv(x'*x)*(x'*y) % normal equation; pinv handles a singular x'*x

inputs = [1;12;12;12]; % [bias; x1; x2; x3]

output = c' * inputs; % predicted output

disp(output);

Output:

Conclusion:

Quiz:

a) Which of the following metrics can be used for evaluating regression models?
i) R Squared
ii) Adjusted R Squared
iii) F Statistics
iv) RMSE / MSE / MAE
b) How many coefficients do you need to estimate in a simple linear regression model (one independent variable)?
i. 1
ii. 2
iii. 3
iv. 4

c) Function used for linear regression in R is __________


i. lm(formula, data)
ii. lr(formula, data)
iii. lrm(formula, data)
iv. regression.linear(formula, data)

d) In syntax of linear model lm(formula,data,..), data refers to ______


i. Matrix
ii. Vector
iii. Array
iv. List

Suggested Reference:

https://www.mathworks.com/

References used by the students: (Sufficient space to be provided)

Rubric wise marks obtained:

Rubrics 1 2 3 4 5 Total

Marks

Rubrics:

1. Group work while writing MATLAB ® Commands on computer.


2. Understand the concept while performing coding.
3. Individual work done while Executing MATLAB ® Commands.
4. Analysis and comparison with the software and manual steps.
5. Quiz answers and submission in time.

Experiment-9

To study a neural network as a sine wave function approximator.

Date:

Competency and Practical Skills: To develop programming, mathematics, and statistics
skills.

Prerequisites: Before proceeding with this tutorial, you must have:

1) basic knowledge of mathematics, including algebra, calculus, and probability;
2) proficiency in programming, especially in languages such as Python or R;
3) familiarity with machine learning concepts, including supervised and unsupervised
learning, overfitting, and the bias/variance trade-off;
4) a strong foundation in computer science principles, such as data structures and algorithms.
Relevant CO: CO3

Objectives: (a) Understand the basics of MATLAB®


(b) Understand basic commands

Equipment/Instruments: MATLAB®

Theory:

A neural network combines several processing layers, using simple elements operating in
parallel and inspired by biological nervous systems. It consists of an input layer, one or
more hidden layers, and an output layer. The layers are interconnected via nodes, or
neurons, with each layer using the output of the previous layer as its input.

Techniques Used with Neural Networks

Common machine learning techniques for designing neural network applications include
supervised and unsupervised learning, classification, regression, pattern recognition, and
clustering.

1) Supervised Learning
Supervised neural networks are trained to produce desired outputs in response to sample
inputs, making them particularly well suited to modeling and controlling dynamic
systems, classifying noisy data, and predicting future events. Deep Learning
Toolbox™ includes four kinds of supervised networks: feedforward, radial basis,
dynamic, and learning vector quantization.
2) Regression

Regression models describe the relationship between a response (output)


variable and one or more predictor (input) variables.

3) Pattern Recognition
Pattern recognition is an important component of neural network applications in computer
vision, radar processing, speech recognition, and text classification. It works by
classifying input data into objects or classes based on key features, using
either supervised or unsupervised classification.
For example, in computer vision, supervised pattern recognition techniques are used for
optical character recognition (OCR), face detection, face recognition, object
detection, and object classification. In image processing and computer vision,
unsupervised pattern recognition techniques are used for object detection and image segmentation.
4) Unsupervised Learning
Unsupervised neural networks are trained by letting the neural network continually adjust
itself to new inputs. They are used to draw inferences from data sets consisting of input
data without labeled responses. You can use them to discover natural distributions,
categories, and category relationships within data. Deep Learning Toolbox includes two
types of unsupervised networks: competitive layers and self-organizing maps.
5) Clustering
Clustering is an unsupervised learning approach in which neural networks can be used for
exploratory data analysis to find hidden patterns or groupings in data. This procedure
involves grouping data by similarity. Applications for cluster analysis include gene
sequence analysis, market research, and object recognition.

WHAT IS BACKPROPAGATION IN A NEURAL NETWORK?

Backpropagation is the essence of neural network training. It is the method of fine-tuning
the weights of a neural network based on the error rate obtained in the previous epoch (i.e., iteration). Proper
tuning of the weights allows you to reduce error rates and to make the model reliable by
increasing its generalization.
Backpropagation is short for "backward propagation of errors." It is a standard method
of training artificial neural networks. This method helps to calculate the gradient of a
loss function with respect to all the weights in the network.

HOW DOES BACKPROPAGATION WORK?

1. Inputs X arrive through the preconnected path.

2. The input is modeled using real weights W. The weights are usually chosen
randomly.
3. Calculate the output of each neuron from the input layer, through the hidden layers,
to the output layer.
4. Calculate the error in the outputs.
5. Travel back from the output layer to the hidden layers to adjust the weights such
that the error is decreased.
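Although this lab uses MATLAB's nntool rather than hand-written code, a minimal NumPy sketch of the five steps above is given here for reference. The toy data (the XOR truth table), the layer sizes, the learning rate, and the iteration count are all assumptions made purely for illustration:

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # step 1: inputs X
y = np.array([[0.], [1.], [1.], [0.]])                  # targets (XOR, assumed)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))     # step 2: random weights
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))
sigmoid = lambda t: 1/(1 + np.exp(-t))
lr = 0.5                                                # assumed learning rate

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)            # step 3: input layer -> hidden layer
    out = sigmoid(h @ W2 + b2)          #         hidden layer -> output layer
    err = out - y                       # step 4: error in the outputs
    # step 5: propagate the error backward and adjust the weights
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0, keepdims=True)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0, keepdims=True)

print(np.round(out.ravel(), 2))         # should approach 0, 1, 1, 0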

Why Do We Need Backpropagation?

● Backpropagation is fast, simple, and easy to program.

● It has no parameters to tune apart from the number of inputs.

● It is a flexible method, as it does not require prior knowledge about the network.

● It is a standard method that generally works well.

● It does not need any special mention of the features of the function to be
learned.

What is a Feedforward Network?

A feedforward neural network is an artificial neural network in which the nodes never form
a cycle. This kind of neural network has an input layer, hidden layers, and an output layer. It is the
first and simplest type of artificial neural network.

Kinds of Backpropagation Networks

Two types of backpropagation networks are:

1) Static backpropagation
This is a kind of backpropagation network which produces a mapping of a static input
to a static output. It is useful for solving static classification problems like optical
character recognition.
2) Recurrent backpropagation
In recurrent backpropagation, the signal is fed forward until a fixed value is achieved.
After that, the error is computed and propagated backward.

The main difference between these two methods is that the mapping is static in static
backpropagation, while it is non-static in recurrent backpropagation.

Using NNTOOL for Creating Neural Network in MATLAB


The tool used in MATLAB is nntool, which is used for creating and training your neural network.

WHICH TRAINING FUNCTION SHOULD YOU USE?

MATLAB provides a number of training functions, some of which are listed below.

1. trainlm

Levenberg-Marquardt backpropagation

Syntax

net.trainFcn = 'trainlm'
[net,tr] = train(net,...)

Limitations

This function uses the Jacobian for calculations, which assumes that performance is a
mean or sum of squared errors. Therefore, networks trained with this function must use
either the mse or sse performance function.

2. trainbfg

BFGS quasi-Newton backpropagation

Syntax

net.trainFcn = 'trainbfg'
[net,tr] = train(net,...)

3. trainrp

Resilient backpropagation

Syntax

net.trainFcn = 'trainrp'
[net,tr] = train(net,...)

4. trainscg

Scaled conjugate gradient backpropagation

Syntax

net.trainFcn = 'trainscg'
[net,tr] = train(net,...)

5. traingdx

Gradient descent with momentum and adaptive learning rate backpropagation

Syntax

net.trainFcn = 'traingdx'
[net,tr] = train(net,...)

STEPS INVOLVE TO CREATE A NEURAL NETWORK IN MATLAB

1. Open Matlab.
Artificial Intelligent and Machine Learning (3161710)

2. In the workspace create input and upload your input data as shown in the video.After
uploading do transpose .

3.Now create target in workspace and upload your target data and then transpose.

4. Now create sample and upload those data for which you want to predict your answers.

5.In the command window write nntool and press enter, neural network dialogue box will get
open

6.Import your data as shown in the video below.

7.Now create your neural network. select feed forward backpropogation in network type,in input
data select input and in target data select target, use trainlm as your training functionand learngdm
as your learning function.

8.select transfer function as Tansig for both layers .

9. now train your curve .

10. Try to bring your regression curve coinciding with other lines with R= 0.98 to 0.99.

11. Once regression curve has come nicely go for simulation

12. While simulating in input give sample and simulate your answers will get predicted.

● Script:

load sin_nn                % load the network trained with nntool (saved as network1)
n = 2*pi;
x = 0:0.1*pi:n;            % one period sampled at intervals of 0.1*pi
y = sin(x);                % target sine wave
y1 = network1(x);          % trained network's approximation
plot(y)
hold on
plot(y1)
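For readers without the saved sin_nn network, the following NumPy sketch fits sin(x) with a single tanh (tansig-like) hidden layer trained by plain gradient descent; the hidden-layer size, learning rate, and iteration count are assumed values, not ones prescribed by the lab:

import numpy as np
import matplotlib.pyplot as plt

x = np.arange(0, 2*np.pi, 0.1*np.pi).reshape(-1, 1)  # same grid as the script above
y = np.sin(x)
xn = (x - np.pi) / np.pi                             # normalize inputs to [-1, 1)

rng = np.random.default_rng(1)
H = 10                                               # hidden neurons (assumed)
W1 = rng.normal(size=(1, H)); b1 = np.zeros((1, H))
W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros((1, 1))
lr = 0.1                                             # assumed learning rate

for _ in range(20000):
    h = np.tanh(xn @ W1 + b1)           # tansig-like hidden layer
    y_hat = h @ W2 + b2                 # linear output layer
    g = (y_hat - y) / len(xn)           # mean-squared-error gradient w.r.t. y_hat
    d_h = (g @ W2.T) * (1 - h**2)       # backpropagate through tanh
    W2 -= lr * h.T @ g;  b2 -= lr * g.sum(0, keepdims=True)
    W1 -= lr * xn.T @ d_h; b1 -= lr * d_h.sum(0, keepdims=True)

plt.plot(x, y, label='sin(x)')
plt.plot(x, y_hat, '--', label='network output')
plt.legend(); plt.show()

Increasing the hidden-layer size or the iteration count tightens the fit, much like pushing R toward 0.99 in nntool.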

● Nntool:

The regression value R is about 0.99, which is acceptable.


● Actual output versus trained network output plot:

Conclusion:

Quiz:

a) Why do we need biological neural networks?

i. to solve tasks like machine vision & natural language processing
ii. to apply heuristic search methods to find solutions of problems
iii. to make smart, human-interactive & user-friendly systems
iv. all of the mentioned

b) What is the main point of difference between human & machine intelligence?
i. humans perceive everything as a pattern while machines perceive it merely as data
ii. humans have emotions
iii. humans have more IQ & intellect
iv. humans have sense organs

c) Does pattern classification belong to the category of non-supervised learning?

i. yes
ii. no

d) What is plasticity in neural networks?

i. input pattern keeps on changing
ii. input pattern has become static
iii. output pattern keeps on changing
iv. output is static

Suggested Reference:

https://www.mathworks.com/

References used by the students: (Sufficient space to be provided)

Rubric wise marks obtained:

Rubrics 1 2 3 4 5 Total

Marks

Rubrics:

1. Group work while writing MATLAB® Commands on computer.


2. Understand the concept while performing coding.
3. Individual work done while executing MATLAB® Commands.
4. Analysis and comparison with the software and manual steps.
5. Quiz answers and submission in time.

Experiment-10

To study Neural Network based controller design.

Date:

Competency and Practical Skills: To develop model development and evaluation skills;
problem-solving and critical thinking skills; and programming, mathematics, and statistics
skills.
Prerequisites: Before proceeding with this tutorial, you must have a strong foundation in
computer science principles, such as data structures and algorithms.
Relevant CO: CO3

Objectives: (a) Understand the basics of MATLAB®


(b) Understand basic commands

Equipment/Instruments: MATLAB®

Theory:

A proportional-integral-derivative (PID) controller is a control loop mechanism widely used in industrial control
systems and other applications that require continuously modulated control.

As a feedback controller, the PID controller’s core purpose is to force feedback to match a set
point, for instance, a climate control system that forces a heating or cooling unit to turn on or off
based on a set ambient temperature. It is also used to regulate other process variables such as
velocity, pressure, and flow.
PID control can be said to work much in the same way as the cruise control system on a car,
where external influences such as steep hills would decrease speed. In this application, a PID

system would use an algorithm to restore the vehicle to its set point speed by controlling the
power output of the vehicle’s engine as it travels up and over the hill.
Before the invention of microprocessors, PID control was achieved by analog electronic
components. Today, however, PID controllers are typically implemented on microprocessors.
The working principles of the PID controller:
The working principles of the PID controller are best understood by breaking it down and
analyzing each element individually.
Proportional gain—This gives an output that is proportional to current error (the difference
between what you want and what you have). If, for example, a room’s temperature set point is
22°C, but the room is actually at 24°C (an error of 2°C) the controller applies a corrective action
that is proportional to the error—in this scenario, it may turn the heat to a medium setting,
whereas it would set it to a high setting if the room’s temperature was say 15°C.
Integral gain—This is multiplied by the integral (or sum) of the error, which captures the error's history.
Returning to the temperature example, the system may detect that 10 minutes were spent at
24°C, 10 at 23°C, and 10 at 22°C as the room cools down. If a system is heading too slowly to a
set point, the integral error begins to increase and makes the correction stronger, in this case,
perhaps by turning the heater down to a lower level.
Derivative—This is how fast the error is changing. With the temperature example, the room is
cooled by 1°C every 10 minutes, which means that the error is shrinking by 1°C every 10 minutes.
By keeping track of this, overshoot—such as making the room too cold—can be avoided,
improving system efficiency.
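Before turning to the MATLAB tooling, a minimal discrete-time sketch of the three terms just described may help. The plant model (a "leaky" room warmed by a heater) and the gains are assumed values chosen only for illustration:

dt = 0.1
Kp, Ki, Kd = 2.0, 0.5, 0.1    # assumed gains
setpoint = 22.0               # desired room temperature (deg C)
temp = 15.0                   # initial room temperature
integral = 0.0
prev_error = setpoint - temp

for step in range(600):
    error = setpoint - temp                     # proportional: current error
    integral += error * dt                      # integral: accumulated error history
    derivative = (error - prev_error) / dt      # derivative: rate of change of error
    u = Kp*error + Ki*integral + Kd*derivative  # controller output (heater power)
    prev_error = error
    # assumed plant: heater adds heat, and the room leaks heat to 10 deg C outside
    temp += dt * (0.5*u - 0.1*(temp - 10.0))

print("final temperature: %.2f degC" % temp)

The integral term removes the steady-state offset that the proportional term alone would leave, which is exactly the behavior described above.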
PID tuning in MATLAB:
PID Tuner provides a fast and widely applicable single-loop PID tuning method for the
Simulink® PID Controller blocks. With this method, you can tune PID controller parameters to
achieve a robust design with the desired response time.
A typical design workflow with the PID Tuner involves the following tasks:
(1) Launch the PID Tuner. When launching, the software automatically computes a linear
plant model from the Simulink model and designs an initial controller.
(2) Tune the controller in the PID Tuner by manually adjusting design criteria in two
design modes. The tuner computes PID parameters that robustly stabilize the system.
(3) Export the parameters of the designed controller back to the PID Controller block and
verify controller performance in Simulink.
To launch the PID Tuner, double-click the PID Controller block to open its block dialog.
In the Main tab, click Tune.

Initial PID Design


When the PID Tuner launches, the software computes a linearized plant model seen by
the controller. The software automatically identifies the plant input and output, and uses the
current operating point for the linearization. The plant can have any order and can have time
delays.
The PID Tuner computes an initial PI controller to achieve a reasonable tradeoff between
performance and robustness. By default, step reference tracking performance displays in the plot.
The following figure shows the PID Tuner dialog with the initial design:

Display PID Parameters


Click Show parameters to view controller parameters P and I, and a set of performance and
robustness measurements. In this example, the initial PI controller design gives a settling time of 2
seconds, which meets the requirement.

Using NNTOOL for Creating Neural Network in MATLAB


The tool used in MATLAB is nntool, which is used for creating and training your neural network.

WHICH TRAINING FUNCTION SHOULD YOU USE?

MATLAB provides a number of training functions, some of which are listed below.

1. trainlm

Levenberg-Marquardt backpropagation

Syntax

net.trainFcn = 'trainlm'
[net,tr] = train(net,...)

Limitations

This function uses the Jacobian for calculations, which assumes that performance is a
mean or sum of squared errors. Therefore, networks trained with this function must use
either the mse or sse performance function.

2. trainbfg

BFGS quasi-Newton backpropagation

Syntax

net.trainFcn = 'trainbfg'
[net,tr] = train(net,...)

3. trainrp

Resilient backpropagation

Syntax

net.trainFcn = 'trainrp'
[net,tr] = train(net,...)

4. trainscg

Scaled conjugate gradient backpropagation

Syntax

net.trainFcn = 'trainscg'
[net,tr] = train(net,...)

5. traingdx

Gradient descent with momentum and adaptive learning rate backpropagation

Syntax

net.trainFcn = 'traingdx'
[net,tr] = train(net,...)

STEPS INVOLVED IN CREATING A NEURAL NETWORK IN MATLAB

1. Open MATLAB.

2. In the workspace, create an input variable and upload your input data. After uploading,
take the transpose.

3. Now create a target variable in the workspace, upload your target data, and take the transpose.

4. Now create a sample variable and upload the data for which you want to predict the answers.

5. In the command window, type nntool and press Enter; the Neural Network dialog box will
open.

6. Import your data into nntool.

7. Now create your neural network: select feed-forward backpropagation as the network type, select
input as the input data and target as the target data, use trainlm as your training function and learngdm
as your learning function.

8. Select tansig as the transfer function for both layers.

9. Now train your network.

10. Try to bring the regression curve to coincide with the other lines, with R = 0.98 to 0.99.

11. Once the regression curve has come out nicely, go for simulation.

12. While simulating, give sample as the input and simulate; the answers will be predicted.

Implementation:

● A second-order system is assumed, and a PID controller is auto-tuned for the system's control.

● The system's error and the controller's output are both imported into the workspace.

● A neural network is trained from this data, and the trained network is imported into Simulink.

● PID controller output versus trained network output plot:

Conclusion:

Quiz:
1. What are the advantages of neural networks over conventional computers?
2. For what purpose is a Hamming network suitable?
3. Can you briefly explain the function of a PID controller?
4. Why is tuning important in a PID controller?

5. How is a PID controller implemented in software?

Suggested Reference:

https://www.mathworks.com/

References used by the students: (Sufficient space to be provided)

Rubric wise marks obtained:

Rubrics 1 2 3 4 5 Total

Marks

Rubrics:

1. Group work while writing MATLAB ® Commands on computer.


2. Understand the concept while performing coding.
3. Individual work done while Executing MATLAB ® Commands.
4. Analysis and comparison with the software and manual steps.
5. Quiz answers and submission in time.

Lab Manual

ARTIFICIAL INTELLIGENCE AND MACHINE


LEARNING
(3161710)

B.E. Semester 6
Prepared by:
Prof. K. S. Vashishtha
Assistant Professor
Instrumentation & Control Department
Government Engineering College, Sector-28, Gandhinagar

Prof. Sweta K. Parmar


Assistant Professor
Instrumentation & Control Department
Government Engineering College, Rajkot

Branch Coordinator:

Prof. Manish Thakkar


Professor
Instrumentation & Control Department
L. D. College of Engineering, Ahmedabad

Committee Chairman:
Dr N M Bhatt
Professor
Mechanical Engineering Department
L. E. College, Morbi
