Introduction To Machine Learning - Unit 4 - Week 2

This document is an assignment for an online machine learning course on NPTEL. It contains eight multiple-choice questions covering topics such as linear regression, correlation coefficients, parameter selection, and forward stepwise selection. Students are asked to select the best answer for each question; their submissions will be considered for grading.


Week 2 : Assignment 2
Your last recorded submission was on 2022-08-08, 12:53 IST. Due date: 2022-08-10, 23:59 IST.

1) The parameters obtained in linear regression (1 point)

can take any value in the real space
are strictly integers
always lie in the range [0, 1]
can take only non-zero values
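
For question 1, a minimal sketch on synthetic data (assuming NumPy is available) illustrates that ordinary least-squares parameters are unconstrained real numbers: they routinely come out negative and non-integer, and are not confined to [0, 1].

```python
# Hypothetical data generated from y = -2.7*x + 0.4 plus noise; the fitted
# slope and intercept come out as arbitrary real numbers.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = -2.7 * x + 0.4 + rng.normal(scale=0.5, size=x.shape)

# Fit y = a*x + b by least squares; np.polyfit returns [a, b].
a, b = np.polyfit(x, y, deg=1)
print(a, b)   # roughly -2.7 and 0.4: neither integers nor values in [0, 1]
```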

2) Suppose that we have N independent variables (X1, X2, ..., Xn) and the dependent variable is Y. Now imagine that you are applying linear regression by fitting the best-fit line using the least-squares error on this data. You find that the correlation coefficient of one of the variables (say X1) with Y is -0.005. (1 point)

Regressing Y on X1 mostly does not explain away Y.
Regressing Y on X1 explains away Y.
The given data is insufficient to determine whether regressing Y on X1 explains away Y or not.
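
For question 2, recall that in simple linear regression the R² of regressing Y on a single variable equals the squared correlation coefficient. The sketch below (synthetic, essentially uncorrelated data, assuming NumPy) shows that a correlation near -0.005 corresponds to explaining only a vanishing fraction of the variance in Y.

```python
# Simulate a Y that is essentially unrelated to X1 and compare r with R^2.
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(size=10_000)
y = rng.normal(size=10_000)          # Y essentially unrelated to X1

r = np.corrcoef(x1, y)[0, 1]
print("correlation r =", r)
print("R^2 of Y ~ X1 =", r ** 2)     # tiny: X1 explains almost none of Y's variance

# At the value given in the question, r = -0.005, R^2 would be 0.000025,
# i.e. about 0.0025% of the variance in Y.
print("R^2 at r = -0.005:", (-0.005) ** 2)
```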

3) Consider the following five training examples (2 points)

[Table of five training examples shown in the original assignment; not reproduced here.]

We want to learn a function f(x) of the form f(x) = ax + b, which is parameterised by (a, b). Using mean squared error as the loss function, which of the following parameters would you use to model this function to get a solution with the minimum loss?

(4, 3)
(1, 4)
(4, 1)
(3, 4)
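
The five training examples referred to in question 3 are in a table that did not survive extraction, so the data below is a placeholder; the sketch (assuming NumPy) only illustrates how each candidate (a, b) would be scored by mean squared error, with the answer being whichever pair minimises it on the actual data.

```python
# Placeholder training examples -- replace with the five (x, y) pairs from the
# question's table before reading anything into the printed numbers.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

candidates = [(4, 3), (1, 4), (4, 1), (3, 4)]

def mse(a, b):
    """Mean squared error of the line f(x) = a*x + b on the training data."""
    pred = a * x + b
    return np.mean((y - pred) ** 2)

for a, b in candidates:
    print((a, b), "MSE =", mse(a, b))
```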

4) The relation between studying time (in hours) and grade on the final examination (0-100) in a random sample of students in the Introduction to Machine Learning class was found to be: (1 point)

Grade = 30.5 + 15.2 (h)

How will a student's grade be affected if she studies for four hours?

It will go down by 30.4 points.
It will go up by 60.8 points.
The grade will remain unchanged.
It cannot be determined from the information given.
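
Question 4 is a direct reading of the slope: each additional hour of study contributes 15.2 points, so four hours contribute 4 × 15.2 = 60.8 points relative to not studying at all. A two-line check:

```python
# Evaluate the given model Grade = 30.5 + 15.2*h at h = 4 hours.
intercept, slope = 30.5, 15.2
hours = 4
print("predicted grade:", intercept + slope * hours)   # 30.5 + 60.8 = 91.3
print("change vs. zero hours of study:", slope * hours)  # 60.8 points
```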
5) Which of the statements is/are True? (1 point)

Ridge has sparsity constraint, and it will drive coefficients with low values to 0.
Lasso has a closed form solution for the optimization problem, but this is not the case for Ridge.
Ridge regression does not reduce the number of variables since it never leads a coefficient to zero but only minimizes it.
If there are two or more highly collinear variables, Lasso will select one of them randomly.
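
For question 5, a small sketch (synthetic data, assuming NumPy and scikit-learn are available) contrasts the two penalties: Ridge shrinks coefficients toward zero but rarely makes them exactly zero, while Lasso produces exact zeros and, with near-collinear predictors, tends to keep one of the pair.

```python
# Two nearly collinear features (x1, x2) plus one independent feature (x3).
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)   # nearly collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 3.0 * x1 + 0.5 * x3 + rng.normal(scale=0.1, size=n)

print("Ridge coefficients:", Ridge(alpha=1.0).fit(X, y).coef_)
print("Lasso coefficients:", Lasso(alpha=0.1).fit(X, y).coef_)
# Typical outcome: Ridge splits weight across x1 and x2 (none exactly zero);
# Lasso zeroes one of the collinear pair and keeps the other.
```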
6) Consider the following statements: (1 point)

Assertion (A): Orthogonalization is applied to the dimensions in linear regression.

Reason (R): Orthogonalization makes univariate regression possible in each orthogonal dimension separately to produce the coefficients.

Both A and R are true, and R is the correct explanation of A.
Both A and R are true, but R is not the correct explanation of A.
A is true, but R is false.
A is false, but R is true.
Both A and R are false.
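
For question 6, the Reason (R) statement can be checked numerically: after orthogonalizing a predictor against the others (successive orthogonalization, as in Gram-Schmidt), a univariate regression of y on the orthogonalized residual reproduces that predictor's multiple-regression coefficient. A sketch on synthetic data, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)          # correlated predictors
y = 2.0 * x1 - 1.5 * x2 + rng.normal(scale=0.1, size=n)

# Full multiple regression of y on [1, x1, x2].
X = np.column_stack([np.ones(n), x1, x2])
full_coefs, *_ = np.linalg.lstsq(X, y, rcond=None)   # [intercept, b1, b2]

# Orthogonalize x2 against {1, x1}: keep the residual z of regressing x2 on them.
Z1 = np.column_stack([np.ones(n), x1])
gamma, *_ = np.linalg.lstsq(Z1, x2, rcond=None)
z = x2 - Z1 @ gamma

# Univariate regression of y on the orthogonal residual z reproduces b2.
b2_univariate = (z @ y) / (z @ z)
print("b2 from full multiple regression:", full_coefs[2])
print("b2 from univariate regression on z:", b2_univariate)
```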

7) Consider the following statements: (1 point)

Statement A: In forward stepwise selection, in each step, the variable which has the maximum correlation with the residual is chosen; the residual is then regressed on that variable, and it is added to the predictor.

Statement B: In forward stagewise selection, the variables are added one by one to the previously selected variables to produce the best fit till then.

Both the statements are True.
Statement A is True, and Statement B is False.
Statement A is False, and Statement B is True.
Both the statements are False.
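
For question 7, the sketch below (synthetic data, assuming NumPy) follows the usual description of forward stepwise selection: pick the variable most correlated with the current residual, add it to the active set, refit on the active set, and recompute the residual. Forward stagewise, by contrast, takes many small coefficient updates without refitting. This only illustrates the procedures the two statements describe, not which statement is true.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 6
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(scale=0.1, size=n)

active, residual = [], y.copy()
for _ in range(3):
    # Absolute correlation of each unused variable with the current residual.
    corrs = [abs(np.corrcoef(X[:, j], residual)[0, 1]) if j not in active else 0.0
             for j in range(p)]
    j_best = int(np.argmax(corrs))
    active.append(j_best)
    # Refit least squares on all selected variables, then update the residual.
    coefs, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
    residual = y - X[:, active] @ coefs
    print("added variable", j_best, "-> active set", active)
```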

8) The linear regression model y = a0 + a1 x1 + a2 x2 + ... + ap xp is to be fitted to a set of N training data points having p attributes each. Let X be the N × (p + 1) matrix of input values (augmented by 1's), Y be the N × 1 vector of target values, and θ be the (p + 1) × 1 vector of parameter values (a0, a1, a2, ..., ap). If the sum of squared errors is minimized to obtain the optimal regression model, which of the following equations holds? (2 points)

X^T X = X Y
X θ = X^T Y
X^T X θ = Y
X^T X θ = X^T Y
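
For question 8, minimizing the sum of squared errors ||Y - Xθ||² leads to the normal equations, and a quick numerical check (assuming NumPy) confirms that solving X^T X θ = X^T Y reproduces the least-squares solution.

```python
# Build an augmented design matrix, then compare the normal-equation solution
# with NumPy's built-in least-squares solver.
import numpy as np

rng = np.random.default_rng(0)
N, p = 100, 3
A = rng.normal(size=(N, p))
X = np.column_stack([np.ones(N), A])                 # N x (p+1), augmented by 1's
theta_true = np.array([0.5, 2.0, -1.0, 3.0])
Y = X @ theta_true + rng.normal(scale=0.1, size=N)

theta_normal = np.linalg.solve(X.T @ X, X.T @ Y)     # solves X^T X θ = X^T Y
theta_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(theta_normal)
print(theta_lstsq)   # agrees with the normal-equation solution
```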
You may submit any number of times before the due date. The final submission will be considered for grading.
Submit Answers
