Unit 5 - Week 4: Assignment 4


25/05/2018 Deep Learning For Visual Computing - Unit 5 - Week 4


Assignment 4
The due date for submitting this assignment has passed. Due on 2018-02-21, 23:59 IST.
Submitted assignment

1) Given that En and Dn are the encoder and decoder blocks of the nth autoencoder in an SAE (stacked autoencoder), what is the sequence of blocks during end-to-end training of the SAE (with n = 2)? (1 point)

Input → E1 → D1 → E2 → D2 → Input
Input → E1 → E2 → D1 → D2 → Input
Input → E1 → E2 → D2 → D1 → Input
Any of the above

No, the answer is incorrect.
Score: 0
Accepted Answers:
Input → E1 → E2 → D2 → D1 → Input
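The accepted ordering nests the second autoencoder inside the first: encoders are applied in sequence, and the decoders unwind in reverse. A minimal sketch of that forward pass (NumPy, with hypothetical layer sizes 8 → 4 → 2):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights: autoencoder 1 maps 8 -> 4, autoencoder 2 maps 4 -> 2.
E1 = rng.standard_normal((4, 8))
D1 = rng.standard_normal((8, 4))
E2 = rng.standard_normal((2, 4))
D2 = rng.standard_normal((4, 2))

def sae_forward(x):
    """End-to-end SAE pass: Input -> E1 -> E2 -> D2 -> D1 -> reconstruction."""
    h1 = np.tanh(E1 @ x)   # first-level code
    h2 = np.tanh(E2 @ h1)  # second-level code (bottleneck)
    r1 = np.tanh(D2 @ h2)  # decode back to the first-level space
    return D1 @ r1         # reconstruction in the input space

x = rng.standard_normal(8)
assert sae_forward(x).shape == x.shape  # reconstruction matches input dimension
```

Because D2 must come before D1, the inner autoencoder's reconstruction is what the outer decoder consumes — which is why the other orderings break the nesting.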

2) Given a trained SAE (n = 2), how should the blocks be arranged for weight refinement (classification task)? (1 point)

Input → E1 → E2 → D1 → D2 → Logistic regression
Input → E1 → E2 → Logistic regression
Input → E1 → E2 → D1 → D2 → Logistic regression
Any of the above

No, the answer is incorrect.
Score: 0
Accepted Answers:
Input → E1 → E2 → Logistic regression
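For fine-tuning on a classification task, the decoders are discarded and a classifier head is attached to the deepest code. A sketch of that arrangement (hypothetical sizes, pretrained encoder weights stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pretrained encoder weights: 8 -> 4 -> 2.
E1 = rng.standard_normal((4, 8))
E2 = rng.standard_normal((2, 4))
B = rng.standard_normal(2)  # logistic-regression weights on the deepest code

def predict_proba(x):
    """Input -> E1 -> E2 -> Logistic regression (decoders discarded)."""
    h = np.tanh(E2 @ np.tanh(E1 @ x))        # deepest code
    return 1.0 / (1.0 + np.exp(-(B @ h)))    # sigmoid class probability

p = predict_proba(rng.standard_normal(8))
assert 0.0 < p < 1.0
```

During refinement, gradients from the classification loss flow back through B, E2, and E1, adapting the pretrained features to the labels.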

3) Given input x and a linear autoencoder (no bias) with random weights (W for the encoder and W′ for the decoder), what mathematical form is minimized to achieve the optimal weights? (1 point)

|x − (W′ ⋅ W ⋅ x)|
|x − (W′ ⋅ N_fl(W ⋅ x))|
|x − N_fl(W′ ⋅ N_fl(W ⋅ x))|
None of the above

No, the answer is incorrect.
Score: 0
Accepted Answers:
|x − (W′ ⋅ W ⋅ x)|

https://onlinecourses.nptel.ac.in/noc18_ee08/unit?unit=8&assessment=89
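A linear autoencoder has no nonlinearity N_fl anywhere, so the objective is purely the linear reconstruction error. The following check (with an illustrative square, invertible W) shows the loss reaching zero exactly when W′ ⋅ W = I:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(5)

W = rng.standard_normal((5, 5))  # square encoder, for illustration only
W_dec = np.linalg.inv(W)         # decoder W' chosen to exactly invert the encoder

loss = np.linalg.norm(x - W_dec @ (W @ x))  # |x - (W' . W . x)|
assert loss < 1e-8  # perfect reconstruction when W' . W = I
```

With a bottleneck (fewer hidden units than inputs), W′ ⋅ W can no longer equal I, and minimizing this same form recovers a PCA-like subspace projection.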

4) Given a linear autoencoder which encodes input x to z, what should be the learning arrangement of the second linear autoencoder (with weights W2 and W2′) for learning a hierarchically higher-level representation? (1 point)

|x − (W2′ ⋅ W2 ⋅ x)|
|z − (W2′ ⋅ W2 ⋅ z)|
|z − (W2 ⋅ W1 ⋅ x)|
None of the above

No, the answer is incorrect.
Score: 0
Accepted Answers:
|z − (W2′ ⋅ W2 ⋅ z)|

5) In a de-noising autoencoder, noise is added to the input for (1 point)

Avoiding overfitting
Robust feature extraction
Data augmentation
All of the above

No, the answer is incorrect.


Score: 0
Accepted Answers:
All of the above
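The defining trick of a denoising autoencoder is that corruption is applied only to the input, while the reconstruction target stays clean — which simultaneously regularizes (avoids overfitting), forces robust features, and acts as data augmentation. A minimal sketch of that training setup (illustrative shapes and noise level):

```python
import numpy as np

rng = np.random.default_rng(3)
x_clean = rng.standard_normal((10, 8))  # a batch of clean inputs

# Corrupt the input with Gaussian noise, but keep the CLEAN input as target.
x_noisy = x_clean + 0.1 * rng.standard_normal(x_clean.shape)
target = x_clean

# A denoising autoencoder is trained to map x_noisy back to target,
# so it cannot simply copy its input and must learn robust structure.
assert x_noisy.shape == target.shape
assert not np.allclose(x_noisy, target)  # input and target genuinely differ
```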

6) In a linear autoencoder (without a regularizer), if the hidden-layer perceptrons are equal in number to the input-layer perceptrons, then the encoder and decoder weights tend to learn (1 point)

Optimal representations
Identity matrix
Sparse representations
None of the above

No, the answer is incorrect.


Score: 0
Accepted Answers:
Identity matrix

7) The role of the regularizer in the cost function is to (1 point)

Avoid overfitting
Induce sparsity
Simpler hypothesis
All of the above

No, the answer is incorrect.


Score: 0
Accepted Answers:
All of the above
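A regularized cost is just the data-fit term plus a penalty on the weights. A minimal sketch with a hypothetical least-squares fit and an L2 penalty (λ is an assumed strength):

```python
import numpy as np

rng = np.random.default_rng(6)
w = rng.standard_normal(5)
X = rng.standard_normal((20, 5))
y = rng.standard_normal(20)

lam = 0.1                              # hypothetical regularization strength
data_loss = np.mean((X @ w - y) ** 2)  # fit term: how well w explains the data
penalty = lam * np.sum(w ** 2)         # L2 regularizer: discourages large weights
cost = data_loss + penalty

assert cost > data_loss  # the penalty only ever adds to the cost
```

Swapping the penalty for L1 (`lam * np.sum(np.abs(w))`) would instead induce sparsity — which is why overfitting avoidance, sparsity, and a simpler hypothesis are all accepted.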

8) Given feature vector X and corresponding label y, logistic regression relates X and y in the form of (B is the parameter vector to be learned) (1 point)

log(y / (1 − y)) = BX
y = BX
y = 1 / (1 + e^(BX))
None of the above

No, the answer is incorrect.
Score: 0
Accepted Answers:
log(y / (1 − y)) = BX
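The accepted logit form log(y / (1 − y)) = BX is the inverse of the sigmoid y = 1 / (1 + e^(−BX)) — note the minus sign, which is exactly what the third option above gets wrong. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal(3)
X = rng.standard_normal(3)

y = 1.0 / (1.0 + np.exp(-(B @ X)))  # sigmoid form (with the minus sign)
logit = np.log(y / (1.0 - y))       # log-odds of y

assert np.isclose(logit, B @ X)     # recovers log(y / (1 - y)) = BX
```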

9) Given feature vector X (with dimension j), corresponding label y (binary class), and weights (b1, b2, .., bk) of a logistic regressor, let z be the (expected) output of the logistic regressor. What are the loss function (L) and the gradient computed to correct bk based on the chain rule? (1 point)

y log(z) and (y − z)x_k
y − log(z) and ∂z/∂b_k
y log(z) + (1 − y) log(1 − z) and (∂L/∂z)(∂z/∂b_k)
y log(z) + (1 − y) log(1 − z) and (y + z)x_k

No, the answer is incorrect.
Score: 0
Accepted Answers:
y log(z) + (1 − y) log(1 − z) and (∂L/∂z)(∂z/∂b_k)
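Carrying the chain rule through actually lands on a simple closed form: with L = y log(z) + (1 − y) log(1 − z) and z = sigmoid(b·X), the product (∂L/∂z)(∂z/∂b_k) simplifies to (y − z)x_k. A sketch that verifies the analytic gradient against finite differences (illustrative data):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal(4)
b = rng.standard_normal(4)
y = 1.0  # binary label

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def loglik(b):
    z = sigmoid(b @ X)
    return y * np.log(z) + (1 - y) * np.log(1 - z)  # the loss L above

# Chain rule: dL/db_k = (dL/dz)(dz/db_k) = (y - z) * x_k.
z = sigmoid(b @ X)
grad = (y - z) * X

# Central finite-difference check of the analytic gradient.
eps = 1e-6
num = np.array([(loglik(b + eps * np.eye(4)[k]) - loglik(b - eps * np.eye(4)[k])) / (2 * eps)
                for k in range(4)])
assert np.allclose(grad, num, atol=1e-6)
```

This is also why option 1's gradient (y − z)x_k is correct on its own — only its loss (y log z alone) is incomplete.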

10) What are the advantages of initializing an MLP with pretrained autoencoder weights? (1 point)
(i) Faster convergence
(ii) Avoids overfitting
(iii) Simpler hypothesis

(i) and (ii)


(ii) and (iii)
(i) and (iii)
All of the above

No, the answer is incorrect.


Score: 0
Accepted Answers:
(i) and (ii)



© 2014 NPTEL - Privacy & Terms - Honor Code - FAQs

