Unit 5 - Week 4: Assignment 4
The due date for submitting this assignment has passed. Due on 2018-02-21, 23:59 IST.
Submitted assignment
1) Given that En and Dn are the encoder and decoder blocks of the n-th autoencoder in an SAE (stacked autoencoder), what is the sequence of blocks during end-to-end training of the SAE (with n = 2)? (1 point)
Input → E1 → D1 → E2 → D2 → Input
Input → E1 → E2 → D1 → D2 → Input
Input → E1 → E2 → D2 → D1 → Input
Any of the above
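For intuition, here is a minimal NumPy sketch (dimensions and weights purely hypothetical) of how a two-level SAE composes its blocks end to end, with the encoders stacked first and the decoders mirrored in reverse order:

```python
import numpy as np

# Hypothetical toy dimensions: 8-d input, 4-d and 2-d codes.
rng = np.random.default_rng(0)
d, h1, h2 = 8, 4, 2
E1, D1 = rng.normal(size=(h1, d)), rng.normal(size=(d, h1))
E2, D2 = rng.normal(size=(h2, h1)), rng.normal(size=(h1, h2))

def sae_forward(x):
    """Encoders first, then decoders in mirrored (reverse) order."""
    z2 = E2 @ (E1 @ x)        # Input -> E1 -> E2
    return D1 @ (D2 @ z2)     # -> D2 -> D1 -> reconstruction

x = rng.normal(size=d)
print(np.linalg.norm(x - sae_forward(x)))  # end-to-end reconstruction error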
2) Given a trained SAE (n = 2), how should the blocks be arranged for weight refinement (classification task)? (1 point)
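As a hedged sketch of the usual refinement arrangement (not the graded answer key): the decoders are discarded, the pretrained encoder stack feeds a new classifier head, and all weights are then fine-tuned on labels. Shapes and the class count below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
d, h1, h2, n_classes = 8, 4, 2, 3
E1 = rng.normal(size=(h1, d))          # pretrained encoder weights
E2 = rng.normal(size=(h2, h1))
C = rng.normal(size=(n_classes, h2))   # freshly initialized classifier head

def refine_forward(x):
    # Input -> E1 -> E2 -> classifier; fine-tune E1, E2 and C on labels
    return C @ (E2 @ (E1 @ x))
```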
3) Given input x and a linear autoencoder (no bias) with random weights (W for the encoder and W′ for the decoder), what mathematical form is minimized to achieve the optimal weights? (1 point; Nfl denotes a nonlinear activation)

|x − (W′ ⋅ W ⋅ x)|

|x − (W′ ⋅ Nfl(W ⋅ x))|

|x − Nfl(W′ ⋅ Nfl(W ⋅ x))|
No, the answer is incorrect.
Score: 0
Accepted Answers:
|x − (W′ ⋅ W ⋅ x)|
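A minimal sketch of the accepted objective, |x − (W′ ⋅ W ⋅ x)|, evaluated with random weights (all values hypothetical); training would then adjust W and W′ to drive this norm down:

```python
import numpy as np

rng = np.random.default_rng(2)
d, h = 6, 3
W = rng.normal(size=(h, d))    # encoder, random init
Wp = rng.normal(size=(d, h))   # decoder W', random init

def recon_loss(x):
    # |x - W'.W.x|: reconstruction error of the linear autoencoder
    return np.linalg.norm(x - Wp @ (W @ x))

print(recon_loss(rng.normal(size=d)))
```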
4) Given a linear autoencoder which encodes input x to z, what should be the learning arrangement of a second linear autoencoder (with weights W2 and W2′) for learning a hierarchically higher-level representation? (1 point)
|x − (W2′ ⋅ W2 ⋅ x)|
|z − (W2′ ⋅ W2 ⋅ z)|
|z − (W2 ⋅ W1 ⋅ x)|
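For the stacking step, the key point is that the second autoencoder is trained on the first autoencoder's code z, not on x; a sketch under hypothetical shapes:

```python
import numpy as np

rng = np.random.default_rng(3)
d, h1, h2 = 8, 4, 2
W1 = rng.normal(size=(h1, d))     # first encoder (already trained)
W2 = rng.normal(size=(h2, h1))    # second encoder, to be learned
W2p = rng.normal(size=(h1, h2))   # second decoder W2'

x = rng.normal(size=d)
z = W1 @ x                                   # code from the first autoencoder
loss = np.linalg.norm(z - W2p @ (W2 @ z))    # |z - (W2'.W2.z)|
```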
5) [Question stem lost in the page capture] (1 point)
Avoiding overfitting
Robust feature extraction
Data augmentation
All of the above
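The stem for this question is lost in the capture; the options read like the usual benefits of denoising autoencoders (the Lecture 18 topic). As a hedged illustration only, the input-corruption step looks like this (noise model and level assumed):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=8)                               # clean input
x_noisy = x + rng.normal(scale=0.1, size=x.shape)    # corrupted copy
# A denoising autoencoder reconstructs the clean x from x_noisy, which acts
# like data augmentation and pushes the encoder toward robust features.
```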
6) In a linear autoencoder (without a regularizer), if the number of hidden-layer perceptrons equals the number of input-layer perceptrons, then the encoder and decoder weights tend to learn (1 point)
Optimal representations
Identity matrix
Sparse representations
None of the above
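A quick NumPy check of why the trivial solution is available here: with the hidden width equal to the input width, W is square, so the decoder can learn W′ = W⁻¹ exactly, making W′ ⋅ W the identity and the reconstruction error zero without learning any useful representation (toy dimensions assumed):

```python
import numpy as np

rng = np.random.default_rng(5)
d = 5
W = rng.normal(size=(d, d))    # square encoder: hidden width == input width
Wp = np.linalg.inv(W)          # decoder can invert the encoder exactly

x = rng.normal(size=d)
print(np.allclose(Wp @ W, np.eye(d)))     # True: W'.W is the identity
print(np.linalg.norm(x - Wp @ (W @ x)))   # ~0: perfect but trivial reconstruction
```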
7) [Question stem lost in the page capture] (1 point)
Avoid overfitting
Induce sparsity
Simpler hypothesis
All of the above
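This stem is also missing; the options list the standard motivations for regularizing an autoencoder. As one assumed instance, an L1 activity penalty on the code is what induces sparsity (the weight lam is hypothetical):

```python
import numpy as np

def sparse_loss(x, W, Wp, lam=0.1):
    # reconstruction error plus an L1 penalty on the code activations;
    # lam trades off sparsity against reconstruction quality
    z = W @ x
    return np.linalg.norm(x - Wp @ z) + lam * np.abs(z).sum()
```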
8) Given feature vector X and corresponding label y, logistic regression relates X and y in which form? (B are the parameters to be learned; 1 point)
log(y / (1 − y)) = BX

y = BX

y = 1 / (1 + e^(BX))
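For reference, the log-odds form and the sigmoid are inverses of each other; note that the standard sigmoid carries a minus sign in the exponent, y = 1/(1 + e^(−BX)). A small numeric check with hypothetical values:

```python
import numpy as np

B = np.array([0.5, -1.0, 2.0])   # hypothetical parameters
X = np.array([1.0, 0.2, -0.3])   # hypothetical feature vector
t = B @ X                         # linear score BX
y = 1.0 / (1.0 + np.exp(-t))      # sigmoid inverts the log-odds relation
print(np.isclose(np.log(y / (1 - y)), t))  # True: log(y/(1-y)) == BX
```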
9) Given feature vector X (with dimension j), corresponding label y (binary class), and weights (b1, b2, .., bk) of a logistic regressor, let z be the expected output of the logistic regressor. What is the loss function (and the corresponding gradient with respect to bk)? (1 point)
y − log(z) and ∂z/∂bk

ylog(z) + (1 − y)(1 − log(z)) and (∂L/∂z)(∂z/∂bk)

ylog(z) + (1 − y)log(1 − z) and (∂L/∂z)(∂z/∂bk)
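A sketch of the binary cross-entropy and its chain-rule gradient, assuming z = sigmoid(b·X); for the sigmoid, (∂L/∂z)(∂z/∂bk) collapses to the familiar (z − y)·xk:

```python
import numpy as np

def bce_loss(y, z):
    # negative log-likelihood: -(y.log(z) + (1-y).log(1-z))
    return -(y * np.log(z) + (1 - y) * np.log(1 - z))

def grad_bk(y, z, x_k):
    # chain rule dL/db_k = (dL/dz)(dz/db_k); with z = sigmoid(b.x),
    # dz/db_k = z(1-z)x_k and the product simplifies to (z - y)x_k
    return (z - y) * x_k
```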
10) What are the advantages of initializing an MLP with pretrained autoencoder weights? (1 point)
(i) Faster convergence
(ii) Avoid overfitting
(iii) Simpler hypothesis
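A sketch of the initialization itself (shapes and values hypothetical): the MLP's hidden layers copy the pretrained encoder weights, only the output layer starts from scratch, and supervised training then proceeds as usual, which is where the listed benefits come from:

```python
import numpy as np

rng = np.random.default_rng(6)
d, h1, h2, n_classes = 8, 4, 2, 3

E1_pre = rng.normal(size=(h1, d))    # pretrained encoder weights (stand-ins)
E2_pre = rng.normal(size=(h2, h1))

# hidden layers start from the autoencoder; only the head is random
mlp_weights = [E1_pre.copy(), E2_pre.copy(), rng.normal(size=(n_classes, h2))]
```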