
Autoencoders: Unsupervised Representation Learning

Department of Management

September 19, 2024

What are Autoencoders?
Autoencoders are neural networks used for unsupervised learning,
specifically designed to compress input data into a latent space and
then reconstruct it.
Consist of two parts: Encoder and Decoder.
End goal: Minimize the difference between the input and the output.

Motivation for Autoencoders

Dimensionality Reduction: Compress large datasets into manageable latent spaces.
Data Compression: Similar to JPEG for images, but learns non-linear relationships.
Noise Removal: Useful in denoising tasks.
Anomaly Detection: Identify anomalies by evaluating reconstruction error.

Architecture of Autoencoders
Encoder: Transforms input into a compressed latent space.
Latent Space: The bottleneck layer where compressed information is stored.
Decoder: Reconstructs the input from the latent space.
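
A minimal sketch in PyTorch (an assumption; the slides name no framework, and the layer sizes here are illustrative) shows how the three components map to code:

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Minimal fully connected autoencoder; sizes are illustrative."""
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder: transforms the input into a compressed latent code
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),  # the bottleneck / latent space
        )
        # Decoder: reconstructs the input from the latent code
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, input_dim),
            nn.Sigmoid(),  # assumes inputs are scaled to [0, 1]
        )

    def forward(self, x):
        z = self.encoder(x)      # compress
        return self.decoder(z)   # reconstruct
```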

Mathematical Formulation

The goal is to minimize the reconstruction loss:


\[
L(x, \hat{x}) = \frac{1}{n} \sum_{i=1}^{n} (x_i - \hat{x}_i)^2
\]

where x is the input and x̂ is the reconstructed input.


The network learns to compress and reconstruct by minimizing this error.
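
As a sketch of one training step (reusing the illustrative Autoencoder above; the batch here is random stand-in data, not a real dataset):

```python
import torch
import torch.nn as nn

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()          # implements L(x, x̂) = (1/n) Σ (x_i − x̂_i)²

x = torch.rand(64, 784)           # stand-in batch of 64 inputs in [0, 1]
optimizer.zero_grad()
x_hat = model(x)                  # reconstruction x̂
loss = criterion(x_hat, x)        # reconstruction error
loss.backward()                   # gradients of the loss w.r.t. all weights
optimizer.step()                  # updates encoder and decoder jointly
```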

Types of Autoencoders

Vanilla Autoencoder: Basic encoder-decoder structure.
Sparse Autoencoder: Enforces sparsity in the hidden layers to reduce overfitting.
Denoising Autoencoder: Learns to reconstruct clean data from noisy input.
Variational Autoencoder (VAE): Uses probabilistic distributions for generating new data.

Sparse Autoencoders
Sparse autoencoders enforce sparsity, commonly by applying L1 regularization to the latent activations.
They are useful for feature extraction in high-dimensional data.
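
One possible way to add the sparsity term, reusing the model and criterion from the sketches above (the weight lam is illustrative; a KL penalty toward a target activation rate is a common alternative):

```python
import torch

lam = 1e-4                                  # illustrative sparsity weight

x = torch.rand(64, 784)                     # stand-in batch
z = model.encoder(x)                        # latent activations
x_hat = model.decoder(z)
# Total loss = reconstruction error + L1 penalty pushing latent units toward 0
loss = criterion(x_hat, x) + lam * z.abs().mean()
```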

Denoising Autoencoders
The goal is to train an autoencoder to remove noise from the input.
Process:
Input is a noisy version of the data.
The autoencoder learns to reconstruct the clean version.
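
A sketch of that process, again reusing the earlier model and criterion (the Gaussian noise and its level are assumptions for illustration):

```python
import torch

noise_std = 0.2                              # illustrative noise level
x_clean = torch.rand(64, 784)                # stand-in clean batch
x_noisy = x_clean + noise_std * torch.randn_like(x_clean)

x_hat = model(x_noisy)                       # reconstruct from the noisy input
loss = criterion(x_hat, x_clean)             # compare against the clean target
```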

Variational Autoencoders (VAEs)

VAEs differ from standard autoencoders by learning to generate data from a latent space sampled from a distribution.
The encoder outputs a distribution (mean and variance) instead of a fixed vector.
The loss function combines reconstruction loss and KL divergence:

\[
L_{\text{VAE}} = \text{Reconstruction Loss} + \text{KL Divergence Loss}
\]
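
A sketch of that combined loss, plus the reparameterization trick used to sample from the encoder's distribution (PyTorch is assumed; the standard normal prior and the closed-form KL term are the usual VAE choices, not stated on the slide):

```python
import torch

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps so gradients flow through the sampling."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

def vae_loss(x_hat, x, mu, log_var):
    """L_VAE = reconstruction loss + KL divergence to a standard normal prior."""
    recon = torch.nn.functional.mse_loss(x_hat, x, reduction="sum")
    # Closed-form KL between N(mu, sigma^2) and N(0, 1)
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl
```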

Applications of Autoencoders

Dimensionality Reduction: Like PCA, but non-linear.
Image Generation: Using variational autoencoders (VAEs).
Anomaly Detection: High reconstruction error indicates anomalies (see the sketch after this list).
Pre-training Deep Networks: Useful for initializing weights.
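
A sketch of the anomaly-detection idea, reusing the earlier model (the three-sigma threshold is an illustrative rule of thumb, not from the slides):

```python
import torch

model.eval()                                 # inference mode
with torch.no_grad():
    x = torch.rand(100, 784)                 # stand-in held-out samples
    errors = ((model(x) - x) ** 2).mean(dim=1)   # per-sample reconstruction error

threshold = errors.mean() + 3 * errors.std()     # illustrative cutoff
anomalies = errors > threshold               # True where reconstruction is poor
```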

Limitations of Autoencoders

Autoencoders can overfit if the latent space is too large.
They may struggle with highly complex data distributions.
Linear dimensionality reduction (e.g., PCA) might be more efficient for certain tasks.

Conclusion

Autoencoders are versatile tools for unsupervised learning, with applications in dimensionality reduction, data denoising, and anomaly detection.
Different variants, like sparse, denoising, and variational autoencoders, provide tailored solutions for various tasks.
