Neural Network Unsupervised Machine Learning: What Are Autoencoders?
Applications of Autoencoders
• Image Coloring
• Autoencoders can be used to convert a black-and-white picture into a
colored image. Depending on what is in the picture, it is possible to
infer what the colors should be.
• Feature Variation
• The autoencoder extracts only the required features of an image and
generates the output with noise and unnecessary interruptions removed.
• Dimensionality Reduction
• The reconstructed image is the same as the input but with reduced
dimensions: the autoencoder provides a similar image represented with
fewer pixels.
• Denoising Image
• The input seen by the autoencoder is not the raw input but a
stochastically corrupted version. A denoising autoencoder is thus
trained to reconstruct the original input from the noisy version.
• Watermark Removal
• Autoencoders are also used to remove watermarks from images, or to
remove an object while filming a video or a movie.
• Now that you have an idea of the different industrial applications of
Autoencoders, let’s continue our Autoencoders Tutorial Blog and
understand the complex architecture of Autoencoders.
Architecture of Autoencoders
• Encoder: This part of the network compresses the input into a latent
space representation. The encoder layer encodes the input image as a
compressed representation in a reduced dimension. The compressed
image is a distorted version of the original image.
• Code: This part of the network represents the compressed input
which is fed to the decoder.
• Decoder: This layer decodes the encoded image back to the original
dimension. The decoded image is a lossy reconstruction of the
original image and it is reconstructed from the latent space
representation.
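The encoder, code, and decoder stages above can be sketched in a few lines of NumPy. This is only an illustration of the data flow with random, untrained weights; the 784-to-32 layer sizes are an assumption (a flattened 28x28 image), not part of the tutorial.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: a flattened 28x28 image compressed to a 32-dimensional code.
input_dim, code_dim = 784, 32

# Randomly initialised weights: an untrained sketch, only to show the shapes.
W_enc = rng.normal(0, 0.01, (input_dim, code_dim))
W_dec = rng.normal(0, 0.01, (code_dim, input_dim))

def encoder(x):
    # Compresses the input into the latent-space representation (the "code").
    return np.tanh(x @ W_enc)

def decoder(z):
    # Decodes the code back to the original dimension (a lossy reconstruction).
    return np.tanh(z @ W_dec)

x = rng.random((1, input_dim))   # one flattened input image
code = encoder(x)                # compressed representation, shape (1, 32)
x_hat = decoder(code)            # reconstruction, shape (1, 784)

print(code.shape, x_hat.shape)
```

In a real autoencoder the two weight matrices are learned by minimising the reconstruction error between `x` and `x_hat`; here they only demonstrate the compression and reconstruction of dimensions.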
Convolutional Autoencoders
Autoencoders in their traditional formulation do not take into account the fact that a signal can be seen as a
sum of other signals. Convolutional autoencoders use the convolution operator to exploit this observation.
They learn to encode the input as a set of simple signals and then try to reconstruct the input from them,
modifying the geometry or the reflectance of the image.
Use cases of CAE:
•Image Reconstruction
•Image Colorization
•Latent space clustering
•Generating higher-resolution images
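The convolution operator mentioned above, together with spatial downsampling and upsampling, can be sketched in plain NumPy. The 3x3 filter here is random (untrained), and nearest-neighbour upsampling stands in for the learned transposed convolution a real decoder would use; this is only a sketch of how a convolutional autoencoder compresses and restores spatial dimensions.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d(img, kernel):
    # 'Valid' 2-D convolution: the operator a convolutional
    # autoencoder uses instead of fully connected layers.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(img, size=2):
    # Downsamples by keeping the maximum of each size x size block.
    h, w = img.shape
    return img[:h - h % size, :w - w % size].reshape(
        h // size, size, w // size, size).max(axis=(1, 3))

def upsample(img, size=2):
    # Nearest-neighbour upsampling: a simple stand-in for the
    # decoder's learned transposed convolution.
    return img.repeat(size, axis=0).repeat(size, axis=1)

image = rng.random((28, 28))
kernel = rng.normal(0, 0.1, (3, 3))        # an untrained 3x3 filter

encoded = max_pool(conv2d(image, kernel))  # (13, 13): spatially compressed
decoded = upsample(encoded)                # (26, 26): back toward input size

print(encoded.shape, decoded.shape)
```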
Deep Autoencoders
The extension of the simple Autoencoder is the Deep Autoencoder. The first layer of the Deep
Autoencoder is used for first-order features in the raw input. The second layer is used for second-order
features corresponding to patterns in the appearance of first-order features. Deeper layers of the Deep
Autoencoder tend to learn even higher-order features.
A deep autoencoder is composed of two, symmetrical deep-belief networks-
1.First four or five shallow layers representing the encoding half of the net.
2.The second set of four or five layers that make up the decoding half.
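The symmetry of the two halves can be sketched as a stack of mirrored layer sizes. The sizes below are assumptions for illustration, and the weights are random, so this only shows the data flowing through successive (would-be higher-order) feature layers and back out at the original dimension.

```python
import numpy as np

rng = np.random.default_rng(2)

# Symmetric layer sizes: three encoding layers down to a 32-dim code,
# mirrored by three decoding layers back up to the input size.
sizes = [784, 256, 64, 32, 64, 256, 784]
weights = [rng.normal(0, 0.01, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x):
    # Each successive encoding layer would learn higher-order features;
    # here the weights are random, so this only illustrates the data flow.
    for W in weights:
        x = np.tanh(x @ W)
    return x

x = rng.random((1, 784))
x_hat = forward(x)
print(x_hat.shape)   # same dimension as the input
```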
Use cases of Deep Autoencoders
•Image Search
•Data Compression
•Topic Modeling & Information Retrieval (IR)
Denoising Autoencoders
• There is another way to force the autoencoder to learn useful
features, which is adding random noise to its inputs and making it
recover the original noise-free data.
• This way the autoencoder can’t simply copy the input to its output
because the input also contains random noise.
• We are asking it to subtract the noise and produce the underlying
meaningful data. This is called a denoising autoencoder.
The top row contains the original images. We add random Gaussian noise to them
and the noisy data becomes the input to the autoencoder. The autoencoder doesn’t
see the original image at all. But then we expect the autoencoder to regenerate the
noise-free original image.
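The corruption step described above can be sketched in NumPy. The "images" here are random stand-ins, and no network is trained; the sketch only shows how the (noisy input, clean target) training pairs are built.

```python
import numpy as np

rng = np.random.default_rng(3)

clean = rng.random((5, 784))               # stand-ins for the original images
noise = rng.normal(0.0, 0.1, clean.shape)  # random Gaussian noise
noisy = np.clip(clean + noise, 0.0, 1.0)   # stochastically corrupted version

# A denoising autoencoder is trained on (noisy, clean) pairs: the network
# sees only `noisy`, and the loss compares its output to `clean`.
training_pairs = list(zip(noisy, clean))

# The reconstruction target differs from the input, which is exactly
# what stops the autoencoder from simply copying input to output.
print(np.allclose(noisy, clean))
```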
Sparse Autoencoders