
Unit - 3


Encoder: it works as a compression unit that compresses the input data.

Decoder: it decompresses the compressed input by reconstructing it.


In an Autoencoder, both the Encoder and the Decoder are made up of a
combination of NN (Neural Network) layers, which reduce the size of the
input image and then recreate it. In a CNN Autoencoder these layers are
CNN layers (convolutional, max pooling, flattening, etc.), while in an
RNN/LSTM Autoencoder the corresponding recurrent layers are used.
ARCHITECTURE
Autoencoders are a type of artificial neural network used for unsupervised
learning. They consist of an encoder and a decoder, and their primary
purpose is to learn efficient representations of data, typically for tasks such
as data compression, denoising, or feature learning. Here's a breakdown of
the architecture of autoencoders:

1. Encoder:
- The encoder is the first part of the autoencoder and is responsible for
mapping the input data to a lower-dimensional representation. This lower-
dimensional representation is often called the "encoding" or "latent space."
- The encoder consists of one or more layers of neurons, and each layer
applies linear transformations (weighted sum of inputs) followed by a non-
linear activation function (such as sigmoid, tanh, or ReLU). These
transformations progressively reduce the dimensionality of the input data.

2. Latent Space:
- The output of the encoder is the compressed representation of the input
data in the latent space. This space should capture essential features of the
input data in a compact form. The size of the latent space is a crucial
hyperparameter that influences the trade-off between compression and
information retention.
3. Decoder:
- The decoder takes the compressed representation from the latent space
and attempts to reconstruct the original input data.
- Similar to the encoder, the decoder consists of one or more layers of
neurons. Each layer applies linear transformations followed by non-linear
activation functions.
- The final layer of the decoder produces the reconstructed output, which
ideally should closely resemble the input data.

4. Loss Function:
- The autoencoder is trained to minimize a loss function, which measures
the difference between the input data and the reconstructed output. The
choice of loss function depends on the nature of the data (e.g., mean
squared error for continuous data, binary cross-entropy for binary data).

5. Training:
- Autoencoders are trained using backpropagation and gradient descent or
variants like Adam. The training process aims to optimize the weights of the
encoder and decoder to minimize the reconstruction error.
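To make this architecture concrete, here is a minimal sketch of a fully connected autoencoder written with Keras (the Python API introduced in the next part of this unit). The 784-dimensional input (a flattened 28x28 image), the layer sizes, and the 32-dimensional latent space are illustrative assumptions rather than fixed requirements.

import tensorflow as tf
from tensorflow.keras import layers

# Encoder: map the 784-dim input down to a 32-dim latent code.
inputs = tf.keras.Input(shape=(784,))
h = layers.Dense(128, activation="relu")(inputs)
code = layers.Dense(32, activation="relu")(h)          # latent space

# Decoder: reconstruct the 784-dim input from the latent code.
h = layers.Dense(128, activation="relu")(code)
outputs = layers.Dense(784, activation="sigmoid")(h)   # reconstruction

autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")      # reconstruction loss
# Training uses the input itself as the target:
# autoencoder.fit(x_train, x_train, epochs=10, batch_size=256)

The loss compares the reconstruction with the original input, which is exactly the reconstruction error described above.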
Keras is a high-level neural networks API (Application Programming Interface) written
in Python. It's designed to be user-friendly, modular, and extensible, making it a
popular choice for building and experimenting with deep learning models.

Here's a simplified breakdown of Keras:

1. Neural Networks:
• Keras is primarily used for building neural networks. Neural networks
are computational models inspired by the way the human brain works.
They consist of layers of interconnected nodes (neurons) that process
and transform input data to produce an output.
2. High-Level API:
• Keras provides a high-level interface for defining, training, and
evaluating neural network models. This means you can create powerful
deep learning models with relatively simple and concise code.
3. Modularity:
• Keras models are built using a modular approach. You start by defining
the structure of your model as a sequence of layers. Each layer
performs a specific operation, like input processing, feature extraction,
or output generation. This modularity makes it easy to construct
complex models by stacking and connecting simpler building blocks.
4. Ease of Use:
• One of Keras' key advantages is its user-friendly syntax. It abstracts
away many of the complexities of neural network implementation,
making it accessible to both beginners and experienced deep learning
practitioners. You can quickly prototype and experiment with different
architectures.
5. Backend Support:
• Keras is designed to be backend-agnostic, which means it can run on
top of various computational backends, such as TensorFlow, Theano, or
Microsoft Cognitive Toolkit (CNTK). TensorFlow is the default backend
for Keras since version 2.3.
6. TensorFlow Integration:
• Although Keras can work with different backends, it is often used with
TensorFlow, one of the most widely used deep learning frameworks.
This integration allows users to leverage the capabilities of both Keras
and TensorFlow seamlessly.
7. Training and Evaluation:
• Keras simplifies the training and evaluation process. You compile your
model with a specified optimizer, loss function, and metrics, then train
it on your data. The training process involves adjusting the model's
weights based on the provided data to minimize the specified loss.
After training, you can evaluate the model's performance on new,
unseen data.
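As a minimal illustration of this compile/train/evaluate workflow, the sketch below builds a small classifier on synthetic data; the feature size, class count, layer sizes, and training settings are arbitrary assumptions made for the example.

import numpy as np
import tensorflow as tf

# Hypothetical data: 20 features per sample, 3 classes.
x_train = np.random.randn(1000, 20).astype("float32")
y_train = np.random.randint(0, 3, size=(1000,))
x_test = np.random.randn(200, 20).astype("float32")
y_test = np.random.randint(0, 3, size=(200,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Compile with an optimizer, a loss function, and metrics.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train (adjust the weights to minimize the loss), then evaluate on unseen data.
model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)
loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
print(loss, accuracy)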

Keras user experience


1. Keras is an API designed for humans
Keras follows best practices to reduce cognitive load: it keeps models
consistent and the corresponding APIs simple.
2. Not designed for machines
Keras provides clear feedback when an error occurs and minimizes the
number of user actions required for the most common use cases.
3. Easy to learn and use.
4. Highly Flexible
Keras gives developers great flexibility by integrating with low-level
deep learning frameworks such as TensorFlow or Theano, which ensures
that anything written in the base framework can be used from Keras.

How does Keras support the claim of being multi-backend and multi-platform?
Keras models can be developed in both R and Python, and the same code can be run
with TensorFlow, Theano, CNTK, or MXNet as required. Keras can be run on CPUs,
NVIDIA GPUs, AMD GPUs, TPUs, etc. Producing models with Keras is also simple,
since it supports deployment with TensorFlow Serving, GPU acceleration in the
browser (WebKeras, Keras.js), Android (TF, TF Lite), iOS (native CoreML) and
Raspberry Pi.
Keras Backend
Keras is a model-level library: it helps in developing deep learning models by
offering high-level building blocks. Low-level computations such as tensor products
and convolutions are not handled by Keras itself; instead, they are delegated to a
specialized, well-optimized tensor-manipulation library that serves as the backend
engine. Rather than being tied to a single tensor library and its operations, Keras
allows different backend engines to be plugged in.
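For the multi-backend Keras releases (up to version 2.3), the backend engine can be chosen before Keras is first imported, either in the ~/.keras/keras.json configuration file or, as sketched below, through the KERAS_BACKEND environment variable; the choice of "theano" here is only an example.

import os

# Must be set before the first `import keras` (multi-backend Keras <= 2.3).
os.environ["KERAS_BACKEND"] = "theano"   # or "tensorflow", "cntk"

import keras
print(keras.backend.backend())           # reports the active backend engine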

Keras supports three backend engines, which are as follows:

o TensorFlow
TensorFlow is a Google product and one of the most widely used deep learning
tools in machine learning and deep neural network research. It was released on
9 November 2015 under the Apache License 2.0. It is built so that it can easily
run on multiple CPUs and GPUs as well as on mobile operating systems, and it
provides wrappers in several languages such as Java, C++, and Python.

o Theano
Theano was developed at the University of Montreal, Quebec, Canada, by the
MILA group. It is an open-source Python library widely used for performing
mathematical operations on multi-dimensional arrays, building on SciPy and
NumPy. It uses GPUs for faster computation and efficiently computes gradients
by building symbolic graphs automatically. It is well suited for numerically
unstable expressions, because it detects them and recomputes them with more
stable algorithms.

o CNTK
The Microsoft Cognitive Toolkit is Microsoft's open-source deep learning
framework. It provides all the basic building blocks required to form a neural
network. Models are trained using C++ or Python, and C# or Java can be used
to load a trained model for making predictions.
TensorFlow Serving Technical Architecture:

o Sources create Loaders for Servable versions; the Loaders are then sent as
Aspired Versions to the Manager, which loads and serves them to client
requests.
o The Loader contains the metadata it needs to load the servable.
o The Source uses a callback to notify the Manager of the Aspired Version.
o The Manager applies the configured Version Policy to determine the next action
to take.
o If the Manager determines that it is safe, it gives the Loader the required
resources and tells it to load the new version. Clients ask the Manager for the
servable, either specifying a version explicitly or requesting the current
version, and the Manager returns a handle to the servable. The Dynamic
Manager applies the Version Policy and decides to load the newer version.
o The Dynamic Manager confirms to the Loader that there is enough memory and
lets it load the new version.
o A client requests a handle for the latest version of the model, and the Dynamic
Manager returns a handle to the new version of the servable.
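Once the Manager is serving a model, clients typically reach it through TensorFlow Serving's REST API. The sketch below is a hypothetical client call: it assumes a TensorFlow Serving instance running locally on port 8501 and a model named "my_model" that accepts three numeric features.

import requests

# Hypothetical endpoint exposed by TensorFlow Serving for the model "my_model".
url = "http://localhost:8501/v1/models/my_model:predict"
payload = {"instances": [[1.0, 2.0, 5.0]]}   # one example with three features

response = requests.post(url, json=payload)
print(response.json()["predictions"])        # the served model's predictions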

Advantages of TensorFlow
1) Graphs:

TensorFlow has better computational graph visualizations, which are built into
the framework, compared to other libraries like Torch and Theano.

2) Library management:

Google backs it, which brings seamless performance, quick updates, and frequent
new releases with new features.

3) Debugging:

It lets us execute subparts of a graph, which gives it an upper hand because we
can introduce and retrieve discrete data.
4) Scalability:

The libraries can be deployed on a wide range of hardware, from cellular devices
to computers with complex setups.

5) Pipelining:

TensorFlow is designed to use various backend hardware (GPUs, ASICs, etc.) and is
also highly parallel.

6) It has a unique approach that allows monitoring the training progress of our
models and tracking several metrics.

7) TensorFlow has excellent community support.

8) Its performance is high, matching the best in the industry.

Disadvantages of TensorFlow

1) Missing symbolic loops:

Symbolic loops are needed most when working with variable-length sequences.
Unfortunately, TensorFlow does not offer this functionality directly, although
finite unfolding (bucketing) offers a workaround.

2) No support for Windows:

There is a wide variety of users who are more comfortable in a Windows environment
than in Linux, and TensorFlow does not satisfy these users well. However, Windows
users need not worry, as it can also be installed through conda or the Python
package installer (pip).

3) Benchmark tests:

In benchmark tests, TensorFlow lags behind its competitors in both speed and usability.

4) GPU support limited to NVIDIA, and full language support only for Python:

Currently, the only supported GPUs are NVIDIA GPUs, and the only language with full
support is Python. This is a drawback given the rise of other languages in deep
learning, such as Lua.

5) Computation speed:

This is a field where TensorFlow lags behind, but if we focus on the production
environment rather than raw performance, it is still the right choice.

6) No support for OpenCL.

7) It requires fundamental knowledge of advanced calculus and linear algebra, along
with a good understanding of machine learning.

8) TensorFlow has a unique structure, so it's hard to find an error and difficult to
debug.

9) There is no need for any super low-level matter.

10) It is quite low level, with a steep learning curve.

What is a neural network?

Neural networks, also known as artificial neural networks (ANNs) or simulated neural
networks (SNNs), are a subset of machine learning and are at the heart of deep
learning algorithms. Their name and structure are inspired by the human brain,
mimicking the way that biological neurons signal to one another.

Artificial neural networks (ANNs) are composed of node layers: an input layer, one or
more hidden layers, and an output layer. Each node, or artificial neuron, connects to
others and has an associated weight and threshold. If the output of any individual
node is above the specified threshold value, that node is activated, sending data to
the next layer of the network. Otherwise, no data is passed along to the next layer
of the network.

Neural networks rely on training data to learn and improve their accuracy over time.
However, once these learning algorithms are fine-tuned for accuracy, they are powerful
tools in computer science and artificial intelligence, allowing us to classify and cluster
data at a high velocity. Tasks in speech recognition or image recognition can take
minutes versus hours when compared to the manual identification by human experts.
One of the most well-known neural networks is Google’s search algorithm.
Summation: z = Σ (wi * xi) + b, i.e. each node computes a weighted sum of its inputs plus a bias before applying its threshold.
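As a minimal sketch of that computation, the single artificial neuron below uses NumPy; the input values, weights, bias, and threshold are made-up numbers for illustration.

import numpy as np

def neuron(x, w, b, threshold=0.0):
    # Weighted sum of the inputs plus a bias: w1*x1 + w2*x2 + ... + wn*xn + b
    z = np.dot(w, x) + b
    # Activate (output 1) only if the sum exceeds the threshold.
    return 1 if z > threshold else 0

x = np.array([0.5, 0.3, 0.2])    # inputs
w = np.array([0.4, 0.7, -0.2])   # connection weights
print(neuron(x, w, b=0.1))       # prints 1, so data is passed to the next layer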

1. Image Recognition:
• Neural networks are widely used in image recognition tasks.
Convolutional Neural Networks (CNNs), a specialized type of neural
network, have shown remarkable success in tasks such as object
detection and facial recognition. They can learn hierarchical
representations of visual features, enabling accurate pattern
recognition in images.
2. Speech Recognition:
• Neural networks play a crucial role in speech recognition systems.
Recurrent Neural Networks (RNNs) and Long Short-Term Memory
(LSTM) networks are commonly used to model temporal dependencies
in audio sequences, making them effective in recognizing spoken
words and phrases.
3. Handwriting Recognition:
• Neural networks are applied in recognizing handwritten characters and
text. They can learn to identify patterns in various handwriting styles,
making them useful in applications like optical character recognition
(OCR).
4. Biometric Identification:
• Neural networks are employed in biometric recognition systems, such
as fingerprint recognition, iris recognition, and face recognition. They
can learn unique patterns and features from biometric data, facilitating
accurate and secure identification.
5. Gesture Recognition:
• Neural networks can be used to recognize gestures in applications like
sign language interpretation or human-computer interaction. They
learn to identify patterns associated with different gestures and
interpret them accordingly.
6. Medical Image Analysis:
• Neural networks are utilized in the analysis of medical images for tasks
like tumor detection, organ segmentation, and disease diagnosis. Deep
learning models, including convolutional neural networks, excel in
learning intricate patterns within medical images.
7. Natural Language Processing (NLP):
• In NLP applications, neural networks are employed for tasks such as
sentiment analysis, text classification, and named entity recognition.
Recurrent and transformer-based architectures can capture sequential
and contextual patterns in language data.
8. Financial Pattern Recognition:
• Neural networks are applied in financial markets for recognizing
patterns in stock prices, predicting market trends, and identifying
potential trading opportunities. They can learn from historical data to
make predictions about future market behavior.
9. Quality Control in Manufacturing:
• Neural networks are used in quality control processes to recognize
patterns associated with defects in manufactured products. They can
analyze sensor data and images to identify anomalies and ensure the
quality of the production process.

Pattern Recognition

Patterns are everywhere in this digital world. A pattern can either be seen
physically or observed mathematically by applying algorithms.

Example: The colors on the clothes, speech pattern, etc. In computer science,
a pattern is represented using vector feature values.
What is Pattern Recognition?

Pattern recognition is the process of recognizing patterns by using a machine
learning algorithm. Pattern recognition can be defined as the classification of
data based on knowledge already gained or on statistical information
extracted from patterns and/or their representation. One of the important
aspects of pattern recognition is its application potential.

Examples: Speech recognition, speaker identification, multimedia document
recognition (MDR), automatic medical diagnosis.
In a typical pattern recognition application, the raw data is processed and
converted into a form that is amenable for a machine to use. Pattern
recognition involves the classification and clustering of patterns.

• In classification, an appropriate class label is assigned to a pattern
based on an abstraction that is generated using a set of training
patterns or domain knowledge. Classification is used in supervised
learning.
• Clustering generates a partition of the data which helps decision
making, i.e. the specific decision-making activity of interest to us.
Clustering is used in unsupervised learning.

Features may be represented as continuous, discrete, or discrete binary
variables. A feature is a function of one or more measurements, computed so
that it quantifies some significant characteristics of the object.

Example: consider a face; then the eyes, ears, nose, etc. are features of the face.
A set of features taken together forms a feature vector.

Example: In the above example of a face, if all the features (eyes, ears, nose,
etc.) are taken together, then the sequence is a feature vector ([eyes, ears,
nose]). The feature vector is the sequence of features represented as a
d-dimensional column vector. In the case of speech, MFCCs (Mel-Frequency
Cepstral Coefficients) are spectral features of the speech, and the sequence of
the first 13 coefficients forms a feature vector.

Pattern recognition possesses the following features:

• A pattern recognition system should recognize familiar patterns quickly
and accurately
• Recognize and classify unfamiliar objects
• Accurately recognize shapes and objects from different angles
• Identify patterns and objects even when partly hidden
• Recognize patterns quickly, with ease, and with automaticity.

Training and Learning in Pattern Recognition


Learning is the process through which a system gets trained and becomes able
to give accurate results. Learning is the most important phase, because how
well the system performs on the data provided depends on which algorithms are
applied to that data. The entire dataset is divided into two parts: one used
for training the model, i.e. the training set, and the other used for testing
the model after training, i.e. the testing set.

• Training set:
The training set is used to build a model. It consists of the set of images
that are used to train the system. Training rules and algorithms are used
to give relevant information on how to associate input data with output
decisions. The system is trained by applying these algorithms to the
dataset, all the relevant information is extracted from the data, and
results are obtained. Generally, 80% of the data of the dataset is taken
for training data.
• Testing set:
Testing data is used to test the system. It is the set of data that is used
to verify whether the system is producing the correct output after being
trained or not. Generally, 20% of the data of the dataset is used for
testing. Testing data is used to measure the accuracy of the system. For
example, if a system that identifies which category a particular flower
belongs to classifies seven out of ten flowers correctly and the rest wrong,
then its accuracy is 70%.
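A minimal sketch of this 80/20 split and accuracy measurement with scikit-learn; the flower data here is randomly generated stand-in data, and the classifier choice (k-nearest neighbours) is arbitrary.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Hypothetical flower data: 100 samples, 4 measurements each, 7 categories.
X = np.random.rand(100, 4)
y = np.random.randint(0, 7, size=100)

# 80% of the data for training, 20% for testing, as described above.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = KNeighborsClassifier().fit(X_train, y_train)   # training phase
print(accuracy_score(y_test, clf.predict(X_test)))   # e.g. 0.7 means 70% accuracy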
Real-time Examples and Explanations:
A pattern is a physical object or an abstract notion. When talking about the
classes of animals, a description of an animal would be a pattern; when talking
about various types of balls, a description of a ball is a pattern. If balls are
considered as patterns, the classes could be football, cricket ball, table tennis
ball, etc. Given a new pattern, the class of the pattern is to be determined. The
choice of attributes and representation of patterns is a very important step in
pattern classification. A good representation is one that makes use of
discriminating attributes and also reduces the computational burden in pattern
classification.

An obvious representation of a pattern will be a vector. Each element of the
vector can represent one attribute of the pattern. The first element of the
vector will contain the value of the first attribute for the pattern being
considered.

Example: While representing spherical objects, (25, 1) may be represented as
a spherical object with 25 units of weight and 1 unit of diameter. The class
label can form a part of the vector. If spherical objects belong to class 1,
the vector would be (25, 1, 1), where the first element represents the weight
of the object, the second element the diameter of the object, and the third
element represents the class of the object.
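The same representation can be written directly as arrays. A tiny sketch with invented spherical objects follows, where the third element of each vector is the class label described above.

import numpy as np

# Each row is a pattern vector: [weight, diameter, class label] (invented values).
patterns = np.array([
    [25.0, 1.0, 1],   # class 1, e.g. a heavier, larger sphere
    [24.0, 0.9, 1],
    [ 2.5, 0.2, 2],   # class 2, e.g. a lighter, smaller sphere
])

features = patterns[:, :2]   # the feature vectors (weight, diameter)
labels = patterns[:, 2]      # the class labels stored in the third element
print(features.shape, labels)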

Advantages:
• Pattern recognition solves classification problems
• Pattern recognition solves the problem of fake biometric detection.
• It is useful for cloth pattern recognition for visually impaired people.
• It helps in speaker diarization.
• We can recognize particular objects from different angles.

Disadvantages:
• The syntactic pattern recognition approach is complex to implement
and it is a very slow process.
• Sometimes to get better accuracy, a larger dataset is required.
• It cannot explain why a particular object is recognized.
Example: my face vs my friend’s face.
Applications
• Image processing, segmentation, and analysis
Pattern recognition is efficient enough to give machines human-like
recognition intelligence. It is used for image processing,
segmentation, and analysis. For example, computers can detect
different types of insects better than humans.

• Computer Vision
Using a pattern recognition system, one can extract important features
from images and videos. This is helpful in computer vision, which is
applied in many different fields, especially biomedical imaging.

• Seismic Analysis
Decision-theoretic and syntactic pattern recognition techniques are
employed to detect the physical anomalies (bright spots) and to recognize
the structural seismic patterns in two-dimensional seismograms. Here,
decision-theoretic methods include Bayes classification, linear and quadratic
classification, tree classification, the partitioning method, and sequential
classification [5].

• Radar Signal Classification
Pattern recognition and signal processing methods are used on large datasets to
find similar characteristics such as amplitude, frequencies, type of modulation,
scanning type, pulse repetition intervals, etc. Basically, this helps classify
the radio signals, and based upon their class the conversion to digital form is
accomplished.

• Speech Recognition
All of us have heard the names Siri, Alexa, and Cortana. These are all
applications of speech recognition, and pattern recognition plays a huge part
in this technique.

• Fingerprint Identification
Many recognition approaches exist for fingerprint identification, but pattern
recognition systems are the most widely used approach.
• Medical Diagnosis
Pattern recognition algorithms deal with real data, and it has been found that
pattern recognition has a huge role in today’s medical diagnosis. From breast
cancer detection to COVID-19 screening, algorithms are giving results with more
than 90% accuracy.

• Stock Market Analysis
Patterns are everywhere, and nobody can ignore that. Though the stock market
is hard to predict, some AI-based applications use a pattern recognition
approach to predict the market. Examples: Blumberg, Tinkoff, SofiWealth, and
Kosho.

What is speech recognition?

Speech recognition, or speech-to-text, is the ability of a machine or program to identify words
spoken aloud and convert them into readable text. Rudimentary speech recognition software has
a limited vocabulary and may only identify words and phrases when spoken clearly. More
sophisticated software can handle natural speech, different accents and various languages.

Speech recognition uses a broad array of research in computer science, linguistics and
computer engineering. Many modern devices and text-focused programs have speech
recognition functions in them to allow for easier or hands-free use of a device.

Speech recognition and voice recognition are two different technologies and should not be
confused:

• Speech recognition is used to identify words in spoken language.


• Voice recognition is a biometric technology for identifying an individual's voice.

How does speech recognition work?


Speech recognition systems use computer algorithms to process and interpret spoken words
and convert them into text. A software program turns the sound a microphone records into
written language that computers and humans can understand, following these four steps:

1. analyze the audio;
2. break it into parts;
3. digitize it into a computer-readable format; and
4. use an algorithm to match it to the most suitable text representation.
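These four steps can be run end to end with the third-party SpeechRecognition package for Python (installed separately with pip). This is only a sketch: the file name "meeting.wav" and the use of Google's free web recognizer are assumptions.

import speech_recognition as sr

recognizer = sr.Recognizer()

# Steps 1-3: read the recording, analyze it, and digitize it into an AudioData object.
with sr.AudioFile("meeting.wav") as source:
    audio = recognizer.record(source)

# Step 4: match the digitized audio to the most suitable text representation.
text = recognizer.recognize_google(audio)
print(text)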

Speech recognition software must adapt to the highly variable and context-specific nature of
human speech. The software algorithms that process and organize audio into text are trained
on different speech patterns, speaking styles, languages, dialects, accents and phrasings. The
software also separates spoken audio from background noise that often accompanies the
signal.

To meet these requirements, speech recognition systems use two types of models:

• Acoustic models. These represent the relationship between linguistic units of speech and
audio signals.
• Language models. Here, sounds are matched with word sequences to distinguish between
words that sound similar.

What applications is speech recognition used for?

Speech recognition systems have quite a few applications. Here is a sampling of them.

Mobile devices. Smartphones use voice commands for call routing, speech-to-text
processing, voice dialing and voice search. Users can respond to a text without looking at
their devices. On Apple iPhones, speech recognition powers the keyboard and Siri, the virtual
assistant. Functionality is available in secondary languages, too. Speech recognition can also
be found in word processing applications like Microsoft Word, where users can dictate words
to be turned into text.

Education. Speech recognition software is used in language instruction. The
software hears the user's speech and offers help with pronunciation.

Customer service. Automated voice assistants listen to customer queries and
provide helpful resources.

Healthcare applications. Doctors can use speech recognition software to
transcribe notes in real time into healthcare records.

Disability assistance. Speech recognition software can translate spoken
words into text using closed captions to enable a person with hearing loss to
understand what others are saying. Speech recognition can also enable those
with limited use of their hands to work with computers, using voice commands
instead of typing.

Court reporting. Software can be used to transcribe courtroom proceedings,
precluding the need for human transcribers.

Emotion recognition. This technology can analyze certain vocal
characteristics to determine what emotion the speaker is feeling. Paired with
sentiment analysis, this can reveal how someone feels about a product or
service.

What are the features of speech recognition systems?


Good speech recognition programs let users customize them to their needs. The features that
enable this include:

• Language weighting. This feature tells the algorithm to give special attention to certain
words, such as those spoken frequently or that are unique to the conversation or subject. For
example, the software can be trained to listen for specific product references.
• Acoustic training. The software tunes out ambient noise that pollutes spoken audio.
Software programs with acoustic training can distinguish speaking style, pace and volume
amid the din of many people speaking in an office.
• Speaker labeling. This capability enables a program to label individual participants and
identify their specific contributions to a conversation.
• Profanity filtering. Here, the software filters out undesirable words and language.

What are the different speech recognition algorithms?


The power behind speech recognition features comes from a set of algorithms and
technologies. They include the following:

• Hidden Markov model. HMMs are used in autonomous systems where a state is partially
observable or when all of the information necessary to make a decision is not immediately
available to the sensor (in speech recognition's case, a microphone). An example of this is in
acoustic modeling, where a program must match linguistic units to audio signals using
statistical probability.
• Natural language processing. NLP eases and accelerates the speech recognition process.
• N-grams. This simple approach to language models creates a probability distribution for a
sequence. An example would be an algorithm that looks at the last few words spoken,
approximates the history of the sample of speech and uses that to determine the probability
of the next word or phrase that will be spoken (a minimal bigram sketch follows this list).
• Artificial intelligence. AI and machine learning methods like deep learning and neural
networks are common in advanced speech recognition software. These systems use
grammar, structure, syntax and composition of audio and voice signals to process speech.
Machine learning systems gain knowledge with each use, making them well suited for
nuances like accents.
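As promised above, here is a minimal bigram language model in plain Python. The toy corpus is invented, and real systems use far larger corpora plus smoothing, but the idea of estimating the probability of the next word from the previous one is the same.

from collections import Counter

corpus = "the cat sat on the mat the cat ran".split()   # toy corpus (assumption)

bigrams = Counter(zip(corpus, corpus[1:]))   # counts of (previous word, next word)
contexts = Counter(corpus[:-1])              # counts of each previous word

def next_word_prob(prev, word):
    # P(word | prev) estimated from relative bigram frequencies.
    return bigrams[(prev, word)] / contexts[prev] if contexts[prev] else 0.0

print(next_word_prob("the", "cat"))   # probability that "cat" follows "the"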

What are the advantages of speech recognition?


There are several advantages to using speech recognition software, including the following:

• Machine-to-human communication. The technology enables electronic devices to
communicate with humans in natural language or conversational speech.
• Readily accessible. This software is frequently installed in computers and mobile devices,
making it accessible.
• Easy to use. Well-designed software is straightforward to operate and often runs in the
background.
• Continuous, automatic improvement. Speech recognition systems that incorporate AI
become more effective and easier to use over time. As systems complete speech recognition
tasks, they generate more data about human speech and get better at what they do.

What are the disadvantages of speech recognition?


While convenient, speech recognition technology still has a few issues to work through.
Limitations include:

• Inconsistent performance. The systems may be unable to capture words accurately because
of variations in pronunciation, lack of support for some languages and inability to sort
through background noise. Ambient noise can be especially challenging. Acoustic training
can help filter it out, but these programs aren't perfect. Sometimes it's impossible to isolate
the human voice.
• Speed. Some speech recognition programs take time to deploy and master. The speech
processing may feel relatively slow.
• Source file issues. Speech recognition success depends on the recording equipment used,
not just the software.

Computer Vision Introduction


Computer vision is a subfield of artificial intelligence that deals with acquiring, processing,
analyzing, and making sense of visual data such as digital images and videos. It is one of
the most compelling types of artificial intelligence that we regularly implement in our daily
routines.

Computer vision helps us understand the complexity of the human vision system and trains
computer systems to interpret and gain a high-level understanding of digital images or videos.
In the early days, developing a machine with human-like visual intelligence was just a dream,
but with the advancement of artificial intelligence and machine learning it became possible.
Intelligent systems have now been developed that can "see" and interpret the world around
them, similar to human eyes. The fiction of yesterday has become the fact of today.

Computer vision is one of the most important fields of artificial intelligence (AI) and
computer science engineering that makes computer systems capable of extracting
meaningful information from visual data like videos and images. Further, it also helps to
take appropriate actions and make recommendations based on the extracted information.

Further, Artificial intelligence is the branch of computer science that primarily deals with
creating a smart and intelligent system that can behave and think like the human brain. So, we
can say if artificial intelligence enables computer systems to think intelligently, computer
vision makes them capable of seeing, analyzing, and understanding.

History of Computer Vision

How does Computer Vision Work?


Computer vision is a technique that extracts information from visual data, such as images and
videos. Although computer vision works similarly to human eyes with brain work, this is
probably one of the biggest open questions for IT professionals: How does the human brain
operate and solve visual object recognition?

On a certain level, computer vision is all about pattern recognition which includes the
training process of machine systems for understanding the visual data such as images and
videos, etc.

First, a vast amount of labeled visual data is provided to the machine to train it. This
labeled data enables the machine to analyze the different patterns in the data points and
relate them to the labels. For example, suppose we provide visual data of millions of dog
images. The computer learns from this data, analyzes each photo, the shapes, the distance
between shapes, the colors, etc., identifies the patterns that are typical of dogs, and
generates a model. As a result, this computer vision model can accurately detect whether a
new input image contains a dog or not.
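A hedged sketch of that idea using a network that has already been trained on millions of labeled images (the MobileNetV2 model shipped with Keras); the file name "dog.jpg" is a hypothetical input, and a recent TensorFlow installation is assumed.

import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

model = MobileNetV2(weights="imagenet")   # model pre-trained on labeled images

# Load and prepare the input image at the size the network expects.
img = tf.keras.utils.load_img("dog.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))

# The model outputs class probabilities; decode the top guesses.
preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])   # e.g. dog breeds with confidence scores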

Tasks Associated with Computer Vision


Although computer vision has been utilized in so many fields, there are a few common tasks
for computer vision systems. These tasks are given below:
• Object classification: Object classification is a computer vision technique/task used to
classify an image, such as whether an image contains a dog, a person's face, or a banana. It
analyzes the visual content (videos & images) and classifies the object into the defined
category. It means that we can accurately predict the class of an object present in an image
with image classification.
• Object Identification/detection: Object identification or detection uses image classification
to identify and locate the objects in an image or video. With such detection and
identification technique, the system can count objects in a given image or scene and
determine their accurate location and labeling. For example, in a given image, one dog, one
cat, and one duck can be easily detected and classified using the object detection technique.
• Object Verification: The system processes videos, finds the objects based on search criteria,
and tracks their movement.
• Object Landmark Detection: The system defines the key points for the given object in the
image data.
• Image Segmentation: Image segmentation does not just detect the classes in an image as image
classification does; it classifies each pixel of an image to specify which object it belongs
to. It tries to determine the role of each pixel in the image.
• Object Recognition: In this, the system recognizes the object's location with respect to the
image.

How to learn computer Vision?


Computer vision builds on the basic concepts of machine learning, deep learning, and
artificial intelligence. If you are eager to learn computer vision, you should work through
the following steps:

1. Build your foundation:


o Before entering this field, you must have strong knowledge of advanced
mathematical concepts such as Probability, statistics, linear algebra, calculus, etc.
o The knowledge of programming languages like Python would be an extra advantage
to getting started with this domain.
2. Digital Image Processing:
It would be best if you understood image editing tools and their functions, such as histogram
equalization, median filtering, etc. Further, you should also know about compressing images
and videos using JPEG and MPEG files. Once you know the basics of image processing and
restoration, you can kick-start your journey into this domain.
3. Machine learning understanding
To enter this domain, you must deeply understand basic machine learning concepts such
as CNN, neural networks, SVM, recurrent neural networks, generative adversarial neural
networks, etc.
4. Basic computer vision: This is the step where you need to decipher the mathematical models
used in visual data formulation.

These are a few important prerequisites that are essentially required to start your career in
computer vision technology. Once you are prepared with the above prerequisites, you can
easily start learning and make a career in Computer vision.

Applications of computer vision


Computer vision is one of the most advanced innovations of artificial intelligence and
machine learning. As per the increasing demand for AI and Machine Learning technologies,
computer vision has also become a center of attraction among different sectors. It greatly
impacts different industries, including retail, security, healthcare, automotive, agriculture, etc.

1. Image and Video Analysis:


- Object Recognition and Classification: Identifying and classifying
objects within images or video frames.
- Image Segmentation: Dividing an image into segments to understand
and analyze its content.

2. Augmented Reality (AR) and Virtual Reality (VR):


- Object Recognition in AR: Recognizing real-world objects for overlaying
digital information in augmented reality applications.
- Immersive Experiences: Enhancing virtual reality experiences through
real-time image and gesture recognition.

3. Robotics:
- Object Manipulation: Enabling robots to recognize and manipulate
objects based on visual input.
- Navigation: Providing robots with the ability to navigate and
understand their environment using visual information.

4. Security and Surveillance:


- Facial Recognition: Identifying and verifying individuals based on facial
features.
- Anomaly Detection: Detecting unusual activities or behaviors in
surveillance footage.

5. Agriculture:
- Crop Monitoring: Analyzing images to assess the health of crops and
detect diseases or pests.
- Harvesting Robots: Enabling robots to identify and harvest crops using
computer vision.

6. Entertainment:
- Gesture Recognition: Interacting with devices and games through
gestures captured by cameras.
- Content Tagging: Automatically tagging and categorizing multimedia
content based on visual features.

7. Healthcare:
- Biometric Authentication: Using facial or iris recognition for secure
access to medical records.
- Rehabilitation: Assisting in rehabilitation exercises by providing real-
time feedback based on visual analysis.

8. Environmental Monitoring:
- Wildlife Conservation: Monitoring wildlife populations and behaviors
through camera traps.
- Climate Analysis: Analyzing satellite imagery for climate and
environmental studies.
COMPUTER DEVICES
Characterized by the relationships between deep neural network instances (I)
and compute devices (D), DL computation paradigms can be classified into
three new categories beyond single instance single device (SISD), namely
multi-instance single device (MISD), single-instance multi-device (SIMD), and
multi-instance multi-device (MIMD), as shown in Figure 1.

Single Instance Single Device (SISD)


SISD focuses on single-model performance. This is the usual, traditional
approach to optimizing DL systems: it improves the model’s end-to-end
performance (e.g., latency) on the target hardware device. This includes
optimization at every level, from the compiler level to the algorithm level. [1]

Multi Instance Single Device (MISD)


MISD co-locates multiple DNN instances on one high-performance machine.
Optimization happens mainly through compute scaling, i.e. better hardware. This
is largely an optimization of cost efficiency: the best hardware costs a lot of
money, but it can potentially handle multiple DNNs at once. The goals of MISD are
to enhance serving throughput on the shared hardware and to reduce power and
infrastructure costs. MISD can be optimized by workload scheduling, which avoids
interference between jobs from different DNN instances. [1]

Multi Instance Multi Device (MIMD)


An example of MIMD would be an architecture where service routers are used
to route the inference requests of multiple models to multiple devices and
manage computation via job scheduling. This mainly relies on data center
management for optimal infrastructure. MIMD is still a rather under-investigated
approach, so there is only limited public work available. [1]

Single Instance Multi Device (SIMD)


SIMD optimization includes model parallelism, data parallelism and pipeline
parallelism. Since ultra-large model sizes have been shown to have a large impact
on model performance, models with billions of parameters have become valuable for
state-of-the-art industrial systems. This scaling of data volume and computational
complexity brings current hardware to its limits, so the idea is to distribute
models and data across multiple devices. In 2012, Google researchers published a
paper on Large Scale Distributed Deep Networks, which featured their new DL
framework DistBelief, based on model parallelism and data parallelism. [1]
Model parallelism

Figure 2: Model distribution [2]

A data scientist defines the layers of a neural network with feedforward and
possibly backward connections. For large models, this network may be partitioned
across multiple machines, as shown in Figure 2. A framework that supports model
parallelism automatically parallelizes the computations on each machine using the
CPU and GPU of that machine. Google's DistBelief also manages communication,
synchronization, and data transfer between the machines during both the training
and inference phases.

The number of machines across which to distribute the model for better performance
depends on the complexity of the model and the number of parameters. DistBelief
runs models with a very large number of parameters, with up to 144 partitions.

Problem: The typical cause of less-than-ideal speedups is variance in
processing times across the different machines, leading to many machines
waiting for the single slowest machine to finish a given phase of computation.
[2]

Data parallelism

Figure 3: Data parallelism [2]


To optimize the training of parallelized models, Figure 3 describes two
procedures that not only parallelize the computation within a single instance
of a model, but also distribute the training across multiple replicas of the
model. [2]

Figure 3, Left: Downpour SGD (stochastic gradient descent):
Downpour SGD is an online learning method. Plain SGD is inherently sequential
and therefore impractical for very large datasets. Downpour SGD is a variant of
asynchronous stochastic gradient descent that uses multiple replicas of a single
DistBelief model. The training data is divided into multiple data shards, each
of which is fed into one replica of the model. The model replicas share one
parameter server, which holds the current state of the best model parameters. [2]

Figure 3, Right: Sandblaster L-BFGS (Limited-memory Broyden–Fletcher–Goldfarb–Shanno):
Sandblaster L-BFGS is a batch learning method. While standard batch learning
methods work well on smaller models and datasets, Sandblaster optimizes batch
learning for large deep networks. It also uses multiple model replicas that
share one parameter store. A coordinator issues operations that can be computed
by each parameter server shard (batch) independently. The data is stored in one
place but distributed to multiple machines, each responsible for computing the
parameters for a subset of the data.
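DistBelief was an internal Google system, but the data-parallel idea it introduced lives on in today's frameworks. Below is a hedged sketch of synchronous data parallelism using TensorFlow's MirroredStrategy, which replicates one model across the local GPUs and splits each batch between them; the model and the random data are placeholders for illustration.

import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()   # one replica per local GPU (or CPU)
print("Number of replicas:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Model variables are mirrored on every replica.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Each batch is automatically split across the replicas (data parallelism).
x = np.random.randn(1024, 20).astype("float32")
y = np.random.randn(1024, 1).astype("float32")
model.fit(x, y, epochs=2, batch_size=256, verbose=0)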
Linear factor models
Linear factor models are more commonly associated with traditional
statistical methods and factor analysis, rather than deep learning. Deep
learning models are typically non-linear and involve neural networks with
multiple layers, allowing them to learn complex patterns and
representations from data.

However, it's possible to draw connections between linear factor models
and some aspects of deep learning, particularly in the context of neural
networks. Here's a perspective on how linear factor models relate to
certain concepts in deep learning:

1. Linear Layers in Neural Networks:


- While deep learning models are generally non-linear, they often
include linear operations in the form of linear layers (also known as fully
connected or dense layers). These layers apply linear transformations to
the input data using weight matrices.
- The output of a linear layer is a linear combination of the input
features, similar to the structure of linear factor models.

2. Embeddings:
- In natural language processing and recommendation systems,
embeddings are often used to represent categorical variables.
Embeddings can be seen as a form of factorization, capturing latent
factors in the data. While the transformation may be non-linear, the idea
of capturing underlying factors is similar.
3. Principal Component Analysis (PCA) as a Linear Factor Model:
- PCA is a linear technique often used for dimensionality reduction. In
the context of deep learning, autoencoders (a type of neural network) can
be seen as non-linear extensions of PCA. Both methods aim to capture
the most important features or factors in the data (a minimal comparison
sketch follows this list).

4. Interpretability:
- Linear factor models are known for their interpretability, as the factor
loadings directly indicate the contribution of each variable to the common
factors. In deep learning, interpretability is often a challenge due to the
complex and non-linear nature of the models. Techniques like attention
mechanisms are introduced to enhance interpretability.
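As mentioned under the PCA point above, here is a minimal comparison sketch: PCA from scikit-learn next to a purely linear autoencoder in Keras, both reducing the same synthetic data to three factors. The data, layer sizes, and training settings are arbitrary assumptions.

import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA

X = np.random.randn(1000, 20).astype("float32")   # synthetic data, 20 variables

# Linear factor model via PCA: 3 principal components.
X_pca = PCA(n_components=3).fit_transform(X)

# Linear autoencoder: a 3-unit linear bottleneck, no activation functions.
inputs = tf.keras.Input(shape=(20,))
code = tf.keras.layers.Dense(3, use_bias=False)(inputs)    # linear "factors"
recon = tf.keras.layers.Dense(20, use_bias=False)(code)    # linear reconstruction
linear_ae = tf.keras.Model(inputs, recon)
linear_ae.compile(optimizer="adam", loss="mse")
linear_ae.fit(X, X, epochs=20, batch_size=64, verbose=0)

# Both approaches compress the data into 3 latent factors.
print(X_pca.shape, linear_ae.predict(X, verbose=0).shape)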

It's important to note that the primary strength of deep learning lies in its
ability to model highly complex and non-linear relationships in data,
which linear factor models might struggle to capture. While some
connections exist, deep learning models are generally more powerful and
flexible, often making them the preferred choice for tasks involving
intricate patterns and representations.

Large-scale deep learning


Large-scale deep learning refers to the application of deep learning
techniques to massive datasets, often involving extensive computational
resources distributed across multiple devices or servers. This approach is
necessary when dealing with datasets that are too large to fit into the
memory of a single machine or when training deep neural networks with
a vast number of parameters. Large-scale deep learning is commonly
employed in various domains, including computer vision, natural language
processing, speech recognition, and reinforcement learning. Here are key
aspects associated with large-scale deep learning:
1. Massive Datasets:
- Large-scale deep learning typically involves training models on datasets
that are extensive, possibly ranging from terabytes to petabytes of data.
These datasets often require distributed storage and processing
frameworks to handle the sheer volume of information.

2. Distributed Computing:
- To efficiently process large datasets and train complex models,
distributed computing frameworks are often used. Technologies like
Apache Spark, TensorFlow's distributed computing capabilities, and
Apache Hadoop facilitate parallel processing across multiple machines or
clusters.

3. Parallelism and GPU Acceleration:


- Deep learning frameworks leverage parallelism to accelerate training
on graphics processing units (GPUs) or tensor processing units (TPUs).
These devices are particularly well-suited for the matrix and tensor
operations that are fundamental to deep learning.

4. Model Parallelism and Data Parallelism:


- In large-scale deep learning, model parallelism involves distributing
different parts of a neural network across different devices or servers.
Data parallelism, on the other hand, involves distributing batches of data
across multiple devices for parallel training.

5. Distributed Training:
- Large-scale models are often trained using distributed training
methods, where different portions of the model or subsets of the data are
processed simultaneously across multiple devices or nodes. This helps
reduce training time significantly.
6. Batch and Stochastic Gradient Descent:
- Batch gradient descent involves updating model parameters based on
the average gradient computed over the entire dataset, while stochastic
gradient descent (SGD) updates parameters based on a single or a few
random samples. Large-scale deep learning often employs SGD and its
variants due to their scalability and efficiency.

7. Model Compression:
- Given the large size of models, model compression techniques are
often applied to reduce the memory and computation requirements,
making them more feasible for deployment on resource-constrained
devices.

8. Transfer Learning and Pre-trained Models:


- Large-scale deep learning often benefits from transfer learning, where
models pre-trained on massive datasets (e.g., ImageNet) are fine-tuned
for specific tasks. This approach leverages the knowledge gained from the
large pre-training dataset (a minimal fine-tuning sketch follows this list).

9. Scalability Challenges:
- Managing the scalability of infrastructure, handling communication
overhead, and ensuring efficient data distribution are challenges in large-
scale deep learning that require careful consideration.
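As noted in the transfer-learning point above, the sketch below fine-tunes a model pre-trained on ImageNet for a new task; the number of classes, the image size, and the training dataset are placeholder assumptions, and only the new head is trained while the pre-trained base stays frozen.

import tensorflow as tf

# Pre-trained base network (ImageNet weights) used as a frozen feature extractor.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet", pooling="avg")
base.trainable = False

# New classification head for a hypothetical 5-class task.
inputs = tf.keras.Input(shape=(224, 224, 3))
features = base(inputs, training=False)
outputs = tf.keras.layers.Dense(5, activation="softmax")(features)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_dataset, epochs=5)   # train_dataset is a placeholder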

Applications of large-scale deep learning span various industries, including


healthcare, finance, autonomous vehicles, natural language processing,
and more. The ability to process and learn from massive datasets has
contributed to significant advancements in the performance of deep
learning models across diverse domains.
Applications of Neural Networks in Pattern Recognition

1. Image and Object Recognition:


Application: Identifying objects or patterns within images.
Use Case: Image classification, object detection, and image segmentation
in fields like computer vision, autonomous vehicles, and surveillance.
2. Speech Recognition:
Application: Converting spoken language into text.
Use Case: Virtual assistants (e.g., Siri, Google Assistant), transcription
services, voice-activated systems, and voice-controlled devices.
3. Handwriting Recognition:
Application: Recognizing and interpreting handwritten characters.
Use Case: Optical character recognition (OCR), digitizing handwritten
documents, and automated form processing.
4. Face Recognition:
Application: Identifying and verifying individuals based on facial features.
Use Case: Security systems, access control, surveillance, and user
authentication on devices.
5. Biometric Recognition:
Application: Identifying individuals based on unique biological or
behavioral traits.
Use Case: Fingerprint recognition, iris scanning, and gait analysis for
secure access and identification.
6. Medical Image Analysis:
Application: Analyzing medical images for diagnosis and treatment
planning.
Use Case: Detecting tumors in radiology images, segmenting organs, and
predicting disease outcomes.
7. Gesture Recognition:
Application: Interpreting hand or body movements as commands.
Use Case: Human-computer interaction, sign language interpretation, and
virtual reality applications.
8. Financial Fraud Detection:
Application: Identifying fraudulent activities in financial transactions.
Use Case: Credit card fraud detection, anomaly detection in banking
transactions, and anti-money laundering.
9. Natural Language Processing (NLP):
Application: Understanding and processing human language.
Use Case: Sentiment analysis, machine translation, chatbots, and
language understanding in search engines.
10. Recommendation Systems:
Application: Predicting and suggesting items of interest to users.
Use Case: Movie recommendations, personalized product suggestions in
e-commerce, and content recommendations in streaming platforms.
11. Cybersecurity:
Application: Detecting and preventing cyber threats.
Use Case: Intrusion detection systems, malware detection, and anomaly
detection in network traffic.
12. Quality Control in Manufacturing:
Application: Ensuring product quality and detecting defects.
Use Case: Automated visual inspection, defect detection in manufacturing
processes, and quality assurance.
13. Emotion Recognition:
Application: Analyzing facial expressions or physiological signals to infer
emotions.
Use Case: Human-computer interaction, customer sentiment analysis, and
mental health applications.
14. Game AI:
Application: Creating intelligent opponents or characters in video games.
Use Case: Adaptive game difficulty, character behavior modeling, and
player experience enhancement.
