LSTM
Discover the power of Long Short Term Memory (LSTM) and how it
revolutionizes sequential data processing. Unlock the potential of
artificial neural networks.
by Abhishek kr
What is Long Short Term Memory (LSTM)?
Long Short-Term Memory (LSTM) networks are deep learning, sequential neural networks that allow information to persist. An LSTM is a special type of Recurrent Neural Network that is capable of handling the vanishing gradient problem faced by RNNs. LSTM was designed by Hochreiter and Schmidhuber to resolve the problems that traditional RNNs and other machine learning algorithms run into with long sequences. LSTMs can be implemented in Python using the Keras library, as in the brief sketch below.
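As a minimal sketch (the layer sizes and data shapes here are illustrative assumptions, not part of the original), an LSTM classifier can be defined and trained in Keras like this:

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy data: 100 sequences, 10 timesteps each, 8 features per timestep (made-up shapes)
x = np.random.random((100, 10, 8)).astype("float32")
y = np.random.randint(0, 2, size=(100, 1))

model = keras.Sequential([
    layers.Input(shape=(10, 8)),            # (timesteps, features)
    layers.LSTM(32),                         # 32 LSTM units; returns the final hidden state
    layers.Dense(1, activation="sigmoid"),   # binary prediction head
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=16)

The LSTM layer consumes the whole sequence and hands its final hidden state to the dense layer, which is the usual setup for sequence classification.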
Let’s say while watching a video you remember the previous scene, or while reading a book you know what happened in the earlier chapters. RNNs work similarly: they remember the previous information and use it for processing the current input. The shortcoming of RNNs is that they cannot remember long-term dependencies because of the vanishing gradient. LSTMs are explicitly designed to avoid this long-term dependency problem.
LSTM (Long Short-Term Memory) is a recurrent neural network (RNN) architecture widely used in
Deep Learning. It excels at capturing long-term dependencies, making it ideal for sequence
prediction tasks.
Unlike traditional feedforward neural networks, LSTM incorporates feedback connections, allowing it to process entire sequences of data, not just individual data points. This makes it highly effective in understanding and predicting patterns in sequential data like time series, text, and speech.
Architecture and Structure of LSTMs
1. Forget Gate: Decides which information to discard from the cell state.
2. Input Gate: Determines which new information to update the cell state with.
3. Output Gate: Controls the output of the LSTM cell.
Forget Gate
In a cell of the LSTM network, the first step is to decide whether we should keep the information from the previous timestamp or forget it. Here is the equation for the forget gate.
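In standard notation, with Uf and Wf as the forget gate’s weight matrices and bf as its bias:

ft = σ(xt · Uf + ht-1 · Wf + bf)

Here, xt is the input at the current timestamp, ht-1 is the hidden state of the previous timestamp, and σ is the sigmoid function, so ft lies between 0 and 1. Multiplying ft element-wise with the previous cell state decides how much of the old information is kept: values near 0 forget, values near 1 retain.

Input Gate

Now consider these two sentences.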
“Bob knows swimming. He told me over the phone that he had served the navy for four long years.”
So, in both these sentences, we are talking about Bob. However, they give different kinds of information about Bob. The first sentence tells us that he knows swimming, whereas the second tells us that he used the phone and served in the navy for four years.
Now just think about it: based on the context given in the first sentence, which information in the second sentence is critical, the fact that he told us over the phone, or the fact that he served in the navy? In this context, it doesn’t matter whether he used the phone or any other medium of communication to pass on the information. What matters is that he was in the navy, and this is something we want our model to remember for future computation. Deciding how much of the incoming information to add to the cell state is the task of the Input gate.
Here is the equation of the Input gate, which again uses a sigmoid activation:
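it = σ(xt · Ui + ht-1 · Wi + bi)

Ui and Wi are the input gate’s weight matrices and bi is its bias, in the same notation as the forget gate. The value of it lies between 0 and 1, and it controls how much of the new candidate information is written into the cell state.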
New Information
The new information that needs to be passed to the cell state is a function of the hidden state at the previous timestamp t-1 and the input x at timestamp t. The activation function here is tanh, so the value of the new information Nt will be between -1 and 1. If the value of Nt is negative, the information is subtracted from the cell state, and if it is positive, the information is added to the cell state at the current timestamp.
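In the same notation as above:

Nt = tanh(xt · Uc + ht-1 · Wc + bc)

Uc and Wc are the weight matrices for the new (candidate) information and bc is the corresponding bias.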
However, the Nt won’t be added directly to the cell state. Here comes the updated equation:
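Ct = ft · Ct-1 + it · Nt

Both products are element-wise: the forget gate scales the old cell state, and the input gate scales the new information before they are added together.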
Here, Ct-1 is the cell state at the previous timestamp, and the others are the values we have calculated previously.
Output Gate
Now consider this sentence.
“Bob single-handedly fought the enemy and died for his country. For his contributions, brave______.”
During this task, we have to complete the second sentence. The minute we see the word brave, we know that we are talking about a person. In the sentence, only Bob is brave; we cannot say the enemy is brave or the country is brave. So, based on the current expectation, we have to give a relevant word to fill in the blank. That word is our output, and this is the function of the Output gate.
Here is the equation of the Output gate, which is pretty similar to the two previous gates.
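Ot = σ(xt · Uo + ht-1 · Wo + bo)

Uo and Wo are the output gate’s weight matrices and bo is its bias, again in the same notation as the other gates.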
Its value will also lie between 0 and 1 because of the sigmoid function. To calculate the current hidden state, we use Ot and the tanh of the updated cell state, as shown below.
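Ht = Ot · tanh(Ct)

The product is element-wise, so the output gate decides how much of the (squashed) cell state is exposed as the hidden state.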
It turns out that the hidden state is a function of the long-term memory (Ct) and the current output. If you need the output of the current timestamp, just apply the softmax activation on the hidden state Ht (after a linear projection to the vocabulary or class scores).
Here, the token with the maximum score in the output is the prediction.
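Putting the gate equations together, here is a minimal NumPy sketch of a single LSTM timestep using the notation above (the parameter dictionary, shapes, and initialization are assumptions of this sketch):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    # Gates: each is a sigmoid of (current input, previous hidden state) with its own weights.
    f_t = sigmoid(x_t @ p["Uf"] + h_prev @ p["Wf"] + p["bf"])   # forget gate
    i_t = sigmoid(x_t @ p["Ui"] + h_prev @ p["Wi"] + p["bi"])   # input gate
    o_t = sigmoid(x_t @ p["Uo"] + h_prev @ p["Wo"] + p["bo"])   # output gate
    # New candidate information, squashed to (-1, 1) by tanh.
    n_t = np.tanh(x_t @ p["Uc"] + h_prev @ p["Wc"] + p["bc"])
    # Cell state update: forget part of the old state, add the gated new information.
    c_t = f_t * c_prev + i_t * n_t
    # Hidden state: output gate applied to the squashed cell state.
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t

# Example with made-up sizes: input dim 8, hidden dim 32.
rng = np.random.default_rng(0)
shapes = {"U": (8, 32), "W": (32, 32), "b": (32,)}
p = {k + g: rng.standard_normal(shapes[k]) * 0.1 for g in "fioc" for k in ("U", "W", "b")}
h, c = lstm_step(rng.standard_normal(8), np.zeros(32), np.zeros(32), p)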
LSTM vs RNN
A standard RNN repeats a single layer over the sequence and, because of the vanishing gradient, struggles to carry information across long sequences. An LSTM adds a gated cell state, with forget, input, and output gates, that lets relevant information persist across many timestamps, at the cost of more parameters and computation per step.
What are Bidirectional LSTMs?
Bidirectional LSTMs (Long Short-Term Memory) are a type of recurrent neural network (RNN)
architecture that processes input data in both forward and backward directions. In a traditional LSTM, the
information flows only from past to future, making predictions based on the preceding context. However,
in bidirectional LSTMs, the network also considers future context, enabling it to capture dependencies in
both directions.
The bidirectional LSTM comprises two LSTM layers, one processing the input sequence in the forward
direction and the other in the backward direction. This allows the network to access information from past
and future time steps simultaneously. As a result, bidirectional LSTMs are particularly useful for tasks that
require a comprehensive understanding of the input sequence, such as natural language processing
tasks like sentiment analysis, machine translation, and named entity recognition.
By incorporating information from both directions, bidirectional LSTMs enhance the model’s ability to
capture long-term dependencies and make more accurate predictions in complex sequential data.
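As a minimal sketch, a bidirectional LSTM can be built in Keras by wrapping an LSTM layer in the Bidirectional wrapper (the sizes here are illustrative assumptions):

from tensorflow import keras
from tensorflow.keras import layers

# One LSTM copy reads the sequence forward, another reads it backward,
# and their final hidden states are concatenated (32 units per direction -> 64-dim output).
model = keras.Sequential([
    layers.Input(shape=(None, 8)),           # variable-length sequences, 8 features per step
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dense(1, activation="sigmoid"),
])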
Applications of LSTMs
LSTMs are widely applied to sequential data such as time series forecasting, text, and speech, including natural language processing tasks like sentiment analysis, machine translation, and named entity recognition, as well as speech recognition.
Advantages
Can handle long-term dependencies, process variable-length sequences, and are effective in
capturing temporal patterns.
Limitations
Can be computationally expensive, require significant training data, and are susceptible to
overfitting.
Recent Research and Developments in LSTMs
1. Attention Mechanism: Improves the model's ability to focus on relevant information.
2. Peephole Connections: Enhance the LSTM's capability to access the cell state.
3. Improved Regularization Techniques: Address the issues of overfitting and training instability (see the sketch below).
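As a small illustration of the regularization point above, Keras exposes dropout on both the inputs and the recurrent connections of an LSTM layer (the rates here are arbitrary choices):

from tensorflow.keras import layers

# dropout applies to the layer inputs, recurrent_dropout to the hidden-to-hidden connections.
regularized_lstm = layers.LSTM(32, dropout=0.2, recurrent_dropout=0.2)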
Conclusion and Key Takeaways
Long Short Term Memory (LSTM) networks have revolutionized sequential data
processing, enabling applications in natural language processing, speech
recognition, and time series prediction. Understanding the architecture, working
mechanism, and recent developments is crucial to harnessing their power.
THANK YOU