# Deep Learning Notes
1. **Neural Networks**:
- A **neural network** is a model built from layers of interconnected nodes
("neurons"); each connection carries a weight that is adjusted during training
so the network learns to map inputs to desired outputs.
2. **Layers**:
- **Input Layer**: The first layer, where data (such as images or text) is fed
into the network.
- **Hidden Layers**: The intermediate layers between input and output, where
the network learns increasingly abstract features of the data.
- **Output Layer**: The final layer, which produces the result, such as a
classification label or a predicted value.
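A forward pass through these layers can be sketched in a few lines of NumPy
(the layer sizes below are arbitrary, chosen only for illustration):

```python
import numpy as np

# Hypothetical layer sizes: 4 inputs, 5 hidden units, 3 outputs.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 5)), np.zeros(5)   # input -> hidden weights
W2, b2 = rng.normal(size=(5, 3)), np.zeros(3)   # hidden -> output weights

def forward(x):
    h = np.maximum(0, x @ W1 + b1)  # hidden layer with ReLU activation
    return h @ W2 + b2              # output layer (raw scores / logits)

x = rng.normal(size=4)              # one input example
print(forward(x).shape)             # -> (3,)
```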
3. **Activation Function**:
- An **activation function** (e.g., ReLU, sigmoid, tanh) introduces
non-linearity after each layer, letting the network model complex patterns;
without it, stacked layers would collapse into a single linear transformation.
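As a small illustration, two widely used activation functions, ReLU and
sigmoid, can be written directly in NumPy:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)         # zero for negatives, identity otherwise

def sigmoid(x):
    return 1 / (1 + np.exp(-x))     # squashes any value into (0, 1)

x = np.array([-2.0, 0.0, 2.0])
print(relu(x))       # [0. 0. 2.]
print(sigmoid(0.0))  # 0.5
```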
4. **Backpropagation**:
- **Backpropagation** is the process by which deep learning models adjust
their weights and biases: it computes the gradient of the loss with respect
to each weight by applying the chain rule backward through the network. Those
gradients are then used by **gradient descent**, which updates the weights to
minimize the loss function (or error) over time.
- **Epochs**: The number of times the model processes the entire training
dataset.
- **CNNs** are designed to process grid-like data (such as images) and are
especially good at feature extraction and recognition.
- **RNNs** are designed for sequential data (e.g., text, speech, time-series
data) and can maintain memory of previous inputs via loops in the network.
- **Long Short-Term Memory (LSTM)** and **Gated Recurrent Units
(GRUs)** are types of RNNs that address the issue of vanishing gradients,
allowing the model to remember longer sequences.
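The interplay between gradients and gradient descent described above can be
sketched on a toy linear model, where the gradients are simple enough to write
by hand (the data, learning rate, and epoch count below are made up for
illustration):

```python
import numpy as np

# Toy data: y = 3x + 1 with a little noise (assumed for illustration).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
y = 3 * X + 1 + 0.01 * rng.normal(size=100)

w, b = 0.0, 0.0
lr = 0.1                            # learning rate (a hyperparameter)

for epoch in range(200):            # one epoch = one pass over the data
    y_hat = w * X + b
    err = y_hat - y
    # Gradients of mean squared error with respect to w and b.
    grad_w = 2 * np.mean(err * X)
    grad_b = 2 * np.mean(err)
    # Gradient descent step: move against the gradient.
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))     # close to 3 and 1
```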
5. **Autoencoders**:
- **Autoencoders** are unsupervised networks trained to compress their input
into a lower-dimensional code (the encoder) and then reconstruct it (the
decoder); they are used for dimensionality reduction and denoising.
6. **Transformer Networks**:
- **Transformers** use a **self-attention** mechanism that lets every position
in a sequence weigh every other position, processing sequences in parallel
rather than step by step; they are the basis of modern large language models.
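For illustration, the core of the transformer, scaled dot-product
self-attention, can be sketched in NumPy (the shapes and values below are
arbitrary):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: each query attends to every key.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of queries and keys
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted mix of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))   # 3 sequence positions, dimension 8
K = rng.normal(size=(3, 8))
V = rng.normal(size=(3, 8))
print(attention(Q, K, V).shape)  # (3, 8)
```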
2. **Data Augmentation**:
- **Data augmentation** expands the training set by applying label-preserving
transformations (flips, rotations, crops, noise) to existing examples, which
improves generalization.
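A minimal augmentation routine for images might look like the following
sketch (the flip probability and noise scale are illustrative choices, not
standard values):

```python
import numpy as np

def augment(image, rng):
    # Randomly flip the image horizontally (a label-preserving transform).
    if rng.random() < 0.5:
        image = np.fliplr(image)
    # Add a small amount of Gaussian pixel noise.
    return image + rng.normal(scale=0.01, size=image.shape)

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))       # a fake 32x32 RGB image
print(augment(img, rng).shape)      # (32, 32, 3)
```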
3. **Regularization**:
- Techniques such as **dropout**, **L1/L2 weight penalties**, and **early
stopping** reduce overfitting by limiting the model's effective capacity or
the size of its weights.
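Dropout, one common regularizer, can be sketched as follows (this is the
"inverted dropout" formulation; the drop probability here is an arbitrary
example):

```python
import numpy as np

def dropout(activations, p, rng):
    # Inverted dropout: zero each unit with probability p during training,
    # then rescale the survivors so the expected value is unchanged.
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1 - p)

rng = np.random.default_rng(0)
h = np.ones(1000)
out = dropout(h, p=0.5, rng=rng)
print(out.mean())   # close to 1.0 in expectation
```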
4. **Batch Processing**:
- Instead of training the model on the entire dataset at once, deep learning
models often use **mini-batch training**. The data is divided into smaller
batches, and the model is updated after each batch is processed.
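One way to sketch the mini-batch iteration described above (the batch size
and data below are arbitrary):

```python
import numpy as np

def minibatches(X, y, batch_size, rng):
    # Shuffle once per epoch, then yield consecutive slices.
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        yield X[batch], y[batch]

rng = np.random.default_rng(0)
X = np.arange(100, dtype=float).reshape(100, 1)
y = np.arange(100, dtype=float)
n_updates = sum(1 for _ in minibatches(X, y, batch_size=32, rng=rng))
print(n_updates)  # 4 batches per epoch: 32 + 32 + 32 + 4 examples
```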
5. **Hyperparameter Tuning**:
- **Hyperparameters** (learning rate, batch size, number of layers, dropout
rate, etc.) are chosen before training rather than learned; they are typically
tuned with grid search, random search, or Bayesian optimization.
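A toy grid search over hyperparameters can be sketched as follows; the
`validation_score` function here is a hypothetical stand-in for training and
evaluating a real model:

```python
import itertools

# Hypothetical search space; in practice each setting would train a model
# and be scored on a held-out validation set.
learning_rates = [0.1, 0.01, 0.001]
batch_sizes = [32, 64]

def validation_score(lr, bs):
    # Stand-in for "train a model and evaluate it" (assumed for illustration).
    return -abs(lr - 0.01) - abs(bs - 64) / 1000

best = max(itertools.product(learning_rates, batch_sizes),
           key=lambda cfg: validation_score(*cfg))
print(best)  # (0.01, 64)
```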
1. **Computer Vision**:
- **Image Classification**: Deep learning models can classify images into
categories (e.g., detecting objects in images).
3. **Autonomous Vehicles**:
- Deep learning handles perception for self-driving systems, e.g., detecting
lanes, pedestrians, and traffic signs from camera and sensor data.
4. **Reinforcement Learning**:
- Deep networks act as function approximators in **deep reinforcement
learning**, where an agent learns by maximizing cumulative reward (e.g., game
playing, robotics).
5. **Recommendation Systems**:
- Deep models learn representations of users and items to predict preferences,
powering recommendations on streaming and e-commerce platforms.
6. **Generative Models**:
- **Generative adversarial networks (GANs)** and other deep learning models
are used for generating new data, such as realistic images, art, or even
music.
2. **Computational Resources**:
- Training deep learning models, especially with many layers and large
datasets, requires powerful hardware (like GPUs) and can be computationally
expensive.
3. **Interpretability**:
- Deep models are often criticized as "black boxes": it is hard to explain why
a given prediction was made, which matters in high-stakes settings such as
medicine and finance.
4. **Overfitting**:
- With millions of parameters, deep models can memorize the training data
instead of generalizing to new inputs; more data, augmentation, and
regularization help mitigate this.
5. **Bias**:
- If the data used to train deep learning models contains biases, the model
may also inherit and propagate these biases in its predictions.
### **Conclusion**:
Deep learning powers state-of-the-art systems in vision, language, and
decision-making, but applying it well means managing its costs: large data and
compute requirements, limited interpretability, overfitting, and the risk of
inheriting bias from training data.