Neural Network Example and Activation Functions Summary

The document discusses various activation functions used in neural networks, including Linear, Sigmoid, Tanh, ReLU, Leaky ReLU, and Softmax, highlighting their applications and limitations. It emphasizes the importance of non-linearity introduced by these functions, which allows neural networks to model complex patterns in real-world data. The text also explains how non-linear activation functions facilitate the learning of hierarchical representations essential for tasks like stock price prediction and image classification.


Neural Network Example

Example 2

Show the updated weights and biases after one iteration.
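The network, inputs, target, and learning rate for Example 2 are not reproduced here. As a rough illustration of what one iteration looks like, the sketch below runs a single gradient-descent update on an assumed 2-2-1 sigmoid network with squared-error loss; every numeric value (inputs, target, weights, biases, learning rate) is an illustrative assumption, not a value from the original example.

# Hedged sketch: one gradient-descent update for an assumed 2-2-1 network with
# sigmoid activations and squared-error loss E = 0.5 * (t - y)^2.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed initial parameters (illustrative only)
x  = np.array([0.05, 0.10])          # input
t  = 0.9                             # target output
W1 = np.array([[0.15, 0.20],
               [0.25, 0.30]])        # hidden-layer weights
b1 = np.array([0.35, 0.35])          # hidden-layer biases
W2 = np.array([0.40, 0.45])          # output-layer weights
b2 = 0.60                            # output-layer bias
lr = 0.5                             # learning rate

# Forward pass
h = sigmoid(W1 @ x + b1)             # hidden activations
y = sigmoid(W2 @ h + b2)             # network output

# Backward pass (chain rule)
delta_out = (y - t) * y * (1 - y)            # dE/dz at the output unit
delta_hid = delta_out * W2 * h * (1 - h)     # dE/dz at the hidden units

# One update step
W2 -= lr * delta_out * h
b2 -= lr * delta_out
W1 -= lr * np.outer(delta_hid, x)
b1 -= lr * delta_hid

print("updated W1:\n", W1)
print("updated b1:", b1)
print("updated W2:", W2)
print("updated b2:", b2)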


Activation Functions:

1. Linear Activation Function: Rarely used in modern neural networks because it cannot model
non-linear relationships, which limits the network’s ability to solve complex tasks.
2. Sigmoid Function: Commonly used in the output layer of binary classification problems.
However, it has drawbacks, including the vanishing gradient problem, which can hinder learning
in deep networks.
3. Tanh (Hyperbolic Tangent) Function: Often used in hidden layers, as it outputs zero-centered
values, which can lead to faster convergence during training compared to Sigmoid.
4. ReLU (Rectified Linear Unit): The most popular activation function for hidden layers in deep
networks due to its simplicity and computational efficiency. However, it can suffer from the
“dying ReLU” problem, where neurons output zero for all inputs, effectively stopping learning.
5. Leaky ReLU and Parametric ReLU: Address the dying ReLU problem by allowing a small,
non-zero gradient for negative inputs.
6. Softmax Function: Used in the output layer of multi-class classification problems to convert
raw scores into a probability distribution over the classes (a brief code sketch of all six
functions follows this list).
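
As a quick reference for the list above, here is a minimal NumPy sketch of the six activation functions; the function names and the leaky-ReLU slope alpha=0.01 are illustrative choices rather than anything prescribed by the document.

# Minimal NumPy sketch of the activation functions listed above.
import numpy as np

def linear(z):
    return z                                  # identity: no non-linearity

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))           # squashes values into (0, 1)

def tanh(z):
    return np.tanh(z)                         # zero-centred, range (-1, 1)

def relu(z):
    return np.maximum(0.0, z)                 # zero for negative inputs

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)      # small slope for negatives

def softmax(z):
    e = np.exp(z - np.max(z))                 # shift inputs for numerical stability
    return e / e.sum()                        # outputs are positive and sum to 1

z = np.array([-2.0, 0.0, 3.0])
print(relu(z), leaky_relu(z), softmax(z))

Subtracting the maximum input before exponentiating in softmax is a standard trick to avoid overflow; it does not change the resulting probabilities.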

The Role of Activation Functions in Non-Linearity


Without activation functions, neural networks would behave like linear models, regardless of the
number of layers. Activation functions introduce non-linearity, which enables the network to
learn and model complex patterns in data.

But why does non-linearity matter? It matters because real-world data is often non-linear. For
instance:
1. Predicting stock prices involves non-linear dependencies among market factors.
2. Classifying images requires modelling intricate patterns like edges, shapes, and textures.
Non-linear activation functions allow layers to interact and combine features in sophisticated
ways, which enables the network to learn hierarchical representations.
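
A small numerical check makes this concrete: with purely linear layers, composing two weight matrices collapses to a single matrix, whereas inserting a non-linear activation (ReLU here) breaks that equivalence. The matrices below are random, assumed values used only for illustration.

# Illustrative check: two stacked linear layers equal one linear layer,
# but adding a ReLU between them does not.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
x = rng.normal(size=3)

linear_stack = W2 @ (W1 @ x)                  # two linear layers...
collapsed    = (W2 @ W1) @ x                  # ...collapse to one matrix W2 @ W1
print(np.allclose(linear_stack, collapsed))   # True

nonlinear = W2 @ np.maximum(0.0, W1 @ x)      # ReLU inserted between the layers
print(np.allclose(nonlinear, collapsed))      # generally False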
