Machine Learning With Python
BCA V M2
09413702022
ASSIGNMENT – 4
1. What is an Artificial Neural Network (ANN), and how does it draw inspiration
from biological neural networks?
Solution: An Artificial Neural Network (ANN) is a computational model inspired by the way
biological neural networks in the human brain process information. In biological neural
networks, neurons are connected through synapses, which strengthen or weaken based on
learning experiences. Similarly, ANNs consist of layers of artificial neurons or "nodes"
connected by "weights." These weights adjust during training to make the network learn patterns
in data.
Example Code (a minimal Keras sketch; assumes TensorFlow is installed, and the input dimension below is an assumed placeholder):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
# Adding a hidden layer; input_dim must match the number of input features (assumed 8 here)
model.add(Dense(units=6, activation='relu', input_dim=8))
# Adding the output layer
model.add(Dense(units=1, activation='sigmoid'))
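Compiling and fitting complete the sketch; X_train and y_train below are hypothetical training arrays (not given in this assignment), so the fit call is shown commented out.
# Compile with binary cross-entropy to match the sigmoid output
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Fitting the ANN to the training set (sample training data required)
# model.fit(X_train, y_train, batch_size=32, epochs=100)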
Hebbian learning updates each weight in proportion to the product of the activities of the neurons it connects:
Δw_ij = η · x_i · y_j
where:
Δw_ij is the change in the weight between neurons i and j,
η is the learning rate,
x_i is the input from neuron i,
y_j is the output from neuron j.
Example Code:
import numpy as np

# Example data (the output value and learning rate are assumed for illustration)
inputs = np.array([1, 0, -1])
output = 1
eta = 0.1
weights = np.zeros(3)

# Update weights with the Hebbian rule: Δw_i = η * x_i * y
weights += eta * inputs * output
print("Updated weights:", weights)
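Applying the same update repeatedly shows the Hebbian idea that connections between co-active neurons keep strengthening (this reuses inputs, output, eta, and weights from the sketch above):
# Weights grow in proportion to correlated activity; the zero input stays at zero
for _ in range(5):
    weights += eta * inputs * output
print("Weights after repeated updates:", weights)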
The perceptron learning rule adjusts weights only on misclassified samples (a reconstructed sketch; the error-driven update and the sample data are assumed):
import numpy as np

def perceptron_learning(X, y, eta=0.1, epochs=10):
    weights = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, target in zip(X, y):
            prediction = 1 if np.dot(xi, weights) >= 0 else 0
            weights += eta * (target - prediction) * xi  # update only when the prediction is wrong
    return weights

# Sample data (linearly separable through the origin, since no bias term is used)
X = np.array([[1, 1], [2, 1], [-1, -1], [-2, -1]])
y = np.array([1, 1, 0, 0])
weights = perceptron_learning(X, y)
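The learned weights can then classify points by thresholding the weighted sum at zero (using X and weights from above):
predictions = (X @ weights >= 0).astype(int)
print("Predictions:", predictions)  # matches y once training has converged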
Adaline (adaptive linear neuron) trained with stochastic gradient descent computes its error from the raw linear output rather than a thresholded prediction (a reconstructed sketch; the update rule and X are assumed):
import numpy as np

def adaline_sgd(X, y, eta=0.01, epochs=50):
    weights = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, target in zip(X, y):
            error = target - np.dot(xi, weights)  # error on the linear activation
            weights += eta * error * xi           # gradient step on the squared error
    return weights

# Sample data (a simple linear relationship y = x)
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([1, 2, 3])
weights = adaline_sgd(X, y)
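Because the weight should approach 1 for the assumed data above, the fit can be checked directly:
print("Learned weight:", weights)      # close to [1.0]
print("Linear outputs:", X @ weights)  # close to y = [1, 2, 3]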
Linear Activation Function: The output is simply the weighted sum of the inputs. Linear functions are simple but lack the capacity to capture complex relationships.
o Formula: f(x) = x
o Limitation: Cannot introduce non-linearity, making it inadequate for deep neural networks (a short demonstration of this follows the example code below).
Nonlinear Activation Functions: These functions introduce non-linearity, which helps networks capture complex patterns.
o Sigmoid: f(x) = 1 / (1 + e^(-x)) (used for binary classification)
o ReLU: f(x) = max(0, x) (efficient for deep networks, helping to mitigate the vanishing gradient problem)
o Tanh: f(x) = (e^x - e^(-x)) / (e^x + e^(-x)) (output ranges from -1 to 1, preserving the sign of the input)
Example Code:
import numpy as np

# Activation functions
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def relu(x):
    return np.maximum(0, x)

def tanh(x):
    return np.tanh(x)

# Example input (sample values chosen to cover negative, zero, and positive cases)
x = np.array([-2.0, 0.0, 2.0])
print("Sigmoid:", sigmoid(x))
print("ReLU:", relu(x))
print("Tanh:", tanh(x))