
Submitted by: Kirti Singhal

BCA V M2

09413702022

ASSIGNMENT – 4

SUBJECT: MACHINE LEARNING WITH PYTHON

1. What is an Artificial Neural Network (ANN), and how does it draw inspiration
from biological neural networks?

Solution: An Artificial Neural Network (ANN) is a computational model inspired by the way
biological neural networks in the human brain process information. In biological neural
networks, neurons are connected through synapses, which strengthen or weaken based on
learning experiences. Similarly, ANNs consist of layers of artificial neurons or "nodes"
connected by "weights." These weights adjust during training to make the network learn patterns
in data.

 Input Layer: Receives the input data.
 Hidden Layers: Perform computations and extract features.
 Output Layer: Produces the final prediction.

Example Code (Simple ANN using Python's Keras library):

from keras.models import Sequential
from keras.layers import Dense

# Initialize the ANN
model = Sequential()

# Adding the input layer and the first hidden layer
model.add(Dense(units=6, activation='relu', input_dim=4))

# Adding the second hidden layer
model.add(Dense(units=6, activation='relu'))

# Adding the output layer
model.add(Dense(units=1, activation='sigmoid'))

# Compiling the ANN
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Fitting the ANN to the training set (sample training data required)
# model.fit(X_train, y_train, epochs=50, batch_size=10)
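As a usage sketch only (the arrays below are randomly generated placeholders, not assignment data), the commented-out fit call above could be exercised like this; the shapes simply match input_dim=4 and the single sigmoid output:

import numpy as np

# Hypothetical training data: 100 samples with 4 features and binary labels
X_train = np.random.rand(100, 4)
y_train = np.random.randint(0, 2, size=100)

# Train briefly, then predict on a few new samples
model.fit(X_train, y_train, epochs=10, batch_size=10, verbose=0)
print(model.predict(np.random.rand(5, 4)))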

2. Explain the Hebbian learning rule and its significance in neural networks.

Solution: The Hebbian learning rule, often summarized as "cells that fire together wire
together," is based on the idea that if two neurons are activated simultaneously, their connection
strengthens. This rule is essential for unsupervised learning, as it allows networks to self-
organize and discover patterns in data without explicit output labels.

Mathematical Formulation: Δw_ij = η · x_i · y_j

where:

 Δw_ij is the change in the weight between neurons i and j,
 η is the learning rate,
 x_i is the input from neuron i,
 y_j is the output from neuron j.

Example Code:

import numpy as np

# Sample Hebbian Learning Rule Implementation
def hebbian_learning(weights, inputs, outputs, learning_rate=0.01):
    delta_w = learning_rate * np.outer(inputs, outputs)
    return weights + delta_w

# Example data
inputs = np.array([1, 0, -1])
outputs = np.array([1, -1, 0])
weights = np.zeros((3, 3))

# Update weights
weights = hebbian_learning(weights, inputs, outputs)
print("Updated Weights:\n", weights)

3. What is the perceptron learning rule, and how does it adjust weights during training?

Solution: The perceptron learning rule updates weights based on errors between predicted and
actual outputs. It iteratively adjusts the weights to minimize these errors, making it suitable for
binary classification tasks.

Perceptron Learning Rule Formula: w = w + η · (y − ŷ) · x

where:

 w is the weight,
 η is the learning rate,
 y is the true label,
 ŷ is the predicted label.

Example Code (Perceptron Training Algorithm):

import numpy as np

# Perceptron Learning Algorithm
def perceptron_learning(X, y, learning_rate=0.1, epochs=100):
    weights = np.zeros(X.shape[1])
    for epoch in range(epochs):
        for i, x_i in enumerate(X):
            y_pred = np.dot(x_i, weights) > 0
            weights += learning_rate * (y[i] - y_pred) * x_i
    return weights

# Sample data
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
y = np.array([1, 0, 0, 0])  # AND gate example

# Train the perceptron
weights = perceptron_learning(X, y)
print("Trained Weights:", weights)

4. Explain the concept of adaptive weights in Adaline.


Solution: Adaline (Adaptive Linear Neuron) is a type of neural network model that adjusts
weights continuously, unlike the perceptron, which updates only when errors occur. Adaline’s
weights are updated using the mean squared error between predicted and actual outputs, making
it suitable for regression tasks.

Weight Update Formula: w = w + η · (y − ŷ) · x, where ŷ is the continuous linear output rather than a thresholded prediction.

Example Code (Adaline using Stochastic Gradient Descent):

import numpy as np

def adaline_sgd(X, y, learning_rate=0.01, epochs=10):
    weights = np.zeros(X.shape[1])
    for epoch in range(epochs):
        for i, x_i in enumerate(X):
            y_pred = np.dot(x_i, weights)
            error = y[i] - y_pred
            weights += learning_rate * error * x_i
    return weights

# Sample data for Adaline
X = np.array([[1, 2], [2, 3], [3, 4]])
y = np.array([1, 2, 3])

# Train Adaline model
weights = adaline_sgd(X, y)
print("Trained Weights:", weights)

5. Compare and contrast linear and nonlinear activation functions.


Solution:

 Linear Activation Function: The output is simply a weighted sum of the inputs. Linear functions are simple but cannot capture complex relationships.
o Formula: f(x) = x
o Limitation: Introduces no non-linearity, so stacked linear layers collapse into a single linear mapping, making it inadequate for deep neural networks.
 Nonlinear Activation Functions: These functions introduce non-linearity, which lets networks capture complex patterns.
o Sigmoid: f(x) = 1 / (1 + e^(−x)) (used for binary classification)
o ReLU: f(x) = max(0, x) (efficient for deep networks, helping to avoid the vanishing gradient problem)
o Tanh: f(x) = (e^x − e^(−x)) / (e^x + e^(−x)) (output ranges from −1 to 1, preserving the sign of the input)

Example Code:

import numpy as np

# Activation functions
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def relu(x):
    return np.maximum(0, x)

def tanh(x):
    return np.tanh(x)

# Example input
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
print("Sigmoid:", sigmoid(x))
print("ReLU:", relu(x))
print("Tanh:", tanh(x))
