Python 1

The document contains a Python implementation of a multilayer perceptron (MLP) neural network that uses the sigmoid activation function. It includes methods for forward propagation, backward propagation, and training on XOR logic-gate data. The model is trained for 10,000 epochs and then prints its predictions for the four XOR input patterns.
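For reference, the sigmoid used throughout is sigmoid(x) = 1 / (1 + e^(-x)); if s = sigmoid(x), its derivative satisfies sigmoid'(x) = s * (1 - s). The listing below relies on this identity: sigmoid_derivative receives activation values, not pre-activations.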

Python 1:

import numpy as np

# Sigmoid activation function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Derivative of the sigmoid, expressed in terms of the sigmoid's output value
def sigmoid_derivative(x):
    return x * (1 - x)
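
# Quick sanity check (added for illustration; not part of the original listing):
# sigmoid(0) is exactly 0.5, so the derivative there is 0.5 * (1 - 0.5) = 0.25.
assert sigmoid(0) == 0.5 and sigmoid_derivative(sigmoid(0)) == 0.25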

# Multilayer perceptron
class MLP:
    def __init__(self, input_size, hidden_size, output_size):
        # Weight initialization
        self.weights_input_hidden = np.random.randn(input_size, hidden_size)
        self.weights_hidden_output = np.random.randn(hidden_size, output_size)
        self.bias_hidden = np.zeros((1, hidden_size))
        self.bias_output = np.zeros((1, output_size))

    def forward(self, X):
        # Hidden layer: affine transformation followed by sigmoid
        self.hidden_input = np.dot(X, self.weights_input_hidden) + self.bias_hidden
        self.hidden_output = sigmoid(self.hidden_input)
        # Output layer
        self.final_input = np.dot(self.hidden_output, self.weights_hidden_output) + self.bias_output
        self.final_output = sigmoid(self.final_input)
        return self.final_output
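
    # Shape note (added for clarity): with the XOR data below, X is (4, 2),
    # hidden_output is (4, hidden_size), and final_output is (4, 1); forward()
    # caches these activations on self so that backward() can reuse them.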

    def backward(self, X, y, learning_rate):
        # Error terms (deltas) for each layer
        output_error = y - self.final_output
        output_delta = output_error * sigmoid_derivative(self.final_output)

        hidden_error = output_delta.dot(self.weights_hidden_output.T)
        hidden_delta = hidden_error * sigmoid_derivative(self.hidden_output)

        # Update weights and biases
        self.weights_input_hidden += X.T.dot(hidden_delta) * learning_rate
        self.weights_hidden_output += self.hidden_output.T.dot(output_delta) * learning_rate
        self.bias_hidden += np.sum(hidden_delta, axis=0, keepdims=True) * learning_rate
        self.bias_output += np.sum(output_delta, axis=0, keepdims=True) * learning_rate
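
    # Why "+=": output_error = y - final_output is the negative gradient of the
    # squared error 0.5 * (y - final_output)**2 with respect to the output, so
    # adding the error * derivative terms moves every weight downhill on the
    # loss, i.e. the updates above perform plain batch gradient descent.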

    def train(self, X, y, epochs, learning_rate):
        # One full forward/backward pass over the whole dataset per epoch
        for _ in range(epochs):
            self.forward(X)
            self.backward(X, y, learning_rate)

# Example inputs and outputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # Inputs (XOR logic gate)
y = np.array([[0], [1], [1], [0]])  # Expected outputs

# Create and train the model
model = MLP(input_size=2, hidden_size=4, output_size=1)
model.train(X, y, epochs=10000, learning_rate=0.1)

# Test the model
predictions = model.forward(X)
print(f"Predictions:\n{predictions}")
