
Sols_TD2

October 19, 2023

[1]: import numpy as np
     import math

1 Exercise 12
[2]: numbers = [0.1] * 1000000

     sum_normal = sum(numbers)

     # Kahan summation algorithm
     sum_kahan = 0.0      # running sum
     compensation = 0.0   # running compensation for lost low-order bits

     for number in numbers:
         y = number - compensation
         temp_sum = sum_kahan + y
         compensation = (temp_sum - sum_kahan) - y
         sum_kahan = temp_sum

     # Exact sum
     sum_exact = len(numbers) * 0.1

     # Print the results
     print("Plain floating-point sum :", sum_normal)
     print("Kahan summation :", sum_kahan)
     print("Exact (theoretical) sum :", sum_exact)

Plain floating-point sum : 100000.00000133288
Kahan summation : 100000.0
Exact (theoretical) sum : 100000.0
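As a cross-check (a sketch, not part of the original solution), Python's standard library provides `math.fsum`, which returns the correctly rounded sum of its inputs and therefore agrees with the Kahan result here:

```python
import math

numbers = [0.1] * 1000000
# fsum tracks exact partial sums internally and returns the
# correctly rounded total, matching the compensated (Kahan) sum
print(math.fsum(numbers))
```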

2 Exercise 11
[3]: dtype = np.float64

     a = np.float64(1.000001)
     b = np.float64(-1000.0)
     c = np.float64(1.0)

     discriminant = b**2 - 4 * a * c

     if discriminant >= 0:
         x1 = (-b + np.sqrt(discriminant)) / (2 * a)
         x2 = (-b - np.sqrt(discriminant)) / (2 * a)
     else:
         x1 = None
         x2 = None

     print("Precision used : float64")
     print("Solution 1 :", x1)
     print("Solution 2 :", x2)

Precision used : float64
Solution 1 : 999.9980000000002
Solution 2 : 0.0010000010000110458

[4]: dtype = np.float32

     a = np.float32(1.000001)
     b = np.float32(-1000.0)
     c = np.float32(1.0)

     discriminant = b**2 - 4 * a * c

     if discriminant >= 0:
         x1 = (-b + np.sqrt(discriminant)) / (2 * a)
         x2 = (-b - np.sqrt(discriminant)) / (2 * a)
     else:
         x1 = None
         x2 = None

     print("Precision used : float32")
     print("Solution 1 :", x1)
     print("Solution 2 :", x2)

Precision used : float32
Solution 1 : 999.9980463255931
Solution 2 : 0.0010000010000093893
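In both precisions the small root is formed as `(-b - np.sqrt(discriminant)) / (2 * a)`, where `-b` and the square root nearly cancel. A standard remedy (a sketch, not part of the original exercise) computes the larger-magnitude root first and recovers the other from the product of the roots, x1 * x2 = c / a:

```python
import math

a, b, c = 1.000001, -1000.0, 1.0
discriminant = b * b - 4 * a * c

# pick the sign so that b and the square root add instead of cancelling
q = -0.5 * (b + math.copysign(math.sqrt(discriminant), b))
x1 = q / a   # larger-magnitude root
x2 = c / q   # smaller root, obtained without cancellation

print("Solution 1 :", x1)
print("Solution 2 :", x2)
```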

3 Exercise 9
[5]: n = 10000

     sum_approx = np.float32(0.0)
     for i in range(1, n + 1):
         term = 1.0 / np.float32(i * i)
         sum_approx += term

     pi_squared_over_6 = np.pi * np.pi / 6.0

     print(f"Single-precision sum : {sum_approx:.7f}")
     print(f"Expected mathematical value of π^2/6 : {pi_squared_over_6:.7f}")

Single-precision sum : 1.6448341
Expected mathematical value of π^2/6 : 1.6449341
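The 1e-4 gap visible above is not primarily a rounding artifact: the series is truncated at n = 10000, and the omitted tail Σ_{i>n} 1/i² is approximately 1/n. A quick double-precision check (a sketch) confirms this:

```python
import math

n = 10000
partial = sum(1.0 / (i * i) for i in range(1, n + 1))  # double precision
limit = math.pi ** 2 / 6

# the remaining gap is the truncation tail, approximately 1/n = 1e-4
print(limit - partial, 1.0 / n)
```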

4 Exercise 7
[6]: n_terms = 1000000

     pi_approx = 0.0
     for k in range(n_terms):
         denominator = 2 * k + 1
         term = 1.0 / denominator if k % 2 == 0 else -1.0 / denominator
         pi_approx += term

     pi_approx *= 4.0

     pi_math = math.pi
     error = abs(pi_math - pi_approx)

     # Print the results
     print(f"Approximation of π : {pi_approx:.7f}")
     print(f"Mathematical value of π : {pi_math:.7f}")
     print(f"Absolute error : {error:.7f}")

Approximation of π : 3.1415917
Mathematical value of π : 3.1415927
Absolute error : 0.0000010
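The Leibniz series converges slowly: the error after n terms is about 1/n, so a million terms yield only about six correct digits. Since consecutive partial sums bracket π, averaging them cancels the leading error term (a sketch, not part of the original solution):

```python
import math

n = 1000000
s = 0.0
for k in range(n):
    s += (-1.0) ** k / (2 * k + 1)

pi_n = 4.0 * s                                   # n-term partial sum
pi_n1 = pi_n + 4.0 * (-1.0) ** n / (2 * n + 1)   # (n+1)-term partial sum
accelerated = 0.5 * (pi_n + pi_n1)               # average cancels the O(1/n) error

print(abs(math.pi - pi_n), abs(math.pi - accelerated))
```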

4.1 Compare the following three methods:

• Newton-Raphson:

  x_{n+1} = x_n - f(x_n) / f'(x_n)

• Modified Newton-Raphson:

  x_{n+1} = x_n - f(x_n) f'(x_n) / ((f'(x_n))^2 - f(x_n) f''(x_n))

• Halley:

  x_{n+1} = x_n - f(x_n) f'(x_n) / ((f'(x_n))^2 - (1/2) f(x_n) f''(x_n))

Test functions:
• f1(x) = x e^x, whose root is x = 0.
• f2(x) = (x - 1)^3 (x + 2), whose root on ℝ+ is x = 1.
• f3(x) = (x - 2)^6 - 3, whose root on the interval [0, 2] is x = 2 - 3^{1/6}.
As convergence parameters, use a tolerance of 1e-6 and a maximum of 100 iterations.
[7]: -1.0e308 + 1.0e308

[7]: 0.0
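The cell above shows exact cancellation of two huge values of opposite sign. With the same sign, the result exceeds the float64 range and overflows to infinity instead (a quick check, not part of the original solution):

```python
import math

x = 1.0e308
print(-x + x)   # exact cancellation: 0.0
print(x + x)    # 2e308 exceeds the float64 maximum (~1.8e308): inf
```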

[8]: import numpy as np

     def f(x):
         return (x - 2)**6 - 3

     def df(x):
         return 6 * (x - 2)**5

     def d2f(x):
         return 30 * (x - 2)**4

     def newton_raphson(f, df, x0, tol, max_iter):
         x = x0
         for i in range(max_iter):
             f_value = f(x)
             f_derivative = df(x)
             x -= f_value / f_derivative
             if abs(f_value) < tol:
                 return i, x
         return max_iter, x   # no convergence: return the last iterate

     def modified_newton_raphson(f, df, d2f, x0, tol, max_iter):
         x = x0
         for i in range(max_iter):
             f_value = f(x)
             f_derivative = df(x)
             f_double_derivative = d2f(x)
             x -= (f_value * f_derivative) / (f_derivative**2
                                              - f_value * f_double_derivative)
             if abs(f_value) < tol:
                 return i, x
         return max_iter, x   # no convergence

     def halley(f, df, d2f, x0, tol, max_iter):
         x = x0
         for i in range(max_iter):
             f_value = f(x)
             f_derivative = df(x)
             f_double_derivative = d2f(x)
             x -= (2 * f_value * f_derivative) / (2 * f_derivative**2
                                                  - f_value * f_double_derivative)
             if abs(f_value) < tol:
                 return i, x
         return max_iter, x   # no convergence

     tolerance = 1e-6
     max_iterations = 100
     initial_guess = 1.2

     ite1, result_newton = newton_raphson(f, df, initial_guess, tolerance,
                                          max_iterations)
     ite2, result_modified_newton = modified_newton_raphson(f, df, d2f,
                                                            initial_guess,
                                                            tolerance,
                                                            max_iterations)
     ite3, result_halley = halley(f, df, d2f, initial_guess, tolerance,
                                  max_iterations)

     # Print the results
     print("Standard Newton-Raphson method :")
     print(f"Result : {result_newton} in {ite1} iterations")
     print()

     print("Modified Newton-Raphson method :")
     print(f"Result : {result_modified_newton} in {ite2} iterations")
     print()

     print("Halley's method :")
     print(f"Result : {result_halley} in {ite3} iterations")

Standard Newton-Raphson method :
Result : 0.7990630448239971 in 8 iterations

Modified Newton-Raphson method :
Result : 0.7990630448239971 in 6 iterations

Halley's method :
Result : 0.7990630448239971 in 4 iterations
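The cell above exercises only f3. The same three iterations applied to the first test function f1(x) = x·e^x (root x = 0) make a quick self-contained sketch; the starting point x0 = 0.5 is an assumption, not taken from the original exercise:

```python
import math

# f1(x) = x * e^x and its first two derivatives; the root is x = 0
def f1(x):   return x * math.exp(x)
def df1(x):  return (x + 1) * math.exp(x)
def d2f1(x): return (x + 2) * math.exp(x)

def iterate(step, x, tol=1e-6, max_iter=100):
    """Apply `step` until |f1(x)| < tol; return (iterations, root)."""
    for i in range(max_iter):
        if abs(f1(x)) < tol:
            return i, x
        x = step(x)
    return max_iter, x

newton   = lambda x: x - f1(x) / df1(x)
modified = lambda x: x - f1(x) * df1(x) / (df1(x)**2 - f1(x) * d2f1(x))
halley   = lambda x: x - 2 * f1(x) * df1(x) / (2 * df1(x)**2 - f1(x) * d2f1(x))

for name, step in [("Newton", newton), ("Modified Newton", modified),
                   ("Halley", halley)]:
    its, root = iterate(step, 0.5)
    print(f"{name} : root = {root:.3e} in {its} iterations")
```

As with f3, the higher-order methods need fewer iterations for the same tolerance.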
