
MINOR PROJECT

On

TOPIC – “NUMERICAL ANALYSIS”

RUDRA PRATAP SINGH

Roll No: 2130014010327

B.Sc. MATHS Semester-VI

Submitted to:
MOHD. AMIR

Department of Mathematics

Y.D.P.G. COLLEGE

LAKHIMPUR-KHERI
ACKNOWLEDGEMENT

I would like to express my deepest appreciation to Prof. Mohd. Amir, who has
the attitude and the substance of a genius: he continually and convincingly
conveyed a spirit of adventure in regard to research and an excitement in
regard to teaching. Without his guidance and persistent help, this term paper
would not have been possible.

I would also like to thank Y.D.P.G. College for providing me with the
environment and resources to complete this term paper. I am also
grateful to my classmates and friends for their encouragement and
support throughout this process.

Lastly, I thank my parents for their unconditional support, both emotional and
financial, throughout my college years.

This accomplishment would not have been possible without them.


Thank you.

INDEX

1. Introduction to Numerical Analysis
(a.) Definition and Scope of Numerical Analysis
(b.) Importance of Numerical Analysis in Computational Science and Engineering
2. Principles of Crystal Structures
(a.) Crystal Symmetry and Arrangements
• Symmetry operations and space groups in crystallography
• Crystal lattice, unit cell, and periodicity
(b.) Crystallographic Axes and Planes
• Miller indices and description of crystallographic planes
• Atomic arrangements along crystallographic directions
3. Basics of X-Ray Diffraction
(a.) Theory of X-Ray Diffraction
• Bragg's Law and diffraction patterns
• Scattering of X-rays by crystal lattices
(b.) Experimental X-Ray Techniques
• Single-crystal and powder X-ray diffraction
• X-ray sources and detectors in diffraction experiments
4. Advanced X-Ray Methods
(a.) High-Resolution X-Ray Techniques
• High-resolution X-ray diffraction and its applications
• Detailed structural analysis at atomic levels
(b.) Time-Resolved X-Ray Techniques
• Dynamics studies and time-resolved X-ray experiments
• Probing transient states and dynamic processes in crystals
5. Applications of X-Ray Crystallography
(a.) Material Characterization
• Role of X-ray crystallography in characterizing materials
• Studying phase transitions, defects, and microstructures
(b.) Biological and Chemical Applications
• Determining structures of biomolecules and complex compounds
• Contributions to pharmaceuticals and biological sciences
6. Conclusion
(a.) Summary of Key Findings
• Recapitulation of tools and methods discussed
• Significance of X-ray techniques in crystallography
(b.) Closing Remarks on X-Ray Crystallography
• Concluding thoughts on its impact and potential future advancements
• Encouragement for continued research and exploration

Introduction to Numerical Analysis

(a.) Definition and Scope of Numerical Analysis


Numerical analysis is a branch of mathematics that deals with the
development, analysis, and implementation of numerical algorithms to obtain
approximate solutions to mathematical problems. It encompasses a wide range
of techniques and methods for solving mathematical problems that cannot be
solved exactly using analytical methods. The field of numerical analysis has had
a profound impact on applied mathematics, scientific computing, and
engineering.

The scope of numerical analysis is vast, covering various areas such as
numerical linear algebra, numerical optimization, numerical differential
equations, numerical integration, and numerical methods for solving partial
differential equations, among others. It involves the study of mathematical
models, the development of numerical algorithms to solve these models, and
the theoretical analysis of the algorithms' accuracy, stability, and efficiency.

One of the fundamental goals of numerical analysis is to develop algorithms
that can provide accurate and efficient approximations to the exact solutions of
mathematical problems. These problems can arise in a wide range of fields,
including physics, engineering, economics, and biology. Numerical analysis
provides a systematic approach to obtaining numerical solutions to these
problems, allowing scientists and engineers to simulate and analyze complex
systems that are beyond the reach of analytical methods.

Numerical linear algebra is a fundamental area of numerical analysis that deals
with the development and analysis of algorithms for solving systems of linear
equations and more general problems involving matrices. Many problems in
science and engineering can be formulated as systems of linear equations, and
efficient algorithms for solving these systems are essential. Numerical methods
for linear algebra include direct methods such as Gaussian elimination and LU
factorization, as well as iterative methods such as the conjugate gradient
method and the Jacobi method.
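
As a rough illustration of these iterative ideas, the following Python sketch
applies the Jacobi method to a small, made-up diagonally dominant system (the
matrix, right-hand side, and tolerance are chosen only for demonstration):

    import numpy as np

    def jacobi(A, b, tol=1e-10, max_iter=500):
        """Solve Ax = b with the Jacobi iteration (assumes A is diagonally dominant)."""
        x = np.zeros_like(b, dtype=float)
        D = np.diag(A)                  # diagonal entries of A
        R = A - np.diagflat(D)          # off-diagonal part of A
        for _ in range(max_iter):
            x_new = (b - R @ x) / D     # update every component from the previous iterate
            if np.linalg.norm(x_new - x, ord=np.inf) < tol:
                return x_new
            x = x_new
        return x

    # Small diagonally dominant system, invented purely for demonstration
    A = np.array([[4.0, -1.0, 0.0],
                  [-1.0, 4.0, -1.0],
                  [0.0, -1.0, 4.0]])
    b = np.array([15.0, 10.0, 10.0])
    print(jacobi(A, b))                 # agrees with np.linalg.solve(A, b)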

Numerical optimization is another important area of numerical analysis that
deals with finding the maximum or minimum of a given function. Optimization
problems are ubiquitous in various fields such as engineering design, signal
processing, and finance. Numerical optimization algorithms aim to find the
optimal solution by iteratively improving an initial guess. Popular optimization
algorithms include Newton's method, gradient descent, and evolutionary
algorithms.
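
To make the idea of iteratively improving an initial guess concrete, here is a
minimal gradient-descent sketch in Python; the objective function, starting
point, and step size are arbitrary choices for illustration:

    def gradient_descent(grad, x0, lr=0.1, tol=1e-8, max_iter=1000):
        """Minimize a function of one variable by stepping against its gradient."""
        x = x0
        for _ in range(max_iter):
            step = lr * grad(x)
            x -= step
            if abs(step) < tol:         # stop once the updates become negligible
                break
        return x

    # Minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3); the minimum is at x = 3
    print(gradient_descent(lambda x: 2.0 * (x - 3.0), x0=0.0))   # approximately 3.0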

Numerical methods for differential equations are crucial in many scientific and
engineering applications, as they allow the simulation of dynamic systems and
the prediction of their behavior over time. Differential equations describe the
relationships between the rates of change of variables in a system, and their
solutions can often only be obtained numerically. The equations themselves fall
into two main types: ordinary differential equations (ODEs) and partial
differential equations (PDEs). Numerical methods for ODEs include Euler's
method, the Runge-Kutta methods, and implicit methods such as the backward
Euler method. Numerical methods for PDEs are more challenging and can be
categorized into finite difference methods, finite element methods, and
spectral methods, among others.
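
The simplest of these schemes, Euler's method, fits in a few lines; the sketch
below integrates the made-up test problem y' = -2y with y(0) = 1, whose exact
solution at t = 1 is e^(-2):

    import math

    def euler(f, y0, t0, t_end, h):
        """Integrate y' = f(t, y) from t0 to t_end with the forward Euler method."""
        t, y = t0, y0
        while t < t_end - 1e-12:
            y += h * f(t, y)            # one explicit Euler step
            t += h
        return y

    approx = euler(lambda t, y: -2.0 * y, y0=1.0, t0=0.0, t_end=1.0, h=0.01)
    exact = math.exp(-2.0)
    print(approx, exact, abs(approx - exact))   # the error shrinks roughly in proportion to h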

Numerical integration is concerned with approximating the definite integral of
a function over a given interval. Definite integrals frequently arise in scientific
and engineering applications, such as computing the area under a curve or the
expectation of a random variable. Numerical integration methods approximate
the integral by dividing the interval into smaller subintervals and approximating
the function within each subinterval. Some popular numerical integration
methods include the trapezoidal rule, Simpson's rule, and Gaussian quadrature.
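
As a small example of this subinterval idea, the composite trapezoidal rule
below approximates the integral of sin(x) from 0 to pi, whose exact value is 2
(the integrand, interval, and number of subintervals are chosen only for
illustration):

    import math

    def trapezoid(f, a, b, n):
        """Composite trapezoidal rule with n equal subintervals."""
        h = (b - a) / n
        total = 0.5 * (f(a) + f(b))     # endpoints get half weight
        for i in range(1, n):
            total += f(a + i * h)       # interior points get full weight
        return h * total

    approx = trapezoid(math.sin, 0.0, math.pi, n=1000)
    print(approx, abs(approx - 2.0))    # the error decreases roughly like 1/n^2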

In addition to the specific areas mentioned above, numerical analysis also
includes techniques for error analysis, approximation theory, and the numerical
solution of optimization and constraint satisfaction problems. Error analysis is
concerned with quantifying the difference between the true solution of a
problem and the computed approximate solution obtained using numerical
methods. Approximation theory deals with the construction and analysis of
functions that can approximate other functions to a desired accuracy.
Techniques such as interpolation, least squares fitting, and splines are
commonly used in approximation theory.
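
To illustrate one of these approximation tools, the sketch below fits a
least-squares polynomial to noisy samples using NumPy; the data, noise level,
and polynomial degree are all invented for demonstration:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 20)
    y = np.cos(2.0 * np.pi * x) + 0.05 * rng.standard_normal(x.size)   # noisy samples

    coeffs = np.polyfit(x, y, deg=4)    # least-squares fit of a degree-4 polynomial
    p = np.poly1d(coeffs)

    print(np.max(np.abs(p(x) - y)))     # worst-case misfit on the sample points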

Numerical analysis is a vast and important field of mathematics that plays a
crucial role in science, engineering, and many other disciplines. It involves the
study of mathematical models, the development of numerical algorithms for
solving these models, and the analysis of the accuracy, stability, and efficiency
of these algorithms. The scope of numerical analysis includes various areas
such as numerical linear algebra, numerical optimization, numerical differential
equations, and numerical integration. The algorithms and techniques
developed in numerical analysis enable scientists and engineers to solve
complex mathematical problems that arise in a wide range of applications.

(b.) Importance of Numerical Analysis in Computational Science and Engineering

Numerical analysis is a field of mathematics that deals with solving complex
mathematical problems using computational techniques. In the realm of
computational science and engineering, numerical analysis plays a pivotal role
in several areas, including simulation and modeling, uncertainty quantification,
and optimization.

Simulation and modeling involve formulating mathematical models to
represent real-world physical systems. Numerical techniques are then used to
simulate and analyze the behavior of these systems under different conditions.
For example, numerical analysis is crucial in weather forecasting, fluid
dynamics, structural analysis, and combustion simulations. By predicting
outcomes and optimizing system parameters, numerical analysis aids in
designing efficient systems and understanding complex phenomena.
Uncertainty quantification is an essential aspect of computational science.
Numerical analysis offers methods to quantify and manage uncertainties
associated with input parameters, model assumptions, and numerical errors.
Techniques such as Monte Carlo simulations, sensitivity analysis, and error
estimation help researchers assess the reliability of their results and make
informed decisions based on the level of uncertainty present in computational
models.
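
As a minimal sketch of the Monte Carlo idea, one can propagate the uncertainty
in an input parameter through a simple model and summarize the spread of the
output; the model, input distribution, and sample size below are invented for
illustration:

    import numpy as np

    rng = np.random.default_rng(42)

    def model(k):
        """A toy response that depends nonlinearly on an uncertain parameter k."""
        return np.exp(-k) + k**2

    k_samples = rng.normal(loc=1.0, scale=0.1, size=100_000)   # uncertain input: k ~ N(1, 0.1^2)
    outputs = model(k_samples)

    print(outputs.mean(), outputs.std())    # mean response and a measure of its uncertainty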

Optimization, another vital application of numerical analysis, involves finding
the best solution to a given problem by minimizing or maximizing an objective
function while satisfying constraints. This field is particularly relevant in
machine learning, control systems, and design optimization. Numerical analysis
provides efficient algorithms and methods to solve complex optimization
problems, leading to improved system performance and resource utilization.

In summary, numerical analysis plays a fundamental role in advancing
computational science and engineering. It enables the simulation, analysis, and
optimization of complex systems, aids in quantifying and managing
uncertainties, and facilitates the search for optimal solutions. These capabilities
contribute to a better understanding of physical phenomena, more efficient
designs, and the ability to solve real-world problems that were previously
unsolvable using traditional analytical methods.
Fundamentals of Numerical Analysis
(a.) Errors in Numerical Computation
Numerical methods are the backbone of computational science and
engineering, providing techniques to solve complex mathematical problems
using computers. However, despite their power and versatility, numerical
computations are inherently prone to errors. Understanding and managing
these errors is crucial for obtaining accurate and reliable results. In this article,
we will delve into the fundamentals of errors in numerical computations,
discussing sources of error and methods for error analysis, with examples to
illustrate key concepts.

I. Sources of Errors in Numerical Computations


1. Roundoff Error
Roundoff error occurs due to the finite precision of numerical representation
in computers. Since computers use finite bits to represent real numbers, any
real number that cannot be exactly represented by these bits will be
approximated, leading to roundoff errors. These errors manifest as
discrepancies between the true value of a mathematical expression and its
computer-approximated value. For example, consider the number 1/3.
Although 1/3 is perfectly well defined mathematically, a computer can store
only a finite number of digits, so it works with an approximation such as
0.333333 rather than the exact, infinitely repeating value 0.3333333333...
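
A quick way to see roundoff in practice is to print a few values whose binary
representations are inexact; standard Python floats are IEEE 754
double-precision numbers:

    # Fractions such as 1/3 and 0.1 have no exact binary representation.
    print(f"{1/3:.20f}")        # 0.33333333333333331483 (not an exact third)
    print(f"{0.1 + 0.2:.20f}")  # 0.30000000000000004441 (not exactly 0.3)
    print(0.1 + 0.2 == 0.3)     # False, because of accumulated roundoff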

2. Truncation Error
Truncation error arises from approximations made during the discretization
and approximation steps of numerical methods. When continuous
mathematical models are approximated using discrete methods, such as finite
difference or finite element methods, errors are introduced due to the
simplification of the problem. Truncation error is prevalent in numerical
differentiation, integration, and differential equation solvers. For example,
when approximating the derivative of a function using a finite difference
scheme, the truncation error arises from disregarding higher-order
differentials.
3. Iterative Error
Iterative methods are commonly used in numerical computations to solve
equations iteratively until a desired accuracy is achieved. However, each
iteration introduces a small error, known as the iterative error or iteration
error. These errors accumulate as the iteration progresses and can affect the
overall accuracy of the final result. The convergence behavior of an iterative
method determines the rate at which these errors accumulate and the
accuracy of the final solution.

II. Methods for Error Analysis


1. Absolute Error
Absolute error measures the magnitude of the difference between the true
value and the approximate value obtained from a numerical computation. It
provides a measure of how close the approximation is to the true value. For
example, consider determining the value of π using a numerical method. The
absolute error is obtained by taking the absolute difference between the
computed value and the true value of π.

2. Relative Error
Relative error is a dimensionless measure that provides a normalized
comparison between the absolute error and the magnitude of the true value. It
quantifies the ratio of the absolute error to the true value and is often
expressed as a percentage. Relative error is particularly important when
comparing the accuracy of different numerical methods or when evaluating
the impact of errors on the overall result.
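
For instance, the following lines compute the absolute and relative error of the
classical approximation 22/7 to pi (22/7 is simply a convenient approximation
to use as an example):

    import math

    true_value = math.pi
    approx = 22 / 7                     # a classical rational approximation of pi

    abs_error = abs(approx - true_value)
    rel_error = abs_error / abs(true_value)

    print(abs_error)            # about 1.26e-03
    print(f"{rel_error:.4%}")   # about 0.0402%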

3. Order of Convergence

The order of convergence measures how quickly an iterative method
converges to the true solution. It quantifies the rate at which the error
diminishes as the number of iterations increases. If an iterative method has a
higher order of convergence, the error decreases faster, resulting in faster
convergence to the true solution. The order of convergence is determined by
examining the rate of change of the error in each iteration.
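
In practice the order p can be estimated from three consecutive errors, since
e_(n+1) ≈ C·e_n^p implies p ≈ log(e_(n+1)/e_n) / log(e_n/e_(n-1)). The sketch
below does this for Newton's method applied to x^2 - 2 = 0 (the equation and
starting guess are arbitrary):

    import math

    def newton_errors(f, df, x0, root, n_iter=4):
        """Run Newton's method and record the absolute error of each iterate."""
        errors, x = [], x0
        for _ in range(n_iter):
            x = x - f(x) / df(x)
            errors.append(abs(x - root))
        return errors

    e = newton_errors(lambda x: x**2 - 2.0, lambda x: 2.0 * x, x0=1.5, root=math.sqrt(2.0))
    p = math.log(e[2] / e[1]) / math.log(e[1] / e[0])
    print(p)    # close to 2, reflecting the quadratic convergence of Newton's method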
III. Examples
Example 1: Roundoff Error

Consider the problem of evaluating the function f(x) = sin(x) near x = 0. On a
computer with limited precision, the sine function is typically approximated by
a truncated series expansion, and both x and each term of the series are stored
with finitely many binary digits. As a result, the computed value of sin(x) can
deviate slightly from the true value, and these small discrepancies propagate
into subsequent calculations that depend on sin(x).
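
As a small check of this, summing a few Taylor terms of sin(x) in floating point
gives a value that differs slightly from the library routine; here the discrepancy
is dominated by the truncated series, with rounding contributing at the level of
machine precision (the choice x = 0.1 and the number of terms are arbitrary):

    import math

    x = 0.1
    # Truncated Taylor series: sin(x) ~ x - x^3/3! + x^5/5!
    series = x - x**3 / 6.0 + x**5 / 120.0

    print(series, math.sin(x))
    print(abs(series - math.sin(x)))    # roughly 2e-11, close to the first omitted term x^7/7!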

Example 2: Truncation Error

Suppose we want to solve the differential equation y'(x) = y(x) using a finite
difference approximation. By discretizing the domain, we can approximate the
derivative as (y(x+h) - y(x))/h, where h is a small step size. However, this
approximation introduces truncation error because we are neglecting higher-
order terms in the Taylor series expansion. The truncation error will affect the
accuracy of the computed solution, especially for large step sizes.
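
This behaviour is easy to observe numerically. The sketch below applies the
forward difference (y(x+h) - y(x))/h to y(x) = e^x at x = 0, where the exact
derivative is 1, and shows the error shrinking roughly in proportion to h:

    import math

    def forward_difference(f, x, h):
        """First-order forward difference approximation of f'(x)."""
        return (f(x + h) - f(x)) / h

    for h in (0.1, 0.01, 0.001):
        approx = forward_difference(math.exp, 0.0, h)
        print(h, abs(approx - 1.0))     # the error behaves roughly like (h/2) * f''(x)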

Example 3: Iterative Error

Consider using the Newton-Raphson method to find the root of the equation
f(x) = 0. In each iteration, an approximation is obtained by evaluating f(x) and
its derivative. However, due to the iterative error, the computed values of f(x)
and its derivative may not be perfectly accurate, especially when the initial
guess is far from the true root. As the number of iterations increases, the error
accumulates, potentially leading to a slower convergence or even divergence
from the true root.
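
A short Newton-Raphson sketch makes the iteration explicit; here it is applied
to f(x) = x^2 - 2, whose positive root is the square root of 2 (the function,
starting guess, and tolerance are chosen only for illustration):

    def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
        """Iterate x <- x - f(x)/df(x) until the update falls below tol."""
        x = x0
        for _ in range(max_iter):
            step = f(x) / df(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    root = newton_raphson(lambda x: x**2 - 2.0, lambda x: 2.0 * x, x0=1.0)
    print(root)     # approximately 1.4142135623730951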

IV. Techniques to Mitigate Errors


1. Error Propagation Analysis

Performing error propagation analysis allows us to estimate the sensitivity of
the final result to various sources of errors. By quantifying the impact of
roundoff error, truncation error, and iterative error on the overall result, we
can identify critical areas that require attention or modification to improve
accuracy.
2. Adaptive Stepsize and Iterative Refinement

Adaptive step size control techniques can dynamically adjust the step size in
numerical methods based on error estimates to achieve higher accuracy.
Similarly, iterative methods can be iteratively refined by using higher-order
schemes or convergence acceleration techniques to reduce the iterative error
and achieve faster convergence.

3. Error Control and Error Estimation

Error control and error estimation techniques assess the accuracy of a
numerical method by evaluating the discrepancy between consecutive
iterations or different discretization schemes. By monitoring the error,
adaptive adjustments can be made to improve the accuracy or determine
when convergence has been achieved.

In conclusion, errors are inherent in numerical computations due to roundoff,
truncation, and iterative errors. Understanding the sources of errors and
employing appropriate error analysis techniques is essential for obtaining
accurate and reliable results. Through error propagation analysis, adaptive
techniques, and error control methods, practitioners can mitigate errors and
enhance the accuracy and effectiveness of numerical methods in
computational science and engineering.

(b.) Floating point representation and arithmetic

Numerical methods play a crucial role in solving mathematical problems in
various fields such as physics, engineering, finance, and computer science.
However, working with real numbers on computers introduces certain
limitations due to the finite representation of numbers.

I. Floating-Point Representation:
1.1 Overview:
Floating-point representation is a method used to store and manipulate real
numbers in computers. It consists of three main components: sign, exponent,
and significand.

1.2 Sign:
The sign bit determines whether the number is positive or negative. It is
usually represented using 1 bit, with 0 indicating a positive number and 1
indicating a negative number.

1.3 Exponent:
The exponent represents the power of two by which the significand is scaled. It
determines the range of representable numbers. In practice the exponent is
usually stored in biased form, as in the IEEE 754 standard, although other
encodings such as two's complement are possible.

1.4 Significand:
The significand, also known as the mantissa or fraction, represents the actual
digits of the number. A predefined number of bits is allocated for it, which
fixes the precision of the format; in normalized IEEE 754 numbers the leading
1 bit is implicit and not stored.
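
These three fields can be inspected directly. The sketch below unpacks the bits
of a 64-bit IEEE 754 double (1 sign bit, 11 exponent bits, 52 significand bits)
using only the Python standard library:

    import struct

    def float_fields(x):
        """Return the sign, unbiased exponent, and fraction bits of a 64-bit double."""
        bits = struct.unpack(">Q", struct.pack(">d", x))[0]   # raw 64-bit pattern
        sign = bits >> 63
        exponent = ((bits >> 52) & 0x7FF) - 1023   # remove the IEEE 754 bias of 1023
        fraction = bits & ((1 << 52) - 1)          # stored significand bits (leading 1 is implicit)
        return sign, exponent, fraction

    sign, exp, frac = float_fields(-6.5)
    print(sign, exp, hex(frac))   # 1 2 0xa000000000000, i.e. -6.5 = -1.1010_2 x 2^2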

II. Floating-Point Arithmetic:


2.1 Addition and Subtraction:
Floating-point addition and subtraction involve aligning the significands and
adjusting the exponents, followed by the arithmetic operation on the
significands. The result is then normalized and rounded according to the
floating-point format.

Example:
Consider the addition of two floating-point numbers (binary significands scaled
by powers of two):
Number A: +1.0101 × 2^3
Number B: -0.1100 × 2^1

Step 1: Aligning the exponents:

A: +1.0101 × 2^3
B: -0.0011 × 2^3 (significand shifted right by 2 positions)

Step 2: Performing the addition of the significands:

A + B = +1.0010 × 2^3

Step 3: Normalization and rounding:

The result +1.0010 × 2^3 is already normalized (a single nonzero digit before
the binary point) and is rounded to the precision of the floating-point format.
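
This worked example can be sanity-checked by converting the binary values to
decimal in Python (the check only confirms the arithmetic above; it is not how
the hardware carries out the operation):

    # Binary significands written out as decimal fractions, scaled by powers of two
    A = (1 + 0/2 + 1/4 + 0/8 + 1/16) * 2**3       # +1.0101_2 x 2^3 = 10.5
    B = -(1/2 + 1/4 + 0/8 + 0/16) * 2**1          # -0.1100_2 x 2^1 = -1.5
    result = (1 + 0/2 + 0/4 + 1/8 + 0/16) * 2**3  # +1.0010_2 x 2^3 = 9.0

    print(A, B, A + B, result)    # 10.5 -1.5 9.0 9.0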

2.2 Multiplication:
Floating-point multiplication involves multiplying the significands and adding
the exponents. The result is then normalized and rounded accordingly.
Example:
Consider the multiplication of two floating-point numbers:
Number A: +1.0100 × 2^3
Number B: -0.1100 × 2^-2

Step 1: Multiplying the significands:

1.0100 × 0.1100 = 0.1111 (and the opposite signs make the result negative)

Step 2: Adding the exponents:

3 + (-2) = 1, so A × B = -0.1111 × 2^1

Step 3: Normalization and rounding:

The result is normalized to -1.111 × 2^0 and rounded as per the floating-point
format.

III. Limitations and Challenges:


3.1 Round-off Error:
Due to limited precision in floating-point representation, round-off errors
occur during arithmetic operations. These errors can accumulate and affect the
accuracy of numerical solutions.

3.2 Underflow and Overflow:


Floating-point numbers have limited ranges. Underflow occurs when a number
is too small to be represented accurately, while overflow occurs when a
number is too large to be represented within the given range.

3.3 Loss of Significance:


Certain arithmetic operations can lead to significant loss of precision. For
example, subtracting two nearly equal floating-point numbers can result in a
significant loss of digits in the significand.
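
The sketch below shows this loss of significance for the expression
sqrt(x^2 + 1) - 1 with a small x: the direct form cancels catastrophically, while
the algebraically equivalent form x^2 / (sqrt(x^2 + 1) + 1) keeps its accuracy
(the test value of x is arbitrary):

    import math

    x = 1e-8
    naive = math.sqrt(x * x + 1.0) - 1.0                # subtracts two nearly equal numbers
    stable = x * x / (math.sqrt(x * x + 1.0) + 1.0)     # algebraically identical, no cancellation

    print(naive)    # 0.0 (every significant digit has been lost)
    print(stable)   # 5e-17, which is correct to full precision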

IV. Conclusion:
Floating-point representation and arithmetic are fundamental concepts in
numerical methods. They enable the manipulation of real numbers on
computers, but have limitations due to finite precision and range.
Understanding these limitations and potential challenges is essential for
developing accurate and reliable numerical algorithms.
(c.) Stability and convergence of numerical algorithms

Numerical methods are mathematical techniques designed to solve complex
problems that are difficult or impossible to solve using analytical methods.
These techniques rely on approximations, which means that the solutions
obtained may not be entirely accurate. This introduces the possibility of error,
which can undermine the validity of these methods. For this reason, it is
essential to analyze the stability and convergence properties of numerical
algorithms.

Stability is the ability of a numerical algorithm to produce consistent and
reliable results. A stable algorithm is less sensitive to small perturbations in the
input data, which means that it can provide accurate solutions even with slight
variations. A simple example of a stable numerical algorithm is the backward
(implicit) Euler method for solving ordinary differential equations (ODEs). This
method calculates the solution at a future time step using the derivative
evaluated at that future time step, and it remains stable regardless of the step
size, even for stiff problems.

In contrast, an unstable computation is one in which small changes in the input
data, or small errors introduced at each step, are amplified as the calculation
proceeds, so that even minor perturbations can produce significantly different
outcomes. An example is the forward (explicit) Euler method applied to a stiff
ODE with too large a step size: because each new value is computed from the
derivative at the current time step only, the local errors are magnified from
step to step, and the computed solution oscillates and grows without bound.
The comparison below illustrates this behavior.
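
A minimal sketch of that comparison, using the made-up stiff test equation
y' = -1000y with y(0) = 1 and a step size h = 0.01 that is far too large for the
explicit method:

    def explicit_euler_step(y, lam, h):
        return y + h * (-lam * y)       # uses the derivative at the current step

    def implicit_euler_step(y, lam, h):
        return y / (1.0 + h * lam)      # solves y_new = y + h * (-lam * y_new) for y_new

    lam, h, steps = 1000.0, 0.01, 10
    y_exp = y_imp = 1.0
    for _ in range(steps):
        y_exp = explicit_euler_step(y_exp, lam, h)
        y_imp = implicit_euler_step(y_imp, lam, h)

    print(y_exp)    # about 3.5e9: each explicit step multiplies the iterate by (1 - h*lam) = -9
    print(y_imp)    # about 3.9e-11: each implicit step multiplies the iterate by 1/11, so it decays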

The concept of convergence is related to the tendency of a numerical
algorithm to approach the correct solution as we refine the approximation. In
other words, as we increase the number of iterations or decrease the step size,
a convergent algorithm will produce more accurate results that are closer to
the exact solution. Convergence is essential because it ensures that a
numerical algorithm yields results that are sufficiently accurate for the
practical purposes of the problem at hand.

A classic example of an algorithm that demonstrates convergence is the
Newton-Raphson method for finding roots of a nonlinear equation. This
method is an iterative algorithm that approximates the root by repeatedly
replacing the function with its linear approximation at the current point and
solving that linear equation for the next estimate. The algorithm converges to
the solution if the function is well-behaved around the root and the initial
guess is reasonably close.

Stability and convergence are related concepts, and the behavior of a
numerical algorithm can be influenced by both. A stable algorithm can be more
robust to numerical perturbations and less likely to encounter convergence
issues. Similarly, a convergent algorithm that approaches the correct solution
as we refine the approximation will typically be more robust and reliable than
an algorithm that does not.

One common source of instability in numerical algorithms is round-off error.
This error occurs when an algorithm approximates a real number using a finite
number of digits. The accuracy of the approximation decreases as the number
of digits used in the representation decreases, which can lead to significant
errors in the result. The effects of round-off error can be mitigated by using
more precise representations or avoiding operations that are prone to large
numerical errors, such as subtracting two nearly equal numbers.

Another source of instability is numerical stiffness. Stiffness refers to the
presence of components in the solution that evolve on widely different time
scales, some decaying or oscillating much faster than others. These rapidly
varying components can cause significant errors, or outright blow-up, when the
time step used by an explicit method is too large. Common approaches to
manage stiffness are to use implicit methods or adaptive time-stepping, which
modifies the time step based on the behavior of the solution.

Understanding the stability and convergence properties of numerical
algorithms is critical for ensuring the accuracy and reliability of the numerical
solutions obtained. By analyzing these properties and identifying sources of
error or instability, it is possible to develop robust and accurate numerical
techniques that can be applied in a wide range of scientific and engineering
applications.
Interpolation and Curve Fitting

(a.) Polynomial Interpolation
