
Formulas


FLOATING POINT NUMBERS The number of bits is usually fixed for any given computer.

Using a plain binary representation gives us an insufficient range and precision of numbers to do relevant engineering calculations. To achieve the range of values needed with the same number of bits, we use floating point numbers, which commonly come in single precision (32 bits) and double precision (64 bits). Instead of utilizing each bit as the coefficient of a power of 2, floats allocate bits to three different parts: the sign indicator, s, which says whether a number is positive or negative; the characteristic or exponent, e, which is the power of 2; and the fraction, f, which is the coefficient of the exponent. In the IEEE 754 standard for single precision, 1 bit is allocated to the sign indicator, 8 bits are allocated to the exponent, and 23 bits are allocated to the fraction, so a normal number has the value n = (-1)^s * 2^(e - 127) * (1 + f). We call the distance from one number to the next the gap. Because the fraction is multiplied by 2^(e - 127), the gap grows as the number represented grows. When the exponent is 0, the leading 1 in the fraction takes the value 0 instead. The result is a subnormal number, which is computed by n = (-1)^s * 2^(-126) * (0 + f). When the exponent is 255 and f is nonzero, the result is Not a Number (NaN). Numbers that are larger than the largest representable floating point number result in overflow, and MATLAB handles this case by assigning the result to Inf. Numbers that are smaller than the smallest subnormal number result in underflow, and MATLAB handles this case by assigning the result to 0. So, what have we gained by using IEEE 754 versus plain binary? Using 32 bits gives us about 4 billion (2^32) numbers either way, but floating point spreads them across a vastly larger range, with small gaps near zero and large gaps at large magnitudes.
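A minimal MATLAB sketch of this layout (the value -12.5 is just an illustrative choice): it decodes the sign, exponent, and fraction fields of a single-precision number, then shows the growing gap using eps.

    x = single(-12.5);
    bits = dec2bin(typecast(x, 'uint32'), 32);   % the 32 bits as a character string
    s = bits(1)                                  % sign bit: '1', so negative
    e = bin2dec(bits(2:9))                       % exponent field: 130, i.e., 2^(130-127)
    f = bits(10:32)                              % 23 fraction bits
    % The gap between neighboring floats grows with magnitude:
    eps(single(1))       % 2^-23, about 1.19e-07
    eps(single(2^20))    % 2^-3 = 0.125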

MATRIX EQUATIONS Case 1: There is no solution for x. If rank([A, y]) = rank(A) + 1, then y is linearly independent from the columns of A. Therefore y is not in the range of A and, by definition, there cannot be an x that satisfies the equation. Thus, comparing rank([A, y]) and rank(A) provides an easy way to check whether a system of linear equations has no solution. Avoid using A\y in this case, since no exact solution exists. (A quick way to perform these rank checks is sketched after Case 2.)
Case 2: There is a unique solution for x. If rank([A, y]) = rank(A), then y can be written as a linear combination of the columns of A, and there is at least one solution for the matrix equation. For there to be only one solution, rank(A) = n must also be true.
a) If m < n, rank(A) = n cannot possibly be true, because rank(A) <= min(m, n) = m < n; this is a wide matrix with fewer equations than unknowns.
b) If m = n and rank(A) = n, then A is square and invertible. Since the inverse of a matrix is unique, the matrix equation Ax = y can be solved by multiplying each side of the equation, on the left, by A^(-1). This results in A^(-1)Ax = A^(-1)y, so Ix = A^(-1)y, and thus x = A^(-1)y, which gives the unique solution to the equation.
c) If m > n, then there are more equations than unknowns. However, if rank(A) = n, then it is possible to choose n equations (i.e., rows of A) such that if these equations are satisfied, then the remaining m - n equations will also be satisfied; in other words, they are redundant.
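As a sketch with made-up matrices, the rank comparisons translate directly into MATLAB:

    A  = [1 2; 2 4; 3 6];          % rank(A) = 1: the second column is twice the first
    y1 = [1; 2; 3];                % lies in the range of A
    y2 = [1; 0; 0];                % does not lie in the range of A
    rank([A y1]) == rank(A)        % true  -> at least one solution exists
    rank([A y2]) == rank(A) + 1    % true  -> no solution exists
    % Case 2b: square A with rank(A) = n has the unique solution x = A\y.
    B = [2 1; 1 3];  y = [5; 10];
    x = B \ y                      % returns [1; 3], equivalent to inv(B)*y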

In linear interpolation, the estimated point is assumed to lie on the line joining the nearest points to the left and right. Assume, without loss of generality, that the x-data points are in ascending order; that is, x_i < x_{i+1}, and let x be a point such that x_i < x < x_{i+1}. Then the linear interpolation at x is y(x) = y_i + (y_{i+1} - y_i) * (x - x_i) / (x_{i+1} - x_i).
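A quick sketch with hypothetical data; MATLAB's interp1 implements exactly this formula for its 'linear' method:

    x = [0 1 2 3];  y = [1 3 2 5];     % sample data points
    interp1(x, y, 1.5, 'linear')       % 2.5, i.e., 3 + (2 - 3)*(1.5 - 1)/(2 - 1)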
In cubic spline interpolation (Figure 14.3), the interpolating function is a set of piecewise cubic functions. Specifically, we assume that the points (x_i, y_i) and (x_{i+1}, y_{i+1}) are joined by a cubic polynomial S_i(x) = a_i x^3 + b_i x^2 + c_i x + d_i that is valid for x_i <= x <= x_{i+1}, for i = 1, ..., n - 1. To find the interpolating function, we must first determine the coefficients a_i, b_i, c_i, d_i for each of the cubic functions. For n points, there are n - 1 cubic functions to find, and each cubic function requires four coefficients. Therefore we have a total of 4(n - 1) unknowns, and so we need 4(n - 1) independent equations to find all the coefficients. First, we know that the cubic functions must pass through the data points on the left and the right: S_i(x_i) = y_i and S_i(x_{i+1}) = y_{i+1} for i = 1, ..., n - 1, which gives us 2(n - 1) equations. Next, we want each cubic function to join as smoothly as possible with its neighbors, so we constrain the splines to have continuous first and second derivatives at the interior data points i = 2, ..., n - 1, which gives 2(n - 2) more equations. That leaves two equations still needed; these are typically supplied as boundary conditions, for example the natural spline conditions S_1''(x_1) = 0 and S_{n-1}''(x_n) = 0.
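A short sketch with the same hypothetical data; note that MATLAB's built-in spline uses not-a-knot rather than natural boundary conditions, but it illustrates the piecewise-cubic idea:

    x = [0 1 2 3];  y = [1 3 2 5];
    xq = linspace(0, 3, 101);
    yq = spline(x, y, xq);            % piecewise cubics through every (x_i, y_i)
    plot(x, y, 'o', xq, yq, '-')      % smooth curve passing through the data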

Lagrange polynomial interpolation finds a single polynomial that goes through all the data points. This polynomial is referred to as a Lagrange polynomial, L(x), and as an interpolation function it should have the property L(x_i) = y_i for every point in the data set. For computing Lagrange polynomials, it is useful to write them as a linear combination of Lagrange basis polynomials, P_i(x): L(x) = sum over i of y_i * P_i(x), where P_i(x) = product over all j != i of (x - x_j) / (x_i - x_j).
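A minimal sketch (with hypothetical points) that evaluates L at a query point by building each basis polynomial directly:

    x = [0 1 2];  y = [1 3 2];
    xq = 1.5;  L = 0;
    for i = 1:numel(x)
        P = 1;                                % build P_i(xq)
        for j = [1:i-1, i+1:numel(x)]         % every index j ~= i
            P = P * (xq - x(j)) / (x(i) - x(j));
        end
        L = L + y(i) * P;
    end
    L    % 2.875: the unique quadratic through the three points, evaluated at 1.5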

Clearly, it is not useful to express functions as infinite sums, because we cannot compute infinitely many terms. However, it is often useful to approximate a function by an Nth order Taylor series approximation, which is a truncation of its Taylor expansion at some n = N. This technique is especially powerful when there is a point around which we have knowledge about a function and all of its derivatives.
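As a sketch, truncating the Taylor expansion of exp(x) around x0 = 0 after N = 5 terms:

    N = 5;  x = 1;  approx = 0;
    for n = 0:N
        approx = approx + x^n / factorial(n);   % nth term of exp's Taylor series
    end
    approx        % 2.71667
    exp(1)        % 2.71828, so the truncation error is about 1.6e-3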

ROOT FINDING The root, or zero, of a function f(x) is an x_r such that f(x_r) = 0. For functions such as f(x) = x^2 - 9, the roots are clearly 3 and -3. However, for other functions, such as f(x) = cos(x) - x, determining an analytic, or exact, solution for the roots can be difficult. For these cases, it is useful to generate numerical approximations of the roots of f and to understand the limitations in doing so.
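For instance, MATLAB's fzero finds a numerical root of cos(x) - x from an initial guess:

    f = @(x) cos(x) - x;
    xr = fzero(f, 1)     % about 0.7391
    f(xr)                % essentially zero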

Tolerance is the level of error that is acceptable for an engineering application. We say that a computer program has converged to a solution when it has found a solution with an error smaller than the tolerance. When computing roots numerically, or conducting any other kind of numerical analysis, it is important to establish both a metric for error and a tolerance that is suitable for a given engineering application. For computing roots, we want an x_r such that f(x_r) is very close to 0. Therefore |f(x)| is a possible choice for the measure of error, since the smaller it is, the closer we are likely to be to a root. Also, if we assume that x_i is the ith guess of an algorithm for finding a root, then |x_{i+1} - x_i| is another possible choice for measuring error, since we expect the improvements between subsequent guesses to diminish as the algorithm approaches a solution.
The Intermediate Value Theorem says that if f(x) is a continuous function between a and b, and sign(f(a)) ≠ sign(f(b)), then there must be a c such that a < c < b and f(c) = 0. This is illustrated in Figure 16.1. The bisection method uses the Intermediate Value Theorem iteratively to find roots. Let f(x) be a continuous function, and let a and b be real scalar values such that a < b. Assume, without loss of generality, that f(a) > 0 and f(b) < 0. Then by the Intermediate Value Theorem, there must be a root in the open interval (a, b). Now let m = (a + b)/2 be the midpoint between a and b. If f(m) = 0, or is close enough, then m is a root. If f(m) > 0, then m is an improvement on the left bound, a, and there is guaranteed to be a root in the open interval (m, b). If f(m) < 0, then m is an improvement on the right bound, b, and there is guaranteed to be a root in the open interval (a, m).
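A minimal bisection sketch for f(x) = cos(x) - x on [0, 1], with a user-chosen tolerance on |f(m)|:

    f = @(x) cos(x) - x;           % f(0) > 0 and f(1) < 0, so a root lies in (0, 1)
    a = 0;  b = 1;  tol = 1e-8;
    m = (a + b) / 2;
    while abs(f(m)) >= tol
        if sign(f(a)) == sign(f(m))
            a = m;                 % the root must be in (m, b)
        else
            b = m;                 % the root must be in (a, m)
        end
        m = (a + b) / 2;
    end
    m                              % about 0.7391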
NEWTON-RAPHSON METHOD Let f(x) be a smooth and continuous function and x_r be an unknown root of f(x). Now assume that x_0 is a guess for x_r. Unless x_0 is a very lucky guess, f(x_0) will not be a root. Given this scenario, we want to find an x_1 that is an improvement on x_0 (i.e., closer to x_r than x_0). If we assume that x_0 is close enough to x_r, then we can improve upon it by taking the linear approximation of f(x) around x_0, which is a line, and finding the intersection of this line with the x-axis. Written out, the linear approximation of f(x) around x_0 is f(x) ≈ f(x_0) + f'(x_0)(x - x_0). Using this approximation, we find x_1 such that f(x_1) = 0. Plugging these values into the linear approximation results in the equation 0 = f(x_0) + f'(x_0)(x_1 - x_0), which, when solved for x_1, gives x_1 = x_0 - f(x_0)/f'(x_0). Written generally, a Newton step computes an improved guess, x_i, using a previous guess x_{i-1}, and is given by the equation x_i = x_{i-1} - g(x_{i-1})/g'(x_{i-1}). The Newton-Raphson Method of finding roots iterates Newton steps from x_0 until the error is less than the tolerance.
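The same example as above, solved with Newton steps (note that the derivative must be available):

    f  = @(x) cos(x) - x;
    df = @(x) -sin(x) - 1;          % f'(x)
    x = 1;  tol = 1e-10;
    while abs(f(x)) > tol
        x = x - f(x) / df(x);       % one Newton step
    end
    x                               % about 0.7391, reached in a few iterations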
In numerical analysis, numerical differentiation describes algorithms for estimating the derivative of a mathematical function or function subroutine using values of the function and perhaps
other knowledge about the function.

Central difference formula: f'(x_j) ≈ (f(x_{j+1}) - f(x_{j-1})) / (2h), which uses the grid points on either side of x_j and is accurate to O(h^2).

1. Because explicit derivation of functions is sometimes cumbersome for engineering applications, numerical approaches can be preferable.
2. Numerical approximation of derivatives can be done using a grid on which the derivative is approximated by finite differences.
3. Finite differences approximate the derivative by ratios of differences in the function value over small intervals.
4. Finite difference schemes have different approximation orders depending on the method used (compare the sketch below).
5. There are issues with finite differences for approximation of derivatives when the data is noisy.
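A sketch comparing the approximation orders of the forward and central difference schemes for d/dx cos(x) at x = 0.5:

    h = 1e-4;  x = 0.5;
    fwd = (cos(x + h) - cos(x)) / h;           % forward difference, O(h)
    ctr = (cos(x + h) - cos(x - h)) / (2*h);   % central difference, O(h^2)
    exact = -sin(x);
    abs(fwd - exact)    % about 4e-5
    abs(ctr - exact)    % about 8e-10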
Given a function f(x), we want to approximate the integral of f(x) over the total interval, [a, b]. Figure 18.1 illustrates this area. To accomplish this goal, we assume that the interval has been discretized into a numerical grid, x, consisting of n + 1 points with spacing h = (b - a)/n. Here, we denote each point in x by x_i, where x_0 = a and x_n = b. Note: there are n + 1 grid points because the count starts at x_0. We also assume we have a function, f(x), that can be computed for any of the grid points, or that we have been given the function implicitly as f(x_i). The interval [x_i, x_{i+1}] is referred to as a subinterval.
The simplest method for approximating integrals is by summing the area of rectangles that are defined for each subinterval. The width of the rectangle is x_{i+1} - x_i = h, and the height is defined by a function value f(x) for some x in the subinterval. An obvious choice for the height is the function value at the left endpoint, x_i, or at the right endpoint, x_{i+1}, because these values can be used even if the function itself is not known. This method gives the Riemann Integral approximation, which is the sum over i = 0 to n-1 of h*f(x_i) for left endpoints, or the sum over i = 1 to n of h*f(x_i) for right endpoints.
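A left Riemann sum sketch for the integral of sin(x) over [0, pi], whose exact value is 2:

    a = 0;  b = pi;  n = 100;
    x = linspace(a, b, n + 1);            % n+1 grid points
    h = (b - a) / n;
    h * sum(sin(x(1:end-1)))              % left endpoints only: about 1.9998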

The Trapezoid Rule fits a trapezoid into each subinterval and sums the areas of the trapezoids to approximate the total integral. This approximation for the integral of an arbitrary function is shown in Figure 18.2. For each subinterval, the Trapezoid Rule computes the area of a trapezoid with corners at (x_i, 0), (x_{i+1}, 0), (x_i, f(x_i)), and (x_{i+1}, f(x_{i+1})), which is h*(f(x_i) + f(x_{i+1}))/2. Thus, the Trapezoid Rule approximates the integral as the sum over i = 0 to n-1 of h*(f(x_i) + f(x_{i+1}))/2.
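MATLAB's built-in trapz implements exactly this rule:

    x = linspace(0, pi, 101);
    trapz(x, sin(x))     % about 1.9998; the exact integral is 2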

Consider two consecutive subintervals, [x_{i-1}, x_i] and [x_i, x_{i+1}]. Simpson's Rule (note that to use Simpson's Rule, you must have an even number of intervals and, therefore, an odd number of grid points) approximates the area under f(x) over these two subintervals by fitting a quadratic polynomial through the points (x_{i-1}, f(x_{i-1})), (x_i, f(x_i)), and (x_{i+1}, f(x_{i+1})), which is a unique polynomial, and then integrating the quadratic exactly. Figure 18.3 shows this integral approximation for an arbitrary function. First, we construct the quadratic polynomial approximation of the function over the two subintervals. The easiest way to do this is to use Lagrange polynomials, which were discussed in Chapter 14. Applying the formula for constructing Lagrange polynomials and integrating the resulting quadratic exactly over [x_{i-1}, x_{i+1}] gives the contribution (h/3)*(f(x_{i-1}) + 4*f(x_i) + f(x_{i+1})) for each pair of subintervals.
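A composite Simpson's Rule sketch for sin(x) over [0, pi] (n must be even):

    a = 0;  b = pi;  n = 100;
    x = linspace(a, b, n + 1);  h = (b - a) / n;
    y = sin(x);
    (h/3) * (y(1) + 4*sum(y(2:2:end-1)) + 2*sum(y(3:2:end-2)) + y(end))
    % about 2, with error on the order of 1e-8; Simpson's Rule is O(h^4) accurate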

S(t_{j+1}) = S(t_j) + h*F(t_j, S(t_j))
This formula is called the Explicit Euler Formula, and it allows us to compute an approximation of the state S(t_{j+1}) given the state S(t_j). Starting from a given initial value S0 = S(t_0), we can use this formula to integrate the states up to S(t_f); these S(t) values are then an approximation of the solution of the differential equation. The Explicit Euler Formula is the simplest and most intuitive method for solving initial value problems. At any state (t_j, S(t_j)), it uses F at that state to point toward the next state and then moves in that direction a distance of h. Although there are more sophisticated and accurate methods for solving these problems, they all have the same fundamental structure.
Assume we are given a function F(t, S(t)) that computes dS(t)/dt, a numerical grid, t, of the interval [t_0, t_f], and an initial state value S0 = S(t_0). We can compute S(t_j) for every t_j in t using the following steps.
1. Store S0 = S(t_0) in an array, S.
2. Compute S(t_1) = S0 + h*F(t_0, S0).
3. Store S1 = S(t_1) in S.
4. Compute S(t_2) = S1 + h*F(t_1, S1).
5. Store S2 = S(t_2) in S.
6. Continue stepping and storing in this way for each remaining grid point.
7. Compute S(t_f) = S_{f-1} + h*F(t_{f-1}, S_{f-1}).
8. Store S_f = S(t_f) in S.
9. S is an approximation of the solution to the initial value problem.
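A direct translation of these steps for a sample problem, dS/dt = -2*S with S(0) = 1 on [0, 1]:

    F = @(t, S) -2*S;                   % right-hand side of the ODE
    t = linspace(0, 1, 101);  h = t(2) - t(1);
    S = zeros(size(t));  S(1) = 1;      % store the initial state S0
    for j = 1:numel(t) - 1
        S(j+1) = S(j) + h * F(t(j), S(j));   % one Explicit Euler step
    end
    S(end)       % about 0.1326; the exact value exp(-2) is about 0.1353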

The Explicit Euler Formula is called explicit because it only requires information at t_j to compute the state at t_{j+1}. That is, S(t_{j+1}) can be written explicitly in terms of values we already have (i.e., t_j and S(t_j)). The Implicit Euler Formula can be derived by taking the linear approximation of S(t) around t_{j+1} and computing it at t_j, which gives S(t_{j+1}) = S(t_j) + h*F(t_{j+1}, S(t_{j+1})); here the unknown S(t_{j+1}) appears on both sides, so an equation must be solved at each step.

A third option is the Trapezoidal Formula, which is the average of the Explicit and Implicit Euler Formulas: S(t_{j+1}) = S(t_j) + (h/2)*(F(t_j, S(t_j)) + F(t_{j+1}, S(t_{j+1}))).

A classical method for integrating ODEs with a high order of accuracy is the Fourth Order Runge-Kutta (RK4) method. This method uses four predictor-corrector steps, called k1, k2, k3, and k4. A weighted average of these predictions is used to produce the approximation of the solution. The formula is as follows:
k1 = F(t_j, S(t_j))
k2 = F(t_j + h/2, S(t_j) + (h/2)*k1)
k3 = F(t_j + h/2, S(t_j) + (h/2)*k2)
k4 = F(t_j + h, S(t_j) + h*k3)
S(t_{j+1}) = S(t_j) + (h/6)*(k1 + 2*k2 + 2*k3 + k4)
Predictor-corrector methods of solving initial value problems improve the approximation accuracy of non-predictor-corrector methods by querying the F function several times at different locations (predictions), and then using a weighted average of the results (corrections) to update the state. The midpoint method has a predictor step, S(t_j + h/2) = S(t_j) + (h/2)*F(t_j, S(t_j)), which estimates the solution at the midpoint of the interval.
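One RK4 step on the same sample problem as above (dS/dt = -2*S, S(0) = 1), with h = 0.1:

    F = @(t, S) -2*S;
    h = 0.1;  tj = 0;  Sj = 1;
    k1 = F(tj, Sj);
    k2 = F(tj + h/2, Sj + (h/2)*k1);
    k3 = F(tj + h/2, Sj + (h/2)*k2);
    k4 = F(tj + h,   Sj + h*k3);
    Sj + (h/6)*(k1 + 2*k2 + 2*k3 + k4)   % 0.81873, matching exp(-0.2) to five decimals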
