BCS054
(a) (i) Fixed-point number representation: This is a method of representing numbers in a computer using
a fixed number of digits for the integer part and a fixed number of digits for the fractional part. The
position of the decimal point is fixed, hence the name.
Example: In a 4-digit fixed-point format with 2 digits for the integer part and 2 digits for the fractional part, 12.34 is stored as the digit string 1234, with the decimal point implied between the second and third digits.
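A minimal Python sketch of the idea, assuming a decimal scale factor of 100 (two fractional digits); the function names are illustrative:

# Fixed-point storage sketch: values are kept as plain integers
# scaled by a fixed power of 10 (here, 2 fractional digits).
SCALE = 100

def to_fixed(x):
    # encode a real number as a scaled integer
    return int(round(x * SCALE))

def from_fixed(n):
    # decode a scaled integer back to a real number
    return n / SCALE

stored = to_fixed(12.34)
print(stored)              # 1234 -- the internal representation
print(from_fixed(stored))  # 12.34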
(ii) Round-off error: This is the error that occurs when a real number is approximated by a finite-
precision floating-point number. It happens when the number cannot be represented exactly due to the
limitations of the number system.
Example: The decimal number 0.1 has the non-terminating binary expansion 0.0001100110011..., so no finite mantissa can represent it exactly; a short mantissa might store 0.000110011 (binary) = 0.099609375, which is slightly different from the actual value.
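This is easy to observe in Python, whose floats are IEEE 754 doubles:

from decimal import Decimal

# Decimal(0.1) reveals the binary value actually stored for 0.1
print(Decimal(0.1))       # 0.1000000000000000055511151231257827...
print(0.1 + 0.2 == 0.3)   # False: the round-off errors do not cancel
print(0.1 + 0.2)          # 0.30000000000000004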
(iii) Representation of zero as floating point number: In a floating-point number system, zero is
represented by a special value. The exponent is set to the smallest possible value, and the mantissa is
set to zero. This ensures that the number is treated as zero in calculations.
(iv) Significant digits in a decimal number representation: The significant digits of a decimal number are counted from the leftmost non-zero digit; leading zeros are not significant, while zeros between or after significant digits are.
Example: In the number 0.00123, the significant digits are 1, 2 and 3, whereas in 1.023 all four digits are significant.
(v) Normalized representation of a floating point number: A normalized floating-point number is one
where the leading digit of the mantissa is non-zero. This ensures that the number is represented in a
unique way.
Example: The number 123.45 can be normalized as 1.2345 * 10^2.
(vi) Overflow: This occurs when a calculation results in a number that is too large to be represented in
the available number system.
Example: If we try to add two very large numbers together, the result may overflow if the sum is too
large to fit in the available number of bits.
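A quick Python illustration (floats are IEEE 754 doubles, whose largest finite value is about 1.8 * 10^308):

import math

big = 1.0e308
# the product exceeds the largest representable double, so the
# result overflows to infinity
print(big * 10)              # inf
print(math.isinf(big * 10))  # True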
(b) Multiplication may fail to be distributive over addition in computer arithmetic because of the limited precision of floating-point numbers: (a * b) + (a * c) need not equal a * (b + c), since each multiplication introduces its own rounding error before the addition is performed.
Example: Work in 4-significant-digit decimal arithmetic with rounding, and let a = 0.6667, b = 0.3333, c = 0.3332. Then
b + c = 0.6665, so a * (b + c) = 0.6667 * 0.6665 = 0.44435555, which rounds to 0.4444.
On the other side, a * b = 0.22221111 rounds to 0.2222 and a * c = 0.22214444 rounds to 0.2221, so (a * b) + (a * c) = 0.2222 + 0.2221 = 0.4443.
The two expressions therefore differ in the last digit: a * (b + c) = 0.4444 but (a * b) + (a * c) = 0.4443.
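The same calculation can be reproduced with Python's decimal module, which lets the working precision be set directly:

from decimal import Decimal, getcontext

getcontext().prec = 4  # 4 significant digits, as in the example above

a, b, c = Decimal("0.6667"), Decimal("0.3333"), Decimal("0.3332")
lhs = a * (b + c)    # 0.6667 * 0.6665 rounds to 0.4444
rhs = a * b + a * c  # 0.2222 + 0.2221 = 0.4443
print(lhs, rhs, lhs == rhs)  # 0.4444 0.4443 False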
(c) To find out to how many decimal places the value 22/7 is accurate as an approximation of 3.14159265, we calculate the absolute error:
|22/7 - 3.14159265| = |3.14285714... - 3.14159265| ≈ 0.00126449
An approximation is accurate to n decimal places when the absolute error is at most 0.5 * 10^-n. Since
0.00126449 > 0.5 * 10^-8, the approximation is certainly not accurate to 8 decimal places. Decreasing n instead:
0.00126449 > 0.5 * 10^-3 = 0.0005, but 0.00126449 <= 0.5 * 10^-2 = 0.005.
Therefore 22/7 is accurate to 2 decimal places as an approximation of 3.14159265 (both values round to 3.14).
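A short Python check of this count, using the 0.5 * 10^-n criterion from above:

error = abs(22 / 7 - 3.14159265)
print(error)  # ~0.00126449

# largest n with |error| <= 0.5 * 10^-n
n = 0
while error <= 0.5 * 10 ** -(n + 1):
    n += 1
print(n)      # 2 -> accurate to 2 decimal places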
(d) To calculate a bound for the truncation error in approximating f(x) = sin x by the given polynomial,
we can use the Lagrange error bound theorem. This theorem states that the error is bounded by:
|E(x)| <= M * |x - a|^(n+1) / (n+1)!
where M is the maximum value of the (n+1)-th derivative of f(x) on the interval [-1, 1], a is the point
around which the Taylor series is centered (in this case, a = 0), and n is the degree of the polynomial
approximation.
For f(x) = sin x, every derivative is ±sin x or ±cos x, each of which has absolute value at most 1 on the interval [-1, 1]. Therefore, we can take M = 1.
Substituting n = 5 into the Lagrange error bound, we get:
|E(x)| <= 1 * |x|^6 / 6!
Since |x| <= 1, we can further simplify the bound to:
|E(x)| <= 1 / 720
Therefore, the truncation error in approximating sin x by the given polynomial is bounded by 1/720.
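A quick numerical sanity check of this bound, assuming the polynomial in question is the degree-5 Taylor polynomial x - x^3/3! + x^5/5!:

import math

def p5(x):
    # degree-5 Taylor polynomial of sin x about 0
    return x - x**3 / 6 + x**5 / 120

# sample the error on [-1, 1] and compare with the bound 1/720
worst = max(abs(math.sin(i / 1000) - p5(i / 1000))
            for i in range(-1000, 1001))
print(worst, worst <= 1 / 720)  # ~0.000196 True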
(e) To approximate the value of (3.7)^-1 using the first three terms of Taylor's series expansion, we can
use the formula:
f(x) ≈ f(a) + f'(a)(x-a) + f''(a)(x-a)^2/2
where f(x) = x^-1, a = 3, and x = 3.7.
Calculating the derivatives and substituting the values, we get:
f(3) = 1/3, f'(3) = -1/9, f''(3) = 2/27
Substituting these values into the formula, we get:
(3.7)^-1 ≈ 1/3 - (1/9)(0.7) + (2/27)(0.7)^2 / 2 = 0.33333 - 0.07778 + 0.01815 ≈ 0.2737
(For comparison, the exact value is 1/3.7 = 0.27027, so the truncation error of the three-term expansion is about 0.0034.)
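The same expansion in Python, for verification:

a, x = 3.0, 3.7
f0 = 1 / a       # f(3)   =  1/3
f1 = -1 / a**2   # f'(3)  = -1/9
f2 = 2 / a**3    # f''(3) =  2/27

approx = f0 + f1 * (x - a) + f2 * (x - a) ** 2 / 2
print(approx)    # ~0.27370  (three-term Taylor value)
print(1 / x)     # ~0.27027  (exact value, for comparison)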
x2^(3) = 3.0325, x3^(3) = -10.585
k = 3: x1^(4) = 3.0488, x2^(4) = 3.0076, x3^(4) = -10.4625
Comparison:
The Gauss-Seidel method gives a better approximation to the exact solution after four iterations. This is
because the Gauss-Seidel method uses the updated values of x1 and x2 in the calculation of x3, while
the Jacobi method uses only the old values. This can lead to faster convergence for the Gauss-Seidel
method.
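The two update rules are easy to contrast in code. The system solved above is not reproduced on this page, so the sketch below uses a placeholder diagonally dominant system A x = b purely to show the difference between the methods:

import numpy as np

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([15.0, 10.0, 10.0])

def jacobi(A, b, x, iters):
    for _ in range(iters):
        # every component is computed from the previous iterate only
        x = (b - (A - np.diag(np.diag(A))) @ x) / np.diag(A)
    return x

def gauss_seidel(A, b, x, iters):
    x = x.copy()
    for _ in range(iters):
        for i in range(len(b)):
            # components already updated in this sweep are reused at once
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

x0 = np.zeros(3)
print(jacobi(A, b, x0, 4))        # Jacobi after 4 iterations
print(gauss_seidel(A, b, x0, 4))  # Gauss-Seidel after 4 iterations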
(a) Interpolation is the process of estimating the value of a function at a point between two known data
points. It is used in various numerical problems to:
• Approximate values of functions that are difficult or impossible to evaluate directly.
• Fill in missing data points in a dataset.
• Smooth out noisy data.
• Extrapolate values beyond the range of the known data points (though extrapolation is less reliable than interpolation; see the sketch after this list).
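A minimal sketch of the simplest case, linear interpolation between two known points (the data values here are purely illustrative):

def lerp(x, x0, y0, x1, y1):
    # straight-line estimate of f(x) between (x0, y0) and (x1, y1)
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# estimate the value midway between the known points (1, 10) and (3, 30)
print(lerp(2.0, 1.0, 10.0, 3.0, 30.0))  # 20.0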
(b) Δ³f1 as a backward difference:
Δ³f1 = ∇³f4 = f4 - 3f3 + 3f2 - f1
(c) Δ³f1 as a central difference:
Since Δ = E^(1/2) δ, we have Δ³f1 = δ³f_(5/2) = f4 - 3f3 + 3f2 - f1
(d) Difference table:
x    y          Δy        Δ²y       Δ³y       Δ⁴y       Δ⁵y
0    -16.8575   55.905    -94.5275  120.49    -106.555  81.495
1    24.0625    -37.6275  25.905    16.9825   -25.065   -33.93
2    16.565     -50.5025  42.8875   -32.0475  5.03
3    -13.9375   42.425    74.9125   -37.015
4    28.5625    115.5     -58.5475
5    144.0625
Forward differences:
Δy1 = 55.905, Δ²y1 = -94.5275, Δ³y1 = 120.49, Δ⁴y1 = -106.555, Δ⁵y1 = 81.495
Backward differences:
∇y5 = -58.5475, ∇²y5 = 74.9125, ∇³y5 = -32.0475, ∇⁴y5 = 16.9825, ∇⁵y5 = -94.5275
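A generic way to build such a table programmatically; a minimal sketch, shown with the illustrative values y = x³ (substitute the y column above):

def difference_table(y):
    # column k+1 holds the consecutive differences of column k
    table = [list(y)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

for row in difference_table([0, 1, 8, 27, 64, 125]):
    print(row)
# the third differences come out constant (6), as expected for a cubic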
Part (a)
Given data:
Year (x) Population (y)
1971 112
1981 132
1991 158
2001 189
2011 226
Note: Stirling's central difference formula works with an odd number of data points centred on the point of interest; with the five values above, the middle point is x0 = 1991.
1. Build the difference table:
Δy1 = 132 - 112 = 20, Δy2 = 158 - 132 = 26, Δy3 = 189 - 158 = 31, Δy4 = 226 - 189 = 37
Δ²y1 = 26 - 20 = 6, Δ²y2 = 31 - 26 = 5, Δ²y3 = 37 - 31 = 6
Δ³y1 = 5 - 6 = -1, Δ³y2 = 6 - 5 = 1
Δ⁴y1 = 1 - (-1) = 2
Note: Substituting these differences into Stirling's formula with p = (x - x0)/h (here h = 10) gives the required estimates for the population and for f(3).
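As an illustration, a minimal Python sketch of Stirling's formula on this data. The target year asked for in the question is not visible here, so x = 1996 (p = 0.5 from the middle point 1991) is a purely hypothetical choice:

h, x0, y0 = 10, 1991, 158  # middle tabulated point
d1a, d1b = 26, 31          # the two first differences around x0
d2 = 5                     # second difference centred on x0
d3a, d3b = -1, 1           # the two third differences
d4 = 2                     # fourth difference

x = 1996                   # hypothetical target year
p = (x - x0) / h
estimate = (y0
            + p * (d1a + d1b) / 2
            + p**2 / 2 * d2
            + p * (p**2 - 1) / 6 * (d3a + d3b) / 2
            + p**2 * (p**2 - 1) / 24 * d4)
print(estimate)            # ~172.86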
Part (a)
Given data:
x f(x)
76 5.3147
81 5.4346
86 5.5637
91 5.6629
Step 1: Calculate forward differences (h = 5):
Δy1 = f(81) - f(76) = 5.4346 - 5.3147 = 0.1199
Δ²y1 = f(86) - 2f(81) + f(76) = 5.5637 - 10.8692 + 5.3147 = 0.0092
Step 2: Approximate first derivative using the O(h²) formula:
f'(76) ≈ (Δy1 - Δ²y1 / 2) / h = (0.1199 - 0.0046) / 5 = 0.02306
Step 3: Approximate second derivative using the forward difference formula:
f''(76) ≈ Δ²y1 / h² = 0.0092 / 25 = 0.000368
Note: To calculate the actual errors, we would need the exact function f(x). Without this information,
we can't compute the exact derivatives.
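The same calculation in Python:

x = [76, 81, 86, 91]
f = [5.3147, 5.4346, 5.5637, 5.6629]
h = x[1] - x[0]              # 5

d1 = f[1] - f[0]             # Δy1  = 0.1199
d2 = f[2] - 2 * f[1] + f[0]  # Δ²y1 = 0.0092

fp = (d1 - d2 / 2) / h       # first derivative estimate
fpp = d2 / h**2              # second derivative estimate
print(fp, fpp)               # ~0.02306 ~0.000368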
Part (a)
Given integral:
∫(8.4 to 10.4) (5x + 4x² + 3) dx
Trapezoidal Rule:
∫(a to b) f(x) dx ≈ h/2 * (f(x1) + 2 ∑(i=2 to n) f(xi) + f(x(n+1)))
where xi = a + (i-1) * h, h = (b - a)/n, so that x1 = a and x(n+1) = b.
Note: Calculate the values of f(xi) at each node and then apply the respective formula for each method; choose the number of subintervals n and h = (b - a)/n accordingly. A worked trapezoidal sketch follows.
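A minimal sketch in Python; the number of subintervals asked for is not shown above, so n = 4 is a purely illustrative choice:

def f(x):
    return 5 * x + 4 * x**2 + 3

a, b, n = 8.4, 10.4, 4
h = (b - a) / n
xs = [a + i * h for i in range(n + 1)]

# composite trapezoidal rule
approx = h / 2 * (f(xs[0]) + 2 * sum(f(x) for x in xs[1:-1]) + f(xs[-1]))
print(approx)  # 809.88; the exact value of the integral is 809.5467 (4 d.p.)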