This document contains information about a student named Kiran Kumar Malik enrolled in the first semester of the computer science branch at Bhubaneswar campus. It defines maximal and minimal elements in a partially ordered set (poset) and provides examples to identify these elements from Hasse diagrams. It also contains solutions to questions asking to find the maximal, minimal, greatest and least elements for different posets defined by the divides relation.
The document discusses eigenvalue problems and algorithms for solving them. Eigenvalue problems involve finding the eigenvalues and eigenvectors of a matrix and occur across science and engineering. The properties of the eigenvalue problem, such as whether the matrix is real or complex, affect the choice of algorithm. The Power Method is described as an iterative technique for determining the dominant eigenvalue and eigenvector of a matrix. It works by successively applying the matrix to a starting vector to isolate the component in the direction of the dominant eigenvector. Variants can find other eigenvalues, such as the smallest. General projection methods approximate eigenvectors within a subspace, while subspace iteration generalizes the Power Method to compute multiple eigenvalues.
Second order homogeneous linear differential equations (Viraj Patel)
1) The document discusses second order linear homogeneous differential equations, which have the general form P(x)y'' + Q(x)y' + R(x)y = 0.
2) It describes methods for finding the general solution including reduction of order, and discusses the solutions when the coefficients are constants.
3) The general solution depends on the nature of the roots of the auxiliary equation: distinct real roots, repeated real roots, or complex roots.
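The three root cases above determine the form of the general solution in the constant-coefficient case ay'' + by' + cy = 0. A minimal sketch of classifying the auxiliary equation (the function names are illustrative, not from the document):

```python
import cmath

def auxiliary_roots(a, b, c):
    """Roots of the auxiliary equation a*r^2 + b*r + c = 0
    for the constant-coefficient ODE a*y'' + b*y' + c*y = 0."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

def classify(a, b, c, tol=1e-12):
    """Return which of the three solution forms applies."""
    r1, r2 = auxiliary_roots(a, b, c)
    if abs(r1.imag) > tol:
        return "complex roots"          # y = e^{ax}(C1 cos bx + C2 sin bx)
    if abs(r1 - r2) < tol:
        return "repeated real root"     # y = (C1 + C2 x) e^{rx}
    return "distinct real roots"        # y = C1 e^{r1 x} + C2 e^{r2 x}

# y'' - 3y' + 2y = 0  ->  r^2 - 3r + 2 = 0  ->  r = 1, 2
print(classify(1, -3, 2))   # distinct real roots
```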
This document provides an overview of probability concepts including:
- The three axioms of probability: probabilities are between 0 and 1, the probability of the sample space is 1, and the probability of the union of disjoint events equals the sum of the individual probabilities.
- Formulas for probability, conditional probability, independence, and complements.
- Discrete and continuous random variables and their properties including expected value and variance.
- Examples of probability mass functions for binomial and Poisson distributions.
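As a quick check of those mass functions, the binomial and Poisson pmfs can be written with the standard library alone (function names here are illustrative):

```python
from math import comb, exp, factorial

def binomial_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return exp(-lam) * lam**k / factorial(k)

# A pmf must sum to 1 over its support.
total = sum(binomial_pmf(k, 10, 0.3) for k in range(11))
print(round(total, 10))  # 1.0

# Expected value: E[X] = n*p for the binomial, here 10 * 0.3 = 3.
mean = sum(k * binomial_pmf(k, 10, 0.3) for k in range(11))
print(round(mean, 10))  # 3.0
```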
The document discusses the secant method for finding the roots of non-linear equations. It introduces the secant method which uses successive secant lines through points on the graph of a function to better approximate roots. The methodology section explains that a secant line is defined by two initial points and the next point is where the secant line crosses the x-axis. The algorithm involves calculating the next estimate from the two initial guesses and checking if the error is below a tolerance level. Applications include using the secant method for earthquake engineering analysis and limitations include potential division by zero errors or root jumping.
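The update rule described above, with the division-by-zero guard the limitations section warns about, can be sketched as follows (function names are assumptions, not the document's):

```python
def secant(f, x0, x1, tol=1e-10, max_iter=100):
    """Secant method: each new estimate is where the secant line
    through (x0, f(x0)) and (x1, f(x1)) crosses the x-axis."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                      # guard against division by zero
            raise ZeroDivisionError("flat secant line; pick new guesses")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:            # error below tolerance -> stop
            return x2
        x0, x1 = x1, x2
    raise RuntimeError("did not converge")

# Root of x^2 - 2 starting from guesses 1 and 2:
root = secant(lambda x: x * x - 2, 1.0, 2.0)
print(round(root, 6))  # 1.414214
```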
Numerical Methods - Power Method for Eigenvalues (Dr. Nirav Vyas)
The document discusses the power method, an iterative method for estimating the largest or smallest eigenvalue and corresponding eigenvector of a matrix. It begins by introducing the power method and notes it is useful when a matrix's eigenvalues can be ordered by magnitude. It then provides the working rules for determining a matrix's largest eigenvalue using the power method, which involves iteratively computing the matrix-vector product and rescaling the vector. Finally, it includes an example applying the power method to estimate the largest eigenvalue and eigenvector of a 2x2 matrix.
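The working rule described, multiply by the matrix, rescale by the largest component, repeat, can be sketched in pure Python (the 2x2 matrix below is an illustrative choice, not necessarily the document's example):

```python
def power_method(A, x, iters=50):
    """Estimate the dominant eigenvalue/eigenvector of a small matrix
    by repeated multiplication and rescaling (pure-Python sketch)."""
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]
        scale = max(abs(v) for v in y)      # rescale by the largest component
        x = [v / scale for v in y]
    return scale, x

# For [[2, 1], [1, 2]] the eigenvalues are 3 and 1; the iteration
# converges to the dominant pair (3, [1, 1]).
lam, vec = power_method([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
print(round(lam, 6), [round(v, 6) for v in vec])  # 3.0 [1.0, 1.0]
```

To estimate the smallest eigenvalue instead, the same iteration is applied to the inverse of the matrix, as the "variants" mentioned above do.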
1. Generating functions can be used to represent sequences as power series and solve recurrence relations.
2. Common examples of generating functions are presented for various sequences like the constant sequence {1,1,...}, the sequence of powers of 2, and binomial coefficients.
3. The process of using generating functions to solve recurrence relations involves rewriting the relation, multiplying by x^n and summing, identifying the generating function, and extracting the nth term.
This document provides information about eigenvalues and eigenvectors. It defines eigenvalues and eigenvectors as scalars (λ) and vectors (x) that satisfy the equation Ax = λx, where A is a matrix. It discusses properties of eigenvalues including that the sum of eigenvalues is the trace of A, and the product is the determinant. The characteristic equation is defined as det(A - λI) = 0, where the roots are the eigenvalues. Cayley-Hamilton theorem states that every matrix satisfies its own characteristic equation. Examples are given to demonstrate Cayley-Hamilton theorem.
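The Cayley-Hamilton claim is easy to verify numerically for a 2x2 matrix, where the characteristic equation is λ² - tr(A)λ + det(A) = 0 (the matrix and helper names below are illustrative):

```python
def mat_mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def cayley_hamilton_residual(A):
    """Cayley-Hamilton for 2x2: A^2 - tr(A)*A + det(A)*I should be
    the zero matrix."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    A2 = mat_mul(A, A)
    I = [[1, 0], [0, 1]]
    return [[A2[i][j] - tr * A[i][j] + det * I[i][j] for j in range(2)]
            for i in range(2)]

print(cayley_hamilton_residual([[1, 2], [3, 4]]))  # [[0, 0], [0, 0]]
```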
The document presents information about differential equations including:
- A definition of a differential equation as an equation containing the derivative of one or more variables.
- Classification of differential equations by type (ordinary vs. partial), order, and linearity.
- Methods for solving different types of differential equations such as variable separable form, homogeneous equations, exact equations, and linear equations.
- An example problem demonstrating how to use the cooling rate formula to calculate the time of death based on measured body temperatures.
This document discusses the Gauss-Jordan elimination method for solving systems of linear equations. It explains that Gauss-Jordan elimination uses elementary row operations to transform the augmented matrix of a system into reduced row-echelon form, from which the solutions can be read directly. Pseudocode and examples in the Fortran and Java programming languages are provided to demonstrate how to implement the Gauss-Jordan algorithm to solve systems of linear equations numerically on a computer.
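The document's Fortran and Java listings are not reproduced in this summary; as a rough stand-in, a minimal Python sketch of Gauss-Jordan elimination with partial pivoting might look like:

```python
def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] to reduced row-echelon form
    with partial pivoting; the last column is then the solution."""
    n = len(aug)
    for col in range(n):
        # Pivot: swap in the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Normalize the pivot row, then eliminate the column elsewhere.
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                factor = aug[r][col]
                aug[r] = [rv - factor * cv for rv, cv in zip(aug[r], aug[col])]
    return [row[-1] for row in aug]

# x + y = 3, 2x - y = 0  ->  x = 1, y = 2
print(gauss_jordan([[1.0, 1.0, 3.0], [2.0, -1.0, 0.0]]))  # [1.0, 2.0]
```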
This document discusses ordinary differential equations (ODEs). It defines ODEs and differentiates them from partial differential equations. ODEs can be classified by type, order, and linearity. Initial value problems involve solving an ODE with initial conditions specified at a point, while boundary value problems involve conditions at boundary points. The document provides examples of solving first- and second-order initial value problems. It also discusses the existence and uniqueness of solutions to initial value problems under certain continuity conditions on the functions defining the ODE.
The document discusses diagonalization of matrices. Diagonalization involves finding an invertible matrix P such that P⁻¹AP is a diagonal matrix. The procedure for diagonalizing a matrix A includes: (1) finding the eigenvalues of A, (2) finding linearly independent eigenvectors corresponding to the eigenvalues, (3) forming the matrix P with the eigenvectors as columns, and (4) showing that P⁻¹AP is a diagonal matrix with the eigenvalues along the diagonal. An example demonstrates finding the eigenvectors of a 2x2 matrix A and constructing the matrix P to diagonalize A.
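Steps (1)-(4) can be checked numerically for a 2x2 case (the matrix A below is an assumed illustration, not taken from the document; its eigenvalues are 5 and 2 with eigenvectors (1, 1) and (1, -2)):

```python
def mat_mul(A, B):
    """Square matrix product."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inv2(P):
    """Inverse of a 2x2 matrix."""
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    return [[P[1][1] / det, -P[0][1] / det],
            [-P[1][0] / det, P[0][0] / det]]

A = [[4.0, 1.0], [2.0, 3.0]]
P = [[1.0, 1.0], [1.0, -2.0]]   # eigenvectors as columns
D = mat_mul(inv2(P), mat_mul(A, P))
# D should be diag(5, 2) up to floating-point noise.
print([[round(v, 10) + 0.0 for v in row] for row in D])  # (+ 0.0 normalizes -0.0)
```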
Engineering Mathematics-IV_B.Tech_Semester-IV_Unit-V (Rai University)
This document describes numerical integration and differentiation techniques taught in a B.Tech Engineering Mathematics course. It covers the Trapezoidal, Simpson's 1/3 and 3/8 rules for numerical integration of functions. For numerical differentiation, it discusses Euler's method, Picard's method, and Taylor series for solving ordinary differential equations. Examples are provided to illustrate the application of these numerical methods to evaluate integrals and solve initial value problems.
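The two quadrature rules mentioned can be sketched as composite formulas (helper names are assumptions; the ODE methods are omitted here):

```python
def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson13(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    if n % 2:
        raise ValueError("Simpson's 1/3 rule needs an even n")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return h * s / 3

# Integral of x^2 over [0, 1] is 1/3; Simpson's rule is exact for
# polynomials up to degree 3, the trapezoidal rule has O(h^2) error.
print(round(simpson13(lambda x: x * x, 0, 1, 10), 12))
print(round(trapezoidal(lambda x: x * x, 0, 1, 100), 6))
```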
The document provides examples to illustrate how to find the eigenvalues and eigenvectors of a matrix.
1) For a 2x2 matrix, the characteristic polynomial is computed by taking the determinant of the matrix minus λ times the identity matrix. The roots of the characteristic polynomial are the eigenvalues. The corresponding eigenvectors are found by solving (A - λI)x = 0 for each eigenvalue.
2) For a triangular matrix, the eigenvalues are the diagonal elements. The eigenvectors are again found by solving (A - λI)x = 0 with each diagonal entry taken as λ.
3) The document provides a numerical example to demonstrate finding the eigenvalues (3, 1, -2) and eigenvectors of a 3x3 matrix.
Fuzzy sets allow for gradual membership of elements in a set, rather than binary membership as in classical set theory. Membership is described on a scale of 0 to 1 using a membership function. Fuzzy sets generalize classical sets by treating classical sets as special cases where membership values are restricted to 0 or 1. Fuzzy set theory can model imprecise or uncertain information and is used in domains like bioinformatics. Examples of fuzzy sets include sets like "tall people" where membership in the set is a matter of degree.
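A membership function on the 0-to-1 scale can be as simple as a triangle; the fuzzy set "about 175 cm" below is an invented illustration (not an example from the document):

```python
def triangular(a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# Fuzzy set "about 175 cm" on height: full membership at 175,
# gradual membership on either side, none outside [160, 190].
about175 = triangular(160, 175, 190)
print(about175(150), about175(167.5), about175(175))  # 0.0 0.5 1.0
```

Restricting the membership values to {0, 1} recovers an ordinary (crisp) set, which is the generalization the summary describes.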
This document discusses the Gamma and Beta functions. It defines them using improper definite integrals and notes they are special transcendental functions. The Gamma function was introduced by Euler and both functions have applications in areas like number theory and physics. The document provides properties of each function and examples of evaluating integrals using their definitions and relations.
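The relation between the two functions, B(x, y) = Γ(x)Γ(y)/Γ(x + y), can be exercised directly with the standard library's gamma:

```python
from math import gamma

def beta(x, y):
    """Beta function via its Gamma-function relation:
    B(x, y) = gamma(x) * gamma(y) / gamma(x + y)."""
    return gamma(x) * gamma(y) / gamma(x + y)

# gamma(n) = (n-1)! for positive integers, so gamma(5) = 4! = 24.
print(gamma(5))               # 24.0
# B(2, 3) = 1!*2!/4! = 1/12.
print(round(beta(2, 3), 12))  # 0.083333333333
```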
This document discusses differential equations and includes the following key points:
1. It defines differential equations and provides examples of ordinary and partial differential equations of varying orders.
2. It classifies differential equations as ordinary or partial, linear or non-linear, and of first or higher order. Examples are given of each type.
3. Applications of differential equations are listed, including modeling projectile motion, electric circuits, heat transfer, vibrations, population growth, and chemical reactions.
4. Methods of solving differential equations including finding general and particular solutions are explained. Initial value and boundary value problems are also defined.
B.tech ii unit-2 material beta gamma function (Rai University)
1. The document discusses the gamma and beta functions, which are defined in terms of improper definite integrals involving exponential and power functions.
2. Examples are provided to demonstrate properties and applications of the gamma function, including evaluating integrals involving the gamma function.
3. The beta function is defined in terms of an integral from 0 to 1, and its relationship to the gamma function is described.
A short presentation on the topic Numerical Integration for Civil Engineering students.
This presentation consists of a short introduction to Simpson's Rule, the Trapezoidal Rule, and Gaussian Quadrature, along with some basic Civil Engineering problems based on the above methods of Numerical Integration.
The document discusses partial differential equations (PDEs). It defines PDEs and gives their general form involving independent variables, dependent variables, and partial derivatives. It describes methods for obtaining the complete integral, particular solution, singular solution, and general solution of a PDE. It provides examples of types of PDEs and how to solve them by assuming certain forms for the dependent and independent variables and their partial derivatives.
1) The graphical method involves graphing the lines represented by each equation on the same coordinate plane and finding the point where they intersect, which gives the solution.
2) Cramer's rule expresses each unknown as a ratio of determinants, with the numerator being the determinant of the coefficient matrix with one column replaced by the constants.
3) Gaussian elimination transforms the coefficient matrix into upper triangular form using elementary row operations, then back substitution solves for the unknowns.
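Cramer's rule as described in (2) can be sketched for a 3x3 system (the example system below is an assumed illustration):

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cramer3(A, b):
    """Solve Ax = b for a 3x3 system: each unknown is a ratio of
    determinants, the numerator having column i replaced by b."""
    D = det3(A)
    xs = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = b[r]
        xs.append(det3(Ai) / D)
    return xs

# x + y + z = 6, 2y + 5z = -4, 2x + 5y - z = 27  ->  x=5, y=3, z=-2
print(cramer3([[1, 1, 1], [0, 2, 5], [2, 5, -1]], [6, -4, 27]))  # [5.0, 3.0, -2.0]
```

Cramer's rule is convenient for tiny systems; for larger ones the Gaussian-elimination approach in (3) is far cheaper.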
The document discusses numerical computing and various interpolation techniques. Numerical computing involves solving complex mathematical problems using simple arithmetic operations by formulating models that can be solved numerically. The document then discusses nonlinear equations and various iterative methods to solve them, including bracketing methods like bisection and regula falsi, and open-end methods like Newton-Raphson and secant. It also discusses fixed point iteration. Finally, it covers interpolation techniques like Lagrange interpolation and Newton interpolation to estimate values of a function at intermediate points.
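Two of the methods mentioned, bisection (a bracketing method) and Lagrange interpolation, can be sketched briefly (function names are assumptions, not the document's):

```python
def bisection(f, a, b, tol=1e-10):
    """Bracketing method: halve [a, b] while keeping a sign change."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

def lagrange(xs, ys, x):
    """Lagrange interpolation at x through the points (xs[i], ys[i])."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

print(round(bisection(lambda x: x * x - 2, 0, 2), 6))  # 1.414214
# Points sampled from x^2 + x + 1; the interpolant reproduces it exactly.
print(lagrange([0, 1, 2], [1, 3, 7], 1.5))             # 4.75
```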
Most real-life problems are modeled by differential equations, and stability analysis plays an important role while analyzing such models. In this project, we demonstrate stability of a few such problems in an introductory manner. We begin by defining different types of stability. Some methods for determining the stability of various systems have been studied here. We start our study by categorizing stability of differential equations by the roots of their characteristic equations. Then we discuss the complexities of such analysis. Then we review Lyapunov's stability concepts, which give a better way to analyze the stability of a system of differential equations. We demonstrate the techniques with some examples.
This document discusses key concepts in quantum mechanics including wave functions, operators, linear vector spaces, inner products, orthogonal and orthonormal bases, Hilbert spaces, and the expansion theorem. It defines wave functions and operators as the two main constructs in quantum mechanics. It also explains that the natural language of quantum mechanics is linear algebra and describes concepts like linear vector spaces, inner products, orthogonal and orthonormal bases, and Hilbert spaces in the context of quantum mechanics.
This document presents information on fuzzy arithmetic and operations. It discusses fuzzy numbers, linguistic variables, and arithmetic operations on fuzzy intervals and fuzzy numbers. Some key points:
- Fuzzy numbers are fuzzy sets with certain properties like being normal, having closed interval alpha-cuts, and bounded support.
- Linguistic variables assign linguistic values like "young" or "old" to numerical variables. They are represented as fuzzy sets.
- Arithmetic operations on fuzzy intervals are defined based on the corresponding operations on their alpha-cuts, which are closed intervals. Properties like commutativity and distributivity are discussed.
- Operations on fuzzy numbers are similarly defined based on the alpha-cuts of the resulting fuzzy sets.
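Since alpha-cuts are closed intervals, fuzzy arithmetic at a fixed alpha reduces to ordinary interval arithmetic, which can be sketched as:

```python
def interval_add(a, b):
    """[a1, a2] + [b1, b2] = [a1 + b1, a2 + b2]."""
    return (a[0] + b[0], a[1] + b[1])

def interval_mul(a, b):
    """[a1, a2] * [b1, b2]: take the min and max of the endpoint
    products, which matters once negative endpoints are allowed."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

# Alpha-cuts of two fuzzy numbers at some fixed alpha level:
print(interval_add((1, 2), (3, 4)))   # (4, 6)
print(interval_mul((-1, 2), (3, 4)))  # (-4, 8)
```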
Eigenvalues and eigenfunctions are key concepts in linear algebra. An eigenfunction is a function that when operated on by a linear operator produces a constant multiplied version of itself. The constant is the corresponding eigenvalue. Eigenvalues are the solutions to the characteristic polynomial of the linear operator. Eigenfunctions are not unique as any constant multiple of an eigenfunction is also an eigenfunction with the same eigenvalue. The spectrum of an operator is the set of all its eigenvalues.
The finite difference method can be considered a direct discretization of differential equations, but in finite element methods we generate difference equations by using approximate methods with a piecewise polynomial solution. In this paper, we use the Galerkin method to obtain the approximate solution of a boundary value problem. The convergence analysis of these solutions is also considered.
The document analyzes the use of the Galerkin method to obtain approximate finite element solutions of boundary value problems. It presents an example problem of solving a second order differential equation over the domain from 0 to 1 with specified boundary conditions. The Galerkin method is applied by assuming a trial solution as a linear combination of basis functions, determining the residuals, and setting the weighted integral of the residuals equal to zero, resulting in a system of equations that can be solved for the coefficients. The approximate solution is compared to the exact solution, showing good agreement. A second example problem applying the same Galerkin method is also presented.
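The paper's own example problems are not reproduced in this summary. As an assumed stand-in, the sketch below applies a one-term Galerkin approximation to the standard textbook problem u'' + u + x = 0 with u(0) = u(1) = 0, whose exact solution is u(x) = sin(x)/sin(1) - x; the one-term coefficient works out to 5/18:

```python
from math import sin

def simpson(f, a, b, n=200):
    """Composite Simpson's rule, used here to evaluate the Galerkin integrals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return h * s / 3

# Trial solution u ~ c * phi with phi = x(1 - x), so phi'' = -2.
# Setting the weighted residual integral of phi*(c*phi'' + c*phi + x)
# over [0, 1] to zero gives a single linear equation K*c = F.
phi = lambda x: x * (1 - x)
K = simpson(lambda x: phi(x) * (-2 + phi(x)), 0, 1)   # integral of phi*(phi'' + phi)
F = -simpson(lambda x: phi(x) * x, 0, 1)              # -integral of phi*x
c = F / K
print(round(c, 6))  # 0.277778  (exact one-term coefficient is 5/18)

# Compare the approximate and exact solutions at x = 0.5:
print(round(c * phi(0.5), 4), round(sin(0.5) / sin(1) - 0.5, 4))
```

The two values at x = 0.5 agree to about three decimal places, the "good agreement" the summary refers to; adding more basis functions tightens it further.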
This document discusses finding the eigenvalues and eigenfunctions of a spin-1/2 particle pointing along an arbitrary direction. It shows that the eigenvalue equation reduces to a set of two linear, homogeneous equations. The eigenvalues are found to be ±1/2, and the corresponding eigenvectors are written in terms of the direction angles θ and φ. As an example, it shows that for a spin oriented along the z-axis, the eigenvectors reduce to simple forms as expected for a spin-1/2 particle. It also introduces the Gauss elimination method for numerically solving systems of linear equations that arise in eigenvalue problems.
The document discusses matrix representations of operators and changes of basis in quantum mechanics. Some key points:
- Matrix elements of an operator are computed using a basis of kets. The expectation value of an operator is computed from its matrix elements and the state vectors.
- If two operators commute, they admit a common set of eigenkets.
- A change of basis is a unitary transformation that relates two different sets of basis kets that span the same space. It establishes a link between the two basis representations.
- Linear algebra concepts like linear independence of eigenvectors and Hermitian operators having real eigenvalues are important in quantum mechanics.
1. The document describes two coupled oscillator problems. The first problem derives expressions for the normal mode frequencies of two masses connected by a string. The second problem finds the normal mode frequencies and displacement ratios of two masses connected by springs and an elastic band.
2. The second problem is about a system of two masses with different springs coupled by an elastic band. The normal mode frequencies are derived as square roots of ratios involving the spring and mass constants. The displacement ratios between the masses are found to be 1 for one mode and -2 for the other mode.
3. The third problem considers two identical coupled spring-mass systems. An expression is derived for the number of oscillations of one mass before its oscillations die down.
Optimum Engineering Design - Day 2b: Classical Optimization methods (SantiagoGarridoBulln)
This document provides an overview of an optimization methods course, including its objectives, prerequisites, and materials. The course covers topics such as linear programming, nonlinear programming, and mixed integer programming problems. It also includes mathematical preliminaries on topics like convex sets and functions, gradients, Hessians, and Taylor series expansions. Methods for solving systems of linear equations and examples are presented.
The document discusses linear combinations and linear independence of vectors and functions. It defines a linear combination of vectors as a vector that can be expressed as a sum of scalar multiples of other vectors. A set of vectors is linearly dependent if one vector can be written as a linear combination of the others. A set is linearly independent if the only solution to the equation involving scalar multiples of the vectors is when all scalars are zero. It also discusses the Wronskian and its use in determining linear independence of functions. Examples are provided to illustrate these concepts.
This document provides an overview of solving systems of linear ordinary differential equations (ODEs). It discusses solving a single linear ODE, defining the matrix exponential e^A, and solving the general system x' = Ax. The solution takes the form of a sum of terms with eigenvalues and eigenvectors. Examples are provided to demonstrate finding eigenvalues/vectors and graphing solutions for real and complex cases.
The document discusses the differences and relationships between quadratic functions and quadratic equations. It notes that quadratic functions can take any real number as an input, while quadratic equations have at most two solutions. The roots of a quadratic equation are also the x-intercepts of the graph of the corresponding quadratic function. The remainder theorem states that the value of a polynomial when a number is substituted for the variable equals the remainder when the polynomial is divided by the linear factor corresponding to that number. This connects the roots of quadratic equations to factors of quadratic functions. A quadratic can have only two distinct roots, as having three would force it to have infinitely many roots.
1) The document discusses representation of the Dirac delta function in cylindrical and spherical coordinate systems. It shows that δ(r - r') = δ(ρ - ρ')δ(φ - φ')δ(z - z')/ρ in cylindrical coordinates and δ(r - r') = δ(r - r')δ(θ - θ')δ(φ - φ')/(r^2 sin θ) in spherical coordinates.
2) It also derives the important relation ∇^2(1/r) = -4πδ(r) and shows its application to the Laplace equation for electrostatic potential.
3) The completeness of eigenfunctions of harmonic oscillators and Legendre polynomials is also discussed.
We disclose a simple and straightforward method of solving single-order linear partial differential equations. The advantage of the method is that it is applicable to any orders and the big disadvantage is that it is restricted to a single order at a time. As it is very easy compared to classical methods, it has didactic value.
1. Graeffe's root squaring method is used to find all the roots of a polynomial equation by repeatedly squaring the equation. This separates the roots so they can be easily determined.
2. The method was applied to find the roots of x^3 − 8x^2 + 17x − 10 = 0. After repeating the process, the roots were determined to be 5, 2, and 1.
3. The same method was used to find the roots of x^3 − 2x^2 − 5x + 6 = 0, resulting in roots of 3, −2, and 1.
4. The method can also determine complex roots, using properties of how the coefficients fluctuate under squaring.
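The root-squaring step can be sketched in Python. This is a minimal illustration, not code from the summarized document: the helper `graeffe_step` and the number of squarings `m` are our own choices. One step replaces p(x) by the monic polynomial whose roots are the squares of the roots of p, obtained from (-1)^n p(x)p(-x) with y = x^2; after m steps the root magnitudes are read off from coefficient ratios.

```python
def graeffe_step(a):
    """One Graeffe root-squaring step.
    a = [a0, a1, ..., an] are the coefficients, a0 leading.
    Returns the monic polynomial whose roots are the squares of the roots."""
    n = len(a) - 1
    # coefficients of p(-x): the coefficient of x^(n-i) picks up (-1)^(n-i)
    b = [c * ((-1) ** (n - i)) for i, c in enumerate(a)]
    # multiply p(x) * p(-x); only even powers of x survive
    prod = [0.0] * (2 * n + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] += ai * bj
    # substitute y = x^2 and fix the sign so the result is monic
    return [((-1) ** n) * prod[2 * k] for k in range(n + 1)]

# the blurb's example: x^3 - 8x^2 + 17x - 10 = 0, roots 5, 2, 1
coeffs = [1.0, -8.0, 17.0, -10.0]
m = 5  # number of squarings (our choice)
c = coeffs
for _ in range(m):
    c = graeffe_step(c)
# |r_k| ~ |c_k / c_(k-1)|^(1 / 2^m); signs are recovered by testing p(+-r)
roots = [abs(c[k] / c[k - 1]) ** (1.0 / 2 ** m) for k in range(1, len(c))]
```

After five squarings the separated coefficient ratios give the magnitudes 5, 2 and 1 to high accuracy.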
This document discusses several numerical analysis methods for finding roots of equations and solving systems of equations. It describes the bisection method for finding roots of continuous functions, the method of false position for approximating a root between two values at which the function takes opposite signs, Gauss elimination for transforming a system of equations into triangular form, the Gauss-Jordan method, which also eliminates the entries above each pivot, and iterative methods, which find solutions through successive approximations rather than direct computation.
Comparative analysis of x^3+y^3=z^3 and x^2+y^2=z^2 in the Interconnected Sets Vladimir Godovalov
This paper introduces an innovative technique of study z^3-x^3=y^3 on the subject of its insolvability in integers. Technique starts from building the interconnected, third degree sets: A3={a_n│a_n=n^3,n∈N}, B3={b_n│b_n=a_(n+1)-a_n }, C3={c_n│c_n=b_(n+1)-b_n } and P3={6} wherefrom we get a_n and b_n expressed as figurate polynomials of third degree, a new finding in mathematics. This approach and the results allow us to investigate equation z^3-x^3=y in these interconnected sets A3 and B3, where z^3∧x^3∈A3, y∈B3. Further, in conjunction with the new Method of Ratio Comparison of Summands and Pascal’s rule, we finally prove inability of y=y^3. After we test the technique, applying the same approach to z^2-x^2=y where we get family of primitive z^2-x^2=y^2 as well as introduce conception of the basic primitiveness of z^'2-x^'2=y^2 for z^'-x^'=1 and the dependent primitiveness of z^'2-x^'2=y^2 for co-prime x,y,z and z^'-x^'>1.
A Probabilistic Algorithm for Computation of Polynomial Greatest Common with ... (mathsjournal)
- The document presents a probabilistic algorithm for computing the polynomial greatest common divisor (PGCD) with smaller factors.
- It summarizes previous work on the subresultant algorithm for computing PGCD and discusses its limitations, such as not always correctly determining the variant τ.
- The new algorithm aims to determine τ correctly in most cases when given two polynomials f(x) and g(x). It does so by adding a few steps instead of directly computing the polynomial t(x) in the relation s(x)f(x) + t(x)g(x) = r(x).
Similar to Solution of equations and eigenvalue problems (20)
Some types of matrices, Eigen values, Eigen vectors, Cayley-Hamilton Theorem & applications, properties of Eigen values, orthogonal matrix, pairwise orthogonality, orthogonal transformation of a symmetric matrix, diagonalization of a matrix by orthogonal transformation (or orthogonal reduction), quadratic form and canonical form, conversion from quadratic to canonical form, order, index, signature, nature of the canonical form.
Basic concepts of integration, definite and indefinite integrals, properties of the definite integral, problems based on those properties, methods of integration, substitution, partial fractions, integration of rational and irrational functions, integration by parts, reduction formulae, improper integrals, convergence and divergence of improper integrals.
Partial differentiation, total differentiation, Jacobian, Taylor's expansion, stationary points, maxima & minima (extreme values), constrained maxima & minima (Lagrange multipliers), differentiation of implicit functions.
Critical points / stationary points, turning points, increasing and decreasing functions, absolute maxima & minima, local maxima & minima, convex upward & convex downward - first & second derivative tests.
The derivative of a function represents the rate of change of one variable with respect to another at a given point. It is a slope and itself a function that varies across points. To find the derivative of a function f(x) at a point, we use the slope formula and take the limit as the change in x approaches 0. For example, the derivative of x^2 is 2x, meaning the slope or rate of change of x^2 is 2x at any point. There are various rules for finding derivatives, such as the power rule, sum and difference rules, product rule and quotient rule.
The document discusses key concepts in calculus including:
- Differential calculus examines how quantities change by looking at their rates of change, represented by derivatives.
- Integration is used to determine quantities like material needs or structure weights by calculating the area under a curve.
- Calculus has various applications in fields like engineering, physics, and robotics where quantities change continuously over time.
- The document provides examples of how differential and integral calculus are used in applications such as space travel planning, architecture, and robotics.
1. Fourier transforms represent a function as a sum of sinusoidal functions using integral transforms. The Fourier transform of a function f(x) is defined as an integral transform using a kernel function, with examples including the Laplace, Fourier, Hankel, and Mellin transforms.
2. The Fourier integral theorem states that if a function f(x) is piecewise continuous and differentiable, its Fourier transform represents the function as an integral using sinusoidal functions.
3. The Fourier transform and its inverse are defined by integrals using the function and a complex exponential kernel. Properties of Fourier transforms include linearity, shifting, scaling, and relationships between a function and its derivative or integral transforms.
Periodic functions, Dirichlet's conditions, Fourier series, even & odd functions, Euler's formulae for the Fourier coefficients, change of interval, Fourier series in the intervals (0, 2l), (-l, l), (-pi, pi), (0, 2pi), half range cosine & sine series, root mean square value, complex form of the Fourier series, Parseval's identity.
To find the complete solution of a second order PDE,
(i.e.) to find the Complementary Function & Particular Integral of a second (or higher) order PDE.
The document discusses methods for solving partial differential equations (PDEs) of Lagrange's form Pp + Qq = R, where P, Q, R are functions of x, y, and z. It describes the subsidiary equations and two solution methods: the method of grouping and the method of multipliers. The method of multipliers involves choosing Lagrangian multipliers l, m, n such that lP + mQ + nR = 0, which yields an exact differential equation that can be integrated to find the solution.
The Laplace transform is an integral transform that converts a function of time into a function of complex frequency. It is defined as the integral of the function multiplied by e^(-st) from 0 to infinity. The Laplace transform is used to solve differential equations by converting them to algebraic equations. Some key properties of the Laplace transform include linearity, shifting theorems, differentiation and integration formulas, and methods for periodic and anti-periodic functions.
Cauchy's integral theorem, Cauchy's integral formula, Cauchy's integral formula for derivatives, Taylor's Series, Maclaurin’s Series,Laurent's Series,Singularities and zeros, Cauchy's Residue theorem,Evaluation various types of complex integrals.
Complementary function, particular integral, homogeneous linear equations with constant coefficients, Euler-Cauchy equation, Legendre's equation, method of variation of parameters, simultaneous first order linear differential equations with constant coefficients.
Methods of integration, integration of rational algebraic functions, integration of irrational algebraic functions, definite integrals, properties of the definite integral, integration by parts, Bernoulli's theorem, reduction formulae.
Analytic functions, C-R equations, harmonic functions, Laplace equation, construction of analytic functions, critical points, invariant points, bilinear transformation.
1. Vector calculus deals with vector-valued functions and their derivatives. It includes vector point functions that assign vectors to points in space, as well as scalar point functions that assign real numbers.
2. Key concepts include the gradient of a scalar function, the divergence and curl of a vector function, and vector fields to describe variations of quantities like velocity over a region of space.
3. Vector calculus theorems relate integrals over surfaces to integrals over bounding curves or volumes, such as Green's theorem, Stokes' theorem, and the divergence theorem. These allow problems involving surfaces and volumes to be solved via line and surface integrals.
The document discusses different experimental design techniques including completely randomized design (CRD), randomized block design (RBD), and Latin square design (LSD). It provides examples of how each design can be applied and compares their advantages and disadvantages. Key principles of experimental design are randomization, replication, and local control. The goal of design of experiments is to control insignificant variables and attribute results only to the experimental variables.
This document discusses several numerical methods for solving ordinary differential equations (ODEs), including:
1. The Taylor series method, which approximates solutions by computing successive derivatives. It is useful for initial values but becomes tedious for higher derivatives.
2. Euler's method, which uses the slope at each step to approximate the next value.
3. Modified Euler's method and the fourth-order Runge-Kutta method, which are single-step methods that do not require computing higher derivatives.
4. Multi-step methods like Milne's method and Adams-Bashforth method, which use values at previous steps to compute predictions and corrections for the next value.
This document discusses various methods of interpolation and numerical differentiation using divided differences and Newton's formulas. It introduces Lagrange interpolation for both equal and unequal intervals. Inverse interpolation and Newton's divided difference interpolation are also covered. Forward and backward difference formulas are presented for interpolation with equal intervals. Numerical differentiation can be performed by taking derivatives of the interpolation polynomial or using forward difference formulas to estimate derivatives at the data points.
2. To find an approximate real root of a given equation.
ITERATION FORMULA OF NEWTON-RAPHSON METHOD
x_(i+1) = x_i − f(x_i)/f'(x_i),  i = 0, 1, 2, …
3. THE CONDITION FOR CONVERGENCE OF NEWTON-RAPHSON METHOD FOR f(x) = 0
The condition is |f(x) f''(x)| < [f'(x)]^2 in a neighbourhood of the root.
ORDER OF CONVERGENCE OF NEWTON-RAPHSON METHOD
The order of convergence is 2.
4. To find the root of the equation f(x) = 0
Step 1. Find numbers a and b such that f(a) and f(b) are of opposite signs. (From this we conclude that there is a real root between a and b.)
Step 2. Choose the initial approximate root as x_0.
Step 3. Using the Newton-Raphson formula x_(i+1) = x_i − f(x_i)/f'(x_i), i = 0, 1, 2, …, find the sequence x_0, x_1, … x_n, …; the point of convergence of the sequence is the root of the given equation.
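The procedure above can be sketched in Python. This is a minimal illustration, not code from the slides; the function names and the example f(x) = x^2 − 2 are our own choices.

```python
def newton_raphson(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration x_(i+1) = x_i - f(x_i)/f'(x_i)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:   # successive approximations agree
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

# Example: the root of f(x) = x^2 - 2 near x0 = 1 is sqrt(2)
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Because the order of convergence is 2, the number of correct digits roughly doubles at each step, so a handful of iterations suffices here.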
5. 1. It can be used for finding roots of both algebraic and transcendental equations.
2. The convergence of Newton's method is faster than that of other methods, and so it is preferred.
3. It is simple, easy to work with, and can be used to improve results obtained by other methods.
6. To find the root of the equation f(x) = 0
Step 1. Find numbers a and b such that f(a) and f(b) are of opposite signs. (From this we conclude that there is a real root between a and b.)
Step 2. Write the given equation in the form x = φ(x).
Step 3. Choose the initial approximate root as x_0.
Step 4. Replace x by x_0 in Step 2 and take x_1 = φ(x_0).
Step 5. Further, find x_2 = φ(x_1).
Step 6. Continuing this way, we get a sequence x_0, x_1, … x_n, …; the point of convergence of the sequence is the root of the given equation.
7. ORDER OF CONVERGENCE OF FIXED POINT ITERATION METHOD
The order of convergence is 1.
THE CONDITION FOR CONVERGENCE OF FIXED POINT ITERATION METHOD FOR f(x) = 0
The condition is |φ'(x)| < 1 for all x in an interval containing the root.
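The fixed point steps can be sketched in Python. A minimal sketch, with the classic example x = cos(x) chosen by us; it satisfies the convergence condition since |φ'(x)| = |sin(x)| < 1 near the root.

```python
import math

def fixed_point(phi, x0, tol=1e-10, max_iter=200):
    """Fixed point iteration x_(n+1) = phi(x_n)."""
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:   # the sequence has converged
            return x_new
        x = x_new
    raise RuntimeError("did not converge; is |phi'(x)| < 1 near the root?")

# Example: solve x = cos(x) starting from x0 = 1
root = fixed_point(math.cos, 1.0)
```

With order of convergence 1, the error shrinks by a roughly constant factor (here about |sin(root)| ≈ 0.67) per step, so this takes noticeably more iterations than Newton-Raphson.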
9. This is a direct method. In this method the given system of 'n' simultaneous linear equations
a_i1 x_1 + a_i2 x_2 + ⋯ + a_in x_n = b_i,  i = 1, 2, … n
can be written in the form AX = B, where A = [a_ij]_(n×n), X = [x_i]_(n×1) and B = [b_i]_(n×1) are matrices.
In the augmented matrix (A, B), the coefficient matrix A is reduced to an upper triangular matrix, and the resulting system is solved by back substitution.
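The elimination-plus-back-substitution procedure can be sketched in Python. A minimal sketch with our own names; the partial pivoting line is a practical addition of ours, the slides assume the pivots are nonzero.

```python
def gauss_eliminate(A, b):
    """Reduce the augmented matrix (A, B) to upper triangular form,
    then solve by back substitution."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]      # augmented matrix (A, B)
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))  # partial pivoting (our addition)
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):                          # eliminate below the pivot
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                         # back substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Example system (our choice): 2x + y = 5, x + 3y = 10, with solution x = 1, y = 3
x = gauss_eliminate([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])
```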
10. This is a direct method. In this method the given system of 'n' simultaneous linear equations
a_i1 x_1 + a_i2 x_2 + ⋯ + a_in x_n = b_i,  i = 1, 2, … n
can be written in the form AX = B, where A = [a_ij]_(n×n), X = [x_i]_(n×1) and B = [b_i]_(n×1) are matrices.
In the augmented matrix (A, B), the coefficient matrix A is reduced to a diagonal matrix, from which the solution is read off directly; no back substitution is needed.
11. This is an iterative method.
Suppose the given linear equations are
a_1 x + b_1 y + c_1 z = d_1,  a_2 x + b_2 y + c_2 z = d_2,  a_3 x + b_3 y + c_3 z = d_3
Let the coefficient matrix be A = [a_1 b_1 c_1; a_2 b_2 c_2; a_3 b_3 c_3].
To apply Gauss-Jacobi, the coefficient matrix A should be diagonally dominant,
(i.e.) |a_1| > |b_1| + |c_1|,  |b_2| > |a_2| + |c_2|,  |c_3| > |a_3| + |b_3|
To solve the given system, we write
x = (1/a_1)(d_1 − b_1 y − c_1 z)
y = (1/b_2)(d_2 − a_2 x − c_2 z)
z = (1/c_3)(d_3 − a_3 x − b_3 y)
If x^(0), y^(0), z^(0) are the initial values of x, y, z respectively, the first iteration values are
x^(1) = (1/a_1)(d_1 − b_1 y^(0) − c_1 z^(0))
y^(1) = (1/b_2)(d_2 − a_2 x^(0) − c_2 z^(0))
z^(1) = (1/c_3)(d_3 − a_3 x^(0) − b_3 y^(0))
The second iteration values are
x^(2) = (1/a_1)(d_1 − b_1 y^(1) − c_1 z^(1))
y^(2) = (1/b_2)(d_2 − a_2 x^(1) − c_2 z^(1))
z^(2) = (1/c_3)(d_3 − a_3 x^(1) − b_3 y^(1))
Proceed in this way and stop the iteration when the values of x, y, z start repeating to the required degree of accuracy.
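The Jacobi sweep can be sketched in Python for a general n-by-n system. A minimal sketch; the diagonally dominant 3x3 example system is our own choice (exact solution x = y = z = 1).

```python
def gauss_jacobi(A, d, tol=1e-10, max_iter=500):
    """Gauss-Jacobi iteration: every value of the new iterate is
    computed from the previous iterate only."""
    n = len(A)
    x = [0.0] * n                                      # initial values x(0) = 0
    for _ in range(max_iter):
        x_new = [(d[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new                               # values started repeating
        x = x_new
    raise RuntimeError("did not converge; is A diagonally dominant?")

# 10x + y + z = 12, x + 10y + z = 12, x + y + 10z = 12
sol = gauss_jacobi([[10.0, 1.0, 1.0], [1.0, 10.0, 1.0], [1.0, 1.0, 10.0]],
                   [12.0, 12.0, 12.0])
```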
12. This is an iterative method.
Suppose the given linear equations are
a_1 x + b_1 y + c_1 z = d_1,  a_2 x + b_2 y + c_2 z = d_2,  a_3 x + b_3 y + c_3 z = d_3
Let the coefficient matrix be A = [a_1 b_1 c_1; a_2 b_2 c_2; a_3 b_3 c_3].
To apply Gauss-Seidel, the coefficient matrix A should be diagonally dominant,
(i.e.) |a_1| > |b_1| + |c_1|,  |b_2| > |a_2| + |c_2|,  |c_3| > |a_3| + |b_3|
To solve the given system, we write
x = (1/a_1)(d_1 − b_1 y − c_1 z)
y = (1/b_2)(d_2 − a_2 x − c_2 z)
z = (1/c_3)(d_3 − a_3 x − b_3 y)
If x^(0), y^(0), z^(0) are the initial values of x, y, z respectively, the first iteration values are
x^(1) = (1/a_1)(d_1 − b_1 y^(0) − c_1 z^(0))
y^(1) = (1/b_2)(d_2 − a_2 x^(1) − c_2 z^(0))
z^(1) = (1/c_3)(d_3 − a_3 x^(1) − b_3 y^(1))
The second iteration values are
x^(2) = (1/a_1)(d_1 − b_1 y^(1) − c_1 z^(1))
y^(2) = (1/b_2)(d_2 − a_2 x^(2) − c_2 z^(1))
z^(2) = (1/c_3)(d_3 − a_3 x^(2) − b_3 y^(2))
Note that, unlike Gauss-Jacobi, each freshly computed value is used at once within the same iteration.
Proceed in this way and stop the iteration when the values of x, y, z start repeating to the required degree of accuracy.
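The Seidel sweep can be sketched in Python; a minimal sketch with the same 3x3 example system as a matter of our own choice. The only change from a Jacobi sweep is that each freshly computed component overwrites the old one immediately, which is why fewer iterations are needed.

```python
def gauss_seidel(A, d, tol=1e-10, max_iter=500):
    """Gauss-Seidel iteration: each freshly computed value is used
    immediately within the same sweep."""
    n = len(A)
    x = [0.0] * n                                      # initial values x(0) = 0
    for _ in range(max_iter):
        max_delta = 0.0
        for i in range(n):
            new = (d[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            max_delta = max(max_delta, abs(new - x[i]))
            x[i] = new                                 # reuse at once (unlike Jacobi)
        if max_delta < tol:
            return x
    raise RuntimeError("did not converge; is A diagonally dominant?")

# 10x + y + z = 12, x + 10y + z = 12, x + y + 10z = 12; exact solution 1, 1, 1
sol = gauss_seidel([[10.0, 1.0, 1.0], [1.0, 10.0, 1.0], [1.0, 1.0, 10.0]],
                   [12.0, 12.0, 12.0])
```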
13. The condition is that, in each row, the absolute value of the diagonal coefficient is greater than the sum of the absolute values of all the remaining coefficients,
(i.e.) the coefficient matrix is diagonally dominant:
|a_ii| > Σ_(j=1, j≠i)^n |a_ij|  for all i = 1, 2, … n
The rate of convergence of the Gauss-Seidel method is much faster than that of the Gauss-Jacobi method.
14.
S.No | Gauss-Jordan | Gauss-Jacobi
1 | Direct method | Iterative method
2 | Produces the exact solution after a finite number of steps | Gives a sequence of approximate solutions which ultimately approaches the actual solution
3 | Applicable if the coefficient matrix is non-singular | Applicable if the coefficient matrix is diagonally dominant
15.
S.No | Gauss elimination | Gauss-Jacobi
1 | Direct method | Iterative method
2 | Produces the exact solution after a finite number of steps | Gives a sequence of approximate solutions which ultimately approaches the actual solution
3 | Applicable if the coefficient matrix is non-singular | Applicable if the coefficient matrix is diagonally dominant
16. Procedure
Step 1. Write the augmented matrix (A, I), where A is the given matrix.
Step 2. Reduce the matrix A in (A, I) to the identity matrix by using row transformations.
Step 3. From Step 2, you will get (I, A^(−1)).
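The three steps above can be sketched in Python. A minimal sketch; the pivot-row swap is a practical addition of ours, and the 2x2 example matrix is our own choice.

```python
def gauss_jordan_inverse(A):
    """Invert A by reducing the augmented matrix (A, I) to (I, A^-1)
    with row transformations."""
    n = len(A)
    # Step 1: write the augmented matrix (A, I)
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    # Step 2: reduce the left half to the identity matrix
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))  # pivot row (our addition)
        M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]                     # scale the pivot row to 1
        for i in range(n):
            if i != k:                                     # clear above and below
                m = M[i][k]
                M[i] = [M[i][j] - m * M[k][j] for j in range(2 * n)]
    # Step 3: the right half is now A^-1
    return [row[n:] for row in M]

# Example (our choice): the inverse of [[2, 1], [1, 1]] is [[1, -1], [-1, 2]]
Ainv = gauss_jordan_inverse([[2.0, 1.0], [1.0, 1.0]])
```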
17. If A is any square matrix, then there exist a scalar λ and a non-zero column vector X such that AX = λX; the scalar λ is called an Eigen value of A and the corresponding X is called an Eigen vector.
By the properties of Eigen values and Eigen vectors, if λ is an Eigen value of A and X is the corresponding Eigen vector, then 1/λ is an Eigen value of A^(−1) with the same Eigen vector X.
Hence if λ is the numerically smallest Eigen value of A, then 1/λ is the dominant Eigen value of A^(−1) with the same Eigen vector X, so the smallest Eigen value of A can be found by applying the power method to A^(−1).
Also, the sum of the Eigen values of A = the sum of the principal diagonal elements of A.
18. Procedure - (for a square matrix A of order 3 × 3)
1. Let X_0 be the initial vector, usually chosen with all components equal to 1, i.e. X_0 = (1, 1, 1)^T (already normalized).
2. Find the product AX_0 and express it in the form AX_0 = λ_1 X_1, where X_1 is normalized by taking out the numerically largest component λ_1.
3. Find AX_1 and express it in the form AX_1 = λ_2 X_2, where X_2 is normalized by taking out the largest component λ_2, and continue the process.
4. Thus we have a sequence of equations AX_0 = λ_1 X_1, AX_1 = λ_2 X_2, AX_2 = λ_3 X_3, …
5. We stop at the stage where X_(r−1) and X_r are almost the same. Then λ_r is the largest Eigen value and X_r is the corresponding Eigen vector.
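The procedure can be sketched in Python for any square matrix. A minimal sketch; the 2x2 example matrix is an assumption of ours (its eigenvalues are 7 and 2, with dominant eigenvector proportional to (2, 1)).

```python
def power_method(A, tol=1e-10, max_iter=1000):
    """Power method: returns the numerically largest eigenvalue of A
    and the corresponding normalized eigenvector."""
    n = len(A)
    x = [1.0] * n                      # X0: all components equal to 1
    for _ in range(max_iter):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]  # A X_r
        lam = max(y, key=abs)          # take out the numerically largest component
        x_new = [v / lam for v in y]   # normalized X_(r+1)
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return lam, x_new          # X_(r-1) and X_r are almost the same
        x = x_new
    raise RuntimeError("power method did not converge")

lam, vec = power_method([[6.0, 2.0], [2.0, 3.0]])
```

The convergence rate depends on the ratio of the second-largest to the largest eigenvalue (here 2/7), which is why a well-separated dominant eigenvalue converges quickly.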
19. 1. Symmetric Matrix
A square matrix [a_ij]_(n×n) is said to be a symmetric matrix if a_ij = a_ji for all i, j.
2. Orthogonal Matrix
A square matrix [a_ij]_(n×n) is said to be an orthogonal matrix if AA^T = I, (or) A^T = A^(−1).
20. 3. For an orthogonal matrix A, det A = |A| = ±1.
4. The diagonal elements of a diagonal matrix D are its Eigen values.
21. Working Rule (Jacobi Method)
Let A = [a_11 a_12 a_13; a_21 a_22 a_23; a_31 a_32 a_33] be the given symmetric matrix (i.e. a_ij = a_ji).
Step 1. Choose the numerically largest off-diagonal element (say a_13).
Step 2. Then take the rotation matrix S_1 = [cos θ 0 −sin θ; 0 1 0; sin θ 0 cos θ].
Step 3. Define tan 2θ = 2a_13/(a_11 − a_33), i.e.
θ = (1/2) tan^(−1)[2a_13/(a_11 − a_33)]  if a_11 ≠ a_33
(or) θ = π/4  if a_11 = a_33 and a_13 > 0
(or) θ = −π/4  if a_11 = a_33 and a_13 < 0
Step 4. Substitute the value of θ in S_1.
Step 5. Find A_1 = S_1^(−1) A S_1 = S_1^T A S_1.
Step 6. Again choose the numerically largest off-diagonal element of A_1.
Step 7. Repeat Steps 2 to 5 until you get a (nearly) diagonal matrix A_n. The diagonal elements of A_n are the Eigen values of A.
Step 8. For the Eigen vectors, form the matrix S = S_1 S_2 ⋯ = [s_ij]; the columns of S are the corresponding Eigen vectors.
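The working rule above can be sketched in Python for a general symmetric matrix. A minimal sketch of ours, not code from the slides: `atan2` is used for the angle, which automatically yields θ = ±π/4 in the a_11 = a_33 special case, and the 3x3 test matrix is our own choice.

```python
import math

def jacobi_eigen(A, tol=1e-12, max_iter=100):
    """Jacobi rotation method for a real symmetric matrix.
    Returns (eigenvalues, S), where the columns of S are the eigenvectors."""
    n = len(A)
    A = [row[:] for row in A]                           # work on a copy
    S = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(max_iter):
        # Step 1 / 6: numerically largest off-diagonal element a_pq (p < q)
        p, q = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda ij: abs(A[ij[0]][ij[1]]))
        if abs(A[p][q]) < tol:                          # A is (nearly) diagonal
            break
        # Step 3: atan2 also covers a_pp = a_qq, giving theta = +/- pi/4
        theta = 0.5 * math.atan2(2.0 * A[p][q], A[p][p] - A[q][q])
        c, s = math.cos(theta), math.sin(theta)
        # Step 5: A <- S1^T A S1 (update the two columns, then the two rows)
        for k in range(n):
            akp, akq = A[k][p], A[k][q]
            A[k][p], A[k][q] = c * akp + s * akq, -s * akp + c * akq
        for k in range(n):
            apk, aqk = A[p][k], A[q][k]
            A[p][k], A[q][k] = c * apk + s * aqk, -s * apk + c * aqk
        # Step 8: accumulate S = S1 S2 ...
        for k in range(n):
            skp, skq = S[k][p], S[k][q]
            S[k][p], S[k][q] = c * skp + s * skq, -s * skp + c * skq
    return [A[i][i] for i in range(n)], S

# test matrix (our choice); its eigenvalues are 2 - sqrt(2), 2, 2 + sqrt(2)
vals, vecs = jacobi_eigen([[2.0, 1.0, 0.0],
                           [1.0, 2.0, 1.0],
                           [0.0, 1.0, 2.0]])
```

Each rotation annihilates the chosen off-diagonal pair; for symmetric matrices the off-diagonal mass shrinks monotonically, so a handful of rotations diagonalizes a small matrix.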