15.5. HINTS AND ANSWERS
Answer: $X = A^{-1}CB^{-1}$.
1.12. Use that $b$ is a number and can be moved freely. Use $b(Y^T A^{-1} X) = ba^{-1}$ to simplify the calculations.
For the (upper) triangular matrix use that $Y = O$ and apply induction.
3.1. Follows directly from the formula for the determinant on page 40. The permutation matrix $P_{(1,2)}$ corresponding to the transposition $(1, 2)$ is a possible counterexample for $n > 3$.
3.2.
$$\begin{vmatrix}
1 + x_1y_1 & x_1y_2 & \cdots & x_1y_n \\
x_2y_1 & 1 + x_2y_2 & \cdots & x_2y_n \\
\vdots & \vdots & \ddots & \vdots \\
x_ny_1 & x_ny_2 & \cdots & 1 + x_ny_n
\end{vmatrix} =
\begin{vmatrix}
1 & 0 & \cdots & 0 \\
x_2y_1 & 1 + x_2y_2 & \cdots & x_2y_n \\
\vdots & \vdots & \ddots & \vdots \\
x_ny_1 & x_ny_2 & \cdots & 1 + x_ny_n
\end{vmatrix} +
\begin{vmatrix}
x_1y_1 & x_1y_2 & \cdots & x_1y_n \\
x_2y_1 & 1 + x_2y_2 & \cdots & x_2y_n \\
\vdots & \vdots & \ddots & \vdots \\
x_ny_1 & x_ny_2 & \cdots & 1 + x_ny_n
\end{vmatrix}.$$
In the first determinant use expansion along the first row and induction; in the second, factor out $x_1$ from the first row and use the remaining first row to clean out the rest.
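These manipulations lead to the identity $\det(I + XY^T) = 1 + \sum_i x_iy_i$. A quick symbolic check of that identity for $n = 3$ (a sketch using sympy, not part of the original hint):

```python
# Sketch: verify det(I + x y^T) = 1 + sum_i x_i y_i symbolically for n = 3.
import sympy as sp

n = 3
x = sp.Matrix(sp.symbols(f"x1:{n + 1}"))  # column vector (x1, x2, x3)
y = sp.Matrix(sp.symbols(f"y1:{n + 1}"))  # column vector (y1, y2, y3)

M = sp.eye(n) + x * y.T                   # entries: delta_ij + x_i y_j
assert sp.expand(M.det()) == sp.expand(1 + (x.T * y)[0, 0])
```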
3.3. In one direction: use $1 = \det A \det A^{-1}$. In the opposite direction use theorem 3.16.
3.4. 26.
3.5. For odd $n$: use $\det A = \det A^T = \det(-A) = (-1)^n \det A$. For $n = 4$ use direct calculations to get:
$$\begin{vmatrix}
0 & a & b & c \\
-a & 0 & d & e \\
-b & -d & 0 & f \\
-c & -e & -f & 0
\end{vmatrix} = a^2f^2 + 2adfc - 2aebf + b^2e^2 - 2bedc + d^2c^2 = (af + dc - be)^2.$$
For arbitrary $n = 2k$ one can prove that $|A| = (\mathrm{Pf}_k)^2$, where the Pfaffian $\mathrm{Pf}_k$ is defined as
$$\sum \operatorname{sign}\begin{pmatrix} 1 & 2 & 3 & 4 & \cdots & 2k-1 & 2k \\ i_1 & j_1 & i_2 & j_2 & \cdots & i_k & j_k \end{pmatrix} a_{i_1j_1}a_{i_2j_2}\cdots a_{i_kj_k},$$
where the sum extends over all possible distinct terms subject to the restriction $1 \le i_s < j_s \le 2k$, $1 \le s \le k$.
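A small numerical sanity check of $|A| = (\mathrm{Pf}_2)^2$ in the $4 \times 4$ case, where $\mathrm{Pf}_2 = af - be + cd$ as in the expansion above (a sketch, not from the book):

```python
# Sketch: det A = (af - be + cd)^2 for a random 4x4 antisymmetric matrix.
import numpy as np

rng = np.random.default_rng(0)
a, b, c, d, e, f = rng.standard_normal(6)
A = np.array([[ 0,  a,  b,  c],
              [-a,  0,  d,  e],
              [-b, -d,  0,  f],
              [-c, -e, -f,  0]])
pf = a * f - b * e + c * d   # the Pfaffian for k = 2
assert np.isclose(np.linalg.det(A), pf ** 2)
```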
3.6. Expand along the last column.
3.7. If $C = O$ then this is theorem 3.18. Try to reduce the general case to this one by multiplication from the left with the block matrix $\begin{pmatrix} I & O \\ X & I \end{pmatrix}$, choosing the correct block $X$.
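One standard choice is $X = -CA^{-1}$ (assuming $A$ is invertible), which zeroes the lower-left block; a numerical sketch of what this buys:

```python
# Sketch: with X = -C A^{-1}, [[I, O], [X, I]] @ [[A, B], [C, D]] is block
# upper triangular, so det [[A, B], [C, D]] = det(A) det(D - C A^{-1} B).
import numpy as np

rng = np.random.default_rng(1)
n = 3
A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))

M = np.block([[A, B], [C, D]])
schur = D - C @ np.linalg.inv(A) @ B   # Schur complement of A
assert np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(schur))
```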
3.8. Use the previous exercise!
4.1. A standard way to get the basis vectors $X_i$ is to solve the system $AX = O$ and to write $X = \sum t_iX_i$, where $t_i$ are the free variables. But there exist other ways.
Answer: Only a). One possible basis: $(2, 1, 0)^T$; $(1, 0, 1)^T$.
4.2. Use the hint from exercise 4.1 to get a basis. One possible basis: $(0, 1, 0, \frac{1}{3}, 1)^T$, $(0, 3, 1, 0, 0)^T$, $(1, 0, 0, 0, 0)^T$.
4.3. To find a basis one needs to know from calculus that every solution can be written as $y(t) = C_1e^t + C_2te^t + C_3t^2e^t$. Without such knowledge use the substitution $y(t) = x(t)e^t$ and get an equation for $x(t)$ which is easy to solve directly.
Answer: one possible basis is $e^t, te^t, t^2e^t$, thus the dimension is equal to 3.
4.4. Try to express the basis with the help of $E_{ij}$.
Answer: a) The basis $E_{ij} - E_{ji}$ with $i < j$ gives the dimension $\frac{n(n-1)}{2}$.
b) The basis $E_{ij} - E_{in}$ with $j < n$ gives the dimension $n(n-1)$.
c), d), e) are not vector spaces.
5.2. Use Gaussian elimination or the LU decomposition. Use the hint from 4.1 for the kernels. Answer: $r(A) = 2$. Possible bases are:
for $\operatorname{Ker} A$: $(1, 0, 0, 0, 0)^T$, $(0, 3, 1, 0, 0)^T$, $(0, 3, 0, 1, 3)^T$.
for $\operatorname{Im} A$: $(0, 1, 2, 1)^T$, $(0, 3, 9, 3)^T$.
for $\operatorname{Ker} A^T$: $(1, 0, 0, 0)^T$, $(0, 5, 2, 1)^T$.
for $\operatorname{Im} A^T$: $(0, 1, 3, 3, 2)^T$, $(0, 0, 0, 3, 1)^T$. The second vector is not a row in $A$, but the second non-zero row in $U$. Try to understand why the non-zero rows in $U$ can be used as a basis for $\operatorname{Im} A^T$, though it does not work for the columns.
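Such bases can be cross-checked mechanically; a sympy sketch (the matrix below is a placeholder of ours, not the $A$ of the exercise):

```python
# Sketch: bases for Ker A, Im A, Ker A^T and Im A^T with sympy.
# The matrix is a placeholder, not the A of exercise 5.2.
import sympy as sp

A = sp.Matrix([[0, 1, 2, 1, 0],
               [0, 3, 9, 3, 0],
               [0, 4, 11, 4, 0],
               [0, 1, 2, 1, 0]])

print(A.rank())         # r(A)
print(A.nullspace())    # basis for Ker A (free-variable method of 4.1)
print(A.columnspace())  # basis for Im A (pivot columns)
print(A.T.nullspace())  # basis for Ker A^T
print(A.rref()[0])      # its non-zero rows form a basis for Im A^T
```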
5.3. The basis $1, x, \ldots, x^n$ shows that the dimension is equal to $n + 1$. In this basis the matrix for $D: P_n \to P_n$ looks as
$$\begin{pmatrix}
0 & 1 & 0 & \cdots & 0 & 0 \\
0 & 0 & 2 & \cdots & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & n-1 & 0 \\
0 & 0 & 0 & \cdots & 0 & n \\
0 & 0 & 0 & \cdots & 0 & 0
\end{pmatrix}.$$
The rank is $n$ and we have neither left nor right inverse. If we take away the last row we get the matrix for $D: P_n \to P_{n-1}$ and find that a right inverse should exist; of course this is integration.
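A numerical sketch of both matrices and of integration as a right inverse (the names and the value of $n$ are ours):

```python
# Sketch: the matrix of D in the basis 1, x, ..., x^n, and integration
# (with zero constant term) as a right inverse of D : P_n -> P_{n-1}.
import numpy as np

n = 4
D_full = np.diag(np.arange(1.0, n + 1), k=1)  # (n+1)x(n+1), rank n: D on P_n
D = D_full[:-1, :]                            # n x (n+1): D : P_n -> P_{n-1}

S = np.zeros((n + 1, n))                      # integration P_{n-1} -> P_n
for j in range(n):
    S[j + 1, j] = 1.0 / (j + 1)               # x^j -> x^{j+1} / (j + 1)

assert np.allclose(D @ S, np.eye(n))          # a right inverse of D
# S @ D is not the identity: differentiation kills the constant term.
```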
5.4. The map has a nontrivial kernel and cannot be injective. This example shows that the situation for linear operators in the infinite-dimensional case is more complicated: injectivity does not follow from surjectivity.
5.5. a) 2. b) 2. c) 3.
5.6. See theorem 5.6.
5.7. See theorem 5.7.
5.8. Try to imitate the dimension arguments used for Lagrange interpolation. Later we will prove a more general case; see Hermite interpolation.
7.1.
a) $E_1, E_2, E_3, E_4, E_5, E_8$.
b) $E_4, E_7, E_8$.
c) $E_4, E_8$.
d) One can cut rows and columns with indices 6, 7, 9 and get
$$\begin{pmatrix}
4 & 1 & 0 & 0 & 0 & 0 \\
0 & 4 & 0 & 0 & 0 & 0 \\
0 & 0 & 2 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}.$$
7.11. Use that multiplication with $D$ from the left multiplies the rows by the diagonal entries $d_i$, and multiplication from the right does the same with the columns. Compare the results outside the main diagonal.
7.12. Use the previous result.
9.1. The map $p(x) \mapsto (p(-2), p(1), p'(1), p''(1))^T$ has the matrix
$$A = \begin{pmatrix}
1 & -2 & 4 & -8 \\
1 & 1 & 1 & 1 \\
0 & 1 & 2 & 3 \\
0 & 0 & 2 & 6
\end{pmatrix}.$$
Because
$$A^{-1} = \begin{pmatrix}
\frac{1}{27} & \frac{26}{27} & -\frac{8}{9} & \frac{1}{3} \\
-\frac{1}{9} & \frac{1}{9} & \frac{2}{3} & -\frac{1}{2} \\
\frac{1}{9} & -\frac{1}{9} & \frac{1}{3} & 0 \\
-\frac{1}{27} & \frac{1}{27} & -\frac{1}{9} & \frac{1}{6}
\end{pmatrix},$$
we have:
$$L_{10}(x) = \frac{1}{27}\bigl(1 - 3x + 3x^2 - x^3\bigr); \qquad L_{20}(x) = \frac{1}{27}\bigl(26 + 3x - 3x^2 + x^3\bigr);$$
$$L_{21}(x) = \frac{1}{9}\bigl(-8 + 6x + 3x^2 - x^3\bigr); \qquad L_{22}(x) = \frac{1}{6}\bigl(2 - 3x + x^3\bigr).$$
The polynomial corresponding to $e^{tx}$ is
$$p(x) = e^{-2t}L_{10}(x) + e^tL_{20}(x) + te^tL_{21}(x) + t^2e^tL_{22}(x) =$$
$$\frac{1}{27}\Bigl(\bigl(e^{-2t} + 26e^t - 24te^t + 9t^2e^t\bigr) + \bigl(-3e^{-2t} + 3e^t + 18te^t - \tfrac{27}{2}t^2e^t\bigr)x$$
$$+ \bigl(3e^{-2t} - 3e^t + 9te^t\bigr)x^2 + \bigl(-e^{-2t} + e^t - 3te^t + \tfrac{9}{2}t^2e^t\bigr)x^3\Bigr).$$
Hence
$$e^{tA} = \frac{1}{27}\Bigl(\bigl(e^{-2t} + 26e^t - 24te^t + 9t^2e^t\bigr)I + \bigl(-3e^{-2t} + 3e^t + 18te^t - \tfrac{27}{2}t^2e^t\bigr)A$$
$$+ \bigl(3e^{-2t} - 3e^t + 9te^t\bigr)A^2 + \bigl(-e^{-2t} + e^t - 3te^t + \tfrac{9}{2}t^2e^t\bigr)A^3\Bigr).$$
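The formula can be checked against scipy's matrix exponential for any $A$ annihilated by $(x + 2)(x - 1)^3$; the sketch below uses the companion matrix of that polynomial (our choice of test matrix, not the book's):

```python
# Sketch: verify the e^{tA} formula against scipy.linalg.expm, using the
# companion matrix of (x + 2)(x - 1)^3 = x^4 - x^3 - 3x^2 + 5x - 2.
import numpy as np
from scipy.linalg import expm

A = np.array([[0., 0., 0.,  2.],
              [1., 0., 0., -5.],
              [0., 1., 0.,  3.],
              [0., 0., 1.,  1.]])
t = 0.7
e2, e1 = np.exp(-2 * t), np.exp(t)
p = ((e2 + 26 * e1 - 24 * t * e1 + 9 * t**2 * e1) * np.eye(4)
     + (-3 * e2 + 3 * e1 + 18 * t * e1 - 13.5 * t**2 * e1) * A
     + (3 * e2 - 3 * e1 + 9 * t * e1) * A @ A
     + (-e2 + e1 - 3 * t * e1 + 4.5 * t**2 * e1) * A @ A @ A) / 27
assert np.allclose(p, expm(t * A))
```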
11.8. See section 11.4.
11.9. The second statement follows directly from the first because $A + B = A(I + A^{-1}B)$. Use the spectral radius of $A$ to show that no eigenvalue of $I + A$ is equal to zero. Alternative approach: consider the geometric series $I - A + A^2 - A^3 + \cdots$ and show that it converges and gives $(I + A)^{-1}$.
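A numerical sketch of the series argument (the test matrix is ours; any $A$ with spectral radius below 1 works):

```python
# Sketch: I - A + A^2 - A^3 + ... converges to (I + A)^{-1} when rho(A) < 1.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
A *= 0.9 / np.abs(np.linalg.eigvals(A)).max()  # rescale: spectral radius 0.9

S = np.eye(4)
term = np.eye(4)
for _ in range(500):
    term = -term @ A        # term is now (-A)^k
    S += term
assert np.allclose(S, np.linalg.inv(np.eye(4) + A))
```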
12.7. $A = cXX^H$ for some complex column vector $X$ of length 1 and real constant $c$.
Such a matrix is obviously Hermitian. To see that this is the only choice, note that by theorem 12.11 $A = UDU^H$ for a diagonal matrix $D$ of rank 1. If $c = d_{ii}$ is the only non-zero element in $D$ then $X = UE_i$.
Another, longer solution is to write $A$ as $A = XY^H$. Consider first the case when $|X| = |Y| = 1$ and use that $A = A^H = YX^H$ to get $|x_i|^2 = |y_i|^2$. Put $x_i = \varepsilon_iy_i$ and get more from $A^H = A$.
13.1. The nontrivial point is that $AA^H$ and $A^HA$ can have different sizes. Use theorem 13.3. If $A = USV^H$ is the SV decomposition, then $AA^H = US^2U^H$ is also a SV decomposition. What about $A^HA$?
13.2. Eigenvalues: a) $\frac{1 \pm i\sqrt{3}}{2}$; b) 1; c) 0, 1; d) 0, $1 + i$.
Singular values: a), b) $\sqrt{\frac{3 \pm \sqrt{5}}{2}}$; c) $\sqrt{2}$; d) 2.
13.3. Use that the Frobenius norm does not change after multiplication with a unitary matrix.
13.4. If $A = USV^H$ is the SV decomposition then it is sufficient to show that $A$ and $S$ are the matrices of the same linear map in different orthonormal bases.
13.5. First, using the SV decomposition, reduce the problem to the case $A = S$; thus suppose that all singular values are on the main diagonal and the rest is zero. Second, we need to find a matrix $X$ of rank $k$ such that $\|A - X\|_2 = \sigma_{k+1}$; this is easily done: replace all $\sigma_j$ in $S$ with $j > k$ by zeros.
The last and most difficult part is to show that for any matrix $X$ of rank $k$ we have $\|A - X\|_2 \ge \sigma_{k+1}$. Here we use the dimension arguments. Consider two subspaces: $\operatorname{Ker} X$ and $U$, consisting of vectors having the last $n - k - 1$ coordinates equal to zero, i.e. of the form $v = (v_1, \ldots, v_k, v_{k+1}, 0, \ldots, 0)^T$. Because $\dim \operatorname{Ker} X + \dim U \ge (n - k) + (k + 1) > n$ we should have a non-zero vector $v$ in their intersection. For this vector we have
$$\|A - X\|_2 \ge \frac{|(A - X)v|}{|v|} = \frac{|Av|}{|v|} = \sqrt{\frac{\sum_{i=1}^{k+1}(\sigma_iv_i)^2}{\sum_{i=1}^{k+1}v_i^2}} \ge \sigma_{k+1}.$$
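A numpy sketch of the whole statement (random matrix; the choice of $k$ is ours):

```python
# Sketch: truncating the SVD after k singular values gives the best rank-k
# approximation in the 2-norm, with error sigma_{k+1}.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 5))
U, s, Vh = np.linalg.svd(A)
k = 2
X = U[:, :k] @ np.diag(s[:k]) @ Vh[:k, :]          # sigma_j -> 0 for j > k
assert np.isclose(np.linalg.norm(A - X, 2), s[k])  # error is sigma_{k+1}
```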
14.1. a) No. For example, $A = E_{11} - E_{22}$, $B = E_{12} + E_{21}$. b) Yes. Consider first the case when $A = C^2$, where $C$ is a diagonal matrix with positive elements on the diagonal.
More details: we have that $AB = C^2B$ is similar to $C^{-1}C^2BC = CBC$. But $C = C^H$, thus $CBC = C^HBC$ is congruent to $B$ (which is positive definite) and therefore is positive definite as well and therefore has positive eigenvalues. For the general case let $U^HAU = D$ be a diagonal matrix for some unitary matrix $U$. Then $U^HBU = B_1$ is positive definite as well and $DB_1 = U^HABU$ is similar to $AB$. But $D = C^2$ because $A$ is positive definite and we can use the previous case.
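A quick numerical check of b) (randomly generated positive definite matrices; the construction is ours):

```python
# Sketch: AB has positive eigenvalues when A and B are positive definite,
# even though AB itself is in general not symmetric.
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((4, 4))
N = rng.standard_normal((4, 4))
A = M @ M.T + np.eye(4)     # positive definite
B = N @ N.T + np.eye(4)     # positive definite

eig = np.linalg.eigvals(A @ B)
assert np.allclose(eig.imag, 0) and np.all(eig.real > 0)
```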
14.2. In the Gaussian elimination we get $d_1 = d_2 = 1$, $d_3 = 2$. All numbers are positive and the answer is yes. Alternatively, $\Delta_1 = a_{11} = 1 > 0$, $\Delta_2 = \begin{vmatrix} 1 & 1 \\ 1 & 2 \end{vmatrix} = 1 > 0$ and $\Delta_3 = \det A > 0$, thus the matrix is positive definite.
14.10. To make the calculations easier note that $A^HA = A^2 = \operatorname{abs}(A)^2$ and we can use Lagrange interpolation.
Answer: $U = \frac{1}{\sqrt{61}}\begin{pmatrix} -5 & 6 \\ 6 & 5 \end{pmatrix}$, $P = \frac{1}{\sqrt{61}}\begin{pmatrix} 28 & 3 \\ 3 & 33 \end{pmatrix}$.
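The factors as reconstructed above can be checked directly (a sketch; the matrix $A = UP$ it reproduces is our inference, since the exercise statement is not restated here):

```python
# Sketch: U is orthogonal, P is positive definite, and A = U P is symmetric
# with A^H A = A^2, as the hint uses.
import numpy as np

U = np.array([[-5., 6.], [6., 5.]]) / np.sqrt(61)
P = np.array([[28., 3.], [3., 33.]]) / np.sqrt(61)

assert np.allclose(U.T @ U, np.eye(2))    # unitary (real orthogonal)
assert np.all(np.linalg.eigvalsh(P) > 0)  # positive definite
A = U @ P                                 # comes out as [[-2, 3], [3, 3]]
assert np.allclose(A, A.T) and np.allclose(A.conj().T @ A, A @ A)
```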
14.11. Consider the vector space of polynomials of degree less than $n$ and check that $(f(x), g(x)) = \int_0^1 f(x)g(x)\,dx$ is a scalar product here. Check that in the basis $1, x, \ldots, x^{n-1}$ we have $(x^i, x^j) = a_{ij}$. The corresponding quadratic form $(f, f)$ has matrix $A$ in this basis and is positive definite, because it corresponds to a scalar product.
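A sympy sketch of the computation $(x^i, x^j) = \int_0^1 x^{i+j}\,dx = \frac{1}{i+j+1}$ and of the positivity, for a small $n$ of our choosing:

```python
# Sketch: the Gram matrix of 1, x, ..., x^{n-1} under (f, g) = int_0^1 f g dx
# has entries 1/(i + j + 1) and is positive definite.
import sympy as sp

n = 4
x = sp.symbols('x')
G = sp.Matrix(n, n, lambda i, j: sp.integrate(x**i * x**j, (x, 0, 1)))
assert G == sp.Matrix(n, n, lambda i, j: sp.Rational(1, i + j + 1))
# all leading principal minors are positive:
assert all(G[:k, :k].det() > 0 for k in range(1, n + 1))
```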
14.12. Consider a vector space $V$ consisting of all functions $v(x)$ that are linear combinations of the functions $e^{ikx}$ for $k = 0, 1, \ldots, n$. It is sufficient to show that $(u(x), v(x)) = \int u(x)\overline{v(x)}f(x)\,dx$ is a scalar product here.
14.13. Consider the matrix $B = A - \lambda_nI$. It is positive semidefinite and $X^HBX \ge 0$. Take $X = E_i$ and $X = E_i \pm E_j$ and see what this means for $A$.
14.14. We only need to check the number of positive eigenvalues for $A$, $A - 2I$ and $A - 5I$, which can be done with the help of Gaussian elimination.