Lecture Notes To Transition To Advanced Mathematics - Mauch
or
Advanced Mathematical Methods for Scientists and Engineers
Sean Mauch
Contents

Anti-Copyright
Preface
  0.1 Advice to Teachers
  0.2 Acknowledgments
  0.3 Warnings and Disclaimers
  0.4 Suggested Use
  0.5 About the Title

I Algebra

1 Sets and Functions
  1.1 Sets
  1.2 Single Valued Functions
  1.3 Inverses and Multi-Valued Functions
  1.4 Transforming Equations
  1.5 Exercises
  1.6 Hints
  1.7 Solutions

2 Vectors
  2.1 Vectors
    2.1.1 Scalars and Vectors
    2.1.2 The Kronecker Delta and Einstein Summation Convention
    2.1.3 The Dot and Cross Product
  2.2 Sets of Vectors in n Dimensions
  2.3 Exercises
  2.4 Hints
  2.5 Solutions

II Calculus

3 Differential Calculus
  3.1 Limits of Functions
  3.2 Continuous Functions
  3.3 The Derivative
  3.4 Implicit Differentiation
  3.5 Maxima and Minima
  3.6 Mean Value Theorems
    3.6.1 Application: Using Taylor's Theorem to Approximate Functions
    3.6.2 Application: Finite Difference Schemes
  3.7 L'Hospital's Rule
  3.8 Exercises
    3.8.1 Limits of Functions
    3.8.2 Continuous Functions
    3.8.3 The Derivative
    3.8.4 Implicit Differentiation
    3.8.5 Maxima and Minima
    3.8.6 Mean Value Theorems
    3.8.7 L'Hospital's Rule
  3.9 Hints
  3.10 Solutions
  3.11 Quiz
  3.12 Quiz Solutions

4 Integral Calculus
  4.1 The Indefinite Integral
  4.2 The Definite Integral
    4.2.1 Definition
    4.2.2 Properties
  4.3 The Fundamental Theorem of Integral Calculus
  4.4 Techniques of Integration
    4.4.1 Partial Fractions
  4.5 Improper Integrals
  4.6 Exercises
    4.6.1 The Indefinite Integral
    4.6.2 The Definite Integral
    4.6.3 The Fundamental Theorem of Integration
    4.6.4 Techniques of Integration
    4.6.5 Improper Integrals
  4.7 Hints
  4.8 Solutions
  4.9 Quiz
  4.10 Quiz Solutions

5 Vector Calculus
  5.1 Vector Functions
  5.2 Gradient, Divergence and Curl
  5.3 Exercises
  5.4 Hints
  5.5 Solutions
  5.6 Quiz
  5.7 Quiz Solutions

  6.8 Hints
  6.9 Solutions

11 Cauchy's Integral Formula
  11.1 Cauchy's Integral Formula
  11.2 The Argument Theorem
  11.3 Rouché's Theorem
  11.4 Exercises
  11.5 Hints
  11.6 Solutions

IV Ordinary Differential Equations

14 First Order Differential Equations
  14.1 Notation
  14.2 Example Problems
    14.2.1 Growth and Decay
  14.3 One Parameter Families of Functions
  14.4 Integrable Forms
    14.4.1 Separable Equations
    14.4.2 Exact Equations
    14.4.3 Homogeneous Coefficient Equations
  14.5 The First Order, Linear Differential Equation
    14.5.1 Homogeneous Equations
    14.5.2 Inhomogeneous Equations
    14.5.3 Variation of Parameters
  14.6 Initial Conditions
    14.6.1 Piecewise Continuous Coefficients and Inhomogeneities
  14.7 Well-Posed Problems
  14.8 Equations in the Complex Plane
    14.8.1 Ordinary Points
    14.8.2 Regular Singular Points
    14.8.3 Irregular Singular Points
    14.8.4 The Point at Infinity
  14.9 Additional Exercises
  14.10 Hints
  14.11 Solutions
  14.12 Quiz
  14.13 Quiz Solutions

17 Techniques for Linear Differential Equations
  17.1 Constant Coefficient Equations
    17.1.1 Second Order Equations
    17.1.2 Real-Valued Solutions
    17.1.3 Higher Order Equations
  17.2 Euler Equations
    17.2.1 Real-Valued Solutions
  17.3 Exact Equations
  17.4 Equations Without Explicit Dependence on y
  17.5 Reduction of Order
  17.6 *Reduction of Order and the Adjoint Equation
  17.7 Additional Exercises
  17.8 Hints
  17.9 Solutions

21 Inhomogeneous Differential Equations
  21.1 Particular Solutions
  21.2 Method of Undetermined Coefficients
  21.3 Variation of Parameters
    21.3.1 Second Order Differential Equations
    21.3.2 Higher Order Differential Equations
  21.4 Piecewise Continuous Coefficients and Inhomogeneities
  21.5 Inhomogeneous Boundary Conditions
    21.5.1 Eliminating Inhomogeneous Boundary Conditions
    21.5.2 Separating Inhomogeneous Equations and Inhomogeneous Boundary Conditions
    21.5.3 Existence of Solutions of Problems with Inhomogeneous Boundary Conditions
  21.6 Green Functions for First Order Equations
  21.7 Green Functions for Second Order Equations
    21.7.1 Green Functions for Sturm-Liouville Problems
    21.7.2 Initial Value Problems
    21.7.3 Problems with Unmixed Boundary Conditions
    21.7.4 Problems with Mixed Boundary Conditions
  21.8 Green Functions for Higher Order Problems
  21.9 Fredholm Alternative Theorem
  21.10 Exercises
  21.11 Hints
  21.12 Solutions
  21.13 Quiz
  21.14 Quiz Solutions

24 Asymptotic Expansions
  24.1 Asymptotic Relations
  24.2 Leading Order Behavior of Differential Equations
  24.3 Integration by Parts
  24.4 Asymptotic Series
  24.5 Asymptotic Expansions of Differential Equations
    24.5.1 The Parabolic Cylinder Equation

29 Regular Sturm-Liouville Problems
  29.1 Derivation of the Sturm-Liouville Form
  29.2 Properties of Regular Sturm-Liouville Problems
  29.3 Solving Differential Equations With Eigenfunction Expansions
  29.4 Exercises
  29.5 Hints
  29.6 Solutions

  32.11 Solutions

  37.7 General Method
  37.8 Exercises
  37.9 Hints
  37.10 Solutions

41 Waves
  41.1 Exercises
  41.2 Hints
  41.3 Solutions

45 Green Functions
  45.1 Inhomogeneous Equations and Homogeneous Boundary Conditions
  45.2 Homogeneous Equations and Inhomogeneous Boundary Conditions
  45.3 Eigenfunction Expansions for Elliptic Equations
  45.4 The Method of Images
  45.5 Exercises
  45.6 Hints
  45.7 Solutions

B Notation
H Table of Taylor Series
W Physics
X Probability
  X.1 Independent Events
  X.2 Playing the Odds
Y Economics
Z Glossary
Anti-Copyright
No rights reserved. Any part of this publication may be reproduced, stored in a retrieval system,
transmitted or desecrated without permission.
Preface
During the summer before my final undergraduate year at Caltech I set out to write a math text
unlike any other, namely, one written by me. In that respect I have succeeded beautifully. Unfortunately, the text is neither complete nor polished. I have a Warnings and Disclaimers section below that is a little amusing, and an appendix on probability that I feel concisely captures the essence of the subject. However, all the material in between is in some stage of development. I am
currently working to improve and expand this text.
This text is freely available from my web site. Currently I'm at http://www.its.caltech.edu/sean.
I post new versions a couple of times a year.
0.2 Acknowledgments
I would like to thank Professor Saffman for advising me on this project and the Caltech SURF
program for providing the funding for me to write the first edition of this book.
0.3 Warnings and Disclaimers

Finding the typos and mistakes in this book is left as an exercise for the reader. (Eye ewes
a spelling chequer from thyme too thyme, sew their should knot bee two many misspellings.
Though I aint so sure the grammars too good.)
The theorems and methods in this text are subject to change without notice.
This is a chain book. If you do not make seven copies and distribute them to your friends
within ten days of obtaining this text you will suffer great misfortune and other nastiness.
The surgeon general has determined that excessive studying is detrimental to your social life.
This text has been buffered for your protection and ribbed for your pleasure.
Part I
Algebra
Chapter 1

Sets and Functions
1.1 Sets
Definition. A set is a collection of objects. We call the objects elements. A set is denoted by listing the elements between braces. For example, {e, ı, π, 1} is the set of the integer 1, the pure imaginary number ı = √(−1) and the transcendental numbers e = 2.7182818... and π = 3.1415926....
For elements of a set, we do not count multiplicities. We regard the set {1, 2, 2, 3, 3, 3} as identical
to the set {1, 2, 3}. Order is not significant in sets. The set {1, 2, 3} is equivalent to {3, 2, 1}.
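As a quick illustration (an addition to these notes, not part of the original exposition), Python's built-in set type follows the same conventions:

    # Sets ignore both multiplicity and order.
    a = {1, 2, 2, 3, 3, 3}
    b = {1, 2, 3}
    c = {3, 2, 1}
    print(a == b == c)  # True: all collapse to the three-element set {1, 2, 3}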
In enumerating the elements of a set, we use ellipses to indicate patterns. We denote the set of positive integers as {1, 2, 3, ...}. We also denote sets with the notation {x | conditions on x} for sets that are more easily described than enumerated. This is read as "the set of elements x such that ...". x ∈ S is the notation for "x is an element of the set S". To express the opposite we have x ∉ S for "x is not an element of the set S".
Examples. We have notations for denoting some of the commonly encountered sets.
R = {x | x = ±a₁a₂⋯aₙ.b₁b₂⋯} is the set of real numbers, i.e. the set of numbers with decimal expansions.²

Q = {p/q | p, q ∈ Z, q ≠ 0} is the set of rational numbers.¹

Z⁺, Q⁺ and R⁺ are the sets of positive integers, rationals and reals, respectively. For example, Z⁺ = {1, 2, 3, ...}.

Z⁰⁺, Q⁰⁺ and R⁰⁺ are the sets of non-negative integers, rationals and reals, respectively. For example, Z⁰⁺ = {0, 1, 2, ...}.

¹ Note that 1/2 = 2/4 = 3/6 = (−1)/(−2) = ⋯. This does not pose a problem as we do not count multiplicities.
² Guess what R is for.
The cardinality or order of a set S is denoted |S|. For finite sets, the cardinality is the number of elements in the set. The Cartesian product of two sets is the set of ordered pairs:

X × Y ≡ {(x, y) | x ∈ X, y ∈ Y}.

The Cartesian product of n sets is the set of ordered n-tuples:

X₁ × X₂ × ⋯ × Xₙ ≡ {(x₁, x₂, ..., xₙ) | x₁ ∈ X₁, x₂ ∈ X₂, ..., xₙ ∈ Xₙ}.
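For a concrete check, here is a short Python sketch of the Cartesian product (illustrative; the sets X and Y are made up for the example):

    from itertools import product

    X = {1, 2}
    Y = {3, 4, 5}
    XY = set(product(X, Y))  # the set of ordered pairs (x, y)
    print(XY)                          # {(1, 3), (1, 4), ..., (2, 5)}
    print(len(XY) == len(X) * len(Y))  # |X x Y| = |X| |Y|: True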
Equality. Two sets S and T are equal if each element of S is an element of T and vice versa. This is denoted S = T. Inequality is S ≠ T, of course. S is a subset of T, S ⊆ T, if every element of S is an element of T. S is a proper subset of T, S ⊂ T, if S ⊆ T and S ≠ T. For example: the empty set, ∅, is a subset of every set S. The rational numbers are a proper subset of the real numbers, Q ⊂ R.
Operations. The union of two sets, S ∪ T, is the set whose elements are in either of the two sets. The union of n sets,

⋃ⁿⱼ₌₁ Sⱼ ≡ S₁ ∪ S₂ ∪ ⋯ ∪ Sₙ,

is the set whose elements are in any of the sets Sⱼ. The intersection of two sets, S ∩ T, is the set whose elements are in both of the two sets. In other words, the intersection of two sets is the set of elements that the two sets have in common. The intersection of n sets,

⋂ⁿⱼ₌₁ Sⱼ ≡ S₁ ∩ S₂ ∩ ⋯ ∩ Sₙ,

is the set whose elements are in all of the sets Sⱼ. If two sets have no elements in common, S ∩ T = ∅, then the sets are disjoint. If T ⊆ S, then the difference between S and T, S \ T, is the set of elements in S which are not in T:

S \ T ≡ {x | x ∈ S, x ∉ T}.

The difference of sets is also denoted S − T.
Properties. The following properties are easily verified from the above definitions.

S ∪ ∅ = S, S ∩ ∅ = ∅, S \ ∅ = S, S \ S = ∅.
Commutative. S ∪ T = T ∪ S, S ∩ T = T ∩ S.
Associative. (S ∪ T) ∪ U = S ∪ (T ∪ U) = S ∪ T ∪ U, (S ∩ T) ∩ U = S ∩ (T ∩ U) = S ∩ T ∩ U.
Distributive. S ∪ (T ∩ U) = (S ∪ T) ∩ (S ∪ U), S ∩ (T ∪ U) = (S ∩ T) ∪ (S ∩ U).
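These identities are easy to spot-check with Python's set operators |, & and - (a sketch added for illustration):

    S, T, U = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}
    # Distributive laws.
    print(S | (T & U) == (S | T) & (S | U))  # True
    print(S & (T | U) == (S & T) | (S & U))  # True
    # Difference: elements of S that are not in T.
    print(S - T)  # {1}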
The range is a subset of the codomain. For each Z ⊆ Y, the inverse image of Z is defined:

f⁻¹(Z) ≡ {x | f(x) ∈ Z}.
Examples.
Finite polynomials, f(x) = Σⁿₖ₌₀ aₖxᵏ, aₖ ∈ R, and the exponential function, f(x) = eˣ, are examples of single valued functions which map real numbers to real numbers.
The greatest integer function, f(x) = ⌊x⌋, is a mapping from R to Z. ⌊x⌋ is defined as the greatest integer less than or equal to x. Likewise, the least integer function, f(x) = ⌈x⌉, is the least integer greater than or equal to x.
The -jectives. A function is injective if for each x₁ ≠ x₂, f(x₁) ≠ f(x₂). In other words, distinct elements are mapped to distinct elements. f is surjective if for each y in the codomain, there is an x such that y = f(x). If a function is both injective and surjective, then it is bijective. A bijective function is also called a one-to-one mapping.
Examples.
The exponential function f(x) = eˣ, considered as a mapping from R to R⁺, is bijective, (a one-to-one mapping).

f(x) = x² is a bijection from R⁺ to R⁺. f is not injective from R to R⁺. For each positive y in the range, there are two values of x such that y = x².

f(x) = sin x is not injective from R to [−1..1]. For each y ∈ [−1..1] there exists an infinite number of values of x such that y = sin x.
Example 1.3.1 y = x², a many-to-one function, has the inverse x = y^{1/2}. For each positive y, there are two values of x such that x = y^{1/2}. y = x² and y = x^{1/2} are graphed in Figure 1.3.
Figure 1.3: Graphs of one-to-one, many-to-one and one-to-many relationships.
We say that there are two branches of y = x^{1/2}: the positive and the negative branch. We denote the positive branch as y = √x; the negative branch is y = −√x. We call √x the principal branch of x^{1/2}. Note that √x is a one-to-one function. Finally, x = (x^{1/2})² since (±√x)² = x, but x ≠ (x²)^{1/2} since (x²)^{1/2} = ±x. y = √x is graphed in Figure 1.4.
Figure 1.4: A graph of y = √x.
Now consider the many-to-one function y = sin x. The inverse is x = arcsin y. For each y ∈ [−1..1] there are an infinite number of values x such that x = arcsin y. In Figure 1.5 is a graph of y = sin x and a graph of a few branches of y = arcsin x.
Example 1.3.2 arcsin x has an infinite number of branches. We will denote the principal branch by Arcsin x which maps [−1..1] to [−π/2..π/2]. Note that x = sin(arcsin x), but x ≠ arcsin(sin x).
Example 1.3.3 Consider 1^{1/3}. Since x³ is a one-to-one function, x^{1/3} is a single-valued function. (See Figure 1.7.) 1^{1/3} = 1.
Example 1.3.4 Consider arccos(1/2). cos x and a portion of arccos x are graphed in Figure 1.8. The equation cos x = 1/2 has the two solutions x = ±π/3 in the range x ∈ (−π..π]. We use the periodicity of the cosine, cos(x + 2π) = cos x, to find the remaining solutions: x = ±π/3 + 2nπ.
1.4 Transforming Equations

Consider the two equations g(x) = h(x) and f(g(x)) = f(h(x)), where the latter is obtained by applying a function f to both sides of the former. If x = ξ is a solution of the former equation, (let ψ = g(ξ) = h(ξ)), then it is necessarily a solution of the latter. This is because f(g(ξ)) = f(h(ξ)) reduces to the identity f(ψ) = f(ψ). If f(x) is bijective, then the converse is true: any solution of the latter equation is a solution of the former equation. Suppose that x = ξ is a solution of the latter, f(g(ξ)) = f(h(ξ)). That f(x) is a one-to-one mapping implies that g(ξ) = h(ξ). Thus x = ξ is a solution of the former equation.

It is always safe to apply a one-to-one, (bijective), function to an equation, (provided it is defined for that domain). For example, we can apply f(x) = x³ or f(x) = eˣ, considered as mappings on R, to the equation x = 1. The equations x³ = 1 and eˣ = e each have the unique solution x = 1 for x ∈ R.
Since (sin² x)^{1/2} ≠ sin x, we cannot simplify the left side of the equation. Instead we could use the definition of f(x) = x^{1/2} as the inverse of the x² function to obtain

sin x = ±1^{1/2} = ±1.

Now note that we should not just apply arcsin to both sides of the equation as arcsin(sin x) ≠ x. Instead we use the definition of arcsin as the inverse of sin.

x = arcsin(±1)

x = arcsin(1) has the solutions x = π/2 + 2nπ and x = arcsin(−1) has the solutions x = −π/2 + 2nπ. We enumerate the solutions:

x ∈ {π/2 + nπ | n ∈ Z}.
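A quick numeric spot-check of this solution set (an added Python sketch, not part of the original solution):

    import numpy as np

    # Verify that x = pi/2 + n*pi satisfies sin(x)**2 == 1 for several n.
    n = np.arange(-3, 4)
    x = np.pi / 2 + n * np.pi
    print(np.allclose(np.sin(x)**2, 1))  # True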
1.5 Exercises
Exercise 1.1
The area of a circle is directly proportional to the square of its diameter. What is the constant of
proportionality?
Exercise 1.2
Consider the equation
x+1 x2 1
= 2 .
y2 y 4
1. Why might one think that this is the equation of a line?
2. Graph the solutions of the equation to demonstrate that it is not the equation of a line.
Exercise 1.3
Consider the function of a real variable,
f(x) = 1/(x² + 2).
What is the domain and range of the function?
Exercise 1.4
The temperature measured in degrees Celsius³ is linearly related to the temperature measured in degrees Fahrenheit⁴. Water freezes at 0°C = 32°F and boils at 100°C = 212°F. Write the temperature in degrees Celsius as a function of degrees Fahrenheit.
Exercise 1.5
Consider the function graphed in Figure 1.9. Sketch graphs of f(x), f(x + 3), f(3 − x) + 2, and f⁻¹(x). You may use the blank grids in Figure 1.10.
Exercise 1.6
A culture of bacteria grows at the rate of 10% per minute. At 6:00 pm there are 1 billion bacteria.
How many bacteria are there at 7:00 pm? How many were there at 3:00 pm?
³ Originally, it was called degrees Centigrade; centi because there are 100 degrees between the two calibration points, the freezing and boiling points of water.
⁴ The Fahrenheit scale was originally calibrated with the freezing point of salt-saturated water at 0°. Later, the calibration points became the freezing point of water, 32°, and body temperature, 96°. With this method, there are 64 divisions between the calibration points. Finally, the upper calibration point was changed to the boiling point of water at 212°. This gave 180 divisions, (the number of degrees in a half circle), between the two calibration points.
Figure 1.10: Blank grids.
Exercise 1.7
The graph in Figure 1.11 shows an even rational function f(x) = p(x)/q(x) where p(x) and q(x) are quadratic polynomials. Give possible formulas for p(x) and q(x).
Figure 1.11: A graph of the function f(x).
Exercise 1.8
Find a polynomial of degree 100 which is zero only at x = −2, 1, π, and is non-negative.
1.6 Hints
Hint 1.1
area = constant × diameter².
Hint 1.2
A pair (x, y) is a solution of the equation if it makes the equation an identity.
Hint 1.3
The domain is the subset of R on which the function is defined.
Hint 1.4
Find the slope and x-intercept of the line.
Hint 1.5
The inverse of the function is the reflection of the function across the line y = x.
Hint 1.6
The formula for geometric growth/decay is x(t) = x₀rᵗ, where r is the rate.
Hint 1.7
Since p(x) and q(x) appear as a ratio, they are determined only up to a multiplicative constant. We may take the leading coefficient of q(x) to be unity.

f(x) = p(x)/q(x) = (ax² + bx + c)/(x² + βx + χ)
Use the properties of the function to solve for the unknown parameters.
Hint 1.8
Write the polynomial in factored form.
1.7 Solutions
Solution 1.1
area = π radius²

area = (π/4) diameter²

The constant of proportionality is π/4.
Solution 1.2
1. If we multiply the equation by y² − 4 and divide by x + 1, we obtain the equation of a line:

y + 2 = x − 1.

2. We factor the quadratics on the right side of the equation.

(x + 1)/(y − 2) = ((x + 1)(x − 1))/((y − 2)(y + 2))

We note that one or both sides of the equation are undefined at y = ±2 because of division by zero. There are no solutions for these two values of y and we assume from this point that y ≠ ±2. We multiply by (y − 2)(y + 2).

(x + 1)(y + 2) = (x + 1)(x − 1)

For x = −1, this is the identity 0 = 0, so any point (−1, y) with y ≠ ±2 is a solution. For x ≠ −1, we divide by x + 1 to obtain the line

y + 2 = x − 1, that is y = x − 3,

where the point x = 5 is excluded since it gives y = 2. Thus the set of solutions is

{(−1, y) : y ≠ ±2} ∪ {(x, x − 3) : x ≠ −1, 5}.
Figure 1.12: The solutions of (x + 1)/(y − 2) = (x² − 1)/(y² − 4).
Solution 1.3
The denominator is nonzero for all x ∈ R. Since we don't have any division by zero problems, the domain of the function is R. For x ∈ R,

0 < 1/(x² + 2) ≤ 1/2.

Consider

y = 1/(x² + 2). (1.1)

For any y ∈ (0 . . . 1/2], there is at least one value of x that satisfies Equation 1.1.

x² + 2 = 1/y
x = ±√(1/y − 2)

Thus the range of the function is (0 . . . 1/2].
Solution 1.4
Let c denote degrees Celsius and f denote degrees Fahrenheit. The line passes through the points
(f, c) = (32, 0) and (f, c) = (212, 100). The x-intercept is f = 32. We calculate the slope of the line.
slope = (100 − 0)/(212 − 32) = 100/180 = 5/9

The relationship between Fahrenheit and Celsius is

c = (5/9)(f − 32).
Solution 1.5
We plot the various transformations of f(x).
Solution 1.6
The formula for geometric growth/decay is x(t) = x₀rᵗ, where r is the rate. Let t = 0, measured in minutes, coincide with 6:00 pm. We determine x₀.

x(0) = 10⁹ = x₀(11/10)⁰ = x₀

x₀ = 10⁹

At 7:00 pm, t = 60, so there are x(60) = 10⁹(11/10)⁶⁰ ≈ 3.04 × 10¹¹ bacteria. At 3:00 pm, t = −180, so there were x(−180) = 10⁹(11/10)⁻¹⁸⁰ ≈ 35.4 bacteria.
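The arithmetic is easy to reproduce; a short Python sketch (added for illustration):

    # Geometric growth x(t) = x0 * r**t with r = 11/10 per minute, t = 0 at 6:00 pm.
    x0, r = 10**9, 11 / 10
    print(x0 * r**60)    # 7:00 pm (t = 60): about 3.04e11 bacteria
    print(x0 * r**-180)  # 3:00 pm (t = -180): about 35.4 bacteria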
Solution 1.7
We write p(x) and q(x) as general quadratic polynomials.

f(x) = p(x)/q(x) = (ax² + bx + c)/(αx² + βx + χ)
Figure 1.13: Graphs of f(x), f(x + 3), f(3 − x) + 2, and f⁻¹(x).
We will use the properties of the function to solve for the unknown parameters.
Since p(x) and q(x) appear as a ratio, they are determined only up to a multiplicative constant. We may take the leading coefficient of q(x) to be unity.

f(x) = p(x)/q(x) = (ax² + bx + c)/(x² + βx + χ)

f(x) has a second order zero at x = 0. This means that p(x) has a second order zero there and that χ ≠ 0.

f(x) = ax²/(x² + βx + χ)

We note that f(x) → 2 as x → ∞. This determines the parameter a.

lim_{x→∞} f(x) = lim_{x→∞} ax²/(x² + βx + χ)
              = lim_{x→∞} 2ax/(2x + β)
              = lim_{x→∞} 2a/2
              = a

f(x) = 2x²/(x² + βx + χ)

Now we use the fact that f(x) is even to conclude that q(x) is even and thus β = 0.

f(x) = 2x²/(x² + χ)

Finally, we use that f(1) = 1 to determine χ.

f(x) = 2x²/(x² + 1)
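A numeric check of the determined function (an added sketch; not part of the original solution):

    import numpy as np

    f = lambda x: 2 * x**2 / (x**2 + 1)
    x = np.linspace(-10, 10, 1001)
    print(np.allclose(f(x), f(-x)))  # f is even: True
    print(f(0.0), f(1.0))            # 0.0 at the double zero, 1.0 at x = 1
    print(f(1e8))                    # approaches 2 as x -> infinity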
Solution 1.8
Consider the polynomial

p(x) = (x + 2)⁴⁰(x − 1)³⁰(x − π)³⁰.

It is of degree 100. Since the factors only vanish at x = −2, 1, π, p(x) only vanishes there. Since each factor is raised to an even power, the factors are non-negative, and hence the polynomial is non-negative.
Chapter 2
Vectors
2.1 Vectors
2.1.1 Scalars and Vectors
A vector is a quantity having both a magnitude and a direction. Examples of vector quantities are
velocity, force and position. One can represent a vector in n-dimensional space with an arrow whose
initial point is at the origin, (Figure 2.1). The magnitude is the length of the vector. Typographically,
variables representing vectors are often written in capital letters, bold face or with a vector over-line,
A, a, ~a. The magnitude of a vector is denoted |a|.
A scalar has only a magnitude. Examples of scalar quantities are mass, time and speed.
Vector Algebra. Two vectors are equal if they have the same magnitude and direction. The
negative of a vector, denoted a, is a vector of the same magnitude as a but in the opposite
direction. We add two vectors a and b by placing the tail of b at the head of a and defining a + b
to be the vector with tail at the origin and head at the head of b. (See Figure 2.2.)
Figure 2.2: Vector addition, the negative of a vector, and scalar multiplication.
The difference, a − b, is defined as the sum of a and the negative of b, a + (−b). The result of multiplying a by a scalar α is a vector of magnitude |α||a| with the same/opposite direction if α is positive/negative. (See Figure 2.2.)
Here are the properties of adding vectors and multiplying them by a scalar. They are evident from geometric considerations.

a + b = b + a, αa = aα (commutative laws)
(a + b) + c = a + (b + c), α(βa) = (αβ)a (associative laws)
α(a + b) = αa + αb, (α + β)a = αa + βa (distributive laws)
Zero and Unit Vectors. The additive identity element for vectors is the zero vector or null vector. This is a vector of magnitude zero which is denoted as 0. A unit vector is a vector of magnitude one. If a is nonzero then a/|a| is a unit vector in the direction of a. Unit vectors are often denoted with a caret over-line, n̂.
Rectangular Unit Vectors. In n-dimensional Cartesian space, Rⁿ, the unit vectors in the directions of the coordinate axes are e₁, . . . , eₙ. These are called the rectangular unit vectors. To cut down on subscripts, the unit vectors in three dimensional space are often denoted with i, j and k. (See Figure 2.3.)
Figure 2.3: The rectangular unit vectors i, j and k.
Components of a Vector. Consider a vector a with tail at the origin and head having the Cartesian coordinates (a₁, . . . , aₙ). We can represent this vector as the sum of n rectangular component vectors, a = a₁e₁ + ⋯ + aₙeₙ. (See Figure 2.4.) Another notation for the vector a is ⟨a₁, . . . , aₙ⟩. By the Pythagorean theorem, the magnitude of the vector a is |a| = √(a₁² + ⋯ + aₙ²).
Figure 2.4: The components of a vector.
2.1.2 The Kronecker Delta and Einstein Summation Convention
The Kronecker Delta tensor is defined

δᵢⱼ = 1 if i = j, 0 if i ≠ j.

In the Einstein summation convention, a repeated index in a term is summed over. For example, aᵢbᵢ stands for a₁b₁ + a₂b₂ + ⋯ + aₙbₙ.
Example 2.1.1 Consider the matrix equation: A x = b. We can write out the matrix and vectors
explicitly.
[ a₁₁ ⋯ a₁ₙ ] [ x₁ ]   [ b₁ ]
[  ⋮   ⋱  ⋮ ] [ ⋮  ] = [ ⋮  ]
[ aₙ₁ ⋯ aₙₙ ] [ xₙ ]   [ bₙ ]
This takes much less space when we use the summation convention.
aij xj = bi
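numpy exposes the summation convention directly through einsum; a minimal sketch (the matrix and vector are made-up examples, added for illustration):

    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    x = np.array([5.0, 6.0])
    # a_ij x_j = b_i: the repeated index j is summed over.
    b = np.einsum('ij,j->i', A, x)
    print(np.allclose(b, A @ x))  # True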
2.1.3 The Dot and Cross Product

The Dot Product. The dot product or scalar product of two vectors is defined,

a · b ≡ |a||b| cos θ,

where θ is the angle from a to b. From this definition one can derive the following properties:

a · b = b · a, commutative.
α(a · b) = (αa) · b = a · (αb), associativity of scalar multiplication.
a · (b + c) = a · b + a · c, distributive. (See Exercise 2.1.)
eᵢ · eⱼ = δᵢⱼ. In three dimensions, this is
i · i = j · j = k · k = 1, i · j = j · k = k · i = 0.
The Angle Between Two Vectors. We can use the dot product to find the angle θ between two vectors, a and b. From the definition of the dot product,

a · b = |a||b| cos θ.

If the vectors are nonzero, then θ = arccos( (a · b) / (|a||b|) ).
Example 2.1.2 What is the angle between i and i + j?
θ = arccos( (i · (i + j)) / (|i||i + j|) )
  = arccos( 1/√2 )
  = π/4.
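The same computation, done numerically in Python (an illustrative addition):

    import numpy as np

    a = np.array([1.0, 0.0])  # i
    b = np.array([1.0, 1.0])  # i + j
    theta = np.arccos(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    print(theta, np.pi / 4)   # both are 0.78539816...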
Parametric Equation of a Line. Consider a line in Rⁿ that passes through the point a and is parallel to the vector t, (tangent). A parametric equation of the line is

x = a + ut, u ∈ R.
Implicit Equation of a Line In 2D. Consider a line in R² that passes through the point a and is normal, (orthogonal, perpendicular), to the vector n. All the lines that are normal to n have the property that x · n is a constant, where x is any point on the line. (See Figure 2.5.) x · n = 0 is the line that is normal to n and passes through the origin. The line that is normal to n and passes through the point a is

x · n = a · n.
Figure 2.5: The lines x · n = −1, x · n = 0, x · n = 1 and x · n = a · n.
The normal to a line determines an orientation of the line. The normal points in the direction that is above the line. A point b is (above/on/below) the line if (b − a) · n is (positive/zero/negative). The signed distance of a point b from the line x · n = a · n is

((b − a) · n) / |n|.
Implicit Equation of a Hyperplane. Analogously, a hyperplane in Rⁿ that passes through the point a and is normal to the vector n satisfies

x · n = a · n.

The normal determines an orientation of the hyperplane. The normal points in the direction that is above the hyperplane. A point b is (above/on/below) the hyperplane if (b − a) · n is (positive/zero/negative). The signed distance of a point b from the hyperplane x · n = a · n is

((b − a) · n) / |n|.
Figure 2.6: There are two ways of labeling the axes in two dimensions.
There are also two ways of labeling the axes in a three-dimensional rectangular coordinate system.
These are called right-handed and left-handed coordinate systems. See Figure 2.7. Any other
labelling of the axes could be rotated into one of these configurations. The right-handed system
is the one that is used by default. If you put your right thumb in the direction of the z axis in a
right-handed coordinate system, then your fingers curl in the direction from the x axis to the y axis.
Figure 2.7: Right-handed and left-handed coordinate systems.
The Cross Product. The cross product or vector product of two vectors is defined,

a × b ≡ |a||b| sin θ n,

where θ is the angle from a to b and n is a unit vector that is orthogonal to a and b and in the direction such that the ordered triple of vectors a, b and n form a right-handed system.
You can visualize the direction of a b by applying the right hand rule. Curl the fingers of your
right hand in the direction from a to b. Your thumb points in the direction of a b. Warning:
Unless you are a lefty, get in the habit of putting down your pencil before applying the right hand
rule.
The dot and cross products behave a little differently. First note that unlike the dot product, the cross product is not commutative. The magnitudes of a × b and b × a are the same, but their directions are opposite. (See Figure 2.8.) Let

a × b = |a||b| sin θ n and b × a = |b||a| sin θ m.
Figure 2.8: The cross products a × b and b × a.
The angle from a to b is the same as the angle from b to a. Since {a, b, n} and {b, a, m} are right-handed systems, m points in the opposite direction as n. Since a × b = −b × a we say that the cross product is anti-commutative.
The magnitude of a × b, |a||b| sin θ, is the area of the parallelogram defined by the two vectors; the area of the triangle they define is half that. (See Figure 2.9.)
Figure 2.9: The parallelogram and the triangle defined by two vectors.
From the definition of the cross product, one can derive the following properties:

a × b = −b × a, anti-commutative.
a × (b + c) = a × b + a × c, distributive.
i × i = j × j = k × k = 0.
i × j = k, j × k = i, k × i = j.

a × b = (a₂b₃ − a₃b₂)i + (a₃b₁ − a₁b₃)j + (a₁b₂ − a₂b₁)k =
    | i  j  k  |
    | a₁ a₂ a₃ |
    | b₁ b₂ b₃ |,

the cross product in terms of rectangular components.
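numpy's cross routine implements this component formula; a short sketch checking it against the properties above (an added illustration):

    import numpy as np

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([4.0, 5.0, 6.0])
    # The component formula for the cross product.
    by_hand = np.array([a[1]*b[2] - a[2]*b[1],
                        a[2]*b[0] - a[0]*b[2],
                        a[0]*b[1] - a[1]*b[0]])
    print(np.allclose(np.cross(a, b), by_hand))          # True
    print(np.allclose(np.cross(a, b), -np.cross(b, a)))  # anti-commutativity: True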
Figure 2.10: The parallelopiped defined by three vectors.
Scalar Triple Product. Consider the volume of the parallelopiped defined by three vectors. (See Figure 2.10.) The area of the base is |b||c| sin θ, where θ is the angle between b and c. The height is |a| cos φ, where φ is the angle between b × c and a. Thus the volume of the parallelopiped is |a||b||c| sin θ cos φ. Note that

|a · (b × c)| = |a · (|b||c| sin θ n)| = | |a||b||c| sin θ cos φ |.

Thus |a · (b × c)| is the volume of the parallelopiped. a · (b × c) is the volume or the negative of the volume depending on whether {a, b, c} is a right or left-handed system.
Note that parentheses are unnecessary in a · b × c. There is only one way to interpret the expression: if you did the dot product first, you would be left with the cross product of a scalar and a vector, which is meaningless. a · b × c is called the scalar triple product.
Plane Defined by Three Points. Three points which are not collinear define a plane. Consider a plane that passes through the three points a, b and c. One way of expressing that the point x lies in the plane is that the vectors x − a, b − a and c − a are coplanar. (See Figure 2.11.) If the vectors are coplanar, then the parallelopiped defined by these three vectors will have zero volume. We can express this in an equation using the scalar triple product,

(x − a) · (b − a) × (c − a) = 0.
Figure 2.11: Three points define a plane.
2.2 Sets of Vectors in n Dimensions

Orthogonality. Consider two n-dimensional vectors x and y. The dot product generalizes to n dimensions as x · y = x₁y₁ + ⋯ + xₙyₙ. The vectors are orthogonal if x · y = 0. The norm of a vector is the length of the vector generalized to n dimensions.

‖x‖ = √(x · x)

Consider a set of vectors {x₁, x₂, . . . , xₘ}. The set is orthogonal if each pair of vectors in the set is orthogonal,

xᵢ · xⱼ = 0 if i ≠ j.

If in addition each vector in the set has norm 1, then the set is orthonormal,

xᵢ · xⱼ = δᵢⱼ = 1 if i = j, 0 if i ≠ j.

Completeness. A set of n-dimensional vectors

{x₁, x₂, . . . , xₙ}

is complete if any n-dimensional vector can be written as a linear combination of the vectors in the set. That is, any vector y can be written

y = Σⁿᵢ₌₁ cᵢxᵢ.
2.3 Exercises
The Dot and Cross Product
Exercise 2.1
Prove the distributive law for the dot product,
a · (b + c) = a · b + a · c.
Exercise 2.2
Prove that
a · b = aᵢbᵢ ≡ a₁b₁ + ⋯ + aₙbₙ.
Exercise 2.3
What is the angle between the vectors i + j and i + 3j?
Exercise 2.4
Prove the distributive law for the cross product,
a × (b + c) = a × b + a × c.
Exercise 2.5
Show that
a × b = | i  j  k  |
        | a₁ a₂ a₃ |
        | b₁ b₂ b₃ |
Exercise 2.6
What is the area of the quadrilateral with vertices at (1, 1), (4, 2), (3, 7) and (2, 3)?
Exercise 2.7
What is the volume of the tetrahedron with vertices at (1, 1, 0), (3, 2, 1), (2, 4, 1) and (1, 2, 5)?
Exercise 2.8
What is the equation of the plane that passes through the points (1, 2, 3), (2, 3, 1) and (3, 1, 2)?
What is the distance from the point (2, 3, 5) to the plane?
2.4 Hints
The Dot and Cross Product
Hint 2.1
First prove the distributive law when the first vector is of unit length,

n · (b + c) = n · b + n · c.
Then all the quantities in the equation are projections onto the unit vector n and you can use
geometry.
Hint 2.2
First prove that the dot product of a rectangular unit vector with itself is one and the dot product
of two distinct rectangular unit vectors is zero. Then write a and b in rectangular components and
use the distributive law.
Hint 2.3
Use a · b = |a||b| cos θ.
Hint 2.4
First consider the case that both b and c are orthogonal to a. Prove the distributive law in this
case from geometric considerations.
Next consider two arbitrary vectors a and b. We can write b = b⊥ + b∥ where b⊥ is orthogonal to a and b∥ is parallel to a. Show that

a × b = a × b⊥.
Hint 2.5
Write the vectors in their rectangular components and use,
i × j = k, j × k = i, k × i = j,

and,

i × i = j × j = k × k = 0.
Hint 2.6
The quadrilateral is composed of two triangles. The area of a triangle defined by the two vectors a and b is (1/2)|a × b|.
Hint 2.7
Justify that the volume of a tetrahedron determined by three vectors is one sixth the volume of the parallelopiped determined by those three vectors. The volume of a parallelopiped determined by three vectors is the magnitude of the scalar triple product of the vectors: |a · b × c|.
Hint 2.8
The equation of a plane that is orthogonal to a and passes through the point b is a · x = a · b. The distance of a point c from the plane is

((c − b) · a) / |a|.
2.5 Solutions
The Dot and Cross Product
Solution 2.1
First we prove the distributive law when the first vector is of unit length, i.e.,

n · (b + c) = n · b + n · c. (2.1)

From Figure 2.12 we see that the projection of the vector b + c onto n is equal to the sum of the projections b · n and c · n.
Figure 2.12: The projection of b + c onto n is the sum of the projections of b and c.
Now we extend the result to the case when the first vector has arbitrary length. We define
a = |a|n and multiply Equation 2.1 by the scalar, |a|.
Solution 2.2
First note that

eᵢ · eᵢ = |eᵢ||eᵢ| cos(0) = 1.

Then note that the dot product of any two distinct rectangular unit vectors is zero because they are orthogonal. Now we write a and b in terms of their rectangular components and use the distributive law.

a · b = aᵢeᵢ · bⱼeⱼ
      = aᵢbⱼ eᵢ · eⱼ
      = aᵢbⱼ δᵢⱼ
      = aᵢbᵢ
Solution 2.3
Since a · b = |a||b| cos θ, we have

θ = arccos( (a · b) / (|a||b|) )

when a and b are nonzero.

θ = arccos( ((i + j) · (i + 3j)) / (|i + j||i + 3j|) ) = arccos( 4 / (√2 √10) ) = arccos( 2√5/5 ) ≈ 0.463648
Solution 2.4
First consider the case that both b and c are orthogonal to a. b + c is the diagonal of the parallelogram defined by b and c, (see Figure 2.13). Since a is orthogonal to each of these vectors, taking the cross product of a with these vectors has the effect of rotating the vectors through π/2 radians about a and multiplying their length by |a|. Note that a × (b + c) is the diagonal of the parallelogram defined by a × b and a × c. Thus we see that the distributive law holds when a is orthogonal to both b and c,

a × (b + c) = a × b + a × c.
Figure 2.13: The parallelogram defined by b and c, and the parallelogram defined by a × b and a × c.
Now consider two arbitrary vectors a and b. We can write b = b⊥ + b∥ where b⊥ is orthogonal to a and b∥ is parallel to a, (see Figure 2.14).
Figure 2.14: The vector b written as a sum of components orthogonal and parallel to a.
By the definition of the cross product,

a × b = |a||b| sin θ n.

Note that

|b⊥| = |b| sin θ,

and that a × b⊥ is a vector in the same direction as a × b. Thus we see that

a × b = |a||b| sin θ n
      = |a| (sin θ |b|) n
      = |a||b⊥| n = |a||b⊥| sin(π/2) n

a × b = a × b⊥.

Now we are prepared to prove the distributive law for arbitrary b and c.

a × (b + c) = a × (b⊥ + b∥ + c⊥ + c∥)
            = a × ((b + c)⊥ + (b + c)∥)
            = a × ((b + c)⊥)
            = a × b⊥ + a × c⊥
            = a × b + a × c

a × (b + c) = a × b + a × c
Solution 2.5
We know that

i × j = k, j × k = i, k × i = j,

and that

i × i = j × j = k × k = 0.

Now we write a and b in terms of their rectangular components and use the distributive law to expand the cross product.

a × b = (a₁i + a₂j + a₃k) × (b₁i + b₂j + b₃k)
      = a₁i × (b₁i + b₂j + b₃k) + a₂j × (b₁i + b₂j + b₃k) + a₃k × (b₁i + b₂j + b₃k)
      = a₁b₂k + a₁b₃(−j) + a₂b₁(−k) + a₂b₃i + a₃b₁j + a₃b₂(−i)
      = (a₂b₃ − a₃b₂)i − (a₁b₃ − a₃b₁)j + (a₁b₂ − a₂b₁)k

Next we evaluate the determinant.

| i  j  k  |       | a₂ a₃ |       | a₁ a₃ |       | a₁ a₂ |
| a₁ a₂ a₃ | = i | b₂ b₃ | − j | b₁ b₃ | + k | b₁ b₂ |
| b₁ b₂ b₃ |

             = (a₂b₃ − a₃b₂)i − (a₁b₃ − a₃b₁)j + (a₁b₂ − a₂b₁)k
Solution 2.6
The area of the quadrilateral is the area of two triangles. The first triangle is defined by the vector from (1, 1) to (4, 2) and the vector from (1, 1) to (2, 3). The second triangle is defined by the vector from (3, 7) to (4, 2) and the vector from (3, 7) to (2, 3). (See Figure 2.15.) The area of a triangle defined by the two vectors a and b is (1/2)|a × b|. The area of the quadrilateral is then,

(1/2)|(3i + j) × (i + 2j)| + (1/2)|(i − 5j) × (−i − 4j)| = (1/2)(5) + (1/2)(9) = 7.
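Since the arithmetic here is easy to get wrong, a short Python check of the two triangle areas (an added sketch, not part of the original solution):

    def cross2(u, v):
        # z component of the cross product of two plane vectors
        return u[0] * v[1] - u[1] * v[0]

    t1 = abs(cross2([3, 1], [1, 2])) / 2     # triangle at (1,1): 2.5
    t2 = abs(cross2([1, -5], [-1, -4])) / 2  # triangle at (3,7): 4.5
    print(t1 + t2)  # 7.0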
Solution 2.7
The tetrahedron is determined by the three vectors with tail at (1, 1, 0) and heads at (3, 2, 1), (2, 4, 1) and (1, 2, 5). These are ⟨2, 1, 1⟩, ⟨1, 3, 1⟩ and ⟨0, 1, 5⟩. The volume of the tetrahedron is one sixth the volume of the parallelopiped determined by these vectors. (This is because the volume of a pyramid is (1/3)(base)(height). The base of the tetrahedron is half the base of the parallelopiped and the heights are the same. (1/2)(1/3) = 1/6.) Thus the volume of a tetrahedron determined by three vectors is (1/6)|a · b × c|. The volume of the tetrahedron is

(1/6)|⟨2, 1, 1⟩ · ⟨1, 3, 1⟩ × ⟨0, 1, 5⟩| = (1/6)|⟨2, 1, 1⟩ · ⟨14, −5, 1⟩| = 4.
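The scalar triple product is likewise easy to verify by machine (an added sketch):

    import numpy as np

    a = np.array([2.0, 1.0, 1.0])  # (3,2,1) - (1,1,0)
    b = np.array([1.0, 3.0, 1.0])  # (2,4,1) - (1,1,0)
    c = np.array([0.0, 1.0, 5.0])  # (1,2,5) - (1,1,0)
    volume = abs(a @ np.cross(b, c)) / 6
    print(volume)  # 4.0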
Figure 2.15: The quadrilateral with vertices (1, 1), (4, 2), (3, 7) and (2, 3).
Solution 2.8
The two vectors with tails at (1, 2, 3) and heads at (2, 3, 1) and (3, 1, 2) are parallel to the plane. Taking the cross product of these two vectors gives us a vector that is orthogonal to the plane.

⟨1, 1, −2⟩ × ⟨2, −1, −1⟩ = ⟨−3, −3, −3⟩

We see that the plane is orthogonal to the vector ⟨1, 1, 1⟩ and passes through the point (1, 2, 3). The equation of the plane is

⟨1, 1, 1⟩ · ⟨x, y, z⟩ = ⟨1, 1, 1⟩ · ⟨1, 2, 3⟩,
x + y + z = 6.

Consider the vector with tail at (1, 2, 3) and head at (2, 3, 5). The magnitude of the dot product of this vector with the unit normal vector gives the distance from the plane.

|⟨1, 1, 2⟩ · ⟨1, 1, 1⟩| / |⟨1, 1, 1⟩| = 4/√3 = 4√3/3
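The whole solution can be checked in a few lines of Python (an illustrative addition):

    import numpy as np

    p1 = np.array([1.0, 2.0, 3.0])
    p2 = np.array([2.0, 3.0, 1.0])
    p3 = np.array([3.0, 1.0, 2.0])
    n = np.cross(p2 - p1, p3 - p1)  # (-3, -3, -3), parallel to (1, 1, 1)
    q = np.array([2.0, 3.0, 5.0])
    distance = abs((q - p1) @ n) / np.linalg.norm(n)
    print(distance, 4 / np.sqrt(3))  # both 2.3094...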
Part II
Calculus
Chapter 3
Differential Calculus
3.1 Limits of Functions

Definition of a Limit. If the value of the function y(x) gets arbitrarily close to ψ as x approaches the point ξ, then we say that the limit of the function as x approaches ξ is equal to ψ. This is written:

lim_{x→ξ} y(x) = ψ

Now we make the notion of "arbitrarily close" precise. For any ε > 0 there exists a δ > 0 such that |y(x) − ψ| < ε for all 0 < |x − ξ| < δ. That is, there is an interval surrounding the point x = ξ for which the function is within ε of ψ. See Figure 3.1. Note that the interval surrounding x = ξ is a deleted neighborhood, that is it does not contain the point x = ξ. Thus the value of the function at x = ξ need not be equal to ψ for the limit to exist. Indeed the function need not even be defined at x = ξ.
Figure 3.1: The δ neighborhood of x = ξ in which |y(x) − ψ| < ε.
To prove that a function has a limit at a point ξ we first bound |y(x) − ψ| in terms of δ for values of x satisfying 0 < |x − ξ| < δ. Denote this upper bound by u(δ). Then for an arbitrary ε > 0, we determine a δ > 0 such that the upper bound u(δ), and hence |y(x) − ψ|, is less than ε.

Example 3.1.1 We show that

lim_{x→1} x² = 1.

Consider any ε > 0. We need to show that there exists a δ > 0 such that |x² − 1| < ε for all |x − 1| < δ. First we obtain a bound on |x² − 1|.
Example 3.1.2 Recall that the value of the function y(ξ) need not be equal to lim_{x→ξ} y(x) for the limit to exist. We show an example of this. Consider the function

y(x) = 1 for x ∈ Z, 0 for x ∉ Z.
Left and Right Limits. With the notation lim_{x→ξ⁺} y(x) we denote the right limit of y(x). This is the limit as x approaches ξ from above. Mathematically: lim_{x→ξ⁺} y(x) = ψ exists if for any ε > 0 there exists a δ > 0 such that |y(x) − ψ| < ε for all 0 < x − ξ < δ. The left limit lim_{x→ξ⁻} y(x) is defined analogously.
Example 3.1.3 Consider the function sin x / |x|, defined for x ≠ 0. (See Figure 3.2.) The left and right limits exist as x approaches zero.

lim_{x→0⁺} sin x / |x| = 1,  lim_{x→0⁻} sin x / |x| = −1
Figure 3.2: Plot of sin(x)/|x|.
lim_{x→ξ} ( f(x)/g(x) ) = ( lim_{x→ξ} f(x) ) / ( lim_{x→ξ} g(x) ) if lim_{x→ξ} g(x) ≠ 0.
Example 3.1.4 We prove that if lim_{x→ξ} f(x) = φ and lim_{x→ξ} g(x) = γ exist then

lim_{x→ξ} (f(x)g(x)) = ( lim_{x→ξ} f(x) ) ( lim_{x→ξ} g(x) ).

Since the limit exists for f(x), we know that for all ε > 0 there exists δ > 0 such that |f(x) − φ| < ε for |x − ξ| < δ. Likewise for g(x). We seek to show that for all ε > 0 there exists δ > 0 such that |f(x)g(x) − φγ| < ε for |x − ξ| < δ. We proceed by writing |f(x)g(x) − φγ| in terms of |f(x) − φ| and |g(x) − γ|, which we know how to bound.

|f(x)g(x) − φγ| = |f(x)(g(x) − γ) + (f(x) − φ)γ|
                ≤ |f(x)||g(x) − γ| + |f(x) − φ||γ|

If we choose a δ such that |f(x)||g(x) − γ| < ε/2 and |f(x) − φ||γ| < ε/2 then we will have the desired result: |f(x)g(x) − φγ| < ε. Trying to ensure that |f(x)||g(x) − γ| < ε/2 is hard because of the |f(x)| factor. We will replace that factor with a constant. We want to write |f(x) − φ||γ| < ε/2 as |f(x) − φ| < ε/(2|γ|), but this is problematic for the case γ = 0. We fix these two problems and then proceed. We choose δ₁ such that |f(x) − φ| < 1 for |x − ξ| < δ₁, so that |f(x)| < |φ| + 1. This gives us the desired form. Next we choose δ₂ such that |g(x) − γ| < ε/(2(|φ| + 1)) for |x − ξ| < δ₂ and choose δ₃ such that |f(x) − φ| < ε/(2(|γ| + 1)) for |x − ξ| < δ₃. Let δ be the minimum of δ₁, δ₂ and δ₃.

|f(x)g(x) − φγ| ≤ (|φ| + 1)|g(x) − γ| + |f(x) − φ|(|γ| + 1) < ε/2 + ε/2, for |x − ξ| < δ

|f(x)g(x) − φγ| < ε, for |x − ξ| < δ

lim_{x→ξ} (f(x)g(x)) = ( lim_{x→ξ} f(x) ) ( lim_{x→ξ} g(x) ) = φγ.
Result 3.1.1 Definition of a Limit. The statement:

lim_{x→ξ} y(x) = ψ

means that y(x) gets arbitrarily close to ψ as x approaches ξ. For any ε > 0 there exists a δ > 0 such that |y(x) − ψ| < ε for all x in the neighborhood 0 < |x − ξ| < δ. The left and right limits,

lim_{x→ξ⁻} y(x) = ψ and lim_{x→ξ⁺} y(x) = ψ,

denote the limiting value as x approaches ξ respectively from below and above. The neighborhoods are respectively −δ < x − ξ < 0 and 0 < x − ξ < δ.

Properties of Limits. Let lim_{x→ξ} u(x) and lim_{x→ξ} v(x) exist.
Discontinuities. If a function is not continuous at a point but the limit exists there, the function has a removable discontinuity at that point: the function value can be redefined to remove the discontinuity. If both the left and right limit of a function at a point exist, but are not equal, then the function has a jump discontinuity at that point. If either the left or right limit of a function does not exist, then the function is said to have an infinite discontinuity at that point.

Example 3.2.1 sin x / x has a removable discontinuity at x = 0. The Heaviside function,

H(x) = 0 for x < 0, 1/2 for x = 0, 1 for x > 0,

has a jump discontinuity at x = 0. 1/x has an infinite discontinuity at x = 0. See Figure 3.3.
Figure 3.3: A removable discontinuity, a jump discontinuity and an infinite discontinuity.
Uniform Continuity. Consider a function f(x) that is continuous on an interval. This means that for any point ξ in the interval and any positive ε there exists a δ > 0 such that |f(x) − f(ξ)| < ε for all 0 < |x − ξ| < δ. In general, this value of δ depends on both ξ and ε. If δ can be chosen so it is a function of ε alone and independent of ξ then the function is said to be uniformly continuous on the interval. A sufficient condition for uniform continuity is that the function is continuous on a closed interval.
3.3 The Derivative
Consider a function y(x) on the interval (x . . . x + Δx) for some Δx > 0. We define the increment Δy = y(x + Δx) − y(x). The average rate of change, (average velocity), of the function on the interval is Δy/Δx. The average rate of change is the slope of the secant line that passes through the points (x, y(x)) and (x + Δx, y(x + Δx)). See Figure 3.5.
Figure 3.5: The increments Δx and Δy.
If the slope of the secant line has a limit as Δx approaches zero then we call this slope the derivative or instantaneous rate of change of the function at the point x. We denote the derivative by dy/dx, which is a nice notation as the derivative is the limit of Δy/Δx as Δx → 0.

dy/dx ≡ lim_{Δx→0} ( y(x + Δx) − y(x) ) / Δx.

Δx may approach zero from below or above. It is common to denote the derivative dy/dx by (d/dx)y, y′(x), y′ or Dy.
A function is said to be differentiable at a point if the derivative exists there. Note that differ-
entiability implies continuity, but not vice versa.
Consider the derivative of y(x) = x² at the point x = 1.

y′(1) ≡ lim_{Δx→0} ( y(1 + Δx) − y(1) ) / Δx
      = lim_{Δx→0} ( (1 + Δx)² − 1 ) / Δx
      = lim_{Δx→0} (2 + Δx)
      = 2
Figure 3.6 shows the secant lines approaching the tangent line as Δx approaches zero from above and below. For general x,

(d/dx) x² = lim_{Δx→0} ( (x + Δx)² − x² ) / Δx
          = lim_{Δx→0} (2x + Δx)
          = 2x.
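Numerically, the difference quotient approaches the derivative as Δx shrinks; a small Python sketch (an added illustration):

    # Difference quotient of y = x^2 at x = 1: approaches y'(1) = 2.
    y = lambda x: x**2
    for dx in (0.1, 0.01, 0.001):
        print((y(1 + dx) - y(1)) / dx)  # 2.1, 2.01, 2.001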
Figure 3.6: The secant lines approaching the tangent line from above and below.
Properties. Let u(x) and v(x) be differentiable. Let a and b be constants. Some fundamental properties of derivatives are:

(d/dx)(au + bv) = a du/dx + b dv/dx        Linearity
(d/dx)(uv) = (du/dx)v + u(dv/dx)           Product Rule
(d/dx)(u/v) = (v du/dx − u dv/dx)/v²       Quotient Rule
(d/dx)(uᵃ) = a uᵃ⁻¹ du/dx                  Power Rule
(d/dx)(u(v(x))) = (du/dv)(dv/dx) = u′(v(x)) v′(x)    Chain Rule
These properties are proved with the definition of the derivative. For example, the quotient rule:

(d/dx)(u/v) = lim_{Δx→0} ( u(x + Δx)/v(x + Δx) − u(x)/v(x) ) / Δx
= lim_{Δx→0} ( u(x + Δx)v(x) − u(x)v(x + Δx) ) / ( Δx v(x)v(x + Δx) )
= lim_{Δx→0} ( u(x + Δx)v(x) − u(x)v(x) − u(x)v(x + Δx) + u(x)v(x) ) / ( Δx v(x)v(x) )
= lim_{Δx→0} ( (u(x + Δx) − u(x))v(x) − u(x)(v(x + Δx) − v(x)) ) / ( Δx v²(x) )
= ( (lim_{Δx→0} (u(x + Δx) − u(x))/Δx) v(x) − u(x) (lim_{Δx→0} (v(x + Δx) − v(x))/Δx) ) / v²(x)
= ( v du/dx − u dv/dx ) / v²
Trigonometric Functions. Some derivatives of trigonometric and related functions are:

d/dx sin x = cos x            d/dx arcsin x = 1/(1 − x²)^{1/2}
d/dx cos x = −sin x           d/dx arccos x = −1/(1 − x²)^{1/2}
d/dx tan x = 1/cos² x         d/dx arctan x = 1/(1 + x²)
d/dx eˣ = eˣ                  d/dx ln x = 1/x
d/dx sinh x = cosh x          d/dx arcsinh x = 1/(x² + 1)^{1/2}
d/dx cosh x = sinh x          d/dx arccosh x = 1/(x² − 1)^{1/2}
d/dx tanh x = 1/cosh² x       d/dx arctanh x = 1/(1 − x²)
Inverse Functions. If we have a function y(x), we can consider x as a function of y, x(y). For example, if y(x) = 8x³ then x(y) = ∛y/2; if y(x) = (x + 2)/(x + 1) then x(y) = (2 − y)/(y − 1). The derivative of an inverse function is

(d/dy) x(y) = 1 / (dy/dx).
Example 3.3.5 The inverse function of y(x) = eˣ is x(y) = ln y. We can obtain the derivative of the logarithm from the derivative of the exponential. The derivative of the exponential is

dy/dx = eˣ.

Thus the derivative of the logarithm is

(d/dy) ln y = (d/dy) x(y) = 1/(dy/dx) = 1/eˣ = 1/y.
3.4 Implicit Differentiation

Example 3.4.1 Consider the implicit equation

x² − xy − y² = 1.

This implicitly defines y as a function of x. Differentiating with respect to x gives 2x − y − xy′ − 2yy′ = 0, so

y′ = (2x − y)/(x + 2y).

Differentiating again and simplifying with the original equation,

y″ = −10(x² − xy − y²)/(x + 2y)³
   = −10/(x + 2y)³.
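The differentiation can be reproduced symbolically; a sympy sketch (an added illustration, assuming sympy is available):

    import sympy as sp

    x = sp.Symbol('x')
    y = sp.Function('y')(x)
    eq = x**2 - x*y - y**2 - 1
    # Differentiate the implicit equation with respect to x and solve for y'.
    yp = sp.solve(sp.diff(eq, x), y.diff(x))[0]
    print(sp.simplify(yp))  # (2*x - y(x))/(x + 2*y(x))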
3.5 Maxima and Minima

Relative Extrema and Stationary Points. If f(x) is differentiable and f(ξ) is a relative extrema then x = ξ is a stationary point, f′(ξ) = 0. We can prove this using left and right limits. Assume that f(ξ) is a relative maxima. Then there is a neighborhood (ξ − δ . . . ξ + δ), δ > 0 for which f(x) ≤ f(ξ). Since f(x) is differentiable the derivative at x = ξ,

f′(ξ) = lim_{Δx→0} ( f(ξ + Δx) − f(ξ) ) / Δx,
exists. This in turn means that the left and right limits exist and are equal. Since f(x) ≤ f(ξ) for ξ − δ < x < ξ the left limit is non-negative,

f′(ξ) = lim_{Δx→0⁻} ( f(ξ + Δx) − f(ξ) ) / Δx ≥ 0.

Since f(x) ≤ f(ξ) for ξ < x < ξ + δ the right limit is non-positive,

f′(ξ) = lim_{Δx→0⁺} ( f(ξ + Δx) − f(ξ) ) / Δx ≤ 0.

Thus we have 0 ≤ f′(ξ) ≤ 0 which implies that f′(ξ) = 0.
It is not true that all stationary points are relative extrema. That is, f′(ξ) = 0 does not imply that x = ξ is an extrema. Consider the function f(x) = x³. x = 0 is a stationary point since f′(x) = 3x², f′(0) = 0. However, x = 0 is neither a relative maxima nor a relative minima.

It is also not true that all relative extrema are stationary points. Consider the function f(x) = |x|. The point x = 0 is a relative minima, but the derivative at that point is undefined.
Example 3.5.1 Consider y = x² and the point x = 0. The function is differentiable. The derivative, y′ = 2x, vanishes at x = 0. Since y′(x) is negative for x < 0 and positive for x > 0, the point x = 0 is a relative minima. See Figure 3.7.

Example 3.5.2 Consider y = cos x and the point x = 0. The function is differentiable. The derivative, y′ = −sin x, is positive for −π < x < 0 and negative for 0 < x < π. Since the sign of y′ goes from positive to negative, x = 0 is a relative maxima. See Figure 3.7.

Example 3.5.3 Consider y = x³ and the point x = 0. The function is differentiable. The derivative, y′ = 3x², is positive for x < 0 and positive for 0 < x. Since y′ is not identically zero and the sign of y′ does not change, x = 0 is not a relative extrema. See Figure 3.7.
Concavity. If the portion of a curve in some neighborhood of a point lies above the tangent line through that point, the curve is said to be concave upward. If it lies below the tangent it is concave downward. If a function is twice differentiable then f″(x) > 0 where it is concave upward and f″(x) < 0 where it is concave downward. Note that f″(x) > 0 is a sufficient, but not a necessary condition for a curve to be concave upward at a point. A curve may be concave upward at a point where the second derivative vanishes. A point where the curve changes concavity is called a point of inflection. At such a point the second derivative vanishes, f″(x) = 0. For twice continuously differentiable functions, f″(x) = 0 is a necessary but not a sufficient condition for an inflection point. The second derivative may vanish at places which are not inflection points. See Figure 3.8.
Second Derivative Test. Let f(x) be twice differentiable and let x = ξ be a stationary point, f′(ξ) = 0.

If f″(ξ) < 0 then the point is a relative maxima.
If f″(ξ) > 0 then the point is a relative minima.
If f″(ξ) = 0 then the test fails.
Example 3.5.4 Consider the function f(x) = cos x and the point x = 0. The derivatives of the function are f′(x) = −sin x, f″(x) = −cos x. The point x = 0 is a stationary point, f′(0) = −sin(0) = 0. Since the second derivative is negative there, f″(0) = −cos(0) = −1, the point is a relative maximum.
Example 3.5.5 Consider the function f(x) = x⁴ and the point x = 0. The derivatives of the function are f′(x) = 4x³, f″(x) = 12x². The point x = 0 is a stationary point. Since the second derivative also vanishes at that point the second derivative test fails. One must use the first derivative test to determine that x = 0 is a relative minimum.
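As a quick numerical illustration of the two examples above, here is a short Python sketch (not part of the original notes; the helper names and step sizes are our own choices) that estimates f′(0) and f″(0) with central differences:

    import math

    def diff(f, x, h=1e-5):
        """Central difference approximation of f'(x)."""
        return (f(x + h) - f(x - h)) / (2 * h)

    def diff2(f, x, h=1e-4):
        """Central difference approximation of f''(x)."""
        return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

    for f, name in [(math.cos, "cos x"), (lambda x: x**4, "x^4")]:
        print(f"{name}: f'(0) = {diff(f, 0.0):.2e}, f''(0) = {diff2(f, 0.0):.2e}")
    # cos x: f''(0) is about -1, so x = 0 is a relative maximum.
    # x^4:  f''(0) is numerically tiny, so the second derivative test fails.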
Theorem of the Mean. If f(x) is continuous in [a, b] and differentiable in (a, b) then there exists a point x = ξ such that
f′(ξ) = (f(b) − f(a))/(b − a).
Figure 3.9: Rolle's Theorem.
That is, there is a point where the instantaneous velocity is equal to the average velocity on the
interval.
We prove this theorem by applying Rolle's theorem. Consider the new function
g(x) = f(x) − f(a) − [(f(b) − f(a))/(b − a)] (x − a).
Note that g(a) = g(b) = 0, so it satisfies the conditions of Rolle's theorem. There is a point x = ξ such that g′(ξ) = 0. We differentiate the expression for g(x) and substitute in x = ξ to obtain the result.
g′(x) = f′(x) − (f(b) − f(a))/(b − a)
g′(ξ) = f′(ξ) − (f(b) − f(a))/(b − a) = 0
f′(ξ) = (f(b) − f(a))/(b − a)
Generalized Theorem of the Mean. If f(x) and g(x) are continuous in [a, b] and differentiable in (a, b), then there exists a point x = ξ such that
f′(ξ)/g′(ξ) = (f(b) − f(a))/(g(b) − g(a)).
We have assumed that g(a) ≠ g(b) so that the denominator does not vanish and that f′(x) and g′(x) are not simultaneously zero, which would produce an indeterminate form. Note that this theorem reduces to the regular theorem of the mean when g(x) = x. The proof of the theorem is similar to that for the theorem of the mean.
Taylor's Theorem of the Mean. If f(x) is n + 1 times continuously differentiable in (a, b) then there exists a point x = ξ ∈ (a, b) such that
f(b) = f(a) + (b − a)f′(a) + ((b − a)²/2!) f″(a) + ⋯ + ((b − a)ⁿ/n!) f⁽ⁿ⁾(a) + ((b − a)ⁿ⁺¹/(n + 1)!) f⁽ⁿ⁺¹⁾(ξ).
For the case n = 0, the formula is
f(b) = f(a) + (b − a)f′(ξ),
which is just a rearrangement of the terms in the theorem of the mean,
f′(ξ) = (f(b) − f(a))/(b − a).
Example 3.6.1 Consider the function f(x) = eˣ. We want a polynomial approximation of this function near the point x = 0. Since the derivative of eˣ is eˣ, the value of all the derivatives at x = 0 is f⁽ⁿ⁾(0) = e⁰ = 1. Taylor's theorem thus states that
eˣ = 1 + x + x²/2! + x³/3! + ⋯ + xⁿ/n! + (xⁿ⁺¹/(n + 1)!) e^ξ,
for some ξ ∈ (0, x). The first few polynomial approximations of the exponential about the point x = 0 are
f₁(x) = 1
f₂(x) = 1 + x
f₃(x) = 1 + x + x²/2
f₄(x) = 1 + x + x²/2 + x³/6
The four approximations are graphed in Figure 3.11.
Note that for the range of x we are looking at, the approximations become more accurate as the
number of terms increases.
Figure 3.11: Finite Taylor series approximations of eˣ on [−1, 1].
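A short Python sketch (our own verification, not part of the original notes) computes the n-term approximations fₙ and their worst error on [−1, 1]:

    import math

    def taylor_exp(x, n):
        """n-term Taylor polynomial of e^x about 0: sum of x^k / k! for k < n."""
        return sum(x**k / math.factorial(k) for k in range(n))

    xs = [i / 100 for i in range(-100, 101)]
    for n in range(1, 5):
        err = max(abs(math.exp(x) - taylor_exp(x, n)) for x in xs)
        print(f"f_{n}: max error on [-1, 1] = {err:.4f}")
    # The error shrinks as terms are added, matching the figure.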
Example 3.6.2 Consider the function f(x) = cos x. We want a polynomial approximation of this function near the point x = 0. The first few derivatives of f are
f(x) = cos x
f′(x) = −sin x
f″(x) = −cos x
f‴(x) = sin x
f⁽⁴⁾(x) = cos x
It's easy to pick out the pattern here,
f⁽ⁿ⁾(x) = (−1)^{n/2} cos x for even n, (−1)^{(n+1)/2} sin x for odd n.
Since cos(0) = 1 and sin(0) = 0, the n-term approximation of the cosine is
cos x = 1 − x²/2! + x⁴/4! − x⁶/6! + ⋯ + (−1)ⁿ⁻¹ x^{2(n−1)}/(2(n − 1))! + (−1)ⁿ (x^{2n}/(2n)!) cos ξ.
Here are graphs of the one, two, three and four term approximations.
Note that for the range of x we are looking at, the approximations become more accurate as the number of terms increases. Consider the ten term approximation of the cosine about x = 0,
cos x = 1 − x²/2! + x⁴/4! − ⋯ − x¹⁸/18! + (x²⁰/20!) cos ξ.
Note that for any value of ξ, |cos ξ| ≤ 1. Therefore the absolute value of the error term satisfies
|R| = |(x²⁰/20!) cos ξ| ≤ |x|²⁰/20!.
x²⁰/20! is plotted in Figure 3.13.
Note that the error is very small for x < 6, fairly small but non-negligible for x ≈ 7 and large for x > 8. The ten term approximation of the cosine behaves just as we would predict: the error is very small until it becomes non-negligible at x ≈ 7 and large at x ≈ 8.
Figure 3.13: A plot of x²⁰/20!.
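Evaluating the remainder bound |x|²⁰/20! at a few points confirms this description; a one-line Python check (ours, purely illustrative):

    import math

    for x in [6, 7, 8]:
        print(x, x**20 / math.factorial(20))
    # 6 -> about 1.5e-3 (very small), 7 -> about 0.033, 8 -> about 0.47 (large)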
Example 3.6.3 Consider the function f(x) = ln x. We want a polynomial approximation of this function near the point x = 1. The first few derivatives of f are
f(x) = ln x
f′(x) = 1/x
f″(x) = −1/x²
f‴(x) = 2/x³
f⁽⁴⁾(x) = −6/x⁴
The derivatives evaluated at x = 1 are
f(1) = 0,  f⁽ⁿ⁾(1) = (−1)ⁿ⁻¹(n − 1)!, for n ≥ 1.
By Taylor's theorem of the mean we have
ln x = (x − 1) − (x − 1)²/2 + (x − 1)³/3 − (x − 1)⁴/4 + ⋯ + (−1)ⁿ⁻¹ (x − 1)ⁿ/n + (−1)ⁿ (x − 1)ⁿ⁺¹/((n + 1) ξⁿ⁺¹).
Below are plots of the 2, 4, 10 and 50 term approximations.
Note that the approximation gets better on the interval (0, 2) and worse outside this interval as the number of terms increases. The Taylor series converges to ln x only on this interval.
47
2 2 2 2
1 1 1 1
-1 0.5 1 1.5 2 2.5 3 -1 0.5 1 1.5 2 2.5 3 -1 0.5 1 1.5 2 2.5 3 -1 0.5 1 1.5 2 2.5 3
-2 -2 -2 -2
-3 -3 -3 -3
-4 -4 -4 -4
-5 -5 -5 -5
-6 -6 -6 -6
0.5
-4 -2 2 4
-0.5
-1
We wish to approximate the derivative of the function on the grid points using only the value of the function on those discrete points. From the definition of the derivative, one is led to the formula
f′(x) ≈ [f(x + Δx) − f(x)]/Δx.    (3.2)
Taylor's theorem states that
f(x + Δx) = f(x) + Δx f′(x) + (Δx²/2) f″(ξ).
Substituting this expression into our formula for approximating the derivative we obtain
[f(x + Δx) − f(x)]/Δx = [f(x) + Δx f′(x) + (Δx²/2) f″(ξ) − f(x)]/Δx = f′(x) + (Δx/2) f″(ξ).
Thus we see that the error in our approximation of the first derivative is (Δx/2) f″(ξ). Since the error has a linear factor of Δx, we call this a first order accurate method. Equation 3.2 is called the forward difference scheme for calculating the first derivative. Figure 3.17 shows a plot of the value of this scheme for the function f(x) = sin x and Δx = 1/4. The first derivative of the function, f′(x) = cos x, is shown for comparison.
Another scheme for approximating the first derivative is the centered difference scheme,
f′(x) ≈ [f(x + Δx) − f(x − Δx)]/(2Δx).
Expanding the numerator using Taylor's theorem,
[f(x + Δx) − f(x − Δx)]/(2Δx)
= [f(x) + Δx f′(x) + (Δx²/2) f″(x) + (Δx³/6) f‴(ξ) − f(x) + Δx f′(x) − (Δx²/2) f″(x) + (Δx³/6) f‴(ψ)]/(2Δx)
= f′(x) + (Δx²/12)(f‴(ξ) + f‴(ψ)).
Figure 3.17: The forward difference scheme approximation of f′(x) for f(x) = sin x, Δx = 1/4.
The error in the approximation is quadratic in Δx. Therefore this is a second order accurate scheme. Plotting the value of this scheme against f′(x) = cos x for f(x) = sin x and Δx = 1/4, one sees that the centered difference scheme gives a better approximation of the derivative than the forward difference scheme.
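A short Python comparison of the two schemes (a verification sketch of ours, using the same f(x) = sin x and Δx = 1/4 as the plots):

    import math

    dx = 0.25
    def forward(f, x):   # first order accurate
        return (f(x + dx) - f(x)) / dx
    def centered(f, x):  # second order accurate
        return (f(x + dx) - f(x - dx)) / (2 * dx)

    for x in [0.5, 1.0, 2.0]:
        exact = math.cos(x)
        print(f"x={x}: forward err={forward(math.sin, x) - exact:+.4f}, "
              f"centered err={centered(math.sin, x) - exact:+.4f}")
    # The centered scheme's error, O(dx^2), is roughly an order of
    # magnitude smaller than the forward scheme's O(dx) error here.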
Other singularities require more analysis to diagnose. Consider the functions sin x/x, sin x/|x| and sin x/(1 − cos x) at the point x = 0. All three functions evaluate to 0/0 at that point, but have different kinds of singularities. The first has a removable discontinuity, the second has a finite discontinuity and the third has an infinite discontinuity. See Figure 3.19.

Figure 3.19: The functions sin x/x, sin x/|x| and sin x/(1 − cos x).

If the limit of a function exists at a point, the singularity there is removable: we can define the function at the point by its limit. If the left and right limits exist but are not equal, then the function has a finite discontinuity. If either the left or right limit does not exist then the function has an infinite discontinuity.
L'Hospital's Rule. Let f(x) and g(x) be differentiable and f(ξ) = g(ξ) = 0. Further, let g(x) be nonzero in a deleted neighborhood of x = ξ, (g(x) ≠ 0 for 0 < |x − ξ| < δ). Then
lim_{x→ξ} f(x)/g(x) = lim_{x→ξ} f′(x)/g′(x).
To prove this, we note that f(ξ) = g(ξ) = 0 and apply the generalized theorem of the mean. Note that
f(x)/g(x) = [f(x) − f(ξ)]/[g(x) − g(ξ)] = f′(ψ)/g′(ψ)
for some ψ between ξ and x. Thus
lim_{x→ξ} f(x)/g(x) = lim_{ψ→ξ} f′(ψ)/g′(ψ) = lim_{x→ξ} f′(x)/g′(x).
lim_{x→0} sin x/x = lim_{x→0} cos x/1 = 1
Thus sin x/x has a removable discontinuity at x = 0.
lim_{x→0⁺} sin x/|x| = lim_{x→0⁺} sin x/x = 1
lim_{x→0⁻} sin x/|x| = lim_{x→0⁻} sin x/(−x) = −1
Thus sin x/|x| has a finite discontinuity at x = 0.
lim_{x→0} sin x/(1 − cos x) = lim_{x→0} cos x/sin x = 1/0 = ∞
Thus sin x/(1 − cos x) has an infinite discontinuity at x = 0.
Example 3.7.2 Let a and d be nonzero.
lim_{x→∞} (ax² + bx + c)/(dx² + ex + f) = lim_{x→∞} (2ax + b)/(2dx + e)
= lim_{x→∞} 2a/2d
= a/d
Example 3.7.3 Consider
lim_{x→0} (cos x − 1)/(x sin x).
This limit is an indeterminate of the form 0/0. Applying L'Hospital's rule we see that the limit is equal to
lim_{x→0} −sin x/(x cos x + sin x).
This limit is again an indeterminate of the form 0/0. We apply L'Hospital's rule again.
lim_{x→0} −cos x/(−x sin x + 2 cos x) = −1/2
Thus the value of the original limit is −1/2. We could also obtain this result by expanding the functions in Taylor series.
lim_{x→0} (cos x − 1)/(x sin x) = lim_{x→0} [(1 − x²/2 + x⁴/24 − ⋯) − 1]/[x(x − x³/6 + x⁵/120 − ⋯)]
= lim_{x→0} (−x²/2 + x⁴/24 − ⋯)/(x² − x⁴/6 + x⁶/120 − ⋯)
= lim_{x→0} (−1/2 + x²/24 − ⋯)/(1 − x²/6 + x⁴/120 − ⋯)
= −1/2
We can apply L'Hospital's Rule to the indeterminate forms 0 · ∞ and ∞ − ∞ by rewriting the expression in a different form, (perhaps putting the expression over a common denominator). If at first you don't succeed, try, try again. You may have to apply L'Hospital's rule several times to evaluate a limit.
Example 3.7.4
lim_{x→0} (1/x − cot x) = lim_{x→0} (sin x − x cos x)/(x sin x)
= lim_{x→0} (cos x − cos x + x sin x)/(sin x + x cos x)
= lim_{x→0} x sin x/(sin x + x cos x)
= lim_{x→0} (sin x + x cos x)/(cos x + cos x − x sin x)
= 0
You can apply L'Hospital's rule to the indeterminate forms 1^∞, 0⁰ or ∞⁰ by taking the logarithm of the expression.
Example 3.7.5 Consider the limit
lim_{x→0} xˣ,
which gives us the indeterminate form 0⁰. The logarithm of the expression is
ln(xˣ) = x ln x.
As x → 0 we now have the indeterminate form 0 · (−∞). By rewriting the expression, we can apply L'Hospital's rule.
lim_{x→0} ln x/(1/x) = lim_{x→0} (1/x)/(−1/x²)
= lim_{x→0} (−x)
= 0
Thus the original limit is lim_{x→0} xˣ = e⁰ = 1.
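As a quick check of the preceding indeterminate-form examples (a verification sketch of ours, not part of the original notes), sympy's limit() reproduces the values:

    import sympy as sp

    x = sp.symbols('x')
    print(sp.limit((sp.cos(x) - 1) / (x * sp.sin(x)), x, 0))  # -1/2
    print(sp.limit(1 / x - sp.cot(x), x, 0))                  # 0
    print(sp.limit(x**x, x, 0, '+'))                          # 1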
3.8 Exercises
3.8.1 Limits of Functions
Exercise 3.1
Does
lim_{x→0} sin(1/x)
exist?
Exercise 3.2
Does
lim_{x→0} x sin(1/x)
exist?
Exercise 3.3
Is the function sin(1/x) continuous in the open interval (0, 1)? Can it be defined at x = 0 so that sin(1/x) is continuous on [0, 1]?
Exercise 3.4
Is the function sin(1/x) uniformly continuous in the open interval (0, 1)?
Exercise 3.5
Are the functions √x and 1/x uniformly continuous on the interval (0, 1)?
Exercise 3.6
Prove that a function which is continuous on a closed interval is uniformly continuous on that
interval.
Exercise 3.7
Prove or disprove each of the following.
1. If lim_{n→∞} aₙ = L then lim_{n→∞} aₙ² = L².
2. If lim_{n→∞} aₙ² = L² then lim_{n→∞} aₙ = L.
3. If aₙ > 0 for all n > 200 and lim_{n→∞} aₙ = L, then L > 0.
Exercise 3.8
Prove the following identities using the definition of differentiation.
1. d/dx (xⁿ) = n xⁿ⁻¹
2. d/dx (f(x)g(x)) = f′(x)g(x) + f(x)g′(x)
3. d/dx (sin x) = cos x. (You'll need to use some trig identities.)
4. d/dx (f(g(x))) = f′(g(x)) g′(x)
Exercise 3.9
Use the definition of differentiation to determine if the following functions are differentiable at x = 0.
1. f(x) = x|x|
2. f(x) = √(1 + |x|)
Exercise 3.16
Prove the generalized theorem of the mean: if f(x) and g(x) are continuous in [a, b] and differentiable in (a, b), then there exists a point x = ξ such that
f′(ξ)/g′(ξ) = (f(b) − f(a))/(g(b) − g(a)).
Assume that g(a) ≠ g(b) so that the denominator does not vanish and that f′(x) and g′(x) are not simultaneously zero, which would produce an indeterminate form.
Exercise 3.19
The formulas [f(x + Δx) − f(x)]/Δx and [f(x + Δx) − f(x − Δx)]/(2Δx) are first and second order accurate schemes for approximating the first derivative f′(x). Find a couple other schemes that have successively higher orders of accuracy. Would these higher order schemes actually give a better approximation of f′(x)? Remember that Δx is small, but not infinitesimal.
Exercise 3.20
Evaluate the following limits.
a. lim_{x→0} (x − sin x)/x³
b. lim_{x→0} (csc x − 1/x)
c. lim_{x→+∞} (1 + 1/x)ˣ
d. lim_{x→0} (csc² x − 1/x²). (First evaluate using L'Hospital's rule then using a Taylor series expansion. You will find that the latter method is more convenient.)
3.9 Hints
Hint 3.1
Apply the ε, δ definition of a limit.
Hint 3.2
Set y = 1/x. Consider y → ∞.
Hint 3.3
The composition of continuous functions is continuous. Apply the definition of continuity and look
at the point x = 0.
Hint 3.4
Note that for x₁ = 1/((n − 1/2)π) and x₂ = 1/((n + 1/2)π) where n ∈ ℤ we have |sin(1/x₁) − sin(1/x₂)| = 2.
Hint 3.5
Note that the function √(x + δ) − √x is a decreasing function of x and an increasing function of δ for positive x and δ. Bound this function for fixed δ.
Consider any positive δ and ε. For what values of x is
1/x − 1/(x + δ) > ε?
Hint 3.6
Let the function f(x) be continuous on a closed interval. Consider the function
e(x, δ) = max_{|ξ−x|≤δ} |f(ξ) − f(x)|.
Hint 3.7
CONTINUE
Hint 3.8
a. Newton's binomial formula is
(a + b)ⁿ = Σ_{k=0}^{n} (n choose k) aⁿ⁻ᵏ bᵏ = aⁿ + n aⁿ⁻¹ b + (n(n − 1)/2) aⁿ⁻² b² + ⋯ + n a bⁿ⁻¹ + bⁿ.
b. Note that
d/dx (f(x)g(x)) = lim_{Δx→0} [f(x + Δx)g(x + Δx) − f(x)g(x)]/Δx
and
g(x)f′(x) + f(x)g′(x) = g(x) lim_{Δx→0} [f(x + Δx) − f(x)]/Δx + f(x) lim_{Δx→0} [g(x + Δx) − g(x)]/Δx.
Fill in the blank.
c. First prove that
lim_{θ→0} sin θ/θ = 1
and
lim_{θ→0} (cos θ − 1)/θ = 0.
d. Let u = g(x). Consider a nonzero increment Δx, which induces the increments Δu and Δf. By definition,
Δf = f(u + Δu) − f(u),  Δu = g(x + Δx) − g(x),
and Δf, Δu → 0 as Δx → 0. If Δu ≠ 0 then we have
ε = Δf/Δu − df/du → 0 as Δu → 0.
If Δu = 0 for some values of Δx then Δf also vanishes and we define ε = 0 for these values. In either case,
Δf = (df/du) Δu + ε Δu.
Continue from here.
Hint 3.9
Hint 3.10
a. Use the product rule and the chain rule.
b. Use the chain rule.
c. Use the quotient rule and the chain rule.
d. Use the identity ab = eb ln a .
e. For x > 0, the expression is x sin x; for x < 0, the expression is (x) sin(x) = x sin x. Do
both cases.
Hint 3.11
Use that x′(y) = 1/y′(x) and the identities cos x = (1 − sin² x)^{1/2} and cos(arctan x) = 1/(1 + x²)^{1/2}.
Hint 3.12
Differentiating the equation
x² + [y(x)]² = 1
yields
2x + 2y(x)y′(x) = 0.
Solve this equation for y′(x) and write y(x) in terms of x.
Hint 3.13
Differentiate the equation and solve for y′(x) in terms of x and y(x). Differentiate the expression for y′(x) to obtain y″(x). You'll use that
x² − xy(x) + [y(x)]² = 3.
Hint 3.14
a. Use the second derivative test.
b. The function is not differentiable at the point x = 2 so you can't use a derivative test at that point.
Hint 3.15
Let r be the radius and h the height of the cylinder. The volume of the cup is πr²h = 64. The radius and height are related by h = 64/(πr²). The surface area of the cup is f(r) = πr² + 2πrh = πr² + 128/r.
Use the second derivative test to find the minimum of f(r).
Hint 3.16
The proof is analogous to the proof of the theorem of the mean.
Hint 3.17
The first few terms in the Taylor series of sin(x) about x = 0 are
sin(x) = x − x³/6 + x⁵/120 − x⁷/5040 + x⁹/362880 − ⋯.
When determining the error, use the fact that |cos x₀| ≤ 1 and |xⁿ| ≤ 1 for x ∈ [−1, 1].
Hint 3.18
The terms in the approximation have the Taylor series,
f(x + Δx) = f(x) + Δx f′(x) + (Δx²/2) f″(x) + (Δx³/6) f‴(x) + (Δx⁴/24) f⁽⁴⁾(x₁),
f(x − Δx) = f(x) − Δx f′(x) + (Δx²/2) f″(x) − (Δx³/6) f‴(x) + (Δx⁴/24) f⁽⁴⁾(x₂),
where x ≤ x₁ ≤ x + Δx and x − Δx ≤ x₂ ≤ x.
Hint 3.19
Hint 3.20
a. Apply LHospitals rule three times.
b. You can write the expression as
(x − sin x)/(x sin x).
c. Find the limit of the logarithm of the expression.
Hint 3.21
To evaluate the limits use the identity ab = eb ln a and then apply LHospitals rule.
3.10 Solutions
Solution 3.1
Note that in any open neighborhood of zero, (−δ, δ), the function sin(1/x) takes on all values in the interval [−1, 1]. Thus if we choose a positive ε such that ε < 1 then there is no value of η for which |sin(1/x) − η| < ε for all x ∈ (−δ, δ). Thus the limit does not exist.
Solution 3.2
We make the change of variables y = 1/x and consider y → ∞. We use that sin(y) is bounded.
lim_{x→0} x sin(1/x) = lim_{y→∞} (1/y) sin(y) = 0
Solution 3.3
Since 1/x is continuous in the interval (0, 1) and the function sin(x) is continuous everywhere, the composition sin(1/x) is continuous in the interval (0, 1).
Since lim_{x→0} sin(1/x) does not exist, there is no way of defining sin(1/x) at x = 0 to produce a function that is continuous in [0, 1].
Solution 3.4
1 1
Note that for x1 = (n1/2) and x2 = (n+1/2) where n Z we have | sin(1/x1 ) sin(1/x2 )| = 2.
Thus for any 0 < < 2 there is no value of > 0 such that | sin(1/x1 ) sin(1/x2 )| < for all
x1 , x2 (0, 1) and |x1 x2 | < . Thus sin(1/x) is not uniformly continuous in the open interval
(0, 1).
Solution 3.5
First consider the function √x. Note that the function √(x + δ) − √x is a decreasing function of x and an increasing function of δ for positive x and δ. Thus for any fixed δ, the maximum value of √(x + δ) − √x is bounded by √δ. Therefore on the interval (0, 1), a sufficient condition for |√x − √ξ| < ε is |x − ξ| < ε². The function √x is uniformly continuous on the interval (0, 1).
Consider any positive δ and ε. Note that
1/x − 1/(x + δ) > ε
for
x < (1/2)(√(δ² + 4δ/ε) − δ).
Thus there is no value of δ such that
|1/x − 1/ξ| < ε
for all |x − ξ| < δ. The function 1/x is not uniformly continuous on the interval (0, 1).
Solution 3.6
Let the function f(x) be continuous on a closed interval. Consider the function
e(x, δ) = max_{|ξ−x|≤δ} |f(ξ) − f(x)|.
Since f(x) is continuous, e(x, δ) is a continuous function of x on the same closed interval. Since continuous functions on closed intervals are bounded, there is a continuous, increasing function ε(δ) satisfying
e(x, δ) ≤ ε(δ),
for all x in the closed interval. Since ε(δ) is continuous and increasing, it has an inverse δ(ε). Now note that |f(x) − f(ξ)| < ε for all x and ξ in the closed interval satisfying |x − ξ| < δ(ε). Thus the function is uniformly continuous in the closed interval.
Solution 3.7
1. The statement
lim_{n→∞} aₙ = L
is equivalent to
∀ε > 0, ∃N s.t. n > N ⟹ |aₙ − L| < ε.
We want to show that lim_{n→∞} aₙ² = L². Given any μ > 0, choose
ε = √(L² + μ) − |L|,
so that ε(|2L| + ε) = μ. There exists N such that n > N implies |aₙ − L| < ε, and then
|aₙ² − L²| = |aₙ − L| |aₙ + L| ≤ |aₙ − L|(|2L| + |aₙ − L|) < ε(|2L| + ε) = μ.
Therefore
∀μ > 0, ∃N s.t. n > N ⟹ |aₙ² − L²| < μ.
We conclude that lim_{n→∞} aₙ² = L².
2. lim_{n→∞} aₙ² = L² does not imply that lim_{n→∞} aₙ = L. Consider aₙ = −1. In this case lim_{n→∞} aₙ² = 1 and lim_{n→∞} aₙ = −1.
3. If aₙ > 0 for all n > 200, and lim_{n→∞} aₙ = L, then L is not necessarily positive. Consider aₙ = 1/n, which satisfies the two constraints.
lim_{n→∞} 1/n = 0
Solution 3.8
a.
d/dx (xⁿ) = lim_{Δx→0} [(x + Δx)ⁿ − xⁿ]/Δx
= lim_{Δx→0} [xⁿ + n xⁿ⁻¹ Δx + (n(n − 1)/2) xⁿ⁻² Δx² + ⋯ + Δxⁿ − xⁿ]/Δx
= lim_{Δx→0} [n xⁿ⁻¹ + (n(n − 1)/2) xⁿ⁻² Δx + ⋯ + Δxⁿ⁻¹]
= n xⁿ⁻¹
d/dx (xⁿ) = n xⁿ⁻¹
b.
d/dx (f(x)g(x)) = lim_{Δx→0} [f(x + Δx)g(x + Δx) − f(x)g(x)]/Δx
= lim_{Δx→0} ([f(x + Δx)g(x + Δx) − f(x)g(x + Δx)] + [f(x)g(x + Δx) − f(x)g(x)])/Δx
= lim_{Δx→0} [g(x + Δx)] lim_{Δx→0} [f(x + Δx) − f(x)]/Δx + f(x) lim_{Δx→0} [g(x + Δx) − g(x)]/Δx
= g(x)f′(x) + f(x)g′(x)
d/dx (f(x)g(x)) = f(x)g′(x) + f′(x)g(x)
c. Consider a right triangle with hypotenuse of length 1 in the first quadrant of the plane. Label the vertices A, B, C, in clockwise order, starting with the vertex at the origin. The angle of A is θ. The length of a circular arc of radius cos θ that connects C to the hypotenuse is θ cos θ. The length of the side BC is sin θ. The length of a circular arc of radius 1 that connects B to the x axis is θ. (See Figure 3.20.)

Figure 3.20: A right triangle and two circular arcs.

Considering the length of these three curves gives us the inequality
θ cos θ ≤ sin θ ≤ θ.
Dividing by θ,
cos θ ≤ sin θ/θ ≤ 1.
Taking the limit as θ → 0 gives us
lim_{θ→0} sin θ/θ = 1.
We will also need the limit
lim_{θ→0} (cos θ − 1)/θ = lim_{θ→0} −sin² θ/(θ(cos θ + 1))
= −lim_{θ→0} (sin θ/θ) lim_{θ→0} sin θ/(cos θ + 1)
= −(1)(0/2)
= 0.
Now we can evaluate the derivative of sin x.
d/dx (sin x) = lim_{Δx→0} [sin(x + Δx) − sin x]/Δx
= lim_{Δx→0} [sin x cos Δx + cos x sin Δx − sin x]/Δx
= sin x lim_{Δx→0} (cos Δx − 1)/Δx + cos x lim_{Δx→0} sin Δx/Δx
= cos x
d/dx (sin x) = cos x
d. Let u = g(x). Consider a nonzero increment Δx, which induces the increments Δu and Δf. By definition,
Δf = f(u + Δu) − f(u),  Δu = g(x + Δx) − g(x),
and Δf, Δu → 0 as Δx → 0. If Δu ≠ 0 then we define
ε = Δf/Δu − df/du → 0 as Δu → 0.
If Δu = 0 for some values of Δx then Δf also vanishes and we define ε = 0 for these values. In either case,
Δf = (df/du) Δu + ε Δu.
We divide this equation by Δx and take the limit as Δx → 0.
df/dx = lim_{Δx→0} Δf/Δx
= lim_{Δx→0} ((df/du)(Δu/Δx) + ε (Δu/Δx))
= (df/du) lim_{Δx→0} Δu/Δx + (lim_{Δx→0} ε)(lim_{Δx→0} Δu/Δx)
= (df/du)(du/dx) + (0)(du/dx)
= (df/du)(du/dx)
Thus we see that
d/dx (f(g(x))) = f′(g(x)) g′(x).
Solution 3.9
1.
f′(0) = lim_{ε→0} [ε|ε| − 0]/ε = lim_{ε→0} |ε| = 0
The function is differentiable at x = 0.
2.
f′(0) = lim_{ε→0} [√(1 + |ε|) − 1]/ε
= lim_{ε→0} (1/2)(1 + |ε|)^{−1/2} sign(ε)/1
= lim_{ε→0} (1/2) sign(ε)
Since the limit does not exist, the function is not differentiable at x = 0.
Solution 3.10
a.
d/dx [x sin(cos x)] = (d/dx [x]) sin(cos x) + x (d/dx [sin(cos x)])
= sin(cos x) + x cos(cos x) (d/dx [cos x])
= sin(cos x) − x cos(cos x) sin x
d/dx [x sin(cos x)] = sin(cos x) − x cos(cos x) sin x
b.
d/dx [f(cos(g(x)))] = f′(cos(g(x))) (d/dx [cos(g(x))])
= −f′(cos(g(x))) sin(g(x)) (d/dx [g(x)])
= −f′(cos(g(x))) sin(g(x)) g′(x)
d/dx [f(cos(g(x)))] = −f′(cos(g(x))) sin(g(x)) g′(x)
c.
d/dx [1/f(ln x)] = −(d/dx [f(ln x)])/[f(ln x)]²
= −f′(ln x)(d/dx [ln x])/[f(ln x)]²
= −f′(ln x)/(x [f(ln x)]²)
d/dx [1/f(ln x)] = −f′(ln x)/(x [f(ln x)]²)
d. First we write the expression in terms of exponentials and logarithms, x^(xˣ) = exp(exp(x ln x) ln x). Then we differentiate using the chain rule and the product rule.
d/dx exp(exp(x ln x) ln x) = exp(exp(x ln x) ln x) d/dx (exp(x ln x) ln x)
= x^(xˣ) (exp(x ln x) (d/dx (x ln x)) ln x + exp(x ln x)(1/x))
= x^(xˣ) (xˣ (ln x + 1) ln x + x⁻¹ xˣ)
= x^(xˣ + x) (x⁻¹ + ln x + ln² x)
d/dx x^(xˣ) = x^(xˣ + x) (x⁻¹ + ln x + ln² x)
e. For x > 0, the expression is x sin x; for x < 0, the expression is (−x) sin(−x) = x sin x. Thus we see that
|x| sin|x| = x sin x.
The first derivative of this is
sin x + x cos x.
d/dx (|x| sin|x|) = sin x + x cos x
Solution 3.11
Let y(x) = sin x. Then y′(x) = cos x.
d/dy arcsin y = 1/y′(x)
= 1/cos x
= 1/(1 − sin² x)^{1/2}
= 1/(1 − y²)^{1/2}
d/dx arcsin x = 1/(1 − x²)^{1/2}
Let y(x) = tan x. Then y′(x) = 1/cos² x.
d/dy arctan y = 1/y′(x)
= cos² x
= cos²(arctan y)
= (1/(1 + y²)^{1/2})²
= 1/(1 + y²)
d/dx arctan x = 1/(1 + x²)
Solution 3.12
Differentiating the equation
x² + [y(x)]² = 1
yields
2x + 2y(x)y′(x) = 0.
We can solve this equation for y′(x).
y′(x) = −x/y(x)
To find y′(1/2) we need to find y(x) in terms of x.
y(x) = ±√(1 − x²)
Thus y′(x) is
y′(x) = ∓x/√(1 − x²).
y′(1/2) can have the two values:
y′(1/2) = ∓1/√3.
Solution 3.13
Differentiating the equation
x² − xy(x) + [y(x)]² = 3
yields
2x − y(x) − xy′(x) + 2y(x)y′(x) = 0.
Solving this equation for y′(x),
y′(x) = (y(x) − 2x)/(2y(x) − x).
Now we differentiate y′(x) to get y″(x).
y″(x) = 3 (x y′(x) − y(x))/(2y(x) − x)²,
y″(x) = 3 (x (y(x) − 2x)/(2y(x) − x) − y(x))/(2y(x) − x)²,
y″(x) = 3 (x(y(x) − 2x) − y(x)(2y(x) − x))/(2y(x) − x)³,
y″(x) = −6 (x² − xy(x) + [y(x)]²)/(2y(x) − x)³,
y″(x) = −18/(2y(x) − x)³.
Solution 3.15
We minimize the surface area of the cup, f(r) = πr² + 128/r.
f′(r) = 2πr − 128/r² = 0,
2πr³ − 128 = 0,
r = 4/∛π.
The second derivative of the surface area is f″(r) = 2π + 256/r³. Since f″(4/∛π) = 6π, r = 4/∛π is a local minimum of f(r). Since this is the only critical point for r > 0, it must be a global minimum.
The cup has a radius of 4/∛π cm and a height of 4/∛π.
Solution 3.16
We define the function
h(x) = f(x) − f(a) − [(f(b) − f(a))/(g(b) − g(a))](g(x) − g(a)).
Note that h(x) is differentiable and that h(a) = h(b) = 0. Thus h(x) satisfies the conditions of Rolle's theorem and there exists a point ξ ∈ (a, b) such that
h′(ξ) = f′(ξ) − [(f(b) − f(a))/(g(b) − g(a))] g′(ξ) = 0,
f′(ξ)/g′(ξ) = (f(b) − f(a))/(g(b) − g(a)).
Solution 3.17
The first few terms in the Taylor series of sin(x) about x = 0 are
sin(x) = x − x³/6 + x⁵/120 − x⁷/5040 + x⁹/362880 − ⋯.
The seventh derivative of sin x is −cos x. Thus we have that
sin(x) = x − x³/6 + x⁵/120 − (cos x₀/5040) x⁷,
so for x ∈ [−1, 1] the approximation
sin x ≈ x − x³/6 + x⁵/120
has a maximum error of 1/5040 ≈ 0.000198. Using this polynomial to approximate sin(1),
1 − 1³/6 + 1⁵/120 ≈ 0.841667.
For comparison, sin(1) ≈ 0.841471.
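A one-line Python check of this approximation and its error bound (ours, purely a verification sketch):

    import math

    approx = 1 - 1/6 + 1/120
    print(approx, math.sin(1), abs(approx - math.sin(1)))
    # error is about 1.96e-4, below the bound 1/5040 = 1.98e-4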
Solution 3.18
Expanding the terms in the approximation in Taylor series,
f(x + Δx) = f(x) + Δx f′(x) + (Δx²/2) f″(x) + (Δx³/6) f‴(x) + (Δx⁴/24) f⁽⁴⁾(x₁),
f(x − Δx) = f(x) − Δx f′(x) + (Δx²/2) f″(x) − (Δx³/6) f‴(x) + (Δx⁴/24) f⁽⁴⁾(x₂),
where x ≤ x₁ ≤ x + Δx and x − Δx ≤ x₂ ≤ x. Substituting the expansions into the formula,
[f(x + Δx) − 2f(x) + f(x − Δx)]/Δx² = f″(x) + (Δx²/24)[f⁽⁴⁾(x₁) + f⁽⁴⁾(x₂)].
Thus the error in the approximation is
(Δx²/24)[f⁽⁴⁾(x₁) + f⁽⁴⁾(x₂)].
Solution 3.19
Solution 3.20
a.
lim_{x→0} (x − sin x)/x³ = lim_{x→0} (1 − cos x)/(3x²)
= lim_{x→0} sin x/(6x)
= lim_{x→0} cos x/6
= 1/6
lim_{x→0} (x − sin x)/x³ = 1/6
b.
lim_{x→0} (csc x − 1/x) = lim_{x→0} (1/sin x − 1/x)
= lim_{x→0} (x − sin x)/(x sin x)
= lim_{x→0} (1 − cos x)/(x cos x + sin x)
= lim_{x→0} sin x/(−x sin x + cos x + cos x)
= 0/2
= 0
lim_{x→0} (csc x − 1/x) = 0
c.
ln(lim_{x→+∞} (1 + 1/x)ˣ) = lim_{x→+∞} ln((1 + 1/x)ˣ)
= lim_{x→+∞} x ln(1 + 1/x)
= lim_{x→+∞} ln(1 + 1/x)/(1/x)
= lim_{x→+∞} [(1 + 1/x)⁻¹ (−1/x²)]/(−1/x²)
= lim_{x→+∞} (1 + 1/x)⁻¹
= 1
Thus we have
lim_{x→+∞} (1 + 1/x)ˣ = e.
d. It takes four successive applications of L'Hospital's rule to evaluate the limit.
lim_{x→0} (csc² x − 1/x²) = lim_{x→0} (x² − sin² x)/(x² sin² x)
= lim_{x→0} (2x − 2 cos x sin x)/(2x² cos x sin x + 2x sin² x)
= lim_{x→0} (2 − 2 cos² x + 2 sin² x)/(2x² cos² x − 2x² sin² x + 8x cos x sin x + 2 sin² x)
= lim_{x→0} (8 cos x sin x)/(12x cos² x + 12 cos x sin x − 8x² cos x sin x − 12x sin² x)
= lim_{x→0} (8 cos² x − 8 sin² x)/(24 cos² x − 8x² cos² x − 64x cos x sin x − 24 sin² x + 8x² sin² x)
= 1/3
It is easier to use a Taylor series expansion.
lim_{x→0} (csc² x − 1/x²) = lim_{x→0} (x² − sin² x)/(x² sin² x)
= lim_{x→0} [x² − (x − x³/6 + O(x⁵))²]/[x²(x + O(x³))²]
= lim_{x→0} [x² − (x² − x⁴/3 + O(x⁶))]/(x⁴ + O(x⁶))
= lim_{x→0} (1/3 + O(x²))
= 1/3
Solution 3.21
To evaluate the first limit, we use the identity aᵇ = e^{b ln a} and then apply L'Hospital's rule.
lim_{x→∞} x^{a/x} = lim_{x→∞} e^{(a ln x)/x}
= exp(lim_{x→∞} (a ln x)/x)
= exp(lim_{x→∞} (a/x)/1)
= e⁰
lim_{x→∞} x^{a/x} = 1
We use the same method to evaluate the second limit.
lim_{x→∞} (1 + a/x)^{bx} = exp(lim_{x→∞} bx ln(1 + a/x))
= exp(lim_{x→∞} b ln(1 + a/x)/(1/x))
= exp(lim_{x→∞} b [(−a/x²)/(1 + a/x)]/(−1/x²))
= exp(lim_{x→∞} ab/(1 + a/x))
lim_{x→∞} (1 + a/x)^{bx} = e^{ab}
3.11 Quiz
Problem 3.1
Define continuity.
Problem 3.2
Fill in the blank with necessary, sufficient or necessary and sufficient.
Continuity is a __________ condition for differentiability.
Differentiability is a __________ condition for continuity.
Existence of lim_{Δx→0} [f(x + Δx) − f(x)]/Δx is a __________ condition for differentiability.
Problem 3.3
Evaluate d/dx f(g(x)h(x)).
Problem 3.4
Evaluate d/dx f(x)^{g(x)}.
Problem 3.5
State the Theorem of the Mean. Interpret the theorem physically.
Problem 3.6
State Taylors Theorem of the Mean.
Problem 3.7
Evaluate limx0 (sin x)sin x .
3.12 Quiz Solutions
Solution 3.1
A function y(x) is said to be continuous at x = if limx y(x) = y().
Solution 3.2
Continuity is a necessary condition for differentiability.
Differentiability is a sufficient condition for continuity.
Existence of lim_{Δx→0} [f(x + Δx) − f(x)]/Δx is a necessary and sufficient condition for differentiability.
Solution 3.3
d/dx f(g(x)h(x)) = f′(g(x)h(x)) d/dx (g(x)h(x)) = f′(g(x)h(x))(g′(x)h(x) + g(x)h′(x))
Solution 3.4
d/dx f(x)^{g(x)} = d/dx e^{g(x) ln f(x)}
= e^{g(x) ln f(x)} d/dx (g(x) ln f(x))
= f(x)^{g(x)} (g′(x) ln f(x) + g(x) f′(x)/f(x))
Solution 3.5
If f(x) is continuous in [a..b] and differentiable in (a..b) then there exists a point x = ξ such that
f′(ξ) = (f(b) − f(a))/(b − a).
That is, there is a point where the instantaneous velocity is equal to the average velocity on the
interval.
Solution 3.6
If f(x) is n + 1 times continuously differentiable in (a..b) then there exists a point x = ξ ∈ (a..b) such that
f(b) = f(a) + (b − a)f′(a) + ((b − a)²/2!) f″(a) + ⋯ + ((b − a)ⁿ/n!) f⁽ⁿ⁾(a) + ((b − a)ⁿ⁺¹/(n + 1)!) f⁽ⁿ⁺¹⁾(ξ).
Solution 3.7
Consider lim_{x→0} (sin x)^{sin x}. This is an indeterminate of the form 0⁰. The limit of the logarithm of the expression is lim_{x→0} sin x ln(sin x). This is an indeterminate of the form 0 · (−∞). We can rearrange the expression to obtain an indeterminate of the form ∞/∞ and then apply L'Hospital's rule.
lim_{x→0} ln(sin x)/(1/sin x) = lim_{x→0} (cos x/sin x)/(−cos x/sin² x) = lim_{x→0} (−sin x) = 0
The original limit is then lim_{x→0} (sin x)^{sin x} = e⁰ = 1.
Chapter 4
Integral Calculus
Zero Slope Implies a Constant Function. If the value of a function's derivative is identically zero, df/dx = 0, then the function is a constant, f(x) = c. To prove this, we assume that there exists a non-constant differentiable function whose derivative is zero and obtain a contradiction. Let f(x) be such a function. Since f(x) is non-constant, there exist points a and b such that f(a) ≠ f(b). By the Mean Value Theorem of differential calculus, there exists a point ξ ∈ (a, b) such that
f′(ξ) = (f(b) − f(a))/(b − a) ≠ 0,
which contradicts that the derivative is everywhere zero.
Indefinite Integrals Differ by an Additive Constant. Suppose that F(x) and G(x) are indefinite integrals of f(x). Then we have
d/dx (F(x) − G(x)) = F′(x) − G′(x) = f(x) − f(x) = 0.
Thus we see that F(x) − G(x) = c and the two indefinite integrals must differ by a constant. For example, we have ∫ sin x dx = −cos x + c. While every function that can be expressed in terms of elementary functions, (the exponential, logarithm, trigonometric functions, etc.), has a derivative that can be written explicitly in terms of elementary functions, the same is not true of integrals. For example, ∫ sin(sin x) dx cannot be written explicitly in terms of elementary functions.
Properties. Since the derivative is linear, so is the indefinite integral. That is,
∫ (af(x) + bg(x)) dx = a ∫ f(x) dx + b ∫ g(x) dx.
For each derivative identity there is a corresponding integral identity. Consider the power law identity, d/dx (f(x))ᵃ = a(f(x))ᵃ⁻¹ f′(x). The corresponding integral identity is
∫ (f(x))ᵃ f′(x) dx = (f(x))ᵃ⁺¹/(a + 1) + c,  a ≠ −1,
where we require that a ≠ −1 to avoid division by zero. From the derivative of a logarithm, d/dx ln(f(x)) = f′(x)/f(x), we obtain
∫ f′(x)/f(x) dx = ln|f(x)| + c.
Note the absolute value signs. This is because d/dx ln|x| = 1/x for x ≠ 0. In Figure 4.1 is a plot of ln|x| and 1/x to reinforce this.
For example,
I = ∫ (sin x/cos x) dx = −ln|cos x| + c.
Change of Variable. The differential of a function g(x) is dg = g′(x) dx. Thus one might suspect that for ξ = g(x),
∫ f(ξ) dξ = ∫ f(g(x)) g′(x) dx,    (4.1)
since dξ = dg = g′(x) dx. This turns out to be true. To prove it we will appeal to the chain rule for differentiation. Let ξ be a function of x. The chain rule is
d/dx f(ξ) = f′(ξ) ξ′(x),
d/dx f(ξ) = (df/dξ)(dξ/dx).
We can also write this as
df/dξ = (dx/dξ)(df/dx),
or in operator notation,
d/dξ = (dx/dξ)(d/dx).
Now we're ready to start. The derivative of the left side of Equation 4.1 is
d/dξ ∫ f(ξ) dξ = f(ξ).
Next we differentiate the right side,
d/dξ ∫ f(g(x)) g′(x) dx = (dx/dξ)(d/dx) ∫ f(g(x)) g′(x) dx
= (dx/dξ) f(g(x)) g′(x)
= (dx/dg) f(g(x)) (dg/dx)
= f(g(x))
= f(ξ)
to see that it is in fact an identity for ξ = g(x).
Integration by Parts. The product rule for differentiation gives us an identity called integration by parts. We start with the product rule and then integrate both sides of the equation.
d/dx (u(x)v(x)) = u′(x)v(x) + u(x)v′(x)
∫ (u′(x)v(x) + u(x)v′(x)) dx = u(x)v(x) + c
∫ u′(x)v(x) dx + ∫ u(x)v′(x) dx = u(x)v(x)
∫ u(x)v′(x) dx = u(x)v(x) − ∫ v(x)u′(x) dx
The theorem is most often written in the form
∫ u dv = uv − ∫ v du.
So what is the usefulness of this? Well, it may happen for some integrals and a good choice of u and v that the integral on the right is easier to evaluate than the integral on the left.
When applying integration by parts, one must choose u and dv wisely. As general rules of thumb: pick u so that u′ is simpler than u, and pick dv so that v is not more complicated (hopefully simpler) than dv. Also note that you may have to apply integration by parts several times to evaluate some integrals.
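As a numerical sanity check of integration by parts (our own sketch, not part of the original notes), take ∫₀^π x sin x dx with u = x, dv = sin x dx, so that it equals [−x cos x]₀^π + ∫₀^π cos x dx:

    import math
    from scipy.integrate import quad

    left, _ = quad(lambda x: x * math.sin(x), 0, math.pi)
    boundary = -math.pi * math.cos(math.pi)          # [-x cos x] from 0 to pi
    right = boundary + quad(math.cos, 0, math.pi)[0]
    print(left, right)   # both equal pi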
The area is signed; that is, if f(x) is negative, then the area is negative. We measure the area with a divide-and-conquer strategy. First partition the interval (a, b) with a = x₀ < x₁ < ⋯ < xₙ₋₁ < xₙ = b. Note that the area under the curve on a subinterval is approximately the area of a rectangle of base Δxᵢ = xᵢ₊₁ − xᵢ and height f(ξᵢ), where ξᵢ ∈ [xᵢ, xᵢ₊₁]. If we add up the areas of the rectangles, we get an approximation of the area under the curve. See Figure 4.2.
∫ₐᵇ f(x) dx ≈ Σᵢ₌₀ⁿ⁻¹ f(ξᵢ) Δxᵢ
As the Δxᵢ's get smaller, we expect the approximation of the area to get better. Let Δx = max₀≤ᵢ≤ₙ₋₁ Δxᵢ. We define the definite integral as the sum of the areas of the rectangles in the limit that Δx → 0.
∫ₐᵇ f(x) dx = lim_{Δx→0} Σᵢ₌₀ⁿ⁻¹ f(ξᵢ) Δxᵢ
The integral is defined when the limit exists. This is known as the Riemann integral of f(x). f(x) is called the integrand.
Figure 4.2: The area under a curve approximated by rectangles on the partition a = x₀ < x₁ < ⋯ < xₙ = b.
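A direct translation of this limit-sum definition into Python (ours, a minimal sketch using a uniform partition and left endpoints) shows the sums approaching the integral:

    def riemann(f, a, b, n):
        """Left Riemann sum of f on [a, b] with n uniform subintervals."""
        dx = (b - a) / n
        return sum(f(a + i * dx) * dx for i in range(n))

    for n in [10, 100, 1000]:
        print(n, riemann(lambda x: x, 0.0, 1.0, n))
    # 0.45, 0.495, 0.4995 -> approaching the exact value 1/2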
4.2.2 Properties
Linearity and the Basics. Because summation is a linear operator, that is
Σᵢ₌₀ⁿ⁻¹ (c fᵢ + d gᵢ) = c Σᵢ₌₀ⁿ⁻¹ fᵢ + d Σᵢ₌₀ⁿ⁻¹ gᵢ,
the definite integral is linear as well,
∫ₐᵇ (c f(x) + d g(x)) dx = c ∫ₐᵇ f(x) dx + d ∫ₐᵇ g(x) dx.
We assume that each of the above integrals exist. If a ≤ b, and we integrate from b to a, then each of the Δxᵢ will be negative. From this observation, it is clear that
∫ₐᵇ f(x) dx = −∫ᵇₐ... that is, ∫ from a to b equals the negative of ∫ from b to a.
If we integrate any function from a point a to that same point a, then all the Δxᵢ are zero and
∫ₐᵃ f(x) dx = 0.
Bounding the integrand, m ≤ f(x) ≤ M for x ∈ (a, b), implies that
(b − a)m ≤ ∫ₐᵇ f(x) dx ≤ (b − a)M.
Since
|Σᵢ₌₀ⁿ⁻¹ fᵢ| ≤ Σᵢ₌₀ⁿ⁻¹ |fᵢ|,
we have
|∫ₐᵇ f(x) dx| ≤ ∫ₐᵇ |f(x)| dx.
Mean Value Theorem of Integral Calculus. Let f(x) be continuous. We know from above that
(b − a)m ≤ ∫ₐᵇ f(x) dx ≤ (b − a)M.
Therefore there exists a constant c ∈ [m, M] satisfying
∫ₐᵇ f(x) dx = (b − a)c.
Since f(x) is continuous, there is a point ξ ∈ [a, b] such that f(ξ) = c. Thus we see that
∫ₐᵇ f(x) dx = (b − a) f(ξ),
for some ξ ∈ [a, b].
Anti-Derivatives. Consider the function
F(x) = ∫ₐˣ f(t) dt.    (4.2)
F(x) is an anti-derivative of f(x), that is F′(x) = f(x). To show this we apply the definition of differentiation and the integral mean value theorem.
F′(x) = lim_{Δx→0} [F(x + Δx) − F(x)]/Δx
= lim_{Δx→0} [∫ₐ^{x+Δx} f(t) dt − ∫ₐˣ f(t) dt]/Δx
= lim_{Δx→0} [∫ₓ^{x+Δx} f(t) dt]/Δx
= lim_{Δx→0} f(ξ)Δx/Δx,  ξ ∈ [x, x + Δx]
= f(x)
The Fundamental Theorem of Integral Calculus. Let F(x) be any anti-derivative of f(x). Noting that all anti-derivatives of f(x) differ by a constant and replacing x by b in Equation 4.2, we see that there exists a constant c such that
∫ₐᵇ f(x) dx = F(b) + c.
By setting b = a we see that c = −F(a). This gives us a result known as the Fundamental Theorem of Integral Calculus.
∫ₐᵇ f(x) dx = F(b) − F(a).
Example 4.3.1
∫₀^π sin x dx = [−cos x]₀^π = −cos(π) + cos(0) = 2
Consider a proper rational function p(x)/q(x) whose denominator factors as q(x) = (x − λ)ⁿ r(x) with r(λ) ≠ 0. It has the partial fractions expansion
p(x)/q(x) = a₀/(x − λ)ⁿ + a₁/(x − λ)ⁿ⁻¹ + ⋯ + aₙ₋₁/(x − λ) + ⋯,
where the aₖ's are constants and the last ellipses represents the partial fractions expansion of the roots of r(x). The coefficients are
aₖ = (1/k!) dᵏ/dxᵏ [p(x)/r(x)]|_{x=λ}.
Example 4.4.1 Consider the partial fraction expansion of
(1 + x + x²)/(x − 1)³.
The coefficients are a₀ = (1 + x + x²)|_{x=1} = 3, a₁ = (1 + 2x)|_{x=1} = 3 and a₂ = (1/2!)(2) = 1, so
(1 + x + x²)/(x − 1)³ = 3/(x − 1)³ + 3/(x − 1)² + 1/(x − 1).
Example 4.4.2 Suppose we want to evaluate
∫ (1 + x + x²)/(x − 1)³ dx.
If we expand the integrand in a partial fraction expansion, then the integral becomes easy.
∫ (1 + x + x²)/(x − 1)³ dx = ∫ (3/(x − 1)³ + 3/(x − 1)² + 1/(x − 1)) dx
= −3/(2(x − 1)²) − 3/(x − 1) + ln|x − 1|
Example 4.4.3 Consider the partial fraction expansion of
(1 + x + x²)/(x²(x − 1)²).
Expanding about the two double roots, x = 0 and x = 1, we have
(1 + x + x²)/(x²(x − 1)²) = 1/x² + 3/x + 3/(x − 1)² − 3/(x − 1).
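sympy's apart() reproduces these expansions; here is a short verification sketch of ours (the exact ordering of sympy's output may differ):

    import sympy as sp

    x = sp.symbols('x')
    print(sp.apart((1 + x + x**2) / (x - 1)**3))
    # 1/(x - 1) + 3/(x - 1)**2 + 3/(x - 1)**3
    print(sp.apart((1 + x + x**2) / (x**2 * (x - 1)**2)))
    # -3/(x - 1) + 3/(x - 1)**2 + 3/x + x**(-2)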
If the rational function has real coefficients and the denominator has complex roots, then you can reduce the work in finding the partial fraction expansion with the following trick: Let λ and λ̄ be complex conjugate pairs of roots of the denominator.
p(x)/((x − λ)ⁿ(x − λ̄)ⁿ r(x)) = [a₀/(x − λ)ⁿ + a₁/(x − λ)ⁿ⁻¹ + ⋯ + aₙ₋₁/(x − λ)] + [ā₀/(x − λ̄)ⁿ + ā₁/(x − λ̄)ⁿ⁻¹ + ⋯ + āₙ₋₁/(x − λ̄)] + (⋯)
Thus we don't have to calculate the coefficients for the root at λ̄. We just take the complex conjugate of the coefficients for λ.
Example 4.4.4 Consider the partial fraction expansion of
(1 + x)/(x² + 1).
The coefficients are
a₀ = (1/0!) [(1 + x)/(x + ı)]|_{x=ı} = (1/2)(1 − ı),
ā₀ = conjugate of (1/2)(1 − ı) = (1/2)(1 + ı).
Thus we have
(1 + x)/(x² + 1) = (1 − ı)/(2(x − ı)) + (1 + ı)/(2(x + ı)).
Example 4.5.1 Consider the integral of ln x on the interval [0, 1]. Since the logarithm has a singularity at x = 0, this is an improper integral. We write the integral in terms of a limit and evaluate the limit with L'Hospital's rule.
∫₀¹ ln x dx = lim_{δ→0} ∫_δ¹ ln x dx
= lim_{δ→0} [x ln x − x]_δ¹
= 1 ln(1) − 1 − lim_{δ→0} (δ ln δ − δ)
= −1 − lim_{δ→0} δ ln δ
= −1 − lim_{δ→0} ln δ/(1/δ)
= −1 − lim_{δ→0} (1/δ)/(−1/δ²)
= −1
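sympy evaluates these improper integrals symbolically; a short verification sketch of ours:

    import sympy as sp

    x = sp.symbols('x')
    print(sp.integrate(sp.log(x), (x, 0, 1)))   # -1
    a = sp.symbols('a', positive=True)
    print(sp.integrate(x**(a - 1), (x, 0, 1)))  # 1/a, i.e. the exponent a - 1 > -1 case below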
Example 4.5.2 Consider the integral of xᵃ on the range [0, 1]. If a < 0 then there is a singularity at x = 0. First assume that a ≠ −1.
∫₀¹ xᵃ dx = lim_{δ→0⁺} [xᵃ⁺¹/(a + 1)]_δ¹ = 1/(a + 1) − lim_{δ→0⁺} δᵃ⁺¹/(a + 1)
This limit exists only for a > −1. Now consider the case that a = −1.
∫₀¹ x⁻¹ dx = lim_{δ→0⁺} [ln x]_δ¹ = −lim_{δ→0⁺} ln δ
This limit does not exist. We obtain the result,
∫₀¹ xᵃ dx = 1/(a + 1), for a > −1.
Infinite Limits of Integration. If the range of integration is infinite, say [a, ∞), then we define the integral as
∫ₐ^∞ f(x) dx = lim_{α→∞} ∫ₐ^α f(x) dx.
Example 4.5.3
∫₁^∞ (ln x/x²) dx = ∫₁^∞ ln x (d/dx)(−1/x) dx
= [−(ln x)/x]₁^∞ + ∫₁^∞ (1/x)(1/x) dx
= lim_{x→+∞} (−(ln x)/x) + [−1/x]₁^∞
= −lim_{x→+∞} (1/x)/1 − lim_{x→+∞} (1/x) + 1
= 1
4.6 Exercises
4.6.1 The Indefinite Integral
Exercise 4.1 (mathematica/calculus/integral/fundamental.nb)
Evaluate ∫ (2x + 3)¹⁰ dx.
Exercise 4.2 (mathematica/calculus/integral/fundamental.nb)
Evaluate ∫ ((ln x)²/x) dx.
Exercise 4.3 (mathematica/calculus/integral/fundamental.nb)
Evaluate ∫ x √(x² + 3) dx.
Exercise 4.4 (mathematica/calculus/integral/fundamental.nb)
Evaluate ∫ (cos x/sin x) dx.
Exercise 4.5 (mathematica/calculus/integral/fundamental.nb)
Evaluate ∫ (x²/(x³ − 5)) dx.
Exercise 4.6
Use the definition of the definite integral,
∫ₐᵇ f(x) dx = lim_{N→∞} Σₙ₌₀^{N−1} f(xₙ) Δx,
where Δx = (b − a)/N and xₙ = a + nΔx, to show that
∫₀¹ x dx = 1/2.
c. Using induction, show that
f(x) = f(0) + x f′(0) + (1/2) x² f″(0) + ⋯ + (1/n!) xⁿ f⁽ⁿ⁾(0) + ∫₀ˣ (1/n!) ξⁿ f⁽ⁿ⁺¹⁾(x − ξ) dξ.
Exercise 4.11
Find a function f (x) whose arc length from 0 to x is 2x.
Exercise 4.12
Consider a curve C, bounded by −1 and 1, on the interval (−1 . . . 1). Can the length of C be unbounded? What if we change to the closed interval [−1 . . . 1]?
Exercise 4.13 (mathematica/calculus/integral/parts.nb)
Evaluate ∫ x sin x dx.
Exercise 4.14 (mathematica/calculus/integral/parts.nb)
Evaluate ∫ x³ e²ˣ dx.
Exercise 4.15 (mathematica/calculus/integral/partial.nb)
Evaluate ∫ 1/(x² − 4) dx.
Exercise 4.16 (mathematica/calculus/integral/partial.nb)
Evaluate ∫ (x + 1)/(x³ + x² − 6x) dx.
Exercise 4.17 (mathematica/calculus/integral/improper.nb)
Evaluate ∫₀⁴ 1/(x − 1)² dx.
Exercise 4.18 (mathematica/calculus/integral/improper.nb)
Evaluate ∫₀¹ 1/√x dx.
Exercise 4.19 (mathematica/calculus/integral/improper.nb)
Evaluate ∫₀^∞ 1/(x² + 4) dx.
4.7 Hints
Hint 4.1
Make the change of variables u = 2x + 3.
Hint 4.2
Make the change of variables u = ln x.
Hint 4.3
Make the change of variables u = x2 + 3.
Hint 4.4
Make the change of variables u = sin x.
Hint 4.5
Make the change of variables u = x3 5.
Hint 4.6
∫₀¹ x dx = lim_{N→∞} Σₙ₌₀^{N−1} xₙ Δx = lim_{N→∞} Σₙ₌₀^{N−1} (nΔx) Δx
Hint 4.7
Let u = sin x and dv = sin x dx. Integration by parts will give you an equation for ∫₀^π sin² x dx.
Hint 4.8
Let H 0 (x) = h(x) and evaluate the integral in terms of H(x).
Hint 4.9
CONTINUE
Hint 4.10
a. Evaluate the integral.
Hint 4.11
The arc length from 0 to x is
∫₀ˣ √(1 + (f′(ξ))²) dξ.    (4.3)
First show that the arc length of f(x) from a to b is 2(b − a). Then conclude that the integrand in Equation 4.3 must everywhere be 2.
Hint 4.12
CONTINUE
Hint 4.13
Let u = x, and dv = sin x dx.
Hint 4.14
Perform integration by parts three successive times. For the first one let u = x3 and dv = e2x dx.
Hint 4.15
Expanding the integrand in partial fractions,
1/(x² − 4) = 1/((x − 2)(x + 2)) = a/(x − 2) + b/(x + 2),
1 = a(x + 2) + b(x − 2).
Set x = 2 and x = −2 to solve for a and b.
Hint 4.16
Expanding the integrand in partial fractions,
(x + 1)/(x³ + x² − 6x) = (x + 1)/(x(x − 2)(x + 3)) = a/x + b/(x − 2) + c/(x + 3).
Hint 4.17
∫₀⁴ 1/(x − 1)² dx = lim_{δ→0⁺} ∫₀^{1−δ} 1/(x − 1)² dx + lim_{ε→0⁺} ∫_{1+ε}⁴ 1/(x − 1)² dx
Hint 4.18
∫₀¹ 1/√x dx = lim_{ε→0⁺} ∫_ε¹ 1/√x dx
Hint 4.19
∫ 1/(x² + a²) dx = (1/a) arctan(x/a)
4.8 Solutions
Solution 4.1
∫ (2x + 3)¹⁰ dx
Let u = 2x + 3, g(u) = x = (u − 3)/2, g′(u) = 1/2.
∫ (2x + 3)¹⁰ dx = ∫ u¹⁰ (1/2) du
= (u¹¹/11)(1/2)
= (2x + 3)¹¹/22
Solution 4.2
∫ ((ln x)²/x) dx = ∫ (ln x)² (d(ln x)/dx) dx = (ln x)³/3
Solution 4.3
∫ x √(x² + 3) dx = (1/2) ∫ √(x² + 3) (d(x²)/dx) dx = (1/2) (x² + 3)^{3/2}/(3/2) = (x² + 3)^{3/2}/3
Solution 4.4
∫ (cos x/sin x) dx = ∫ (1/sin x)(d(sin x)/dx) dx = ln|sin x|
Solution 4.5
∫ (x²/(x³ − 5)) dx = ∫ (1/(x³ − 5))(1/3)(d(x³)/dx) dx = (1/3) ln|x³ − 5|
Solution 4.6
∫₀¹ x dx = lim_{N→∞} Σₙ₌₀^{N−1} xₙ Δx
= lim_{N→∞} Σₙ₌₀^{N−1} (nΔx) Δx
= lim_{N→∞} Δx² Σₙ₌₀^{N−1} n
= lim_{N→∞} Δx² N(N − 1)/2
= lim_{N→∞} N(N − 1)/(2N²)
= 1/2
Solution 4.7
Let u = sin x and dv = sin x dx. Then du = cos x dx and v = −cos x.
∫₀^π sin² x dx = [−sin x cos x]₀^π + ∫₀^π cos² x dx
= ∫₀^π cos² x dx
= ∫₀^π (1 − sin² x) dx
= π − ∫₀^π sin² x dx
2 ∫₀^π sin² x dx = π
∫₀^π sin² x dx = π/2
Solution 4.8
Let H′(x) = h(x).
d/dx ∫_{g(x)}^{f(x)} h(ξ) dξ = d/dx (H(f(x)) − H(g(x)))
= H′(f(x)) f′(x) − H′(g(x)) g′(x)
= h(f(x)) f′(x) − h(g(x)) g′(x)
Solution 4.9
First we compute the area for positive integer n.
Aₙ = ∫₀¹ (x − xⁿ) dx = [x²/2 − xⁿ⁺¹/(n + 1)]₀¹ = 1/2 − 1/(n + 1)
Thus lim_{n→∞} Aₙ = 1/2. In Figure 4.3 we plot the functions x¹, x², x⁴, x⁸, . . . , x¹⁰²⁴. In the limit as n → ∞, xⁿ on the interval [0 . . . 1] tends to the function that is 0 for 0 ≤ x < 1 and 1 at x = 1.
Thus the area tends to the area of the right triangle with unit base and height, 1/2.

Figure 4.3: Plots of x¹, x², x⁴, x⁸, . . . , x¹⁰²⁴.
Solution 4.10
1.
f(0) + ∫₀ˣ f′(x − ξ) dξ = f(0) + [−f(x − ξ)]₀ˣ
= f(0) − f(0) + f(x)
= f(x)
2.
f(0) + x f′(0) + ∫₀ˣ ξ f″(x − ξ) dξ = f(0) + x f′(0) + [−ξ f′(x − ξ)]₀ˣ + ∫₀ˣ f′(x − ξ) dξ
= f(0) + x f′(0) − x f′(0) − [f(x − ξ)]₀ˣ
= f(0) − f(0) + f(x)
= f(x)
3. Above we showed that the hypothesis holds for n = 0 and n = 1. Assume that it holds for some n = m ≥ 0.
f(x) = f(0) + x f′(0) + (1/2) x² f″(0) + ⋯ + (1/n!) xⁿ f⁽ⁿ⁾(0) + ∫₀ˣ (1/n!) ξⁿ f⁽ⁿ⁺¹⁾(x − ξ) dξ
= f(0) + x f′(0) + (1/2) x² f″(0) + ⋯ + (1/n!) xⁿ f⁽ⁿ⁾(0) + [(1/(n + 1)!) ξⁿ⁺¹ f⁽ⁿ⁺¹⁾(x − ξ)]₀ˣ + ∫₀ˣ (1/(n + 1)!) ξⁿ⁺¹ f⁽ⁿ⁺²⁾(x − ξ) dξ
= f(0) + x f′(0) + (1/2) x² f″(0) + ⋯ + (1/n!) xⁿ f⁽ⁿ⁾(0) + (1/(n + 1)!) xⁿ⁺¹ f⁽ⁿ⁺¹⁾(0) + ∫₀ˣ (1/(n + 1)!) ξⁿ⁺¹ f⁽ⁿ⁺²⁾(x − ξ) dξ
This shows that the hypothesis holds for n = m + 1. By induction, the hypothesis holds for all n ≥ 0.
Solution 4.11
First note that the arc length from a to b is 2(b − a).
∫ₐᵇ √(1 + (f′(x))²) dx = ∫₀ᵇ √(1 + (f′(x))²) dx − ∫₀ᵃ √(1 + (f′(x))²) dx = 2b − 2a
Since a and b are arbitrary, we conclude that the integrand must everywhere be 2.
√(1 + (f′(x))²) = 2
f′(x) = ±√3
f(x) is a continuous, piecewise differentiable function which satisfies f′(x) = ±√3 at the points where it is differentiable. One example is
f(x) = √3 x
Solution 4.12
CONTINUE
Solution 4.13
Let u = x, and dv = sin x dx. Then du = dx and v = −cos x.
∫ x sin x dx = −x cos x + ∫ cos x dx
= −x cos x + sin x + C
Solution 4.14
Let u = x³ and dv = e²ˣ dx. Then du = 3x² dx and v = (1/2) e²ˣ.
∫ x³ e²ˣ dx = (1/2) x³ e²ˣ − (3/2) ∫ x² e²ˣ dx
Applying integration by parts twice more, with u = x² and then u = x,
= (1/2) x³ e²ˣ − (3/4) x² e²ˣ + (3/2) ∫ x e²ˣ dx
= (1/2) x³ e²ˣ − (3/4) x² e²ˣ + (3/4) x e²ˣ − (3/4) ∫ e²ˣ dx
= ((1/2) x³ − (3/4) x² + (3/4) x − 3/8) e²ˣ + C
Solution 4.15
Expanding the integrand in partial fractions,
1/(x² − 4) = 1/((x − 2)(x + 2)) = A/(x − 2) + B/(x + 2),
1 = A(x + 2) + B(x − 2).
Setting x = 2 gives A = 1/4; setting x = −2 gives B = −1/4. Now we can do the integral.
∫ 1/(x² − 4) dx = (1/4) ln|x − 2| − (1/4) ln|x + 2| + C = (1/4) ln|(x − 2)/(x + 2)| + C
Solution 4.16
Expanding the integrand in partial fractions,
(x + 1)/(x³ + x² − 6x) = (x + 1)/(x(x − 2)(x + 3)) = A/x + B/(x − 2) + C/(x + 3).
Setting x = 0, 2 and −3 yields A = −1/6, B = 3/10 and C = −2/15. Now we can do the integral.
∫ (x + 1)/(x³ + x² − 6x) dx = −(1/6) ln|x| + (3/10) ln|x − 2| − (2/15) ln|x + 3| + C
Solution 4.17
∫₀⁴ 1/(x − 1)² dx = lim_{δ→0⁺} ∫₀^{1−δ} 1/(x − 1)² dx + lim_{ε→0⁺} ∫_{1+ε}⁴ 1/(x − 1)² dx
= lim_{δ→0⁺} [−1/(x − 1)]₀^{1−δ} + lim_{ε→0⁺} [−1/(x − 1)]_{1+ε}⁴
= lim_{δ→0⁺} (1/δ − 1) + lim_{ε→0⁺} (−1/3 + 1/ε)
= ∞ + ∞
The integral diverges.
Solution 4.18
∫₀¹ 1/√x dx = lim_{ε→0⁺} ∫_ε¹ 1/√x dx
= lim_{ε→0⁺} [2√x]_ε¹
= lim_{ε→0⁺} 2(1 − √ε)
= 2
Solution 4.19
∫₀^∞ 1/(x² + 4) dx = lim_{α→∞} ∫₀^α 1/(x² + 4) dx
= lim_{α→∞} [(1/2) arctan(x/2)]₀^α
= (1/2)(π/2 − 0)
= π/4
4.9 Quiz
Problem 4.1
Write the limit-sum definition of ∫ₐᵇ f(x) dx.
Problem 4.2
Evaluate ∫₋₁² √|x| dx.
Problem 4.3
Evaluate d/dx ∫ₓ^{x²} f(ξ) dξ.
Problem 4.4
Evaluate ∫ (1 + x + x²)/(x + 1)³ dx.
Problem 4.5
State the integral mean value theorem.
Problem 4.6
What is the partial fraction expansion of 1/(x(x − 1)(x − 2)(x − 3))?
4.10 Quiz Solutions
Solution 4.1
Let a = x₀ < x₁ < ⋯ < xₙ₋₁ < xₙ = b be a partition of the interval (a..b). We define Δxᵢ = xᵢ₊₁ − xᵢ and Δx = maxᵢ Δxᵢ and choose ξᵢ ∈ [xᵢ..xᵢ₊₁].
∫ₐᵇ f(x) dx = lim_{Δx→0} Σᵢ₌₀ⁿ⁻¹ f(ξᵢ) Δxᵢ
Solution 4.2
∫₋₁² √|x| dx = ∫₋₁⁰ √(−x) dx + ∫₀² √x dx
= ∫₀¹ √x dx + ∫₀² √x dx
= [(2/3) x^{3/2}]₀¹ + [(2/3) x^{3/2}]₀²
= 2/3 + (2/3) 2^{3/2}
= (2/3)(1 + 2√2)
Solution 4.3
d/dx ∫ₓ^{x²} f(ξ) dξ = f(x²) d/dx(x²) − f(x) d/dx(x)
= 2x f(x²) − f(x)
Solution 4.4
First we expand the integrand in partial fractions.
(1 + x + x²)/(x + 1)³ = a/(x + 1)³ + b/(x + 1)² + c/(x + 1)
a = (1 + x + x²)|_{x=−1} = 1
b = (1/1!) d/dx (1 + x + x²)|_{x=−1} = (1 + 2x)|_{x=−1} = −1
c = (1/2!) d²/dx² (1 + x + x²)|_{x=−1} = (1/2)(2) = 1
Then we integrate.
∫ (1 + x + x²)/(x + 1)³ dx = ∫ (1/(x + 1)³ − 1/(x + 1)² + 1/(x + 1)) dx
= −1/(2(x + 1)²) + 1/(x + 1) + ln|x + 1|
= (x + 1/2)/(x + 1)² + ln|x + 1|
Solution 4.5
Let f(x) be continuous. Then
∫ₐᵇ f(x) dx = (b − a) f(ξ),
for some ξ ∈ [a..b].
Solution 4.6
1/(x(x − 1)(x − 2)(x − 3)) = a/x + b/(x − 1) + c/(x − 2) + d/(x − 3)
a = 1/((0 − 1)(0 − 2)(0 − 3)) = −1/6
b = 1/((1)(1 − 2)(1 − 3)) = 1/2
c = 1/((2)(2 − 1)(2 − 3)) = −1/2
d = 1/((3)(3 − 1)(3 − 2)) = 1/6
1/(x(x − 1)(x − 2)(x − 3)) = −1/(6x) + 1/(2(x − 1)) − 1/(2(x − 2)) + 1/(6(x − 3))
Chapter 5
Vector Calculus
A vector function r(t) is continuous at t = τ if
lim_{t→τ} r(t) = r(τ).
This occurs if and only if the component functions are continuous. The function is differentiable if
dr/dt ≡ lim_{Δt→0} [r(t + Δt) − r(t)]/Δt
exists. This occurs if and only if the component functions are differentiable.
If r(t) represents the position of a particle at time t, then the velocity and acceleration of the particle are
dr/dt and d²r/dt²,
respectively. The speed of the particle is |r′(t)|.
Differentiation Formulas. Let f(t) and g(t) be vector functions and a(t) be a scalar function. By writing out components you can verify the differentiation formulas:
d/dt (f · g) = f′ · g + f · g′
d/dt (f × g) = f′ × g + f × g′
d/dt (af) = a′f + af′
can graph a vector field in two or three dimensions by drawing vectors at regularly spaced points. (See Figure 5.1 for a vector field in two dimensions.)

Figure 5.1: A vector field in two dimensions.
Partial Derivatives of Scalar Fields. Consider a scalar field u(x). The partial derivative of u with respect to xₖ is the derivative of u in which xₖ is considered to be a variable and the remaining arguments are considered to be parameters. The partial derivative is denoted (∂/∂xₖ)u(x), ∂u/∂xₖ or u_{xₖ} and is defined
∂u/∂xₖ ≡ lim_{Δx→0} [u(x₁, . . . , xₖ + Δx, . . . , xₙ) − u(x₁, . . . , xₖ, . . . , xₙ)]/Δx.
Consider a scalar field in ℝ³, u(x, y, z). Higher derivatives of u are denoted:
u_xx ≡ ∂²u/∂x² ≡ (∂/∂x)(∂u/∂x),
u_xy ≡ ∂²u/∂x∂y ≡ (∂/∂x)(∂u/∂y),
u_xxyz ≡ ∂⁴u/∂x²∂y∂z ≡ (∂²/∂x²)(∂/∂y)(∂u/∂z).
Partial Derivatives of Vector Fields. Consider a vector field u(x). The partial derivative of u with respect to xₖ is denoted (∂/∂xₖ)u(x), ∂u/∂xₖ or u_{xₖ} and is defined
∂u/∂xₖ ≡ lim_{Δx→0} [u(x₁, . . . , xₖ + Δx, . . . , xₙ) − u(x₁, . . . , xₖ, . . . , xₙ)]/Δx.
Partial derivatives of vector fields have the same differentiation formulas as ordinary derivatives.
Partial derivatives of vector fields have the same differentiation formulas as ordinary derivatives.
e1 + + en ,
x1 xn
u u
u e1 + + en ,
x1 xn
Directional Derivative. Suppose you are standing on some terrain. The slope of the ground in a particular direction is the directional derivative of the elevation in that direction. Consider a differentiable scalar field, u(x). The derivative of the function in the direction of the unit vector a is the rate of change of the function in that direction. Thus the directional derivative, D_a u, is defined:
D_a u(x) = ∇u(x) · a.
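A small numerical illustration of this definition (a sketch of ours; the example field u = x² + y² − z, point and direction are arbitrary choices) computes ∇u by central differences and dots it with a unit direction:

    import numpy as np

    def grad(u, p, h=1e-6):
        """Central-difference gradient of u at point p."""
        p = np.asarray(p, dtype=float)
        g = np.zeros_like(p)
        for k in range(p.size):
            e = np.zeros_like(p); e[k] = h
            g[k] = (u(p + e) - u(p - e)) / (2 * h)
        return g

    u = lambda p: p[0]**2 + p[1]**2 - p[2]
    a = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)   # unit direction
    print(grad(u, [1.0, 1.0, 2.0]) @ a)          # (2 + 2)/sqrt(2), about 2.828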
Tangent to a Surface. The gradient, ∇f, is orthogonal to the surface f(x) = 0. Consider a point ξ on the surface. Let the differential dr = dx₁ e₁ + ⋯ + dxₙ eₙ lie in the tangent plane at ξ. Then
df = (∂f/∂x₁) dx₁ + ⋯ + (∂f/∂xₙ) dxₙ = 0
since f(x) = 0 on the surface. Then
∇f · dr = ((∂f/∂x₁) e₁ + ⋯ + (∂f/∂xₙ) eₙ) · (dx₁ e₁ + ⋯ + dxₙ eₙ)
= (∂f/∂x₁) dx₁ + ⋯ + (∂f/∂xₙ) dxₙ
= 0
Example 5.2.1 Consider the paraboloid, x² + y² − z = 0. We want to find the tangent plane to the surface at the point (1, 1, 2). The gradient is
∇f = 2x i + 2y j − k,
which at the point is ∇f(1, 1, 2) = 2i + 2j − k. The tangent plane is therefore 2(x − 1) + 2(y − 1) − (z − 2) = 0, that is, 2x + 2y − z = 2.
The gradient of the function f(x) = 0, ∇f(x), is in the direction of the maximum directional derivative. The magnitude of the gradient, |∇f(x)|, is the value of the directional derivative in that direction. To derive this, note that
D_a f = ∇f · a = |∇f| cos θ,
where θ is the angle between ∇f and a. D_a f is maximum when θ = 0, i.e. when a is the same direction as ∇f. In this direction, D_a f = |∇f|. To use the elevation example, ∇f points in the uphill direction and |∇f| is the uphill slope.
Example 5.2.2 Suppose that the two surfaces f(x) = 0 and g(x) = 0 intersect at the point x = ξ. What is the angle between their tangent planes at that point? First we note that the angle between the tangent planes is by definition the angle between their normals. These normals are in the direction of ∇f(ξ) and ∇g(ξ). (We assume these are nonzero.) The angle, θ, between the tangent planes to the surfaces is
θ = arccos(∇f(ξ) · ∇g(ξ)/(|∇f(ξ)| |∇g(ξ)|)).
Consider the scalar field u(x) = |x| = √(x · x), the distance from the origin. The gradient of u, ∇u(x), is a unit vector in the direction of x. The gradient is:
∇u(x) = (x₁/√(x · x), . . . , xₙ/√(x · x)) = xᵢeᵢ/√(xⱼxⱼ).
This is a unit vector because the sum of the squared components sums to unity.
∇u · ∇u = (xᵢeᵢ/√(xⱼxⱼ)) · (xₖeₖ/√(xₗxₗ)) = xᵢxᵢ/(xⱼxⱼ) = 1
Consider an ellipse,
x²/a² + y²/b² = 1.
We can also express an ellipse as u(x, y) + v(x, y) = c where u and v are the distances from the two foci. That is, an ellipse is the set of points such that the sum of the distances from the two foci is a constant. Let n = ∇(u + v). This is a vector which is orthogonal to the ellipse when evaluated on the surface. Let t be a unit tangent to the surface. Since n and t are orthogonal,
n · t = 0
(∇u + ∇v) · t = 0
∇u · t = ∇v · (−t).
Since these are unit vectors, the angle between ∇u and t is equal to the angle between ∇v and −t. In other words: If we draw rays from the foci to a point on the ellipse, the rays make equal angles with the ellipse. If the ellipse were a reflective surface, a wave starting at one focus would be reflected from the ellipse and travel to the other focus. See Figure 6.4. This result also holds for ellipsoids.
5.3 Exercises
Vector Functions
Exercise 5.1
Consider the parametric curve
r = cos(t/2) i + sin(t/2) j.
Calculate dr/dt and d²r/dt². Plot the position and some velocity and acceleration vectors.
Exercise 5.2
Let r(t) be the position of an object moving with constant speed. Show that the acceleration of the
object is orthogonal to the velocity of the object.
Vector Fields
Exercise 5.3
Consider the paraboloid x2 + y 2 z = 0. What is the angle between the two tangent planes that
touch the surface at (1, 1, 2) and (1, 1, 2)? What are the equations of the tangent planes at these
points?
Exercise 5.4
Consider the paraboloid x2 + y 2 z = 0. What is the point on the paraboloid that is closest to
(1, 0, 0)?
Exercise 5.5
Consider the region R defined by x2 + xy + y 2 9. What is the volume of the solid obtained by
rotating R about the y axis?
Is this the same as the volume of the solid obtained by rotating R about the x axis? Give
geometric and algebraic explanations of this.
Exercise 5.6
Two cylinders of unit radius intersect at right angles as shown in Figure 5.5. What is the volume of
the solid enclosed by the cylinders?
Exercise 5.7
Consider the curve f(x) = 1/x on the interval [1 . . . ∞). Let S be the solid obtained by rotating f(x) about the x axis. (See Figure 5.6.) Show that the length of f(x) and the lateral area of S are infinite. Find the volume of S.¹

Figure 5.6: The solid obtained by rotating f(x) = 1/x about the x axis.
Exercise 5.8
Suppose that a deposit of oil looks like a cone in the ground as illustrated in Figure 5.7. Suppose that the oil has a density of 800 kg/m³ and its vertical depth is 12 m. How much work² would it take to get the oil to the surface?

Figure 5.7: The oil deposit.
Exercise 5.9
Find the area and volume of a sphere of radius R by integrating in spherical coordinates.
¹ You could fill S with a finite amount of paint, but it would take an infinite amount of paint to cover its surface.
² Recall that work = force × distance and force = mass × acceleration.
5.4 Hints
Vector Functions
Hint 5.1
Plot the velocity and acceleration vectors at regular intervals along the path of motion.
Hint 5.2
If r(t) has constant speed, then |r0 (t)| = c. The condition that the acceleration is orthogonal to
the velocity can be stated mathematically in terms of the dot product, r00 (t) r0 (t) = 0. Write the
condition of constant speed in terms of a dot product and go from there.
Vector Fields
Hint 5.3
The angle between two planes is the angle between the vectors orthogonal to the planes. The angle between the two vectors is
θ = arccos(⟨2, 2, −1⟩ · ⟨2, −2, −1⟩/(|⟨2, 2, −1⟩| |⟨2, −2, −1⟩|)).
The equation of a plane orthogonal to a and passing through the point b is a · x = a · b.
Hint 5.4
Since the paraboloid is a differentiable surface, the normal to the surface at the closest point will be parallel to the vector from the closest point to (1, 0, 0). We can express this using the gradient and the cross product. If (x, y, z) is the closest point on the paraboloid, then a vector orthogonal to the surface there is ∇f = ⟨2x, 2y, −1⟩. The vector from the surface to the point (1, 0, 0) is ⟨1 − x, −y, −z⟩. These two vectors are parallel if their cross product is zero.
Hint 5.5
CONTINUE
Hint 5.6
CONTINUE
Hint 5.7
CONTINUE
Hint 5.8
Start with the formula for the work required to move the oil to the surface. Integrate over the mass of the oil.
Work = ∫ (acceleration)(distance) d(mass)
Here (distance) is the distance of the differential of mass from the surface. The acceleration is that of gravity, g.
Hint 5.9
CONTINUE
5.5 Solutions
Vector Functions
Solution 5.1
The velocity is
r′ = −(1/2) sin(t/2) i + (1/2) cos(t/2) j.
The acceleration is
r″ = −(1/4) cos(t/2) i − (1/4) sin(t/2) j.
See Figure 5.8 for plots of position, velocity and acceleration.
Figure 5.8: A Graph of Position and Velocity and of Position and Acceleration
Solution 5.2
If r(t) has constant speed, then |r′(t)| = c. The condition that the acceleration is orthogonal to the velocity can be stated mathematically in terms of the dot product, r″(t) · r′(t) = 0. Note that we can write the condition of constant speed in terms of a dot product,
√(r′(t) · r′(t)) = c,
r′(t) · r′(t) = c².
Differentiating this equation yields
r″(t) · r′(t) + r′(t) · r″(t) = 0,
r″(t) · r′(t) = 0.
This shows that the acceleration is orthogonal to the velocity.
Vector Fields
Solution 5.3
The gradient, which is orthogonal to the surface when evaluated there, is ∇f = 2x i + 2y j − k. The vectors 2i + 2j − k and 2i − 2j − k are orthogonal to the paraboloid, (and hence the tangent planes), at the points (1, 1, 2) and (1, −1, 2), respectively. The angle between the tangent planes is the angle between the vectors orthogonal to the planes. The angle between the two vectors is
θ = arccos(⟨2, 2, −1⟩ · ⟨2, −2, −1⟩/(|⟨2, 2, −1⟩| |⟨2, −2, −1⟩|))
θ = arccos(1/9) ≈ 1.45946.
Recall that the equation of a plane orthogonal to a and passing through the point b is a · x = a · b. The equations of the tangent planes are
⟨2, ±2, −1⟩ · ⟨x, y, z⟩ = ⟨2, ±2, −1⟩ · ⟨1, ±1, 2⟩,
2x ± 2y − z = 2.
The paraboloid and the tangent planes are shown in Figure 5.9.
Figure 5.9: The paraboloid and the two tangent planes.
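Reproducing the tangent-plane angle numerically (a verification sketch of ours) from the gradients of f = x² + y² − z at the two points:

    import numpy as np

    def grad_f(x, y, z):
        return np.array([2 * x, 2 * y, -1.0])

    n1 = grad_f(1, 1, 2)
    n2 = grad_f(1, -1, 2)
    theta = np.arccos(n1 @ n2 / (np.linalg.norm(n1) * np.linalg.norm(n2)))
    print(theta)  # arccos(1/9), about 1.45946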
Solution 5.4
Since the paraboloid is a differentiable surface, the normal to the surface at the closest point will be parallel to the vector from the closest point to (1, 0, 0). We can express this using the gradient and the cross product. If (x, y, z) is the closest point on the paraboloid, then a vector orthogonal to the surface there is ∇f = ⟨2x, 2y, −1⟩. The vector from the surface to the point (1, 0, 0) is ⟨1 − x, −y, −z⟩. These two vectors are parallel if their cross product is zero,
⟨2x, 2y, −1⟩ × ⟨1 − x, −y, −z⟩ = ⟨−y − 2yz, −1 + x + 2xz, −2y⟩ = 0.
This gives us the three equations,
−y − 2yz = 0,
−1 + x + 2xz = 0,
−2y = 0.
The third equation requires that y = 0. The first equation then becomes trivial and we are left with the second equation,
−1 + x + 2xz = 0.
Substituting z = x² + y² into this equation yields,
2x³ + x − 1 = 0.
The only real valued solution of this polynomial is
x = ((6(9 + √87))^{2/3} − 6)/(6(6(9 + √87))^{1/3}) ≈ 0.589755.
Thus the closest point to (1, 0, 0) on the paraboloid is
(x, 0, x²) ≈ (0.589755, 0, 0.347810).
Figure 5.10: Paraboloid, Tangent Plane and Line Connecting (1, 0, 0) to Closest Point
Solution 5.5
We consider the region R defined by x² + xy + y² ≤ 9. The boundary of the region is an ellipse. (See Figure 5.11 for the ellipse and the solid obtained by rotating the region.)

Figure 5.11: The region R and the solid obtained by rotating it about the y axis.

Note that in rotating the
region about the y axis, only the portions in the second and fourth quadrants make a contribution.
Since the solid is symmetric across the xz plane, we will find the volume of the top half and then
double this to get the volume of the whole solid. Now we consider rotating the region in the second
quadrant about the y axis. In the equation for the ellipse, x2 + xy + y 2 = 9, we solve for x.
x = (1/2)(−y ± √3 √(12 − y²)).
In the second quadrant, the curve (−y − √3 √(12 − y²))/2 is defined on y ∈ [0 . . . √12] and the curve (−y + √3 √(12 − y²))/2 is defined on y ∈ [3 . . . √12]. (See Figure 5.12.)

Figure 5.12: (−y − √3 √(12 − y²))/2 in red and (−y + √3 √(12 − y²))/2 in green.

We find the volume obtained
by rotating the first curve and subtract the volume from rotating the second curve.
V = 2[∫₀^√12 π((y + √3 √(12 − y²))/2)² dy − ∫₃^√12 π((y − √3 √(12 − y²))/2)² dy]
= (π/2)[∫₀^√12 (36 − 2y² + 2√3 y √(12 − y²)) dy − ∫₃^√12 (36 − 2y² − 2√3 y √(12 − y²)) dy]
= (π/2)([36y − (2/3)y³ − (2√3/3)(12 − y²)^{3/2}]₀^√12 − [36y − (2/3)y³ + (2√3/3)(12 − y²)^{3/2}]₃^√12)
V = 72π
Now consider the volume of the solid obtained by rotating R about the x axis. This is the same as the volume of the solid obtained by rotating R about the y axis. Geometrically we know this because R is symmetric about the line y = x.
Now we justify it algebraically. Consider the phrase: Rotate the region x² + xy + y² ≤ 9 about the x axis. We formally swap x and y to obtain: Rotate the region y² + yx + x² ≤ 9 about the y axis. Which is the original problem.
Solution 5.6
We find the volume of the intersecting cylinders by summing the volumes of the two cylinders and then subtracting the volume of their intersection. The volume of each of the cylinders is 2π. The intersection is shown in Figure 5.13. If we slice this solid along the plane z = const we have a square with side length 2√(1 − z²). The volume of the intersection of the cylinders is
∫₋₁¹ 4(1 − z²) dz.

Figure 5.13: The intersection of the two cylinders.

We compute the volume of the intersecting cylinders.
V = 2(2π) − 2 ∫₀¹ 4(1 − z²) dz
V = 4π − 16/3
Solution 5.7
The length of f(x) is
L = ∫₁^∞ √(1 + 1/x⁴) dx.
Since √(1 + 1/x⁴) > 1/x, the integral diverges. The length is infinite.
We find the area of S by integrating the length of circles.
A = ∫₁^∞ (2π/x) √(1 + 1/x⁴) dx
Since the integrand is greater than 2π/x, this integral also diverges. The lateral area is infinite.
Finally we find the volume of S by integrating the area of disks.
V = ∫₁^∞ π/x² dx = [−π/x]₁^∞ = π
Solution 5.8
First we write the formula for the work required to move the oil to the surface. We integrate over the mass of the oil.
Work = ∫ (acceleration)(distance) d(mass)
Here (distance) is the distance of the differential of mass from the surface. The acceleration is that of gravity, g. The differential of mass can be represented as a differential of volume times the density of the oil, 800 kg/m³.
Work = ∫ 800 g (distance) d(volume)
We place the coordinate axis so that z = 0 coincides with the bottom of the cone. The oil lies between z = 0 and z = 12. The cross sectional area of the oil deposit at a fixed depth is πz². Thus the differential of volume is πz² dz. This oil must be raised a distance of 24 − z.
W = ∫₀¹² 800 g (24 − z) πz² dz
W = 6912000πg
W ≈ 2.13 × 10⁸ kg m²/s²
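A quick numerical confirmation of the work integral (our own sketch, taking g = 9.8 m/s²):

    import math
    from scipy.integrate import quad

    g = 9.8
    integral, _ = quad(lambda z: (24 - z) * z**2, 0, 12)
    W = 800 * math.pi * g * integral
    print(integral, W)  # 8640.0, about 2.13e8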
Solution 5.9
The Jacobian in spherical coordinates is r² sin φ.
area = ∫₀^{2π} ∫₀^π R² sin φ dφ dθ
= 2πR² ∫₀^π sin φ dφ
= 2πR² [−cos φ]₀^π
area = 4πR²
volume = ∫₀^R ∫₀^{2π} ∫₀^π r² sin φ dφ dθ dr
= 2π ∫₀^R ∫₀^π r² sin φ dφ dr
= 2π [r³/3]₀^R [−cos φ]₀^π
volume = (4/3)πR³
5.6 Quiz
Problem 5.1
What is the distance from the origin to the plane x + 2y + 3z = 4?
Problem 5.2
A bead of mass m slides frictionlessly on a wire determined parametrically by w(s). The bead moves
under the force of gravity. What is the acceleration of the bead as a function of the parameter s?
5.7 Quiz Solutions
Solution 5.1
Recall that the equation of a plane is x n = a n where a is a point in the plane and n is normal
to the plane. We are considering the plane x + 2y + 3z = 4. A normal to the plane is h1, 2, 3i. The
unit normal is
1
n = h1, 2, 3i.
15
By substituting in x = y = 0, we see that a point in the plane is a = h0, 0, 4/3i. The distance of the
plane from the origin is a n = 415 .
Solution 5.2
The force of gravity is gk. The unit tangent to the wire is w0 (s)/|w0 (s)|. The component of the
gravitational force in the tangential direction is gk w0 (s)/|w0 (s)|. Thus the acceleration of the
bead is
gk w0 (s)
.
m|w0 (s)|
113
114
Part III
115
Chapter 6
Complex Numbers
Im sorry. You have reached an imaginary number. Please rotate your phone 90 degrees and dial
again.
x2 + 1 = 0 has no solutions.
This is a little unsatisfactory. We can formally solve the general quadratic equation.
x2 + 2ax + b = 0
(x + a)2 = a2 b
p
x = a a2 b
However, the solutions aredefined only when the discriminant, a2 b is positive. This is because
the square root function, x, is a bijection from R0+ to R0+ . (See Figure 6.1.)
Figure 6.1: y = x
117
A New Mathematical Constant. We cannot solve x2 = 1 because 1 is not defined. To
overcome
this apparent shortcoming of the real number system, we create a new symbolic constant
1. Note
that we can express the square root of any negative real number in terms
of 1:
r = 1 r. Now we can express the solutions of x2 = 1 as x = 1 and x = 1. These
2 2
satisfy the equation since 1 = 1 and 1 = 1.
Eulers Notation. Euler introduced the notation of using the letter i to denote 1. We will
use the symbol , an i without a dot, to denote 1. This helps us distinguish it from i used as a
variable or index.1 We call any number of the form b, b R, a pure imaginary number.2 We call
numbers of the form a + b, where a, b R, complex numbers 3
The Quadratic. Now we return to the quadratic with real coefficients, x2 +2ax+b = 0. It has the
solutions x = a a2 b. The solutions are real-valued only if a2 b 0. If not, then
we can define
solutions as complex numbers. If the discriminant is negative, we write x = a b a2 . Thus
every quadratic polynomial with real coefficients has exactly two solutions, counting multiplicities.
The fundamental theorem of algebra states that an nth degree polynomial with complex coefficients
has n, not necessarily distinct, complex roots. We will prove this result later using the theory of
functions of a complex variable.
Component Operations. Consider the complex number z = x + y, (x, y R). The real part
of z is <(z) = x; the imaginary part of z is =(z) = y. Two complex numbers, z1 = x1 + y1 and
z2 = x2 + y2 , are equal if and only if x1 = x2 and y1 = y2 . The complex conjugate 4 of z = x + y is
z x y. The notation z x y is also used.
Field Properties. The set of complex numbers, C, form a field. That essentially means that we
can do arithmetic with complex numbers. We treat as a symbolic constant with the property that
2 = 1. The field of complex numbers satisfy the following properties: (Let z, z1 , z2 , z3 C.)
1. Closure under addition and multiplication.
z1 + z2 = (x1 + y1 ) + (x2 + y2 )
= (x1 + x2 ) + (y1 + y2 ) C
z1 z2 = (x1 + y1 ) (x2 + y2 )
= (x1 x2 y1 y2 ) + (x1 y2 + x2 y1 ) C
118
Properties of the Complex Conjugate. Using the field properties of complex numbers, we can
derive the following properties of the complex conjugate, z = x y.
1. (z) = z,
2. z + = z + ,
3. z = z,
z z
4. = .
Im(z)
(x,y)
r
Re(z)
Recall that there are two ways of describing a point in the complex plane: an ordered pair of
coordinates (x, y) that give the horizontal and vertical offset from the origin or the distance r from
the origin and the angle from the positive horizontal axis. The angle is not unique. It is only
determined up to an additive integer multiple of 2.
Modulus. The magnitude or moduluspof a complex number is the distance of the point from the
origin. It is defined as |z| = |x + y| = x2 + y 2 . Note that zz = (x + y)(x y) = x2 + y 2 = |z|2 .
The modulus has the following properties.
1. |z1 z2 | = |z1 | |z2 |
z1 |z1 |
2. = for z2 6= 0.
z2 |z2 |
3. |z1 + z2 | |z1 | + |z2 |
4. |z1 + z2 | ||z1 | |z2 ||
We could prove the first two properties by expanding in x + y form, but it would be fairly messy.
The proofs will become simple after polar form has been introduced. The second two properties
follow from the triangle inequalities in geometry. This will become apparent after the relationship
between complex numbers and vectors is introduced. One can show that
|z1 z2 zn | = |z1 | |z2 | |zn |
and
|z1 + z2 + + zn | |z1 | + |z2 | + + |zn |
with proof by induction.
119
Argument. The argument of a complex number is the angle that the vector with tail at the origin
and head at z = x+y makes with the positive x-axis. The argument is denoted arg(z). Note that the
argument is defined for all nonzero numbers and is only determined up to an additive integer multiple
of 2. That is, the argument of a complex number is the set of values: { + 2n | n Z}. The
principal argument of a complex number is that angle in the set arg(z) which lies in the range (, ].
The principal argument is denoted Arg(z). We prove the following identities in Exercise 6.10.
Example 6.2.1 Consider the equation |z 1 | = 2. The set of points satisfying this equation is a
circle of radius 2 and center at 1 + in the complex plane. You can see this by noting that |z 1 |
is the distance from the point (1, 1). (See Figure 6.3.)
-1 1 2 3
-1
|x + y 1 | = 2
p
(x 1)2 + (y 1)2 = 2
(x 1)2 + (y 1)2 = 4
This is the analytic geometry equation for a circle of radius 2 centered about (1, 1).
|z| + |z 2| = 4.
Note that |z| is the distance from the origin in the complex plane and |z 2| is the distance from
z = 2. The equation is
Fromgeometry, we know that this is an ellipse with foci at (0, 0) and (2, 0), major axis 2, and minor
axis 3. (See Figure 6.4.)
120
2
-1 1 2 3
-1
-2
|z| + |z 2| = 4
|x + y| + |x + y 2| = 4
p p
x2 + y 2 + (x 2)2 + y 2 = 4
p
x2 + y 2 = 16 8 (x 2)2 + y 2 + x2 4x + 4 + y 2
p
x 5 = 2 (x 2)2 + y 2
x2 10x + 25 = 4x2 16x + 16 + 4y 2
1 1
(x 1)2 + y 2 = 1
4 3
Thus we have the standard form for an equation describing an ellipse.
Im( z ) (x,y)
r r sin
Re(z )
r cos
The Arctangent. Note that arctan(x, y) is not the same thing as the old arctangent that you
learned about in trigonometry arctan(x, y) is sensitive to the quadrant of the point (x, y), while
arctan xy is not. For example,
3
arctan(1, 1) = + 2n and arctan(1, 1) = + 2n,
4 4
121
whereas
1 1
arctan = arctan = arctan(1).
1 1
Eulers Formula. Eulers formula, e = cos + sin ,5 allows us to write the polar form more
compactly. Expressing the polar form in terms of the exponential function of imaginary argument
makes arithmetic with complex numbers much more convenient.
z = r(cos + sin ) = r e
The exponential of an imaginary argument has all the nice properties that we know from studying
functions of a real variable, like ea eb = e(a+b) . Later on we will introduce the exponential of a
complex number.
Using Eulers Formula, we can express the cosine and sine in terms of the exponential.
Arithmetic With Complex Numbers. Note that it is convenient to add complex numbers in
Cartesian form.
(x1 + y1 ) + (x2 + y2 ) = (x1 + x2 ) + (y1 + y2 )
r1 e1 r2 e2 = r1 r2 e(1 +2 )
r1 e1 r1 (1 2 )
= e
r2 e2 r2
Keeping this in mind will make working with complex numbers a shade or two less grungy.
122
Result 6.3.1 Eulers formula is
e = cos + sin .
r e = r cos + r sin ,
p
x + y = x2 + y 2 e arctan(x,y) .
Cartesian form is convenient for addition. Polar form is convenient for multi-
plication and division.
n
By the definition of exponentiation, we have en = e We apply Eulers formula to obtain a
result which is useful in deriving trigonometric identities.
cos(n) + sin(n) = (cos() + sin())n
123
Example 6.3.3 We will express cos(5) in terms of cos and sin(5) in terms of sin . We start
with DeMoivres theorem. 5
e5 = e
{ + + 2n : n Z} = { + 2n : n Z} + { + 2n : n Z}
The same is not true of the principal argument. In general, Arg(z) 6= Arg(z) + Arg(). Consider
the case z = = e3/4 . Then Arg(z) = Arg() = 3/4, however, Arg(z) = /2.
124
z =(xy )+i(x+y )
=r e i(+)
z+ =(x+ )+i(y+ ) =+i=ei
z=x+iy z=x+iy =rei
=+i z=x+iy =re i
z=xiy
=re i(+ )
= e i
z=re i z=x+iy=re i
z=re i
_z = _r e i ()
_1 = e
1 i
z r
_
z=xiy=rei
6 No,I have no idea why we would want to do that. Just humor me. If you pretend that youre interested, Ill do
the same. Believe me, expressing your real feelings here isnt going to do anyone any good.
125
2n
the powers of the form 3+ by successive squaring.
2
3+ = 2 + 2 3
4
3+ = 8 + 8 3
8
3 + = 128 128 3
16
3+ = 32768 + 32768 3
4 16
Next we multiply 3+ and 3+ to obtain the answer.
20
3+ = 32768 + 32768 3 8 + 8 3 = 524288 524288 3
Since we know that arctan 3, 1 = /6, it is easiest to do this problem by first changing to
modulus-argument form.
r !20
20 2
3+ = 3 + 12 e arctan( 3,1)
20
= 2 e/6
= 220 e4/3
!
1 3
= 1048576
2 2
= 524288 524288 3
Example 6.5.2 Consider (5 + 7)11 . We will do the exponentiation in polar form and write the
result in Cartesian form.
11
(5 + 7)11 = 74 e arctan(5,7)
= 745 74(cos(11 arctan(5, 7)) + sin(11 arctan(5, 7)))
= 2219006624 74 cos(11 arctan(5, 7)) + 2219006624 74 sin(11 arctan(5, 7))
The result is correct, but not very satisfying. This expression could be simplified. You could evaluate
the trigonometric functions with some fairly messy trigonometric identities. This would take much
more work than directly multiplying (5 + 7)11 .
126
We can find these values by writing z in modulus-argument form.
zn = 1
rn en = 1
n
r =1 n = 0 mod 2
r=1 = 2k for k Z
n o
11/n = e2k/n | k Z
These values are equally spaced points on the unit circle in the complex plane.
-1 1
-1
The nth roots of the complex number c = e are the set of numbers z = r e such that
z n = c = e
rn en = e
n
r= n = mod 2
r= n
= ( + 2k)/n for k = 0, . . . , n 1.
Thus
n o np o
c1/n = n
e(+2k)/n | k = 0, . . . , n 1 = n |c| e(Arg(c)+2k)/n | k = 0, . . . , n 1
127
Thus the principal root has the property
/n < Arg n
z /n.
This is consistent with the notation from functions of a real variable: n x denotes the positive nth
root of a positive real number.
We adopt the convention that z 1/n denotes the nth roots of z, which
is a set of n numbers and z is the principal nth root of z, which is a single number. The nth roots
n
of z are the principal nth root of z times the nth roots of unity.
n o
z 1/n = n r e(Arg(z)+2k)/n | k = 0, . . . , n 1
n o
z 1/n = n z e2k/n | k = 0, . . . , n 1
z 1/n = n z11/n
Rational Exponents. We interpret z p/q to mean z (p/q) . That is, we first simplify the exponent,
i.e. reduce the fraction, before carrying out the exponentiation. Therefore z 2/4 = z 1/2 and z 10/5 =
z 2 . If p/q is a reduced fraction, (p and q are relatively prime, in other words, they have no common
factors), then
1/q
z p/q (z p ) .
Thus z p/q is a set of q values. Note that for an un-reduced fraction r/s,
r
1/s
(z r ) 6= z 1/s .
The former expression is a set of s values while the latter is a set of no more that s values. For
1/2 2
instance, 12 = 11/2 = 1 and 11/2 = (1)2 = 1.
1/3
(1 + )1/3 = 2 e/4
2 e/12 e2k/3 ,
6
= for k = 0, 1, 2
5/6
(2 + )5/6 = 5 e Arctan(2,1)
1/6
= 55 e5 Arctan(2,1)
12 5
= 55 e 6 Arctan(2,1) ek/3 , for k = 0, 1, 2, 3, 4, 5
128
6.7 Exercises
Complex Numbers
Exercise 6.1
If z = x + y, write the following in the form a + b:
1. (1 + 2)7
1
2.
zz
z + z
3.
(3 + )9
Exercise 6.2
Verify that:
1 + 2 2 2
1. + =
3 4 5 5
2. (1 )4 = 4
Exercise 6.3
Write the following complex numbers in the form a + b.
10
1. 1 + 3
2. (11 + 4)2
Exercise 6.4
Write the following complex numbers in the form a + b
2
2+
1.
6 (1 2)
2. (1 )7
Exercise 6.5
If z = x + y, write the following in the form u(x, y) + v(x, y).
z
1.
z
z + 2
2.
2 z
Exercise 6.6
Quaternions are sometimes used as a generalization of complex numbers. A quaternion u may be
defined as
u = u0 + u1 + u2 + ku3
where u0 , u1 , u2 and u3 are real numbers and , and k are objects which satisfy
2 = 2 = k 2 = 1, = k, = k
and the usual associative and distributive laws. Show that for any quaternions u, w there exists a
quaternion v such that
uv = w
except for the case u0 = u1 = u2 = u3 .
129
Exercise 6.7
Let 6= 0, 6= 0 be two complex numbers. Show that = t for some real number t (i.e. the
vectors defined by and are parallel) if and only if = = 0.
1. (1 + )1/3
2. 1/4
Exercise 6.9
Sketch the regions of the complex plane:
1. |<(z)| + 2|=(z)| 1
2. 1 |z | 2
3. |z | |z + |
Exercise 6.10
Prove the following identities.
Exercise 6.11
Show, both by geometric and algebraic arguments, that for complex numbers z1 and z2 the inequal-
ities
||z1 | |z2 || |z1 + z2 | |z1 | + |z2 |
hold.
Exercise 6.12
Find all the values of
1. (1)3/4
2. 81/6
Exercise 6.13
Find all values of
1. (1)1/4
2. 161/8
Exercise 6.14
Sketch the regions or curves described by
130
1. 1 < |z 2| < 2
2. |<(z)| + 5|=(z)| = 1
3. |z | = |z + |
Exercise 6.15
Sketch the regions or curves described by
1. |z 1 + | 1
2. <(z) =(z) = 5
3. |z | + |z + | = 1
Exercise 6.16
Solve the equation
| e 1| = 2
for (0 ) and verify the solution geometrically.
Polar Form
Exercise 6.17
Show that Eulers formula, e = cos + sin , is formally consistent with the standard Taylor series
expansions for the real functions ex , cos x and sin x. Consider the Taylor series of ex about x = 0 to
be the definition of the exponential function for complex argument.
Exercise 6.18
Use de Moivres formula to derive the trigonometric identity
Exercise 6.19
Establish the formula
1 z n+1
1 + z + z2 + + zn = , (z 6= 1),
1z
for the sum of a finite geometric series; then derive the formulas
1 sin((n + 1/2))
1. 1 + cos() + cos(2) + + cos(n) = +
2 2 sin(/2)
1 cos((n + 1/2))
2. sin() + sin(2) + + sin(n) = cot
2 2 2 sin(/2)
where 0 < < 2.
Exercise 6.21
Prove that
2 2 2 2
|z + | + |z | = 2 |z| + || .
131
Integer Exponents
Exercise 6.22
Write (1 + )10 in Cartesian form with the following two methods:
1. Just do the multiplication. If it takes you more than four multiplications, you suck.
2. Do the multiplication in polar form.
Rational Exponents
Exercise 6.23 1/2
Show that each of the numbers z = a + a2 b satisfies the equation z 2 + 2az + b = 0.
132
6.8 Hints
Complex Numbers
Hint 6.1
Hint 6.2
Hint 6.3
Hint 6.4
Hint 6.5
Hint 6.6
Hint 6.7
Hint 6.9
Hint 6.10
Write the multivaluedness explicitly.
Hint 6.11
Consider a triangle with vertices at 0, z1 and z1 + z2 .
Hint 6.12
Hint 6.13
Hint 6.14
Hint 6.15
Hint 6.16
Polar Form
133
Hint 6.17
Find the Taylor series of e , cos and sin . Note that 2n = (1)n .
Hint 6.18
Hint 6.19
Hint 6.21
Consider the parallelogram defined by z and .
Integer Exponents
Hint 6.22
For the first part,
2 2
(1 + )10 = (1 + )2 (1 + )2 .
Rational Exponents
Hint 6.23
Substitite the numbers into the equation.
134
6.9 Solutions
Complex Numbers
Solution 6.1
1. We can do the exponentiation by directly multiplying.
7
(1 + 2)7 = 5 e arctan(1,2)
= 125 5 e7 arctan(1,2)
= 125 5 cos(7 arctan(1, 2)) + 125 5 sin(7 arctan(1, 2))
2.
1 1
=
zz (x y)2
1 (x + y)2
=
(x y)2 (x + y)2
(x + y)2
= 2
(x + y 2 )2
x2 y 2 2xy
= 2 2 2
+ 2
(x + y ) (x + y 2 )2
z + z
= (y + x + x y)(3 + )9
(3 + )9
9
= (1 + )(x y) 10 e arctan(3,1)
1
= (1 + )(x y) e9 arctan(3,1)
10000 10
(1 + )(x y)
= (cos(9 arctan(3, 1)) sin(9 arctan(3, 1)))
10000 10
(x y)
= (cos(9 arctan(3, 1)) + sin(9 arctan(3, 1)))
10000 10
(x y)
+ (cos(9 arctan(3, 1)) sin(9 arctan(3, 1)))
10000 10
135
We can also do this problem by directly multiplying but its a little grungy.
z + z (y + x + x y)(3 )9
=
(3 + )9 109
2 2
(1 + )(x y)(3 ) (3 )2
=
109
2
2
(1 + )(x y)(3 ) (8 6)
=
109
(1 + )(x y)(3 )(28 96)2
=
109
(1 + )(x y)(3 )(8432 5376)
=
109
(x y)(22976 38368)
=
109
359(y x) 1199(y x)
= +
15625000 31250000
Solution 6.2
1.
1 + 2 2 1 + 2 3 + 4 2
+ = +
3 4 5 3 4 3 + 4 5
5 + 10 1 2
= +
25 5
2
=
5
2.
(1 )4 = (2)2 = 4
Solution 6.3
1. First we do the multiplication in Cartesian form.
10 2 8 1
1+ 3 = 1+ 3 1+ 3
4 1
= 2 + 2 3 2 + 2 3
2 1
= 2 + 2 3 8 8 3
1
= 2 + 2 3 128 + 128 3
1
= 512 512 3
1 1
=
512 1 + 3
1 1 1 3
=
512 1 + 3 1 3
1 3
= +
2048 2048
136
Now we do the multiplication in modulus-argument, (polar), form.
10 /3 10
1+ 3 = 2e
= 210 e10/3
1 10 10
= cos + sin
1024 3 3
1 4 4
= cos sin
1024 3 3
!
1 1 3
= +
1024 2 2
1 3
= +
2048 2048
2.
(11 + 4)2 = 105 + 88
Solution 6.4
1.
2 2
2+ 2+
=
6 (1 2) 1 + 8
3 + 4
=
63 16
3 + 4 63 + 16
=
63 16 63 + 16
253 204
=
4225 4225
2.
2
(1 )7 = (1 )2 (1 )2 (1 )
= (2)2 (2)(1 )
= (4)(2 2)
= 8 + 8
Solution 6.5
1.
z x + y
=
z x + y
x y
=
x + y
x + y
=
x y
x + y x + y
=
x y x + y
x2 y 2 2xy
= 2 + 2
x + y2 x + y2
137
2.
z + 2 x + y + 2
=
2 z 2 (x y)
x + (y + 2)
=
2 y x
x + (y + 2) 2 y + x
=
2 y x 2 y + x
x(2 y) (y + 2)x x2 + (y + 2)(2 y)
= +
(2 y)2 + x2 (2 y)2 + x2
2xy 4 + x y2
2
= +
(2 y)2 + x2 (2 y)2 + x2
Solution 6.6
Method 1. We expand the equation uv = w in its components.
uv = w
(u0 + u1 + u2 + ku3 ) (v0 + v1 + v2 + kv3 ) = w0 + w1 + w2 + kw3
+ (u0 u2 + u1 u3 u2 u0 u3 u1 ) + k (u0 u3 u1 u2 + u2 u1 u3 u0 )
= u20 + u21 + u22 + u23
138
Solution 6.7
If = t, then = t||2 , which is a real number. Hence = = 0.
Now assume that = = 0. This implies that = r for some r R. We multiply by and
simplify.
||2 = r
r
=
||2
r
By taking t = ||2 We see that = t for some real number t.
-1 1
-1
2.
1/4
1/4 = e/2
= e/8 11/4
= e/8 e2k/4 , k = 0, 1, 2, 3
n o
= e/8 , e5/8 , e9/8 , e13/8
139
1
-1 1
-1
Solution 6.9
1.
|<(z)| + 2|=(z)| 1
|x| + 2|y| 1
In the first quadrant, this is the triangle below the line y = (1 x)/2. We reflect this triangle
across the coordinate axes to obtain triangles in the other quadrants. Explicitly, we have the
set of points: {z = x + y | 1 x 1 |y| (1 |x|)/2}. See Figure 6.11.
1 1
2. |z | is the distance from the point in the complex plane. Thus 1 < |z | < 2 is an annulus
centered at z = between the radii 1 and 2. See Figure 6.12.
3. The points which are closer to z = than z = are those points in the upper half plane. See
Figure 6.13.
Solution 6.10
Let z = r e and = e .
1.
140
4
3
2
1
-3 -2 -1 1 2 3
-1
-2
1 1
2.
Arg(z) 6= Arg(z) + Arg()
Consider z = = 1. Arg(z) = Arg() = , however Arg(z) = Arg(1) = 0. The identity
becomes 0 6= 2.
3.
Solution 6.11
Consider a triangle in the complex plane with vertices at 0, z1 and z1 + z2 . (See Figure 6.14.)
z1+z2
|z2|
z1
|z | |z +z2|
1 1
141
The lengths of the sides of the triangle are |z1 |, |z2 | and |z1 + z2 | The second inequality shows
that one side of the triangle must be less than or equal to the sum of the other two sides.
The first inequality shows that the length of one side of the triangle must be greater than or equal
to the difference in the length of the other two sides.
Now we prove the inequalities algebraically. We will reduce the inequality to an identity. Let
z1 = r1 e1 , z2 = r2 e2 .
r12 + r22 2r1 r2 r12 + r22 + r1 r2 e(1 2 ) +r1 r2 e(1 +2 ) r12 + r22 + 2r1 r2
2r1 r2 2r1 r2 cos (1 2 ) 2r1 r2
1 cos (1 2 ) 1
Solution 6.12
1.
1/4
(1)3/4 = (1)3
= (1)1/4
1/4
= (e )
= e/4 11/4
= e/4 ek/2 , k = 0, 1, 2, 3
n o
= e/4 , e3/4 , e5/4 , e7/4
1 + 1 + 1 1
= , , ,
2 2 2 2
-1 1
-1
142
2.
81/6 = 811/6
6
= 2 ek/3 , k = 0, 1, 2, 3, 4, 5
n o
= 2, 2 e/3 , 2 e2/3 , 2 e , 2 e4/3 , 2 e5/3
( )
1 + 3 1 + 3 1 3 1 3
= 2, , , 2, ,
2 2 2 2
-2 -1 1 2
-1
-2
Solution 6.13
1.
2.
161/8 = 1611/8
8
= 2 ek/4 , k = 0, 1, 2, 3, 4, 5, 6, 7
n o
= 2, 2 e/4 , 2 e/2 , 2 e3/4 , 2 e , 2 e5/4 , 2 e3/2 , 2 e7/4
n o
= 2, 1 + , 2, 1 + , 2, 1 , 2, 1
143
1
-1 1
-1
-1 1
-1
5
4
3
2
1
-3 -2 -1 1 2 3
-1
Solution 6.14
1. |z 2| is the distance from the point 2 in the complex plane. Thus 1 < |z 2| < 2 is an
annulus. See Figure 6.19.
2.
|<(z)| + 5|=(z)| = 1
|x| + 5|y| = 1
In the first quadrant this is the line y = (1 x)/5. We reflect this line segment across the
coordinate axes to obtain line segments in the other quadrants. Explicitly, we have the set of
points: {z = x + y | 1 < x < 1 y = (1 |x|)/5}. See Figure 6.20.
144
0.4
0.2
-1 1
-0.2
-0.4
3. The set of points equidistant from and is the real axis. See Figure 6.21.
1
-1 1
-1
Figure 6.21: |z | = |z + |
Solution 6.15
1. |z 1 + | is the distance from the point (1 ). Thus |z 1 + | 1 is the disk of unit radius
centered at (1 ). See Figure 6.22.
-1 1 2 3
-1
-2
-3
2.
<(z) =(z) = 5
xy =5
y =x5
See Figure 6.23.
3. Since |z | + |z + | 2, there are no solutions of |z | + |z + | = 1.
145
5
-10 -5 5 10
-5
-10
-15
Solution 6.16
| e 1| = 2
e 1 e 1 = 4
1 e e +1 = 4
2 cos() = 2
=
e | 0 is a unit semi-circle in the upper half of the complex plane from 1 to 1. The
only point on this semi-circle that is a distance 2 from the point 1 is the point 1, which corresponds
to = .
Polar Form
Solution 6.17
We recall the Taylor series expansion of ex about x = 0.
x
X xn
e = .
n=0
n!
We take this as the definition of the exponential function for complex argument.
X ()n
e =
n=0
n!
n
X n
=
n=0
n!
X (1)n 2n X (1)n 2n+1
= +
n=0
(2n)! n=0
(2n + 1)!
We compare this expression to the Taylor series for the sine and cosine.
X (1)n 2n X (1)n 2n+1
cos = , sin = ,
n=0
(2n)! n=0
(2n + 1)!
Thus e and cos + sin have the same Taylor series expansions about = 0.
e = cos + sin
146
Solution 6.18
Solution 6.19
Define the partial sum,
n
X
Sn (z) = zk .
k=0
1 z n+1
Sn (z) =
1z
1 z n+1
1 + z + z2 + + zn = , (z 6= 1)
1z
Now consider z = e where 0 < < 2 so that z is not unity.
n n+1
X
k
1 e
e =
1 e
k=0
n
X 1 e(n+1)
ek =
1 e
k=0
In order to get sin(/2) in the denominator, we multiply top and bottom by e/2 .
n
X e/2 e(n+1/2)
(cos(k) + sin(k)) =
e/2 e/2
k=0
n n
X X cos(/2) sin(/2) cos((n + 1/2)) sin((n + 1/2))
cos(k) + sin(k) =
2 sin(/2)
k=0 k=0
n n
X X 1 sin((n + 1/2)) 1 cos((n + 1/2))
cos(k) + sin(k) = + + cot(/2)
2 sin(/2) 2 sin(/2)
k=0 k=1
1. We take the real and imaginary part of this to obtain the identities.
n
X 1 sin((n + 1/2))
cos(k) = +
2 2 sin(/2)
k=0
147
2.
n
X 1 cos((n + 1/2))
sin(k) = cot(/2)
2 2 sin(/2)
k=1
|z1 z2 | = |r1 e1 r2 e2 |
= |r1 r2 e(1 +2 ) |
= |r1 r2 |
= |r1 ||r2 |
= |z1 ||z2 |
z1 r1 e1
=
z2 r2 e2
r1 ( )
= e
1 2
r2
r1
=
r2
|r1 |
=
|r2 |
|z1 |
=
|z2 |
Solution 6.21
2 2
|z + | + |z | = (z + ) z + + (z ) z
= zz + z + z + + zz z z +
2 2
= 2 |z| + ||
Consider the parallelogram defined by the vectors z and . The lengths of the sides are z and
and the lengths of the diagonals are z + and z . We know from geometry that the sum of the
squared lengths of the diagonals of a parallelogram is equal to the sum of the squared lengths of the
four sides. (See Figure 6.24.)
z-
z
z+
148
Integer Exponents
Solution 6.22
1.
2 2
(1 + )10 = (1 + )2 (1 + )2
2
2
= (2) (2)
2
= (4) (2)
= 16(2)
= 32
2.
10
(1 + )10 = 2 e/4
10
= 2 e10/4
= 32 e/2
= 32
Rational Exponents
Solution 6.23
We substitite the numbers into the equation to obtain an identity.
z 2 + 2az + b = 0
1/2 2
1/2
a + a2 b + 2a a + a2 b +b=0
1/2 1/2
a2 2a a2 b + a2 b 2a2 + 2a a2 b
+b=0
0=0
149
150
Chapter 7
-Tim Mauch
In this chapter we introduce the algebra of functions of a complex variable. We will cover the
trigonometric and inverse trigonometric functions. The properties of trigonometric functions carry
over directly from real-variable theory. However, because of multi-valuedness, the inverse trigono-
metric functions are significantly trickier than their real-variable counterparts.
Curves. Consider two continuous functions, x(t) and y(t), defined on the interval t [t0 . . . t1 ].
The set of points in the complex plane
defines a continuous curve or simply a curve. If the endpoints coincide, z (t0 ) = z (t1 ), it is a closed
curve. (We assume that t0 6= t1 .) If the curve does not intersect itself, then it is said to be a simple
curve.
If x(t) and y(t) have continuous derivatives and the derivatives do not both vanish at any point1
, then it is a smooth curve. This essentially means that the curve does not have any corners or other
nastiness.
A continuous curve which is composed of a finite number of smooth curves is called a piecewise
smooth curve. We will use the word contour as a synonym for a piecewise smooth curve.
See Figure 7.1 for a smooth curve, a piecewise smooth curve, a simple closed curve and a non-
simple closed curve.
Regions. A region R is connected if any two points in R can be connected by a curve which
lies entirely in R. A region is simply-connected if every closed curve in R can be continuously
shrunk to a point without leaving R. A region which is not simply-connected is said to be multiply-
connected region. Another way of defining simply-connected is that a path connecting two points in
R can be continuously deformed into any other path that connects those points. Figure 7.2 shows a
simply-connected region with two paths which can be continuously deformed into one another and
a multiply-connected region with paths which cannot be deformed into one another.
1 Why is it necessary that the derivatives do not both vanish?
151
(a) (b) (c) (d)
Figure 7.1: (a) Smooth Curve, (b) Piecewise Smooth Curve, (c) Simple Closed Curve, (d) Non-
Simple Closed Curve
Jordan Curve Theorem. A continuous, simple, closed curve is known as a Jordan curve. The
Jordan Curve Theorem, which seems intuitively obvious but is difficult to prove, states that a Jordan
curve divides the plane into a simply-connected, bounded region and an unbounded region. These
two regions are called the interior and exterior regions, respectively. The two regions share the curve
as a boundary. Points in the interior are said to be inside the curve; points in the exterior are said
to be outside the curve.
Traversal of a Contour. Consider a Jordan curve. If you traverse the curve in the positive
direction, then the inside is to your left. If you traverse the curve in the opposite direction, then
the outside will be to your left and you will go around the curve in the negative direction. For
circles, the positive direction is the counter-clockwise direction. The positive direction is consistent
with the way angles are measured in a right-handed coordinate system, i.e. for a circle centered on
the origin, the positive direction is the direction of increasing angle. For an oriented contour C, we
denote the contour with opposite orientation as C.
Two Interpretations of a Curve. Consider a simple closed curve as depicted in Figure 7.4a. By
giving it an orientation, we can make a contour that either encloses the bounded domain Figure 7.4b
or the unbounded domain Figure 7.4c. Thus a curve has two interpretations. It can be thought of
as enclosing either the points which are inside or the points which are outside.2
2 A farmer wanted to know the most efficient way to build a pen to enclose his sheep, so he consulted an engineer,
a physicist and a mathematician. The engineer suggested that he build a circular pen to get the maximum area for
any given perimeter. The physicist suggested that he build a fence at infinity and then shrink it to fit the sheep. The
mathematician constructed a little fence around himself and then defined himself to be outside.
152
Figure 7.3: Traversing the boundary in the positive direction.
Stereographic Projection. We can visualize the point at infinity with the stereographic projec-
tion. We place a unit sphere on top of the complex plane so that the south pole of the sphere is at
the origin. Consider a line passing through the north pole and a point z = x + y in the complex
plane. In the stereographic projection, the point point z is mapped to the point where the line
intersects the sphere. (See Figure 7.5.) Each point z = x + y in the complex plane is mapped to a
unique point (X, Y, Z) on the sphere.
4x 4y 2|z|2
X= , Y = , Z=
|z|2 + 4 |z|2 + 4 |z|2 + 4
The origin is mapped to the south pole. The point at infinity, |z| = , is mapped to the north pole.
In the stereographic projection, circles in the complex plane are mapped to circles on the unit
sphere. Figure ?? shows circles along the real and imaginary axes under the mapping. Lines in the
complex plane are also mapped to circles on the unit sphere. The right diagram in Figure ?? shows
lines emanating from the origin under the mapping.
153
y
154
7.3 Cartesian and Modulus-Argument Form
We can write a function of a complex variable z as a function of x and y or as a function of r and
with the substitutions z = x + y and z = r e , respectively. Then we can separate the real and
imaginary components or write the function in modulus-argument form,
f (z) = u(x, y) + v(x, y), or f (z) = u(r, ) + v(r, ),
f (z) = (x, y) e(x,y) , or f (z) = (r, ) e(r,) .
1
Example 7.3.1 Consider the functions f (z) = z, f (z) = z 3 and f (z) = 1z . We write the functions
in terms of x and y and separate them into their real and imaginary components.
f (z) = z
= x + y
f (z) = z 3
= (x + y)3
= x3 + x2 y xy 2 y 3
= x3 xy 2 + x2 y y 3
1
f (z) =
1z
1
=
1 x y
1 1 x + y
=
1 x y 1 x + y
1x y
= 2 2
+
(1 x) + y (1 x)2 + y 2
1
Example 7.3.2 Consider the functions f (z) = z, f (z) = z 3 and f (z) = 1z . We write the functions
in terms of r and and write them in modulus-argument form.
f (z) = z
= r e
f (z) = z 3
3
= r e
= r3 e3
1
f (z) =
1z
1
=
1 r e
1 1
=
1 r e 1 r e
1 r e
=
1 r e r e +r2
1 r cos + r sin
=
1 2r cos + r2
155
Note that the denominator is real and non-negative.
1
= |1 r cos + r sin | e arctan(1r cos ,r sin )
1 2r cos + r2
1
q
= (1 r cos )2 + r2 sin2 e arctan(1r cos ,r sin )
1 2r cos + r2
1 p
= 1 2r cos + r2 cos2 + r2 sin2 e arctan(1r cos ,r sin )
1 2r cos + r2
1
= e arctan(1r cos ,r sin )
1 2r cos + r2
Example 7.4.1 Consider the identity function, f (z) = z. In Cartesian coordinates and Cartesian
form, the function is f (z) = x + y. The real and imaginary components are u(x, y) = x and
v(x, y) = y. (See Figure 7.7.) In modulus argument form the function is
2 2
1 2 1 2
0 0
-1 1 -1 1
-2 -2
-2 0 y -2 0 y
-1 -1
-1 -1
x0 1 x0 1
2-2 2-2
p
f (z) = z = r e = x2 + y 2 e arctan(x,y) .
The modulus of f (z) is a single-valued function which is the distance from the origin. The argument
of f (z) is a multi-valued function. Recall that arctan(x, y) has an infinite number of values each of
which differ by an integer multiple of 2. A few branches of arg(f (z)) are plotted in Figure 7.8. The
y 12
0
-1
-2
5
0
-5
-2
-1
0
x 1
2
156
2 2 2 2
1 0
0 1 -2 1
-2 0y -2 0y
-1 -1 -1 -1
0 0
x 1 x 1
2-2 2 -2
Example 7.4.2 Consider the function f (z) = z 2 . In Cartesian coordinates and separated into its
real and imaginary components the function is
Figure 7.10 shows surface plots of the real and imaginary parts of z 2 . The magnitude of z 2 is
4
2 5
2 2
0 0
-2 1 -5 1
-4
-2 0 y -2 0 y
-1 -1
0 -1 0 -1
x 1 x 1
2 -2 2 -2
Figure 7.10: Plots of < z 2 and = z 2
p
|z 2 | = z 2 z 2 = zz = (x + y)(x y) = x2 + y 2 .
Note that 2
z 2 = r e
= r2 e2 .
In Figure 7.11 are plots of |z 2 | and a branch of arg z 2 .
8 5
6 2 2
4 0
2 1 1
0 -5
-2 0 y -2 0 y
-1 -1
0 -1 0 -1
x 1 x 1
2 -2 2 -2
Figure 7.11: Plots of |z 2 | and a branch of arg z 2
157
7.5 Trigonometric Functions
The Exponential Function. Consider the exponential function ez . We can use Eulers formula
to write ez = ex+y in terms of its real and imaginary parts.
From this we see that the exponential function is 2 periodic: ez+2 = ez , and odd periodic:
ez+ = ez . Figure 7.12 has surface plots of the real and imaginary parts of ez which show this
periodicity.
20 20
10 10
0 5 0 5
-10 -10
-20 -20
0 y 0 y
-2 -2
0 0
x -5 x -5
2 2
|ez | = ex+y = ex
20 5
15 5 0 5
10
5 -5
0
0 y 0 y
-2 -2
x0 -5 x0 -5
2 2
Example 7.5.1 Show that the transformation w = ez maps the infinite strip, < x < ,
0 < y < , onto the upper half-plane.
Method 1. Consider the line z = x + c, < x < . Under the transformation, this is
mapped to
w = ex+c = ec ex , < x < .
This is a ray from the origin to infinity in the direction of ec . Thus we see that z = x is mapped to
the positive, real w axis, z = x + is mapped to the negative, real axis, and z = x + c, 0 < c <
158
3 3
2 2
1 1
-3 -2 -1 1 2 3 -3 -2 -1 1 2 3
is mapped to a ray with angle c in the upper half-plane. Thus the strip is mapped to the upper
half-plane. See Figure 7.14.
Method 2. Consider the line z = c + y, 0 < y < . Under the transformation, this is mapped
to
w = ec+y + ec ey , 0 < y < .
This is a semi-circle in the upper half-plane of radius ec . As c , the radius goes to zero.
As c , the radius goes to infinity. Thus the strip is mapped to the upper half-plane. See
Figure 7.15.
3 3
2 2
1 1
-1 1 -3 -2 -1 1 2 3
The Sine and Cosine. We can write the sine and cosine in terms of the exponential function.
ez + ez cos(z) + sin(z) + cos(z) + sin(z)
=
2 2
cos(z) + sin(z) + cos(z) sin(z)
=
2
= cos z
We separate the sine and cosine into their real and imaginary parts.
cos z = cos x cosh y sin x sinh y sin z = sin x cosh y + cos x sinh y
For fixed y, the sine and cosine are oscillatory in x. The amplitude of the oscillations grows with
increasing |y|. See Figure 7.16 and Figure 7.17 for plots of the real and imaginary parts of the cosine
and sine, respectively. Figure 7.18 shows the modulus of the cosine and the sine.
159
5 5
2.5 2.5
0 2 0 2
-2.5 1 -2.5 1
-5 -5
0 y 0 y
-2 -2
0 -1 0 -1
x x
2 -2 2 -2
5 5
2.5 2.5
0 2 0 2
-2.5 1 -2.5 1
-5 -5
0 y 0 y
-2 -2
0 -1 0 -1
x x
2 -2 2 -2
4 4
2 2
2 2
1 1
0
0 y 0 y
-2 -2
0 -1 0 -1
x x
2 -2 2 -2
160
The Hyperbolic Sine and Cosine. The hyperbolic sine and cosine have the familiar definitions
in terms of the exponential function. Thus not surprisingly, we can write the sine in terms of the
hyperbolic sine and write the cosine in terms of the hyperbolic cosine. Below is a collection of
trigonometric identities.
Result 7.5.1
ez = ex (cos y + sin y)
ez + ez ez ez
cos z = sin z =
2 2
cos z = cos x cosh y sin x sinh y sin z = sin x cosh y + cos x sinh y
ez + ez ez ez
cosh z = sinh z =
2 2
cosh z = cosh x cos y + sinh x sin y sinh z = sinh x cos y + cosh x sin y
sin(z) = sinh z sinh(z) = sin z
cos(z) = cosh z cosh(z) = cos z
log z = ln |z| + arg(z) = ln |z| + Arg(z) + 2n, n Z
This is because elog z is single-valued but log (ez ) is not. Because ez is 2 periodic, the logarithm
of a number is a set of numbers which differ by integer multiples of 2. For instance, e2n = 1 so
that log(1) = {2n : n Z}. The logarithmic function has an infinite number of branches. The
value of the function on the branches differs by integer multiples of 2. It has singularities at zero
and infinity. | log(z)| as either z 0 or z .
We will derive the formula for the complex variable logarithm. For now, let ln(x) denote the real
variable logarithm that is defined for positive real numbers. Consider w = log z. This means that
ew = z. We write w = u + v in Cartesian form and z = r e in polar form.
eu+v = r e
If we write out the multi-valuedness of the argument function we note that this has the form that
we expected.
log z = ln |z| + (Arg(z) + 2n), n Z
We check that our formula is correct by showing that elog z = z
elog z = eln |z|+ arg(z) = eln r++2n = r e = z
161
Note again that log (ez ) 6= z.
log (ez ) = ln | ez | + arg (ez ) = ln (ex ) + arg ex+y = x + (y + 2n) = z + 2n 6= z
The real part of the logarithm is the single-valued ln r; the imaginary part is the multi-valued
arg(z). We define the principal branch of the logarithm Log z to be the branch that satisfies <
=(Log z) . For positive, real numbers the principal branch, Log x is real-valued. We can write
Log z in terms of the principal argument, Arg z.
Log z = ln |z| + Arg(z)
See Figure 7.19 for plots of the real and imaginary part of Log z.
1
2
0 2 2
0
-1 1 1
-2
-2
-2 0 y -2 0 y
-1 -1
0 -1 0 -1
x 1 x 1
2 -2 2 -2
The Form: ab . Consider ab where a and b are complex and a is nonzero. We define this expression
in terms of the exponential and the logarithm as
ab = eb log a .
Note that the multi-valuedness of the logarithm may make ab multi-valued. First consider the case
that the exponent is an integer.
am = em log a = em(Log a+2n) = em Log a e2mn = em Log a
Thus we see that am has a single value where m is an integer.
Now consider the case that the exponent is a rational number. Let p/q be a rational number in
reduced form. p p p
ap/q = e q log a = e q (Log a+2n) = e q Log a e2np/q .
This expression has q distinct values as
e2np/q = e2mp/q if and only if n = m mod q.
Finally consider the case that the exponent b is an irrational number.
ab = eb log a = eb(Log a+2n) = eb Log a e2bn
Note that e2bn and e2bm are equal if and only if 2bn and 2bm differ by an integer multiple
of 2, which means that bn and bm differ by an integer. This occurs only when n = m. Thus
e2bn has a distinct value for each different integer n. We conclude that ab has an infinite number
of values.
You may have noticed something a little fishy. If b is not an integer and a is any non-zero complex
number, then ab is multi-valued. Then why have we been treating eb as single-valued, when it is
merely the case a = e? The answer is that in the realm of functions of a complex variable, ez is an
abuse of notation. We write ez when we mean exp(z), the single-valued exponential function. Thus
when we write ez we do not mean the number e raised to the z power, we mean the exponential
function of z. We denote the former scenario as (e)z , which is multi-valued.
162
Logarithmic Identities. Back in high school trigonometry when you thought that the logarithm
was only defined for positive real numbers you learned the identity log xa = a log x. This identity
doesnt hold when the logarithm is defined for nonzero complex numbers. Consider the logarithm
of z a .
log z a = Log z a + 2n
a log z = a(Log z + 2n) = a Log z + 2an
Note that
log z a 6= a log z
Furthermore, since
Log z a 6= a Log z.
There is not an analogous identity for the principal branch of the logarithm since Arg(ab) is not in
general the same as Arg(a) + Arg(b). Pn
Using log(ab) = log(a) + log(b) we can deduce that log (an ) = k=1 log a = n log a, where n is a
positive integer. This result is simple,
straightforward and wrong. I have led you down the merry
path to damnation.3 In fact, log a2 6= 2 log a. Just write the multi-valuedness explicitly,
For general values of a, log z a 6= a log z. However, for some values of a, equality holds. We already
know that a = 1 and a = 1 work. To determine if equality holds for other values of a, we explicitly
write the multi-valuedness.
The sets are equal if and only if a = 1/n, n Z . Thus we have the identity:
1
log z 1/n = log z, n Z
n
3 Dont feel bad if you fell for it. The logarithm is a tricky bastard.
163
Result 7.6.1 Logarithmic Identities.
ab = eb log a
elog z = eLog z = z
log(ab) = log a + log b
log(1/a) = log a
log(a/b) = log a log b
1
log z 1/n = log z, n Z
n
Logarithmic Inequalities.
Log(uv) 6= Log(u) + Log(v)
log z a 6= a log z
Log z a 6= a Log z
log ez 6= z
ez ez
sin z = =0
2
ez = ez
e2z = 1
2z mod 2 = 0
z = n, nZ
Equivalently, we could use the identity
sin z = sin x cosh y + cos x sinh y = 0.
This becomes the two equations (for the real and imaginary parts)
sin x cosh y = 0 and cos x sinh y = 0.
Since cosh is real-valued and positive for real argument, the first equation dictates that x = n,
n Z. Since cos(n) = (1)n for n Z, the second equation implies that sinh y = 0. For real
argument, sinh y is only zero at y = 0. Thus the zeros are
z = n, nZ
164
Example 7.6.3 Since we can express sin z in terms of the exponential function, one would expect
that we could express the sin1 z in terms of the logarithm.
w = sin1 z
z = sin w
ew ew
z=
2
e2w 2z ew 1 = 0
p
ew = z 1 z 2
p
w = log z 1 z 2
p
sin1 z = log z 1 z 2
sin3 z = 1
sin z = 11/3
ez ez
= 11/3
2
ez 2(1)1/3 ez = 0
e2z 2(1)1/3 ez 1 = 0
p
2(1)1/3 4(1)2/3 + 4
ez =
q2
ez = (1)1/3 1 (1)2/3
p
z = log (1)1/3 1 12/3
Note that there are three sources of multi-valuedness in the expression for z. The two values of the
square root are shown explicitly. There are three cube roots of unity. Finally, the logarithm has an
infinite number of branches. To show this multi-valuedness explicitly, we could write
p
z = Log e2m/3 1 e4m/3 + 2n, m = 0, 1, 2, n = . . . , 1, 0, 1, . . .
z = 1
z
e/2 = 1
165
Use the fact that z is an integer.
ez/2 = 1
z/2 = 2n, for some n Z
z = 4n, nZ
Here is a different approach. We write down the multi-valued form of z . We solve the equation
by requiring that all the values of z are 1.
z = 1
ez log = 1
z log = 2n, for some n Z
z + 2m = 2n, m Z, for some n Z
2
z + 2mz = 2n, m Z, for some n Z
2
The only solutions that satisfy the above equation are
z = 4k, k Z.
Now lets consider a slightly different problem: 1 z . For what values of z does z have 1 as
one of its values.
1 z
1 ez log
1 {ez(/2+2n) | n Z}
z(/2 + 2n) = 2m, m, n Z
4m
z= , m, n Z
1 + 4n
There are an infinite set of rational numbers for which z has 1 as one of its values. For example,
n o
4/5 = 11/5 = 1, e2/5 , e4/5 , e6/5 , e8/5
This multi-valuedness makes it hard to work with the logarithm. We would like to select one of
the branches of the logarithm. One way of doing this is to decompose the z-plane into an infinite
number of sheets. The sheets lie above one another and are labeled with the integers, n Z. (See
Figure 7.20.) We label the point z on the nth sheet as (z, n). Now each point (z, n) maps to a single
point in the w-plane. For instance, we can make the zeroth sheet map to the principal branch of
the logarithm. This would give us the following mapping.
log(z, n) = Log z + 2n
This is a nice idea, but it has some problems. The mappings are not continuous. Consider the
mapping on the zeroth sheet. As we approach the negative real axis from above z is mapped to
166
2
1
0
-1
-2
ln |z| + as we approach from below it is mapped to ln |z| . (Recall Figure 7.19.) The mapping
is not continuous across the negative real axis.
Lets go back to the regular z-plane for a moment. We start at the point z = 1 and selecting
the branch of the logarithm that maps to zero. (log(1) = 2n). We make the logarithm vary
continuously as we walk around the origin once in the positive direction and return to the point
z = 1. Since the argument of z has increased by 2, the value of the logarithm has changed to 2.
If we walk around the origin again we will have log(1) = 4. Our flat sheet decomposition of the
z-plane does not reflect this property. We need a decomposition with a geometry that makes the
mapping continuous and connects the various branches of the logarithm.
Drawing inspiration from the plot of arg(z), Figure 7.8, we decompose the z-plane into an infinite
corkscrew with axis at the origin. (See Figure 7.21.) We define the mapping so that the logarithm
varies continuously on this surface. Consider a point z on one of the sheets. The value of the
logarithm at that same point on sheet directly above it is 2 more than the original value. We call
this surface, the Riemann surface for the logarithm. The mapping from the Riemann surface to the
w-plane is continuous and one-to-one.
167
Figure 7.22 shows the real and imaginary parts of z 1/2 from three different viewpoints. The second
and third views are looking down the x axis and y axis, respectively. Consider < z 1/2 . This is a
double layered sheet which intersects itself on the negative real axis. (=(z 1/2 ) has a similar structure,
but intersects itself on the positive real axis.) Lets start at a point on the positive real axis on the
lower sheet. If we walk around the origin once and return to the positive real axis, we will be on the
upper sheet. If we do this again, we will return to the lower sheet.
Suppose we are at a point in the complex plane. We pick one of the two values of z 1/2 . If the
function varies continuously as we walk around the origin and back to our starting point, the value
of z 1/2 will have changed. We will be on the other branch. Because walking around the point z = 0
takes us to a different branch of the function, we refer to z = 0 as a branch point.
1 1
2 2
0 0
-1 1 -1 1
-2 0 y -2 0 y
-1 -1
0 -1 0 -1
x 1 x 1
2 -2 2 -2
-20121
-1 0121
-1
-2
x x
0 0
-1 -1
-2 -1 0 1 2 -2 -1 0 1 2
y y
1210-1
-2 1210-1
-2
y y
0 0
-1 -1
2 1 0 -1 -2 2 1 0 -1 -2
x x
Figure 7.22: Plots of < z 1/2 (left) and = z 1/2 (right) from three viewpoints.
168
1 2 2 2
0.5 0
0 1 -2 1
-2-1 0y -2 0y
0 -1 -1 -1
x 1 0
2 -2 x 1
2 -2
Figure 7.23: Plots of |z 1/2 | and Arg z 1/2 .
Im(z) Im(z)
Re(z) Re(z)
Figure 7.24: A path that does not encircle the origin and a path around the origin
As we return to the point z = 1, the argument of the function has changed by and the value of the
function has changed from 1 to 1. If we were to walk along the circular path again, the argument
of z would increase by another 2. The argument of the function would increase by another and
the value of the function would return to 1.
1/2
e4 = e2 = 1
In general, any time we walk around the origin, the value of z 1/2 changes by the factor 1. We
call z = 0 a branch point. If we want a single-valued square root, we need something to prevent
us from walking around the origin. We achieve this by introducing a branch cut. Suppose we have
the complex plane drawn on an infinite sheet of paper. With a scissors we cut the paper from the
origin to along the real axis. Then if we start at z = e0 , and draw a continuous line without
leaving the paper, the argument of z will always be in the range < arg z < . This means
that 2 < arg z 1/2 < 2 . No matter what path we follow in this cut plane, z = 1 has argument
zero and (1)1/2 = 1. By never crossing the negative real axis, we have constructed a single valued
branch of the square root function. We call the cut along the negative real axis a branch cut.
Example 7.8.2 Consider the logarithmic function log z. For each value of z, there are an infinite
number of values of log z. We write log z in Cartesian form.
Figure 7.25 shows the real and imaginary parts of the logarithm. The real part is single-valued. The
imaginary part is multi-valued and has an infinite number of branches. The values of the logarithm
form an infinite-layered sheet. If we start on one of the sheets and walk around the origin once in
169
1
0 2 5 2
-1 0
1 -5 1
-2
-2 0 y -2 0 y
-1 -1
0 -1 0 -1
x 1 x 1
2-2 2 -2
the positive direction, then the value of the logarithm increases by 2 and we move to the next
branch. z = 0 is a branch point of the logarithm.
The logarithm is a continuous function except at z = 0. Suppose we start at z = 1 = e0 and the
function value log e0 = ln(1) + 0 = 0. If we follow the first path in Figure 7.24, the argument of
z and thus the imaginary part of the logarithm varies from up to about 4 , down to about 4 and
back to 0. The value of the logarithm is still 0.
Now suppose we follow a circular path around the origin in the positive direction. (See the second
path in Figure 7.24.) The argument of z increases by 2. The value of the logarithm at half turns
on the path is
log e0 = 0,
log (e ) = ,
log e2 = 2
As we return to the point z = 1, the value of the logarithm has changed by 2. If we were to walk
along the circular path again, the argument of z would increase by another 2 and the value of the
logarithm would increase by another 2.
Branch Points at Infinity : Mapping Infinity to the Origin. Up to this point we have
considered only branch points in the finite plane. Now we consider the possibility of a branch point
at infinity. As a first method of approaching this problem we map the point at infinity to the origin
with the transformation = 1/z and examine the point = 0.
Example 7.8.3 Again consider the function z 1/2 . Mapping the point at infinity to the origin, we
have f () = (1/)1/2 = 1/2 . For each value of , there are two values of 1/2 . We write 1/2 in
modulus-argument form.
1
1/2 = p e arg()/2
||
Like z 1/2 , 1/2 has a double-layered sheet of values. Figure 7.26 shows the modulus and the
principal argument of 1/2 . We see that each time we walk around the origin, the argument of
1/2 changes by . This means that the value of the function changes by the factor e = 1,
i.e. the function changes sign. If we walk around the origin twice, the argument changes by 2,
so that the value of the function does not change, e2 = 1.
Since 1/2 has a branch point at zero, we conclude that z 1/2 has a branch point at infinity.
170
3
2.5 2
2 2 0 2
1.5 1 1
1 -2
-2 0 y -2 0 y
-1 -1
0 -1 0 -1
x 1 x 1
2 -2
2 -2
Example 7.8.4 Again consider the logarithmic function log z. Mapping the point at infinity to
the origin, we have f () = log(1/) = log(). From Example 7.8.2 we known that log() has a
branch point at = 0. Thus log z has a branch point at infinity.
Branch Points at Infinity : Paths Around Infinity. We can also check for a branch point at
infinity by following a path that encloses the point at infinity and no other singularities. Just draw
a simple closed curve that separates the complex plane into a bounded component that contains all
the singularities of the function in the finite plane. Then, depending on orientation, the curve is a
contour enclosing all the finite singularities, or the point at infinity and no other singularities.
Example 7.8.5 Once again consider the function z 1/2 . We know that the function changes value
on a curve that goes once around the origin. Such a curve can be considered to be either a path
around the origin or a path around infinity. In either case the path encloses one singularity. There
are branch points at the origin and at infinity. Now consider a curve that does not go around the
origin. Such a curve can be considered to be either a path around neither of the branch points or
both of them. Thus we see that z 1/2 does not change value when we follow a path that encloses
neither or both of its branch points.
1/2
Example 7.8.6 Consider f (z) = z 2 1 . We factor the function.
Since f 1 does not have a branch point at = 0, f (z) does not have a branch point at infinity.
We could reach the same conclusion by considering a path around infinity. Consider a path that
circles the branch points at z = 1 once in the positive direction. Such a path circles the point at
infinity once in the negative direction. In traversing this path, the value of f (z) is multiplied by the
1/2 2 1/2
factor e2 e = e2 = 1. Thus the value of the function does not change. There is no
branch point at infinity.
Diagnosing Branch Points. We have the definition of a branch point, but we do not have a
convenient criterion for determining if a particular function has a branch point. We have seen that
log z and z for non-integer have branch points at zero and infinity. The inverse trigonometric
functions like the arcsine also have branch points, but they can be written in terms of the logarithm
and the square root. In fact all the elementary functions with branch points can be written in terms
171
of the functions log z and z . Furthermore, note that the multi-valuedness of z comes from the
logarithm, z = e log z . This gives us a way of quickly determining if and where a function may
have branch points.
Result 7.8.2 Let f (z) be a single-valued function. Then log(f (z)) and
(f (z)) may have branch points only where f (z) is zero or singular.
1. 1/2
z2 = z 2 = z
Because of the ()1/2 , the function is multi-valued. The only possible branch points are at zero
2 1/2 2 1/2 1/2
and infinity. If e0 = 1, then e2 = e4 = e2 = 1. Thus we see that
the function does not change value when we walk around the origin. We can also consider this
to be a path around infinity. This function is multi-valued, but has no branch points.
2. 2 2
z 1/2 = z =z
1 1
Example 7.8.8 Consider the function f (z) = log z1 . Since z1 is only zero at infinity and its
only singularity is at z = 1, the only possibilities for branch points are at z = 1 and z = . Since
1
log = log(z 1)
z1
and log w has branch points at zero and infinity, we see that f (z) has branch points at z = 1 and
z = .
172
Are they multi-valued? Do they have branch points?
1.
elog z = exp(Log z + 2n) = eLog z e2n = z
This function is single-valued.
2.
log ez = Log ez +2n = z + 2m
This function is multi-valued. It may have branch points only where ez is zero or infinite. This
only occurs at z = . Thus there are no branch points in the finite plane. The function does
not change when traversing a simple closed path. Since this path can be considered to enclose
infinity, there is no branch point at infinity.
Consider (f (z)) where f (z) is single-valued and f (z) has either a zero or a singularity at z = z0 .
(f (z)) may have a branch point at z = z0 . If f (z) is not a power of z, then it may be difficult to
tell if (f (z)) changes value when we walk around z0 . Factor f (z) into f (z) = g(z)h(z) where h(z)
is nonzero and finite at z0 . Then g(z) captures the important behavior of f (z) at the z0 . g(z) tells
us how fast f (z) vanishes or blows up. Since (f (z)) = (g(z)) (h(z)) and (h(z)) does not have a
branch point at z0 , (f (z)) has a branch point at z0 if and only if (g(z)) has a branch point there.
Similarly, we can decompose
to see that log(f (z)) has a branch point at z0 if and only if log(g(z)) has a branch point there.
Result 7.8.3 Consider a single-valued function f (z) that has either a zero or
a singularity at z = z0 . Let f (z) = g(z)h(z) where h(z) is nonzero and finite.
(f (z)) has a branch point at z = z0 if and only if (g(z)) has a branch point
there. log(f (z)) has a branch point at z = z0 if and only if log(g(z)) has a
branch point there.
1. sin z 1/2
2. (sin z)1/2
1.
sin z 1/2 = sin z = sin z
sin z 1/2 is multi-valued. It has two branches. There may be branch points at zero and infinity.
0
1/2
Consider the unit circle which is a path around the origin or infinity. If sin e = sin(1),
1/2
then sin e2 = sin (e ) = sin(1) = sin(1). There are branch points at the origin
and infinity.
2.
(sin z)1/2 = sin z
173
The function is multi-valued with two branches. The sine vanishes at z = n and is singular
at infinity. There could be branch points at these locations. Consider the point z = n. We
can write
sin z
sin z = (z n)
z n
sin z
Note that zn is nonzero and has a removable singularity at z = n.
sin z cos z
lim = lim = (1)n
zn z n zn 1
Since (z n)1/2 has a branch point at z = n, (sin z)1/2 has branch points at z = n.
Since the branch points at z = n go all the way out to infinity. It is not possible to make a
path that encloses infinity and no other singularities. The point at infinity is a non-isolated
singularity. A point can be a branch point only if it is an isolated singularity.
3.
z 1/2 sin z 1/2 = z sin z
= z sin z
= z sin z
4. 1/2
sin z 2 = sin z 2
This function is multi-valued. Since sin z 2 = 0 at z = (n)1/2 , there may be branch points
there. First consider the point z = 0. We can write
sin z 2
sin z 2 = z 2
z2
where sin z 2 /z 2 is nonzero and has a removable singularity at z = 0.
sin z 2 2z cos z 2
lim = lim = 1.
z0 z 2 z0 2z
1/2 1/2
Since z 2 does not have a branch point at z = 0, sin z 2 does not have a branch point
there either.
Now consider the point z = n.
sin z 2
sin z 2 = z n
z n
sin z 2 / (z n) in nonzero and has a removable singularity at z = n.
sin z 2 2z cos z 2
lim
= lim
= 2 n(1)n
z n z n z n 1
1/2 1/2
Since (z has a branch point at z = n, sin z 2
n) also has a branch point there.
1/2
Thus we see that sin z 2 has branch points at z = (n)1/2 for n Z \ {0}. This is the
set of numbers: { , 2, . . . , , 2, . . .}. The point at infinity is a non-isolated
singularity.
174
Example 7.8.11 Find the branch points of
1/3
f (z) = z 3 z .
3
Introduce branch cuts. If f (2) = 6 then what is f (2)?
We expand f (z).
f (z) = z 1/3 (z 1)1/3 (z + 1)1/3 .
There are branch points at z = 1, 0, 1. We consider the point at infinity.
1/3 1/3 1/3
1 1 1 1
f = 1 +1
1 1/3 1/3
= (1 ) (1 + )
Since f (1/) does not have a branch point at = 0, f (z) does not have a branch point at infinity.
Consider the three possible branch cuts in Figure 7.27.
1/3
Figure 7.27: Three Possible Branch Cuts for f (z) = z 3 z
The first and the third branch cuts will make the function single valued, the second will not. It
is clear that the first set makes the function single valued since it is not possible to walk around any
of the branch points.
The second set of branch cuts would allow you to walk around the branch points at z = 1. If
you walked around these two once in the positive direction, the value of the function would change
by the factor e4/3 .
The third set of branch cuts would allow you to walk around all three branch points together.
You can verify that if you walk around the three branch points, the value of the function will not
change (e6/3 = e2 = 1).
Suppose we introduce the third set of branch cuts and are on the branch with f (2) = 3 6.
1/3 1/3 1/3
f (2) = 2 e0 1 e0 3 e0
3
= 6
= 6 e
3
3
= 6.
Example 7.8.12 Find the branch points and number of branches for
2
f (z) = z z .
2
z z = exp z 2 log z
175
There may be branch points at the origin and infinity due to the logarithm. Consider walking around
a circle of radius r centered at the origin in the positive direction. Since the logarithm changes by
2
2, the value of f (z) changes by the factor e2r . There are branch points at the origin and infinity.
The function has an infinite number of branches.
such that
1
f (0) = 1 + 3 .
2
First we factor f (z).
f (z) = (z )1/3 (z + )1/3
There are branch points at z = . Figure 7.28 shows one way to introduce branch cuts.
1/3
Figure 7.28: Branch Cuts for f (z) = z 2 + 1
Since it is not possible to walk around any branch point, these cuts make the function single
valued. We introduce the coordinates:
z = e , z + = r e .
1/3 1/3
f (z) = e r e
= 3 r e(+)/3
The condition
1
f (0) = 1 + 3 = e(2/3+2n)
2
can be stated
1 e(+)/3 = e(2/3+2n)
3
+ = 2 + 6n
5 3
<< , << .
2 2 2 2
176
Principal Branches. We construct the principal branch of the logarithm by putting a branch cut
on the negative real axis choose z = r e , (, ). Thus the principal branch of the logarithm
is
Log z = ln r + , < < .
Note that the if x is a negative real number, (and thus lies on the branch cut), then Log x is
undefined.
The principal branch of z is
z = e Log z .
Note that there is a branch cut on the negative real axis.
The principal branch of the z 1/2 is denoted z. The principal branch of z 1/n is denoted n
z.
1/2
Example 7.8.14 Construct 1 z 2 , the principal branch of 1 z 2 .
2 1/2
1/2 1/2
First note that since 1 z = (1 z) (1 + z) there are branch points at z = 1 and
z = 1. The principal branch of the square root has a branch cut on the negative real axis. 1 z 2
is a negative real number for z ( . . . 1) (1 . . . ). Thus we put branch cuts on ( . . . 1]
and [1 . . . ).
177
7.9 Exercises
Cartesian and Modulus-Argument Form
Exercise 7.1
Find the image of the strip 2 < x < 3 under the mapping w = f (z) = z 2 . Does the image constitute
a domain?
Exercise 7.2
For a given real number , 0 < 2, find the image of the sector 0 arg(z) < under the
transformation w = z 4 . How large should be so that the w plane is covered exactly once?
Trigonometric Functions
Exercise 7.3
In Cartesian coordinates, z = x + y, write sin(z) in Cartesian and modulus-argument form.
Exercise 7.4
Show that ez is nonzero for all finite z.
Exercise 7.5
Show that 2 2
e e|z| .
z
Exercise 7.6
Solve coth(z) = 1.
Exercise 7.7
Solve 2 2z . That is, for what values of z is 2 one of the values of 2z ? Derive this result then verify
your answer by evaluating 2z for the solutions that your find.
Exercise 7.8
Solve 1 1z . That is, for what values of z is 1 one of the values of 1z ? Derive this result then verify
your answer by evaluating 1z for the solutions that your find.
Logarithmic Identities
Exercise 7.9
Show that if < (z1 ) > 0 and < (z2 ) > 0 then
Log(z1 z2 ) = Log(z1 ) + Log(z2 )
and illustrate that this relationship does not hold in general.
Exercise 7.10
Find the fallacy in the following arguments:
1
1. log(1) = log 1 = log(1) log(1) = log(1), therefore, log(1) = 0.
Exercise 7.11
Write the following expressions in modulus-argument or Cartesian form. Denote any multi-valuedness
explicitly.
1/4
22/5 , 31+ , 3 , 1/4 .
178
Exercise 7.12
Solve cos z = 69.
Exercise 7.13
Solve cot z = 47.
Exercise 7.14
Determine all values of
1. log()
2. ()
3. 3
4. log(log())
and plot them in the complex plane.
Exercise 7.15
Evaluate and plot the following in the complex plane:
1. (cosh())2
1
2. log
1+
3. arctan(3)
Exercise 7.16
Determine all values of and log ((1 + ) ) and plot them in the complex plane.
Exercise 7.17
Find all z for which
1. ez =
2. cos z = sin z
3. tan2 z = 1
Exercise 7.18
Prove the following identities and identify the branch points of the functions in the extended complex
plane.
+z
1. arctan(z) = log
2 z
1 1+z
2. arctanh(z) = log
2 1z
1/2
3. arccosh(z) = log z + z 2 1
179
Exercise 7.20
Identify all the branch points of the function
1/2
w = f (z) = z 3 + z 2 6z
in the extended complex plane. Give a polar description of f (z) and specify branch cuts so that
your choice of angles gives a single-valued function that is continuous at z = 1 with f (1) = 6.
Sketch the branch cuts in the stereographic projection.
Exercise 7.21
Consider the mapping w = f (z) = z 1/3 and the inverse mapping z = g(w) = w3 .
1. Describe the multiple-valuedness of f (z).
2. Describe a region of the w-plane that g(w) maps one-to-one to the whole z-plane.
3. Describe and attempt to draw a Riemann surface on which f (z) is single-valued and to which
g(w) maps one-to-one. Comment on the misleading nature of your picture.
4. Identify the branch points of f (z) and introduce a branch cut to make f (z) single-valued.
Exercise 7.22
Determine the branch points of the function
1/2
f (z) = z 3 1 .
Construct cuts and define a branch so that z = 0 and z = 1 do not lie on a cut, and such that
f (0) = . What is f (1) for this branch?
Exercise 7.23
Determine the branch points of the function
1/2
w(z) = ((z 1)(z 6)(z + 2))
Construct cuts and define a branch so that z = 4 does not lie on a cut, and such that w = 6 when
z = 4.
Exercise 7.24
Give the number of branches and locations of the branch points for the functions
1. cos z 1/2
2. (z + )z
Exercise 7.25
Find the branch points of the following functions in the extended complex plane, (the complex plane
including the point at infinity).
1/2
1. z 2 + 1
1/2
2. z 3 z
3. log z 2 1
z+1
4. log
z1
Introduce branch cuts to make the functions single valued.
180
Exercise 7.26
Find all branch points and introduce cuts to make the following functions single-valued: For the
first function, choose cuts so that there is no cut within the disk |z| < 2.
1/2
1. f (z) = z 3 + 8
1/2 !
z+1
2. f (z) = log 5 +
z1
3. f (z) = (z + 3)1/2
Exercise 7.27
Let f (z) have branch points at z = 0 and z = , but nowhere else in the extended complex plane.
How does the value and argument of f (z) change while traversing the contour in Figure 7.29? Does
the branch cut in Figure 7.29 make the function single-valued?
Figure 7.29: Contour Around the Branch Points and Branch Cut.
Exercise 7.28
Let f (z) be analytic except for no more than a countably infinite number of singularities. Suppose
that f (z) has only one branch point in the finite complex plane. Does f (z) have a branch point at
infinity? Now suppose that f (z) has two or more branch points in the finite complex plane. Does
f (z) have a branch point at infinity?
1/4
Figure 7.30: Four Candidate Sets of Branch Cuts for z 4 + 1
Exercise 7.30
Find the branch points of
1/3
z
f (z) =
z2 + 1
181
in the extended complex plane. Introduce branch cuts that make the function single-valued and
such that the function is defined on the positive real axis. Define a branch such that f (1) = 1/ 3 2.
Write down an explicit formula for the value of the branch. What is f (1 + )? What is the value of
f (z) on either side of the branch cuts?
Exercise 7.31
Find all branch points of
f (z) = ((z 1)(z 2)(z 3))1/2
in the extended complex plane. Which of the branch cuts in Figure 7.31 will make the function
single-valued. Using the first set of branch cuts in this figure define a branch on which f (0) = 6.
Write out an explicit formula for the value of the function on this branch.
Figure 7.31: Four Candidate Sets of Branch Cuts for ((z 1)(z 2)(z 3))1/2
Exercise 7.32
Determine the branch points of the function
1/3
z 2 2 (z + 2)
w= .
Construct and define a branch so that the resulting cut is one line of finite extent and w(2) = 2.
What is w(3) for this branch? What are the limiting values of w on either side of the branch cut?
Exercise 7.33
Construct the principal branch of arccos(z). (Arccos(z) has the property that if x [1, 1] then
Arccos(x) [0, ]. In particular, Arccos(0) = 2 ).
Exercise 7.35
For the linkage illustrated in Figure 7.32, use complex variables to outline a scheme for expressing
the angular position, velocity and acceleration of arm c in terms of those of arm a. (You neednt
work out the equations.)
Exercise 7.36
Find the image of the strip |<(z)| < 1 and of the strip 1 < =(z) < 2 under the transformations:
1. w = 2z 2
z+1
2. w = z1
182
b
a
c
l
Exercise 7.37
Locate and classify all the singularities of the following functions:
(z + 1)1/2
1.
z+2
1
2. cos
1+z
1
3. 2
(1 ez )
In each case discuss the possibility of a singularity at the point .
Exercise 7.38
Describe how the mapping w = sinh(z) transforms the infinite strip < x < , 0 < y < into
the w-plane. Find cuts in the w-plane which make the mapping continuous both ways. What are
the images of the lines (a) y = /4; (b) x = 1?
183
7.10 Hints
Cartesian and Modulus-Argument Form
Hint 7.1
Hint 7.2
Trigonometric Functions
Hint 7.3
Recall that sin(z) = 1
2 (ez ez ). Use Result 6.3.1 to convert between Cartesian and modulus-
argument form.
Hint 7.4
Write ez in polar form.
Hint 7.5
The exponential is an increasing function for real variables.
Hint 7.6
Write the hyperbolic cotangent in terms of exponentials.
Hint 7.7
Write out the multi-valuedness of 2z . There is a doubly-infinite set of solutions to this problem.
Hint 7.8
Write out the multi-valuedness of 1z .
Logarithmic Identities
Hint 7.9
Hint 7.10
Write out the multi-valuedness of the expressions.
Hint 7.11
Do the exponentiations in polar form.
Hint 7.12
Write the cosine in terms of exponentials. Multiply by ez to get a quadratic equation for ez .
Hint 7.13
Write the cotangent in terms of exponentials. Get a quadratic equation for ez .
Hint 7.14
Hint 7.15
184
Hint 7.16
has an infinite number of real, positive values. = e log . log ((1 + ) ) has a doubly infinite set
of values. log ((1 + ) ) = log(exp( log(1 + ))).
Hint 7.17
Hint 7.18
Hint 7.20
Hint 7.21
Hint 7.22
Hint 7.23
Hint 7.24
Hint 7.25
1/2
1. z 2 + 1 = (z )1/2 (z + )1/2
1/2
2. z 3 z = z 1/2 (z 1)1/2 (z + 1)1/2
3. log z 2 1 = log(z 1) + log(z + 1)
z+1
4. log z1 = log(z + 1) log(z 1)
Hint 7.26
Hint 7.27
Reverse the orientation of the contour so that it encircles infinity and does not contain any branch
points.
Hint 7.28
Consider a contour that encircles all the branch points in the finite complex plane. Reverse the
orientation of the contour so that it contains the point at infinity and does not contain any branch
points in the finite complex plane.
Hint 7.29
Factor the polynomial. The argument of z 1/4 changes by /2 on a contour that goes around the
origin once in the positive direction.
185
Hint 7.30
Hint 7.31
To define the branch, define angles from each of the branch points in the finite complex plane.
Hint 7.32
Hint 7.33
Hint 7.34
Hint 7.35
Hint 7.36
Hint 7.37
Hint 7.38
186
7.11 Solutions
Cartesian and Modulus-Argument Form
Solution 7.1
Let w = u + v. We consider the strip 2 < x < 3 as composed of vertical lines. Consider the vertical
line: z = c + y, y R for constant c. We find the image of this line under the mapping.
w = (c + y)2
w = c2 y 2 + 2cy
u = c2 y 2 , v = 2cy
This is a parabola that opens to the left. We can parameterize the curve in terms of v.
1 2
u = c2 v , vR
4c2
The boundaries of the region, x = 2 and x = 3, are respectively mapped to the parabolas:
1 2 1 2
u=4 v , vR and u = 9 v , vR
16 36
We write the image of the mapping in set notation.
1 1
w = u + v : v R and 4 v 2 < u < 9 v 2 .
16 36
See Figure 7.33 for depictions of the strip and its image under the mapping. The mapping is
one-to-one. Since the image of the strip is open and connected, it is a domain.
3 10
2
5
1
-1 1 2 3 4 5 -5 5 10 15
-1
-5
-2
-3 -10
Figure 7.33: The domain 2 < x < 3 and its image under the mapping w = z 2 .
Solution 7.2
We write the mapping w = z 4 in polar coordinates.
4
w = z 4 = r e = r4 e4
If = /2, the sector will be mapped exactly to the whole complex plane.
187
Trigonometric Functions
Solution 7.3
1 z
e ez
sin z =
2
1 y+x
eyx
= e
2
1 y
e (cos x + sin x) ey (cos x sin x)
=
2
1 y
e (sin x cos x) + ey (sin x + cos x)
=
2
= sin x cosh y + cos x sinh y
q
sin z = sin2 x cosh2 y + cos2 x sinh2 y exp( arctan(sin x cosh y, cos x sinh y))
q
= cosh2 y cos2 x exp( arctan(sin x cosh y, cos x sinh y))
r
1
= (cosh(2y) cos(2x)) exp( arctan(sin x cosh y, cos x sinh y))
2
Solution 7.4
In order that ez be zero, the modulus, ex must be zero. Since ex has no finite solutions, ez = 0 has
no finite solutions.
Solution 7.5
We write the expressions in terms of Cartesian coordinates.
2
z (x+y)2
e = e
2 2
= ex y +2xy
2
y 2
= ex
2 2 2
+y 2
e|z| = e|x+y| = ex
Solution 7.6
coth(z) = 1
(e + ez ) /2
z
=1
(ez ez ) /2
ez + ez = ez ez
ez = 0
188
Solution 7.7
We write out the multi-valuedness of 2z .
2 2z
eln 2 ez log(2)
eln 2 {ez(ln(2)+2n) | n Z}
ln 2 z{ln 2 + 2n + 2m | m, n Z}
ln(2) + 2m
z= | m, n Z
ln(2) + 2n
We verify this solution. Consider m and n to be fixed integers. We express the multi-valuedness in
terms of k.
Solution 7.8
We write out the multi-valuedness of 1z .
1 1z
1 ez log(1)
1 {ez2n | n Z}
z C.
1z = ez log(1) = ez2n
Logarithmic Identities
Solution 7.9
We write the relationship in terms of the natural logarithm and the principal argument.
< (zk ) > 0 implies that Arg(zk ) (/2 . . . /2). Thus Arg(z1 ) + Arg(z2 ) ( . . . ). In this case
the relationship holds.
The relationship does not hold in general because Arg(z1 ) + Arg(z2 ) is not necessarily in the
interval ( . . . ]. Consider z1 = z2 = 1.
189
Solution 7.10
1. The algebraic manipulations are fine. We write out the multi-valuedness of the logarithms.
1
log(1) = log = log(1) log(1) = log(1)
1
{ + 2n : n Z} = { + 2n : n Z}
= {2n : n Z} { + 2n : n Z} = { 2n : n Z}
Thus log(1) = log(1). However this does not imply that log(1) = 0. This is because
the logarithm is a set-valued function log(1) = log(1) is really saying:
{ + 2n : n Z} = { 2n : n Z}
2. We consider
1 = 11/2 = ((1)(1))1/2 = (1)1/2 (1)1/2 = = 1.
There are three multi-valued expressions above.
11/2 = 1
((1)(1))1/2 = 1
(1)1/2 (1)1/2 = ()() = 1
Thus we see that the first and fourth equalities are incorrect.
Solution 7.11
22/5 = 41/5
= 411/5
5
= 4 e2n/5 ,
5
n = 0, 1, 2, 3, 4
1/4 1/4
3 = 2 e/6
= 2 e/24 11/4
4
= 2 e(n/2/24) ,
4
n = 0, 1, 2, 3
190
Solution 7.12
cos z = 69
ez + ez
= 69
2
e2z 138 ez +1 = 0
1 p
ez = 138 1382 4
2
z = log 69 2 1190
z = ln 69 2 1190 + 2n
z = 2n ln 69 2 1190 , n Z
Solution 7.13
cot z = 47
(e + ez ) /2
z
= 47
(ez ez ) /(2)
ez + ez = 47 ez ez
46 e2z 48 = 0
24
2z = log
23
24
z = log
2 23
24
z= ln + 2n , nZ
2 23
24
z = n ln , nZ
2 23
Solution 7.14
1.
log() = ln | | + arg()
= ln(1) + + 2n , nZ
2
log() = + 2n, nZ
2
These are equally spaced points in the imaginary axis. See Figure 7.34.
2.
() = e log()
= e(/2+2n) , nZ
() = e/2+2n , nZ
These are points on the positive real axis with an accumulation point at the origin. See
Figure 7.35.
191
10
-1 1
-10
-1
Figure 7.35: ()
3.
3 = e log(3)
= e(ln(3)+ arg(3))
3 = e(ln(3)+2n) , nZ
These points all lie on the circle of radius |e | centered about the origin in the complex plane.
See Figure 7.36.
10
-10 -5 5 10
-5
-10
Figure 7.36: 3
4.
log(log()) = log + 2m , m Z
2
= ln + 2m + Arg + 2m + 2n, m, n Z
2 2
= ln + 2m + sign(1 + 4m) + 2n, m, n Z
2 2
These points all lie in the right half-plane. See Figure 7.37.
192
20
10
1 2 3 4 5
-10
-20
Solution 7.15
1.
2
e + e
2
(cosh()) =
2
= (1)2
= e2 log(1)
= e2(ln(1)++2n) , nZ
2(1+2n)
=e , nZ
These are points on the positive real axis with an accumulation point at the origin. See
Figure 7.38.
1000
-1
2.
1
log = log(1 + )
1+
= log 2 e/4
1
= ln(2) log e/4
2
1
= ln(2) /4 + 2n, nZ
2
These are points on a vertical line in the complex plane. See Figure 7.39.
193
10
-1 1
-10
1
Figure 7.39: The values of log 1+ .
3.
1 3
arctan(3) = log
2 + 3
1 1
= log
2 2
1 1
= ln + + 2n , n Z
2 2
= + n + ln(2)
2 2
These are points on a horizontal line in the complex plane. See Figure 7.40.
1
-5 5
-1
Solution 7.16
= e log()
= e(ln ||+ Arg()+2n) , nZ
(/2+2n)
=e , nZ
(1/2+2n)
=e , nZ
These are points on the positive real axis. There is an accumulation point at z = 0. See Figure 7.41.
log ((1 + ) ) = log e log(1+)
= log(1 + ) + 2n, n Z
= (ln |1 + | + Arg(1 + ) + 2m) + 2n, m, n Z
1
= ln 2 + + 2m + 2n, m, n Z
2 4
1 1
= 2 + 2m + ln 2 + 2n , m, n Z
4 2
194
1
25 50 75 100
-1
Figure 7.41:
10
5
-40 -20 20
-5
-10
Solution 7.17
1.
ez =
z = log
z = ln || + arg()
z = ln(1) + + 2n , n Z
2
z = + 2n, n Z
2
2. We can solve the equation by writing the cosine and sine in terms of the exponential.
cos z = sin z
ez + ez ez ez
=
2 2
(1 + ) ez = (1 + ) ez
1 +
e2z =
1+
e2z =
2z = log()
2z = + 2n, n Z
2
z = + n, n Z
4
195
3.
tan2 z = 1
sin2 z = cos2 z
cos z = sin z
ez + ez ez ez
=
2 2
ez = ez or ez = ez
ez = 0 or ez = 0
eyx = 0 or ey+x = 0
ey = 0 or ey = 0
z=
Solution 7.18
1.
w = arctan(z)
z = tan(w)
sin(w)
z=
cos(w)
(ew ew ) /(2)
z=
(ew + ew ) /2
w
w
z e +z e = ew + ew
( + z) e2w = ( z)
1/2
z
ew =
+z
1/2
z
w = log
+z
+z
arctan(z) = log
2 z
As 0, the argument of the logarithm term tends to 1 The logarithm does not have a
branch point at that point. Since arctan(1/) does not have a branch point at = 0, arctan(z)
does not have a branch point at infinity.
196
2.
w = arctanh(z)
z = tanh(w)
sinh(w)
z=
cosh(w)
(ew ew ) /2
z= w
(e + ew ) /2
z e +z ew = ew ew
w
(z 1) e2w = z 1
1/2
w z 1
e =
z1
1/2
z+1
w = log
1z
1 1+z
arctanh(z) = log
2 1z
First we consider branch points due to the square root. There are branch points at z = 1 due
to the square root terms. If we walk around the singularity at z = 1 and no other singularities,
197
1/2
the z 2 1 term changes sign. This will change the value of arccosh(z). The same is true
1/2
for the point z = 1. The point at infinity is not a branch point for z 2 1 . We factor
the expression to verify this.
1/2 1/2 1/2
z2 1 = z2 1 z 2
1/2
z2 does not have a branch point at infinity. It is multi-valued, but it has no branch points.
2 1/2
1z does not have a branch point at infinity, The argument of the square root function
tends to unity there. In summary, there are branch points at z = 1 due to the square root. If
we walk around either one of the these branch points. the square root term will change value.
If we walk around both of these points, the square root term will not change value.
Now we consider branch points due to logarithm. There may be branch points where the
argument of the logarithm vanishes or tends to infinity. We see if the argument of the logarithm
vanishes.
1/2
z + z2 1 =0
z2 = z2 1
1/2
z + z2 1 is non-zero and finite everywhere in the complex plane. The only possibility
for a branch point in the logarithm term is the point at infinity. We see if the argument of
1/2
z + z2 1 changes when we walk around infinity but no other singularity. We consider a
circular path with center at the origin and radius greater than unity. We can either say that
this path encloses the two branch points at z = 1 and no other singularities or we can say
that this path encloses the point at infinity and no other singularities. We examine the value
of the argument of the logarithm on this path.
1/2 1/2 1/2
z + z2 1 = z + z2 1 z 2
1/2 1/2
Neither z 2 nor 1 z 2 changes value as we walk the path. Thus we can use the
principal branch of the square root in the expression.
1/2 p p
z + z2 1 = z z 1 z 2 = z 1 1 z 2
As we walk the path around infinity, the argument of z changes by 2 while the argument of
1/2
1 + 1 z 2 does not change. Thus the argument of z + z 2 1 changes by 2 when
we go around infinity. This makes the value of the logarithm change by 2. There is a branch
point at infinity.
First consider the branch.
p
2
1 2 4
z 1 1z =z 1 1 z +O z
2
1 2 4
=z z +O z
2
1
= z 1 1 + O z 2
2
As we walk the path around infinity, the argument of z 1 changes by 2 while the argument
1/2
of 1 + O z 2 does not change. Thus the argument of z + z 2 1
changes by 2
198
when we go around infinity. This makes the value of the logarithm change by 2. Again we
conclude that there is a branch point at infinity.
For the sole purpose of overkill, lets repeat the above analysis from a geometric viewpoint.
Again we consider the possibility of a branch point at infinity due to the logarithm. We walk
along the circle shown in the first plot of Figure 7.43. Traversing this path, we go around
1/2
infinity, but no other singularities. We consider the mapping w = z + z 2 1 . Depending
on the branch of the square root, the circle is mapped to one one of the contours shown in
the second plot. For each branch, the argument of w changes
by 2 aswe traverse the circle
1/2
in the z-plane. Therefore the value of arccosh(z) = log z + z 2 1 changes by 2 as
we traverse the circle. We again conclude that there is a branch point at infinity due to the
logarithm.
-1 1 -1 1
-1
-1
1/2
Figure 7.43: The mapping of a circle under w = z + z 2 1 .
To summarize: There are branch points at z = 1 due to the square root and a branch point
at infinity due to the logarithm.
The are branch points at z = 1, 0, 1. Now we examine the point at infinity. We make the change
of variables z = 1/.
1 (1/)(1/ + 1)
f = log
(1/ 1)
1 (1 +
= log
1
= log(1 + ) log(1 ) log()
log() has a branch point at = 0. The other terms do not have branch points there. Since f (1/)
has a branch point at = 0 f (z) has a branch point at infinity.
Note that in walking around either z = 1 or z = 0 once in the positive direction, the argument
of z(z + 1)/(z 1) changes by 2. In walking around z = 1, the argument of z(z + 1)/(z 1) changes
by 2. This argument does not change if we walk around both z = 0 and z = 1. Thus we put a
branch cut between z = 0 and z = 1. Next be put a branch cut between z = 1 and the point at
infinity. This prevents us from walking around either of these branch points. These two branch cuts
separate the branches of the function. See Figure 7.44
199
-3 -2 -1 1 2
z(z+1)
Figure 7.44: Branch cuts for log z1
Solution 7.20
First we factor the function.
1/2
f (z) = (z(z + 3)(z 2)) = z 1/2 (z + 3)1/2 (z 2)1/2
Since 3/2 has a branch point at = 0 and the rest of the terms are analytic there, f (z) has a
branch point at infinity.
Consider the set of branch cuts in Figure 7.45. These cuts do not permit us to walk around any
single branch point. We can only walk around none or all of the branch points, (which is the same
thing). The cuts can be used to define a single-valued branch of the function.
-4 -2 2 4
-1
-2
-3
1/2
Figure 7.45: Branch Cuts for z 3 + z 2 6z
z + 3 = r1 e1 , < 1 <
3
z = r2 e2 , < 2 <
2 2
z 2 = r3 e3 , 0 < 3 < 2
200
We see that our choice of angles gives us the desired branch.
The stereographic projection is the projection from the complex plane onto a unit sphere with
south pole at the origin. The point z = x + y is mapped to the point (X, Y, Z) on the sphere with
4x 4y 2|z|2
X= 2
, Y = 2
, Z= .
|z| + 4 |z| + 4 |z|2 + 4
Figure 7.46 first shows the branch cuts and their stereographic projections and then shows the
stereographic projections alone.
1
0
-1
2
2 4
0 1
-4 0 0
0 -1
0
4 -4 1
1/2
Figure 7.46: Branch cuts for z 3 + z 2 6z and their stereographic projections.
Solution 7.21
1. For each value of z, f (z) = z 1/3 has three values.
f (z) = z 1/3 = 3
z ek2/3 , k = 0, 1, 2
2.
g(w) = w3 = |w|3 e3 arg(w)
Any sector of the w plane of angle 2/3 maps one-to-one to the whole z-plane.
g : r e | r 0, 0 < 0 + 2/3 7 C
See Figure 7.47 to see how g(w) maps the sector 0 < 2/3.
3. See Figure 7.48 for a depiction of the Riemann surface for f (z) = z 1/3 . We show two views of
the surface and a curve that traces the edge of the shown portion of the surface. The depiction
is misleading because the surface is not self-intersecting. We would need four dimensions to
properly visualize the this Riemann surface.
4. f (z) = z 1/3 has branch points at z = 0 and z = . Any branch cut which connects these two
points would prevent us from walking around the points singly and would thus separate the
branches of the function. For example, we could put a branch cut on the negative real axis.
Defining the angle < < for the mapping
f r e = 3 r e/3
Solution 7.22
The cube roots of 1 are
( )
n o 1 + 3 1 3
1, e2/3 , e4/3 = 1, , .
2 2
201
Figure 7.47: The mapping g(w) = w3 maps the sector 0 < 2/3 one-to-one to the whole z-plane.
202
We factor the polynomial.
!1/2 !1/2
3
1/2 1 3 1+ 3
z 1 = (z 1)1/2 z+ z+
2 2
Now we examine the point at infinity. We make the change of variables z = 1/.
1/2 1/2
f (1/) = 1/ 3 1 = 3/2 1 3
1/2
3/2 has a branch point at = 0, while 1 3 is not singular there. Since f (1/) has a branch
point at = 0, f (z) has a branch point at infinity.
There are several ways of introducing branch cuts to separate the branches of the function. The
easiest approach is to put a branch cut from each of the three branch points in the finite complex
plane out to the branch point at infinity. See Figure 7.49a. Clearly this makes the function single
valued as it is impossible to walk around any of the branch points. Another approach is to have a
branch cut from one of the branch points in the finite plane to the branch point at infinity and a
branch cut connecting the remaining two branch points. See Figure 7.49bcd. Note that in walking
around any one of the finite branch points, (in the positive direction), the argument of the function
changes by . This means that the value of the function changes by e , which is to say the value
of the function changes sign. In walking around any two of the finite branch points, (again in the
positive direction), the argument of the function changes by 2. This means that the value of the
function changes by e2 , which is to say that the value of the function does not change. This
demonstrates that the latter branch cut approach makes the function single-valued.
a b c d
1/2
Figure 7.49: z 3 1
Now we construct a branch. We will use the branch cuts in Figure 7.49a. We introduce variables
to measure radii and angles from the three finite branch points.
z 1 = r1 e1 , 0 < 1 < 2
1 3 2
z+ = r2 e2 , < 2 <
2 3 3
1+ 3 2
z+ = r3 e3 , < 3 <
2 3 3
We compute f (0) to see if it has the desired value.
f (z) = r1 r2 r3 e(1 +2 +3 )/2
f (0) = e(/3+/3)/2 =
Since it does not have the desired value, we change the range of 1 .
z 1 = r1 e1 , 2 < 1 < 4
203
f (0) now has the desired value.
f (0) = e(3/3+/3)/2 =
We compute f (1).
f (1) = 2 e(32/3+2/3)/2 = 2
Solution 7.23
First we factor the function.
1/2
w(z) = ((z + 2)(z 1)(z 6)) = (z + 2)1/2 (z 1)1/2 (z 6)1/2
Since 3/2 has a branch point at = 0 and the rest of the terms are analytic there, w(z) has a
branch point at infinity.
Consider the set of branch cuts in Figure 7.50. These cuts let us walk around the branch points
at z = 2 and z = 1 together or if we change our perspective, we would be walking around the
branch points at z = 6 and z = together. Consider a contour in this cut plane that encircles the
1/2
branch points at z = 2 and z = 1. Since the argument of (z z0 ) changes by when we walk
around z0 , the argument of w(z) changes by 2 when we traverse the contour. Thus the value of
the function does not change and it is a valid set of branch cuts.
1/2
Figure 7.50: Branch Cuts for ((z + 2)(z 1)(z 6))
z + 2 = r1 e1 , 1 = 2 for z (1 . . . 6),
2
z 1 = r2 e , 2 = 1 for z (1 . . . 6),
3
z 6 = r3 e , 0 < 3 < 2
Solution 7.24
1.
cos z 1/2 = cos z = cos z
204
2.
(z + )z = ez log(z+)
= ez(ln |z+|+ Arg(z+)+2n) , nZ
Solution 7.25
1.
1/2
f (z) = z 2 + 1 = (z + )1/2 (z )1/2
We see that there are branch points at z = . To examine the point at infinity, we substitute
z = 1/ and examine the point = 0.
2 !1/2
1 1 1/2
+1 = 1/2
1 + 2
( 2 )
2. 1/2
f (z) = z 3 z = z 1/2 (z 1)1/2 (z + 1)1/2
There are branch points at z = 1, 0, 1. Now we consider the point at infinity.
3 !1/2
1 1 1 1/2
f = = 3/2 1 2
3.
f (z) = log z 2 1 = log(z 1) + log(z + 1)
Every time we walk around the point = 0 in the positive direction, the value of the function
changes by 4. f (z) has a branch point at infinity.
We can make the function single-valued by introducing two branch cuts that start at z = 1
and each go to infinity.
4.
z+1
f (z) = log = log(z + 1) log(z 1)
z1
205
There are branch points at z = 1.
1 1/ + 1 1+
f = log = log
1/ 1 1
Solution 7.26
1. The cube roots of 8 are
n o n o
2, 2 e2/3 , 2 e4/3 = 2, 1 + 3, 1 3 .
Since f (1/) has a branch point at = 0, f (z) has a branch point at infinity.
There are several ways of introducing branch cuts outside of the disk |z| < 2 to separate the
branches of the function. The easiest approach is to put a branch cut from each of the three
branch points in the finite complex plane out to the branch point at infinity. See Figure 7.51a.
Clearly this makes the function single valued as it is impossible to walk around any of the
branch points. Another approach is to have a branch cut from one of the branch points in
the finite plane to the branch point at infinity and a branch cut connecting the remaining two
branch points. See Figure 7.51bcd. Note that in walking around any one of the finite branch
points, (in the positive direction), the argument of the function changes by . This means that
the value of the function changes by e , which is to say the value of the function changes sign.
In walking around any two of the finite branch points, (again in the positive direction), the
argument of the function changes by 2. This means that the value of the function changes by
e2 , which is to say that the value of the function does not change. This demonstrates that
the latter branch cut approach makes the function single-valued.
a b c d
1/2
Figure 7.51: z 3 + 8
206
2.
1/2 !
z+1
f (z) = log 5 +
z1
Since g(1/) has no branch point at = 0, g(z) has no branch point at infinity. This means
that if we walk around both of the branch points at z = 1, the function does not change
value. We can verify this with another method: When we walk around the point z = 1 once
in the positive direction, the argument of z + 1 changes by 2, the argument of (z + 1)1/2
changes by and thus the value of (z + 1)1/2 changes by e = 1. When we walk around the
point z = 1 once in the positive direction, the argument of z 1 changes by 2, the argument
of (z 1)1/2 changes by and thus the value of (z 1)1/2 changes by e = 1. f (z)
has branch points at z = 1. When we walk around both points z = 1 once in the positive
1/2
z+1
direction, the value of z1 does not change. Thus we can make the function single-valued
with a branch cut which enables us to walk around either none or both of these branch points.
We put a branch cut from 1 to 1 on the real axis.
f (z) has branch points where
1/2
z+1
5+
z1
is either zero or infinite. The only place in the extended complex plane where the expression
becomes infinite is at z = 1. Now we look for the zeros.
1/2
z+1
5+ =0
z1
1/2
z+1
= 5
z1
z+1
= 25
z1
z + 1 = 25z 25
13
z=
12
Note that
1/2
13/12 + 1
= 251/2 = 5.
13/12 1
On one branch, (which we call the positive branch), of the function g(z) the quantity
1/2
z+1
5+
z1
is always nonzero. On the other (negative) branch of the function, this quantity has a zero at
z = 13/12.
207
The logarithm introduces branch points at z = 1 on both the positive and negative branch of
g(z). It introduces a branch point at z = 13/12 on the negative branch of g(z). To determine
if additional branch cuts are needed to separate the branches, we consider
1/2
z+1
w =5+
z1
and see where the branch cut between 1 gets mapped to in the w plane. We rewrite the
mapping.
1/2
2
w =5+ 1+
z1
The mapping is the following sequence of simple transformations:
(a) z 7 z 1
1
(b) z 7
z
(c) z 7 2z
(d) z 7 z + 1
(e) z 7 z 1/2
(f) z 7 z + 5
-1 1 -2 0 -1/2 -1
1
z 7 z1 z 7 z 7 2z
z
z 7 z + 1 z 7 z 1/2 z 7 z + 5
For the positive branch of g(z), the branch cut is mapped to the line x = 5 and the z plane is
mapped to the half-plane x > 5. log(w) has branch points at w = 0 and w = . It is possible
to walk around only one of these points in the half-plane x > 5. Thus no additional branch
cuts are needed in the positive sheet of g(z).
For the negative branch of g(z), the branch cut is mapped to the line x = 5 and the z plane
is mapped to the half-plane x < 5. It is possible to walk around either w = 0 or w = alone
in this half-plane. Thus we need an additional branch cut. On the negative sheet of g(z), we
put a branch cut beteen z = 1 and z = 13/12. This puts a branch cut between w = and
w = 0 and thus separates the branches of the logarithm.
Figure 7.52 shows the branch cuts in the positive and negative sheets of g(z).
3. The function f (z) = (z + 3)1/2 has a branch point at z = 3. The function is made single-
valued by connecting this point and the point at infinity with a branch cut.
Solution 7.27
Note that the curve with opposite orientation goes around infinity in the positive direction and does
not enclose any branch points. Thus the value of the function does not change when traversing
208
Im(z) Im(z)
g(13/12)=5 g(13/12)=-5
Re(z) Re(z)
1/2
z+1
Figure 7.52: The branch cuts for f (z) = log 5 + z1 .
the curve, (with either orientation, of course). This means that the argument of the function must
change my an integer multiple of 2. Since the branch cut only allows us to encircle all three or
none of the branch points, it makes the function single valued.
Solution 7.28
We suppose that f (z) has only one branch point in the finite complex plane. Consider any contour
that encircles this branch point in the positive direction. f (z) changes value if we traverse the
contour. If we reverse the orientation of the contour, then it encircles infinity in the positive direction,
but contains no branch points in the finite complex plane. Since the function changes value when
we traverse the contour, we conclude that the point at infinity must be a branch point. If f (z) has
only a single branch point in the finite complex plane then it must have a branch point at infinity.
If f (z) has two or more branch points in the finite complex plane then it may or may not have
a branch point at infinity. This is because the value of the function may or may not change on a
contour that encircles all the branch points in the finite complex plane.
Solution 7.29
First we factor the function,
1/4 1/4 1/4 1/4
1/4 1+ 1 + 1 1
f (z) = z 4 + 1 = z z z z .
2 2 2 2
1
There are branch points at z = .
2
We make the substitution z = 1/ to examine the point at
infinity.
1/4
1 1
f = +1
4
1 4 1/4
= 1/4
1 +
( 4 )
4 1/4
1/4 has a removable singularity at the point = 0, but no branch point there. Thus z 4 + 1
has no branch point at infinity.
1/4
Note that the argument of z 4 z0 changes by /2 on a contour that goes around the point
1/4
z0 once in the positive direction. The argument of z 4 + 1 changes by n/2 on a contour that
goes around n of its branch points. Thus any set of branch cuts that permit you to walk around
only one, two or three of the branch points will not make the function single valued. A set of branch
cuts that permit us to walk around only zero or all four of the branch points will make the function
single-valued. Thus we see that the first two sets of branch cuts in Figure 7.30 will make the function
single-valued, while the remaining two will not.
Consider the contour in Figure ??. There are two ways to see that the function does not change
value while traversing the contour. The first is to note that each of the branch points makes the
1/4
argument of the function increase by /2. Thus the argument of z 4 + 1 changes by 4(/2) = 2
on the contour. This means that the value of the function changes by the factor e2 = 1. If we
change the orientation of the contour, then it is a contour that encircles infinity once in the positive
direction. There are no branch points inside the this contour with opposite orientation. (Recall that
209
the inside of a contour lies to your left as you walk around it.) Since there are no branch points
inside this contour, the function cannot change value as we traverse it.
Solution 7.30
1/3
z
f (z) = 2
= z 1/3 (z )1/3 (z + )1/3
z +1
There are branch points at z = 0, .
1/3
1/3
1 1/
f = =
(1/)2 + 1 (1 + 2 )
1/3
z = r e < < ,
(z ) = s e 3/2 < < /2,
(z + ) = t e /2 < < 3/2.
With
Consider the value of the function above and below the branch cut on the negative real axis. Above
the branch cut the function is
r
x
f (x + 0) = 3 e()/3
x + 1 x2 + 1
2
210
For the branch cut along the positive imaginary axis,
r
y
f (y + 0) = 3 e(/2/2/2)/3
(y 1)(y + 1)
r
y
= 3 e/6
(y 1)(y + 1)
3
r
y
= 3 ,
(y 1)(y + 1) 2
r
y
f (y 0) = 3 e(/2(3/2)/2)/3
(y 1)(y + 1)
r
y
= 3 e/2
(y 1)(y + 1)
r
y
=3 .
(y 1)(y + 1)
r
y
f (y 0) = 3 e(/2(/2)(3/2))/3
(y + 1)(y 1)
r
y
= 3 e/2
(y + 1)(y 1)
r
y
= 3 .
(y + 1)(y 1)
Solution 7.31
First we factor the function.
1/2
f (z) = ((z 1)(z 2)(z 3)) = (z 1)1/2 (z 2)1/2 (z 3)1/2
Since 3/2 has a branch point at = 0 and the rest of the terms are analytic there, f (z) has a
branch point at infinity.
The first two sets of branch cuts in Figure 7.31 do not permit us to walk around any of the branch
points, including the point at infinity, and thus make the function single-valued. The third set of
branch cuts lets us walk around the branch points at z = 1 and z = 2 together or if we change our
perspective, we would be walking around the branch points at z = 3 and z = together. Consider
a contour in this cut plane that encircles the branch points at z = 1 and z = 2. Since the argument
1/2
of (z z0 ) changes by when we walk around z0 , the argument of f (z) changes by 2 when we
traverse the contour. Thus the value of the function does not change and it is a valid set of branch
211
cuts. Clearly the fourth set of branch cuts does not make the function single-valued as there are
contours that encircle the branch point at infinity and no other branch points. The other way to see
this is to note that the argument of f (z) changes by 3 as we traverse a contour that goes around
the branch points at z = 1, 2, 3 once in the positive direction.
Now to define the branch. We make the preliminary choice of angles,
z 1 = r1 e1 , 0 < 1 < 2,
2
z 2 = r2 e , 0 < 2 < 2,
3
z 3 = r3 e , 0 < 3 < 2.
The function is
1/2
f (z) = r1 e1 r2 e2 r3 e3 = r1 r2 r3 e(1 +2 +3 )/2 .
which is not what we wanted. We will change range of one of the angles to get the desired result.
z 1 = r1 e1 , 0 < 1 < 2,
2
z 2 = r2 e , 0 < 2 < 2,
3
z 3 = r3 e , 2 < 3 < 4.
f (0) = 6 e(5)/2 = 6,
Solution 7.32
We constrain the angles as follows: On the positive real axis, = = . See Figure 7.53.
Im(z)
c b a
Re(z)
1/3
Figure 7.53: A branch of z 2 2 (z + 2) .
212
Now we determine w(2).
1/3 1/3
w(2) = 2 2 2+ 2 (2 + 2)1/3
q q
3 3
= 2 2 e0 2 + 2 e0 4 e0
3
3
3
= 2 4
= 2.
Note that we didnt have to choose the angle from each of the branch points as zero. Choosing any
integer multiple of 2 would give us the same result.
1/3 1/3
w(3) = 3 2 3 + 2 (3 + 2)1/3
/3 3
q q
3
3 2 e/3 1 e/3
3
= 3 + 2e
= 7 e
3
3
= 7
As we approach the branch cut from below, the function has the value,
r
/3
2 x x + 2 (x + 2) e/3 .
3
w = abc e = 3
Consider the interval 2 . . . 2 . As we approach the branch cut from above, the function
has the value,
r
3 2/3
w = abc e = 3 2 x x 2 (x + 2) e2/3 .
As we approach the branch cut from below, the function has the value,
r
2/3
2 x x 2 (x + 2) e2/3 .
3
w = abc e = 3
Solution 7.33
Arccos(x) is shown in Figure 7.54 for real variables in the range [1 . . . 1].
3
2.5
2
1.5
1
0.5
-1 -0.5 0.5 1
213
First we write arccos(z) in terms of log(z). If cos(w) = z, then w = arccos(z).
cos(w) = z
ew + ew
=z
2
2
(ew ) 2z ew +1 = 0
1/2
ew = z + z 2 1
1/2
w = log z + z 2 1
Thus we have
1/2
arccos(z) = log z + z 2 1 .
Since Arccos(0) = 2, we must find the branch such that
1/2
log 0 + 02 1 =0
log (1)1/2 = 0.
Since
log() = + 2n = + 2n
2 2
and
log() = + 2n = + 2n
2 2
we must choose the branch of the square root such that (1)1/2 = and the branch of the logarithm
such that log() = 2 .
First we construct the branch of the square root.
1/2
z2 1 = (z + 1)1/2 (z 1)1/2
We see that there are branch points at z = 1 and z = 1. In particular we want the Arccos to be
defined for z = x, x [1 . . . 1]. Hence we introduce branch cuts on the lines < x 1 and
1 x < . Define the local coordinates
z + 1 = r e , z 1 = e .
With the given branch cuts, the angles have the possible ranges
Now we choose ranges for and and see if we get the desired branch. If not, we choose a different
range for one of the angles. First we choose the ranges
( . . . ), (0 . . . 2).
If we substitute in z = 0 we get
1/2 1/2 1/2
02 1 = 1 e0 (1 e ) = e0 e/2 =
Thus we see that this choice of angles gives us the desired branch.
Now we go back to the expression
1/2
arccos(z) = log z + z 2 1 .
214
= =0
= =2
1/2
Figure 7.55: Branch Cuts and Angles for z 2 1
1/2
We have already seen that there are branch points at z = 1 and z = 1 because of z 2 1 . Now
we must determine if the logarithm introduces additional branch points. The only possibilities for
branch points are where the argument of the logarithm is zero.
1/2
z + z2 1 =0
z2 = z2 1
0 = 1
We see that the argument of the logarithm is nonzero and thus there are no additional branch points.
1/2
Introduce the variable, w = z + z 2 1 . What is the image of the branch cuts in the w plane?
We parameterize the branch cut connecting z = 1 and z = + with z = r + 1, r [0 . . . ).
1/2
w = r + 1 + (r + 1)2 1
p
= r + 1 r(r + 2)
p
= r 1 r 1 + 2/r + 1
p p
r 1 + 1 + 2/r + 1 is the interval [1 . . . ); r 1 1 + 2/r + 1 is the interval (0 . . . 1]. Thus
we see that this branch cut is mapped to the interval (0 . . . ) in the w plane. Similarly, we could
show that the branch cut ( . . . 1] in the z plane is mapped to ( . . . 0) in the w plane. In the
w plane there is a branch cut along the real w axis from to . Thus cut makes the logarithm
single-valued. For the branch of the square root that we chose, all the points in the z plane get
mapped to the upper half of the w plane.
With the branch cuts we have introduced so far and the chosen branch of the square root we
have
1/2
arccos(0) = log 0 + 02 1
= log
= + 2n
2
= + 2n
2
Choosing the n = 0 branch of the logarithm will give us Arccos(z). We see that we can write
1/2
Arccos(z) = Log z + z 2 1 .
215
1/2 1/2
1 =1 1 =-1
1/2
Figure 7.56: Branch Cuts for z 1/2 1
Solution 7.35
The distance between the end of rod a and the end of rod c is b. In the complex plane, these points
are a e and l + c e , respectively. We write this out mathematically.
l + c e a e = b
l + c e a e l + c e a e = b2
This equation relates the two angular positions. One could differentiate the equation to relate the
velocities and accelerations.
Solution 7.36
1. Let w = u + v. First we do the strip: |<(z)| < 1. Consider the vertical line: z = c + y, y R.
This line is mapped to
w = 2(c + y)2
w = 2c2 2y 2 + 4cy
u = 2c2 2y 2 , v = 4cy
This is a parabola that opens to the left. For the case c = 0 it is the negative u axis. We can
parametrize the curve in terms of v.
1 2
u = 2c2 v , vR
8c2
The boundaries of the region are both mapped to the parabolas:
1
u = 2 v2 , v R.
8
The image of the mapping is
1 2
w = u + v : v R and u < 2 v .
8
w = 2(x + c)2
w = 2x2 2c2 + 4cx
u = 2x2 2c2 , v = 4cx
216
This is a parabola that opens upward. We can parametrize the curve in terms of v.
1 2
u= v 2c2 , vR
8c2
The boundary =(z) = 1 is mapped to
1 2
u= v 2, v R.
8
The boundary =(z) = 2 is mapped to
1 2
u= v 8, vR
32
The image of the mapping is
1 2 1 2
w = u + v : v R and v 8<u< v 2 .
32 8
Now consider the strip 1 < =(z) < 2. The translation by 1 does not change the domain.
Now we do the inversion. The bottom edge, =(z) = 1, is mapped to the circle |z + /2| = 1/2.
The top edge, =(z) = 2, is mapped to the circle |z + /4| = 1/4. Thus the current image is the
region between two circles:
1 1
z + < and z + > .
2 2 4 4
The magnification by 2 yields
1
|z + | < 1 and z + > .
2 2
The final step is a translation by 1.
1
|z 1 + | < 1 and z 1 + > .
2 2
217
Solution 7.37
1. There is a simple pole at z = 2. The function has a branch point at z = 1. Since this is
the only branch point in the finite complex plane there is also a branch point at infinity. We
can verify this with the substitution z = 1/.
(1/ + 1)1/2
1
f =
1/ + 2
1/2 (1 + )1/2
=
1 + 2
Since f (1/) has a branch point at = 0, f (z) has a branch point at infinity.
2. cos z is an entire function with an essential singularity at infinity. Thus f (z) has singularities
only where 1/(1 + z) has singularities. 1/(1 + z) has a first order pole at z = 1. It is analytic
everywhere else, including the point at infinity. Thus we conclude that f (z) has an essential
singularity at z = 1 and is analytic elsewhere. To explicitly show that z = 1 is an essential
singularity, we can find the Laurent series expansion of f (z) about z = 1.
(1)n
1 X
cos = (z + 1)2n
1+z n=0
(2n)!
3. 1 ez has simple zeros at z = 2n, n Z. Thus f (z) has second order poles at those points.
The point at infinity is a non-isolated singularity. To justify this: Note that
1
f (z) = 2
(1 ez )
has second order poles at z = 2n, n Z. This means that f (1/) has second order poles at
1
= 2n , n Z. These second order poles get arbitrarily close to = 0. There is no deleted
neighborhood around = 0 in which f (1/) is analytic. Thus the point = 0, (z = ), is a
non-isolated singularity. There is no Laurent series expansion about the point = 0, (z = ).
The point at infinity is neither a branch point nor a removable singularity. It is not a pole
either. If it were, there would be an n such that limz z n f (z) = const 6= 0. Since z n f (z)
has second order poles in every deleted neighborhood of infinity, the above limit does not exist.
Thus we conclude that the point at infinity is an essential singularity.
Solution 7.38
We write sinh z in Cartesian form.
This is the parametric equation for the upper half of an ellipse. Also note that u and v satisfy the
equation for an ellipse.
u2 v2
2 + =1
sinh c cosh2 c
The ellipse starts at the point (sinh(c), 0), passes through the point (0, cosh(c)) and ends at (sinh(c), 0).
As c varies from zero to or from zero to , the semi-ellipses cover the upper half w plane. Thus
the mapping is 2-to-1.
Consider the infinite line y = c, x ( . . . ).Its image is
218
This is the parametric equation for the upper half of a hyperbola. Also note that u and v satisfy
the equation for a hyperbola.
u2 v2
2 + =1
cos c sin2 c
As c varies from 0 to /2 or from /2 to , the semi-hyperbola cover the upper half w plane. Thus
the mapping is 2-to-1.
We look for branch points of sinh1 w.
w = sinh z
ez ez
w=
2
e2z 2w ez 1 = 0
1/2
ez = w + w 2 + 1
z = log w + (w )1/2 (w + )1/2
1/2
There are branch points at w = . Since w + w2 + 1 is nonzero and finite in the finite complex
plane, the logarithm does not introduce any branch points in the finite plane. Thus the only branch
point in the upper half w plane is at w = . Any branch cut that connects w = with the boundary
of =(w) > 0 will separate the branches under the inverse mapping.
Consider the line y = /4. The image under the mapping is the upper half of the hyperbola
2u2 + 2v 2 = 1.
Consider the segment x = 1.The image under the mapping is the upper half of the ellipse
u2 v2
+ = 1.
sinh 1 cosh2 1
2
219
220
Chapter 8
Analytic Functions
Students need encouragement. So if a student gets an answer right, tell them it was a lucky guess.
That way, they develop a good, lucky feeling.1
-Jack Handey
|0 + x| |0|
lim =1
x0+ x
and
|0 + x| |0|
lim = 1.
x0 x
Analyticity. The complex derivative, (or simply derivative if the context is clear), is defined,
d f (z + z) f (z)
f (z) = lim .
dz z0 z
The complex derivative exists if this limit exists. This means that the value of the limit is independent
of the manner in which z 0. If the complex derivative exists at a point, then we say that the
function is complex differentiable there.
A function of a complex variable is analytic at a point z0 if the complex derivative exists in
a neighborhood about that point. The function is analytic in an open set if it has a complex
derivative at each point in that set. Note that complex differentiable has a different meaning than
analytic. Analyticity refers to the behavior of a function on an open set. A function can be complex
differentiable at isolated points, but the function would not be analytic at those points. Analytic
functions are also called regular or holomorphic. If a function is analytic everywhere in the finite
complex plane, it is called entire.
1 Quote slightly modified.
221
Example 8.1.1 Consider z n , n Z+ , Is the function differentiable? Is it analytic? What is the
value of the derivative?
We determine differentiability by trying to differentiate the function. We use the limit definition
of differentiation. We will use Newtons binomial formula to expand (z + z)n .
d n (z + z)n z n
z = lim
dz z0
z
n(n1) n2
z n + nz n1 z + 2 z z 2 + + z n z n
= lim
z0
z
n1 n(n 1) n2 n1
= lim nz + z z + + z
z0 2
= nz n1
The derivative exists everywhere. The function is analytic in the whole complex plane so it is entire.
d
The value of the derivative is dz = nz n1 .
Example 8.1.2 We will show that f (z) = z is not differentiable. Consider its derivative.
d f (z + z) f (z)
f (z) = lim .
dz z0 z
d z + z z
z = lim
dz z0 z
z
= lim
z0 z
222
Example 8.1.3 Consider the Cartesian coordinates z = x + y. We write the complex derivative
as derivatives in the coordinate directions for f (z) = (x, y).
1
df (x + y)
= =
dz x x x
1
df (x + y)
= =
dz y y y
Example 8.1.4 In Example 8.1.1 we showed that z n , n Z+ , is an entire function and that
d n n1
dz z = nz . Now we corroborate this by calculating the complex derivative in the Cartesian
coordinate directions.
d n
z = (x + y)n
dz x
= n(x + y)n1
= nz n1
d n
z = (x + y)n
dz y
= n(x + y)n1
= nz n1
Complex Derivatives are Not the Same as Partial Derivatives Recall from calculus that
f g s g t
f (x, y) = g(s, t) = +
x s x t x
Do not make the mistake of using a similar formula for functions of a complex variable. If f (z) =
(x, y) then
df x y
6= + .
dz x z y z
d
This is because the dz operator means The derivative in any direction in the complex plane. Since
0
f (z) is analytic, f (z) is the same no matter in which direction we take the derivative.
Rules of Differentiation. For an analytic function defined in terms of z we can calculate the
complex derivative using all the usual rules of differentiation that we know from calculus like the
product rule,
d
f (z)g(z) = f 0 (z)g(z) + f (z)g 0 (z),
dz
or the chain rule,
d
f (g(z)) = f 0 (g(z))g 0 (z).
dz
This is because the complex derivative derives its properties from properties of limits, just like its
real variable counterpart.
223
Result 8.1.1 The complex derivative is,
d f (z + z) f (z)
f (z) = lim .
dz z0 z
The complex derivative is defined if the limit exists and is independent of the
manner in which z 0. A function is analytic at a point if the complex
derivative exists in a neighborhood of that point.
Let z = (, ) define coordinates in the complex plane. The complex deriva-
tive in the coordinate directions is
1 1
d
= = .
dz
In Cartesian coordinates, this is
d
= = .
dz x y
In polar coordinates, this is
d
= e = e
dz r r
Since the complex derivative is defined with the same limit formula as real
derivatives, all the rules from the calculus of functions of a real variable may
be used to differentiate functions of a complex variable.
Example 8.1.5 We have shown that z n , n Z+ , is an entire function. Now we corroborate that
d n n1
dz z = nz by calculating the complex derivative in the polar coordinate directions.
d n n n
z = e r e
dz r
= e nrn1 en
= nrn1 e(n1)
= nz n1
d n n n
z = e r e
dz r
= e rn n en
r
= nrn1 e(n1)
= nz n1
224
We treat z and z as independent variables. We find the partial derivatives with respect to these
variables.
x y 1
= + =
z z x z y 2 x y
x y 1
= + = +
z z x z y 2 x y
Since is analytic, the complex derivatives in the x and y directions are equal.
=
x y
The partial derivative of f (z, z) with respect to z is zero.
f 1
= + =0
z 2 x y
Thus f (z, z) has no functional dependence on z, it can be written as a function of z alone.
If we were considering an analytic function expressed in polar coordinates (r, ), then we could
write it in Cartesian coordinates with the substitutions:
p
r = x2 + y 2 , = arctan(x, y).
A necessary condition for analyticity. Consider a function f (z) = (x, y). If f (z) is analytic,
the complex derivative is equal to the derivatives in the coordinate directions. We equate the deriva-
tives in the x and y directions to obtain the Cauchy-Riemann equations in Cartesian coordinates.
x = y (8.1)
This equation is a necessary condition for the analyticity of f (z).
Let (x, y) = u(x, y) + v(x, y) where u and v are real-valued functions. We equate the real
and imaginary parts of Equation 8.1 to obtain another form for the Cauchy-Riemann equations in
Cartesian coordinates.
ux = vy , uy = vx .
Note that this is a necessary and not a sufficient condition for analyticity of f (z). That is, u
and v may satisfy the Cauchy-Riemann equations but f (z) may not be analytic. At this point,
Cauchy-Riemann equations give us an easy test for determining if a function is not analytic.
Example 8.2.1 In Example 8.1.2 we showed that z is not analytic using the definition of complex
differentiation. Now we obtain the same result using the Cauchy-Riemann equations.
z = x y
ux = 1, vy = 1
225
We see that the first Cauchy-Riemann equation is not satisfied; the function is not analytic at any
point.
A sufficient condition for analyticity. A sufficient condition for f (z) = (x, y) to be analytic
at a point z0 = (x0 , y0 ) is that the partial derivatives of (x, y) exist and are continuous in some
neighborhood of z0 and satisfy the Cauchy-Riemann equations there. If the partial derivatives of
exist and are continuous then
Here the notation o(x) means terms smaller than x. We calculate the derivative of f (z).
f (z + z) f (z)
f 0 (z) = lim
z0 z
(x + x, y + y) (x, y)
= lim
x,y0 x + y
(x, y) + xx (x, y) + yy (x, y) + o(x) + o(y) (x, y)
= lim
x,y0 x + y
xx (x, y) + yy (x, y) + o(x) + o(y)
= lim
x,y0 x + y
1 1
=
We could separate this into two equations by equating the real and imaginary parts or the modulus
and argument.
226
Result 8.2.1 A necessary condition for analyticity of (, ), where z =
(, ), at z = z0 is that the Cauchy-Riemann equations are satisfied in a
neighborhood of z = z0 .
1 1
= .
(We could equate the real and imaginary parts or the modulus and argument
of this to obtain two equations.) A sufficient condition for analyticity of f (z)
is that the Cauchy-Riemann equations hold and the first partial derivatives of
exist and are continuous in a neighborhood of z = z0 .
Below are the Cauchy-Riemann equations for various forms of f (z).
f (z) = (x, y), x = y
f (z) = u(x, y) + v(x, y), ux = vy , uy = vx
f (z) = (r, ), r =
r
1
f (z) = u(r, ) + v(r, ), ur = v , u = rvr
r
R 1
f (z) = R(r, ) e(r,) , Rr = , R = Rr
r r
f (z) = R(x, y) e(x,y) , Rx = Ry , Ry = Rx
Example 8.2.2 Consider the Cauchy-Riemann equations for f (z) = u(r, ) + v(r, ). From Exer-
cise 8.3 we know that the complex derivative in the polar coordinate directions is
d
= e = e .
dz r r
From Result 8.2.1 we have the equation,
e [u + v] = e [u + v].
r r
We multiply by e and equate the real and imaginary components to obtain the Cauchy-Riemann
equations.
1
ur = v , u = rvr
r
x = y
ex (cos y + sin(y)) = ex ( sin y + cos(y))
ex (cos y + sin(y)) = ex (cos y + sin(y))
Since the function satisfies the Cauchy-Riemann equations and the first partial derivatives are con-
tinuous everywhere in the finite complex plane, the exponential function is entire.
227
Now we find the value of the complex derivative.
d z
e = = ex (cos y + sin(y)) = ez
dz x
The differentiability of the exponential function implies the differentiability of the trigonometric
functions, as they can be written in terms of the exponential.
In Exercise 8.13 you can show that the logarithm log z is differentiable for z 6= 0. This implies
the differentiability of z and the inverse trigonometric functions as they can be written in terms of
the logarithm.
If u is harmonic on some simply-connected domain, then there exists a harmonic function v such
that f (z) = u + v is analytic in the domain. v is called the harmonic conjugate of u. The harmonic
conjugate is unique up to an additive constant. To demonstrate this, let w be another harmonic
conjugate of u. Both the pair u and v and the pair u and w satisfy the Cauchy-Riemann equations.
ux = vy , uy = vx , ux = wy , uy = wx
vx wx = 0, vy wy = 0
228
On a simply connected domain, the integral is path independent and defines a unique v in terms of
vx and vy . We use the Cauchy-Riemann equations to write v in terms of ux and uy .
Z (x,y)
v(x, y) = v (x0 , y0 ) + uy dx + ux dy
(x0 ,y0 )
Changing the starting point (x0 , y0 ) changes v by an additive constant. The harmonic conjugate of
u to within an additive constant is
Z
v(x, y) = uy dx + ux dy.
This proves the existence3 of the harmonic conjugate. This is not the formula one would use to
construct the harmonic conjugate of a u. One accomplishes this by solving the Cauchy-Riemann
equations.
Result 8.3.1 If f (z) = u+v is an analytic function then u and v are harmonic
functions. That is, the Laplacians of u and v vanish u = v = 0. The
Laplacian in Cartesian and polar coordinates is
2 2 1 2
1
= + , = r + .
x2 y 2 r r r r2 2
Given a harmonic function u in a simply connected domain, there exists a
harmonic function v, (unique up to an additive constant), such that f (z) =
u + v is analytic in the domain. One can construct v by solving the Cauchy-
Riemann equations.
2u
= ex sin y ex sin y + x ex sin y y ex cos y
x2
= 2 ex sin y + x ex sin y y ex cos y
u
= ex (x cos y cos y + y sin y)
y
2u
= ex (x sin y + sin y + y cos y + sin y)
y 2
= x ex sin y + 2 ex sin y + y ex cos y
2u 2u
Thus we see that x2 + y 2 = 0 and u is harmonic.
3 A mathematician returns to his office to find that a cigarette tossed in the trash has started a small fire. Being
calm and a quick thinker he notes that there is a fire extinguisher by the window. He then closes the door and walks
away because the solution exists.
229
Example 8.3.3 Consider u = cos x cosh y. This function is harmonic:

    u_xx + u_yy = −cos x cosh y + cos x cosh y = 0.

Thus it is the real part of an analytic function, f(z). We find the harmonic conjugate, v, with the Cauchy-Riemann equations. We integrate the first Cauchy-Riemann equation:

    v_y = u_x = −sin x cosh y
    v = −sin x sinh y + a(x)

Here a(x) is a constant of integration. We substitute this into the second Cauchy-Riemann equation to determine a(x):

    v_x = −u_y
    −cos x sinh y + a'(x) = −cos x sinh y
    a'(x) = 0
    a(x) = c
    v = −sin x sinh y + c

The corresponding analytic function is f(z) = cos x cosh y − ı sin x sinh y + ıc = cos z + ıc.
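The procedure of this example, integrating one Cauchy-Riemann equation and fixing the constant of integration with the other, can be mirrored symbolically. A minimal sympy sketch (assuming only the u of this example):

    import sympy as sp

    x, y = sp.symbols('x y', real=True)
    u = sp.cos(x) * sp.cosh(y)

    # Integrate the first Cauchy-Riemann equation, v_y = u_x, with respect to y.
    v = sp.integrate(sp.diff(u, x), y)          # -sin(x)*sinh(y), up to a(x)
    # The second equation, v_x = -u_y, fixes a(x): here a'(x) = 0.
    print(sp.simplify(sp.diff(v, x) + sp.diff(u, y)))   # 0, so a(x) is constant

    # u + i v should equal cos(x + i y), i.e. cos(z), up to a constant.
    diff = sp.expand(u + sp.I * v - sp.cos(x + sp.I * y), complex=True)
    print(sp.simplify(diff))                    # 0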
Example 8.3.4 Here we consider an example that demonstrates the need for a simply connected domain. Consider u = Log r in the multiply connected domain, r > 0. u is harmonic:

    ∆ Log r = (1/r) ∂/∂r (r ∂/∂r Log r) + (1/r²) ∂²/∂θ² Log r = 0

We solve the Cauchy-Riemann equations to try to find the harmonic conjugate:

    u_r = (1/r) v_θ,    u_θ = −r v_r
    v_r = 0,    v_θ = 1
    v = θ + c

We are able to solve for v, but it is multi-valued. Any single-valued branch of θ that we choose will not be continuous on the domain. Thus there is no harmonic conjugate of u = Log r for the domain r > 0.
If we had instead considered the simply-connected domain r > 0, |arg(z)| < π, then the harmonic conjugate would be v = Arg(z) + c. The corresponding analytic function is f(z) = Log z + ıc.
Example 8.3.5 Consider u = x³ − 3xy² + x. This function is harmonic:

    u_xx + u_yy = 6x − 6x = 0

Thus it is the real part of an analytic function, f(z). We find the harmonic conjugate, v, with the Cauchy-Riemann equations. We integrate the first Cauchy-Riemann equation:

    v_y = u_x = 3x² − 3y² + 1
    v = 3x²y − y³ + y + a(x)

Here a(x) is a constant of integration. We substitute this into the second Cauchy-Riemann equation to determine a(x):

    v_x = −u_y
    6xy + a'(x) = 6xy
    a'(x) = 0
    a(x) = c
    v = 3x²y − y³ + y + c

The corresponding analytic function is f(z) = z³ + z + ıc.
8.4 Singularities
Any point at which a function is not analytic is called a singularity. In this section we will classify
the different flavors of singularities.
Example 8.4.1 Consider f(z) = z^{1/2}. The origin and infinity are branch points and are thus singularities of f(z). We choose the branch g(z) = √z. All the points on the negative real axis, including the origin, are singularities of g(z).
Removable Singularities. Consider the function f(z) = sin(z)/z, which is undefined at z = 0. If we were to fill in the hole in the definition of f(z), we could make it differentiable at z = 0. Consider the function

    g(z) = { sin(z)/z   for z ≠ 0,
             1          for z = 0. }
We calculate the derivative at z = 0 to verify that g(z) is analytic there:

    g'(0) = lim_{z→0} (g(z) − g(0))/z
          = lim_{z→0} (sin(z)/z − 1)/z
          = lim_{z→0} (sin(z) − z)/z²
          = lim_{z→0} (cos(z) − 1)/(2z)
          = lim_{z→0} (−sin(z))/2
          = 0
We call the point at z = 0 a removable singularity of sin(z)/z because we can remove the singularity
by defining the value of the function to be its limiting value there.
Poles. If a function f(z) behaves like c/(z − z₀)ⁿ near z = z₀ then the function has an nth order pole at that point. More mathematically we say

    lim_{z→z₀} (z − z₀)ⁿ f(z) = c ≠ 0.

We require the constant c to be nonzero so we know that it is not a pole of lower order. We can denote a removable singularity as a pole of order zero.
Another way to say that a function has an nth order pole is that f(z) is not analytic at z = z₀, but (z − z₀)ⁿ f(z) is either analytic or has a removable singularity at that point.
Example 8.4.3 1/sin(z²) has a second order pole at z = 0 and first order poles at z = (nπ)^{1/2}, n ∈ Z, n ≠ 0:

    lim_{z→0} z²/sin(z²) = lim_{z→0} 2z/(2z cos(z²))
                         = lim_{z→0} 2/(2 cos(z²) − 4z² sin(z²))
                         = 1

    lim_{z→(nπ)^{1/2}} (z − (nπ)^{1/2})/sin(z²) = lim_{z→(nπ)^{1/2}} 1/(2z cos(z²))
                                                = 1/(2(nπ)^{1/2} (−1)ⁿ)
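These limits can be checked with a computer algebra system. A small sympy sketch (the n = 1 pole is used as a representative case):

    import sympy as sp

    z = sp.symbols('z')
    f = 1 / sp.sin(z**2)

    # Second order pole at z = 0: z^2 f(z) has a finite nonzero limit there.
    print(sp.limit(z**2 * f, z, 0))        # 1

    # First order pole at z = sqrt(pi): (z - sqrt(pi)) f(z) -> -1/(2 sqrt(pi)).
    a = sp.sqrt(sp.pi)
    print(sp.limit((z - a) * f, z, a))     # -1/(2*sqrt(pi))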
Example 8.4.4 e^{1/z} is singular at z = 0. The function is not analytic as lim_{z→0} e^{1/z} does not exist. We check if the function has a pole of order n at z = 0. With the change of variables ζ = 1/z and L'Hospital's rule applied n times,

    lim_{z→0} zⁿ e^{1/z} = lim_{ζ→∞} e^ζ/ζⁿ = lim_{ζ→∞} e^ζ/n! = ∞.

Since the limit does not exist (is not finite and nonzero) for any value of n, the singularity is not a pole. We could say that e^{1/z} is more singular than any power of 1/z.
The point at infinity. We can consider the point at infinity, z → ∞, by making the change of variables z = 1/ζ and considering ζ → 0. If f(1/ζ) is analytic at ζ = 0 then f(z) is analytic at infinity. We have encountered branch points at infinity before (Section 7.8). Assume that f(z) is not analytic at infinity. If lim_{z→∞} f(z) exists then f(z) has a removable singularity at infinity. If lim_{z→∞} f(z)/zⁿ = c ≠ 0 then f(z) has an nth order pole at infinity.
A pole may be called a non-essential singularity. This is because multiplying the function by an integral power of z − z₀ will make the function analytic. Then an essential singularity is a point z₀ such that there does not exist an n such that (z − z₀)ⁿ f(z) is analytic there.
If you don't like the abstract notion of a deleted neighborhood, you can work with a deleted circular neighborhood. However, this will require the introduction of more math symbols and a Greek letter.
z = z₀ is an isolated singularity if there exists a δ > 0 such that there are no singularities in 0 < |z − z₀| < δ.
Example 8.4.5 We classify the singularities of f(z) = z/sin z. It has first order poles at z = nπ, n ∈ Z, n ≠ 0:

    lim_{z→nπ} (z − nπ) f(z) = lim_{z→nπ} (z − nπ)z/sin z
                             = lim_{z→nπ} (2z − nπ)/cos z
                             = nπ/(−1)ⁿ
                             ≠ 0
Now to examine the behavior at infinity. There is no neighborhood of infinity that does not contain first order poles of f(z). (Another way of saying this is that there does not exist an R such that there are no singularities in R < |z| < ∞.) Thus z = ∞ is a non-isolated singularity.
We could also determine this by setting ζ = 1/z and examining the point ζ = 0. f(1/ζ) has first order poles at ζ = 1/(nπ) for n ∈ Z∖{0}. These first order poles come arbitrarily close to the point ζ = 0. There is no deleted neighborhood of ζ = 0 which does not contain singularities. Thus ζ = 0, and hence z = ∞, is a non-isolated singularity.
The point at infinity is an essential singularity. It is certainly not a branch point or a removable singularity. It is not a pole, because there is no n such that lim_{z→∞} z⁻ⁿ f(z) = c ≠ 0. z⁻ⁿ f(z) has first order poles in any neighborhood of infinity, so this limit does not exist.
8.5 Application: Fluid Flow

Figure 8.1: The velocity potential φ and stream function ψ for Φ(z) = v₀ e^{−ıθ₀} z.

Figure 8.3: Velocity field and velocity direction field for φ = v₀ (cos(θ₀)x + sin(θ₀)y).
Example 8.5.2 Steady, incompressible, inviscid, irrotational flow is governed by the Laplace equation. We consider flow around an infinite cylinder of radius a. Because the flow does not vary along the axis of the cylinder, this is a two-dimensional problem. The flow corresponds to the complex potential

    Φ(z) = v₀ (z + a²/z).
We find the velocity potential φ and stream function ψ:

    Φ(z) = φ + ıψ
    φ = v₀ (r + a²/r) cos θ,    ψ = v₀ (r − a²/r) sin θ

These are plotted in Figure 8.4.

Figure 8.4: The velocity potential φ and stream function ψ for Φ(z) = v₀ (z + a²/z).

Figure 8.5: Streamlines for ψ = v₀ (r − a²/r) sin θ.
The velocity field is the gradient of the potential:

    v = ∇φ = φ_r r̂ + (1/r) φ_θ θ̂
    v = v₀ (1 − a²/r²) cos θ r̂ − v₀ (1 + a²/r²) sin θ θ̂

The velocity field is shown in Figure 8.6.
Figure 8.6: Velocity field and velocity direction field for φ = v₀ (r + a²/r) cos θ.
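Since the streamlines are the level curves of ψ, one can reproduce Figure 8.5 numerically. A minimal matplotlib sketch, assuming the arbitrary values v₀ = 1 and a = 1 (any positive values give the same picture up to scale):

    import numpy as np
    import matplotlib.pyplot as plt

    v0, a = 1.0, 1.0                      # assumed free-stream speed and radius
    x, y = np.meshgrid(np.linspace(-3, 3, 400), np.linspace(-3, 3, 400))
    z = x + 1j * y
    z[np.abs(z) < a] = np.nan             # mask the interior of the cylinder

    Phi = v0 * (z + a**2 / z)             # complex potential
    psi = Phi.imag                        # stream function: v0 (r - a^2/r) sin(theta)

    plt.contour(x, y, psi, levels=21)     # streamlines are level curves of psi
    plt.gca().add_patch(plt.Circle((0, 0), a, fill=False))
    plt.gca().set_aspect('equal')
    plt.show()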
8.6 Exercises
Complex Derivatives
Exercise 8.1
Consider two functions f(z) and g(z) analytic at z₀ with f(z₀) = g(z₀) = 0 and g'(z₀) ≠ 0.
1. Use the definition of the complex derivative to justify L'Hospital's rule:

    lim_{z→z₀} f(z)/g(z) = f'(z₀)/g'(z₀)

2. Evaluate the limits

    lim_{z→ı} (1 + z²)/(2 + 2z⁶),    lim_{z→ıπ} sinh(z)/(e^z + 1)
Exercise 8.2
Show that if f(z) is analytic and φ(x, y) = f(z) is twice continuously differentiable then f'(z) is analytic.
Exercise 8.3
Find the complex derivative in the coordinate directions for f(z) = φ(r, θ).
Exercise 8.4
Show that the following functions are nowhere analytic by checking where the derivative with respect
to z exists.
1. sin x cosh y − ı cos x sinh y
2. x² − y² + x + ı(2xy − y)
Exercise 8.5
f(z) is analytic for all z, (|z| < ∞). f(z₁ + z₂) = f(z₁) f(z₂) for all z₁ and z₂. (This is known as a functional equation.) Prove that f(z) = exp(f'(0) z).
Cauchy-Riemann Equations
Exercise 8.6
If f (z) is analytic in a domain and has a constant real part, a constant imaginary part, or a constant
modulus, show that f (z) is constant.
Exercise 8.7
Show that the function

    f(z) = { e^{−z⁻⁴}   for z ≠ 0,
             0          for z = 0. }
satisfies the Cauchy-Riemann equations everywhere, including at z = 0, but f (z) is not analytic at
the origin.
Exercise 8.8
Find the Cauchy-Riemann equations for the following forms.
1. f(z) = R(r, θ) e^{ıΘ(r,θ)}
2. f(z) = R(x, y) e^{ıΘ(x,y)}
Exercise 8.9
1. Show that e^{z̄} is not analytic.
2. f(z) is an analytic function of z. Show that f̄(z̄) is also an analytic function of z.
Exercise 8.10
1. Determine all points z = x + ıy where the following functions are differentiable with respect to z:
(a) x³ + y³
(b) (x − 1)/((x − 1)² + y²) − ı y/((x − 1)² + y²)
2. Determine all points z where these functions are analytic.
3. Determine which of the following functions v(x, y) are the imaginary part of an analytic function u(x, y) + ıv(x, y). For those that are, compute the real part u(x, y) and re-express the answer as an explicit function of z = x + ıy:
(a) x² − y²
(b) 3x²y
Exercise 8.11
Let

    f(z) = { (x^{4/3} y^{5/3} + ı x^{5/3} y^{4/3})/(x² + y²)   for z ≠ 0,
             0                                                  for z = 0. }
Show that the Cauchy-Riemann equations hold at z = 0, but that f is not differentiable at this
point.
Exercise 8.12
Consider the complex function

    f(z) = u + ıv = { (x³(1 + ı) − y³(1 − ı))/(x² + y²)   for z ≠ 0,
                      0                                    for z = 0. }

Show that the partial derivatives of u and v with respect to x and y exist at z = 0 and that u_x = v_y and u_y = −v_x there: the Cauchy-Riemann equations are satisfied at z = 0. On the other hand, show that

    lim_{z→0} f(z)/z

does not exist, that is, f is not differentiable at z = 0.
Exercise 8.13
Show that the logarithm log z is differentiable for z ≠ 0. Find the derivative of the logarithm.
Exercise 8.14
Show that the Cauchy-Riemann equations for the analytic function f(z) = u(r, θ) + ıv(r, θ) are

    u_r = v_θ/r,    u_θ = −r v_r.
Exercise 8.15
w = u + ıv is an analytic function of z. Φ(x, y) is an arbitrary smooth function of x and y. When expressed in terms of u and v, Φ(x, y) = Ψ(u, v). Show that (w' ≠ 0)

    ∂Ψ/∂u − ı ∂Ψ/∂v = (dw/dz)⁻¹ (∂Φ/∂x − ı ∂Φ/∂y).

Deduce

    ∂²Ψ/∂u² + ∂²Ψ/∂v² = |dw/dz|⁻² (∂²Φ/∂x² + ∂²Φ/∂y²).
Exercise 8.16
Show that the functions defined by f(z) = log|z| + ı arg(z) and f(z) = √|z| e^{ı arg(z)/2} are analytic in the sector |z| > 0, |arg(z)| < π. What are the corresponding derivatives df/dz?
Exercise 8.17
Show that the following functions are harmonic. For each one of them find its harmonic conjugate
and form the corresponding holomorphic function.
1. u(x, y) = x Log(r) − y arctan(x, y) (r ≠ 0)
2. u(x, y) = arg(z) (|arg(z)| < π, r ≠ 0)
3. u(r, θ) = rⁿ cos(nθ)
4. u(x, y) = y/r² (r ≠ 0)
Exercise 8.18
1. Use the Cauchy-Riemann equations to determine where the function

    f(z) = (x − y)² + ı2(x + y)

is differentiable and where it is analytic.
2. Evaluate the derivative of

    f(z) = e^{x²−y²} (cos(2xy) + ı sin(2xy))

and describe the domain of analyticity.
Exercise 8.19
Consider the function f (z) = u + v with real and imaginary parts expressed in terms of either x
and y or r and .
1. Show that the Cauchy-Riemann equations
    u_x = v_y,    u_y = −v_x
are satisfied and these partial derivatives are continuous at a point z if and only if the polar
form of the Cauchy-Riemann equations
    u_r = (1/r) v_θ,    (1/r) u_θ = −v_r
is satisfied and these partial derivatives are continuous there.
2. Show that it is easy to verify that Log z is analytic for r > 0 and −π < θ < π using the polar form of the Cauchy-Riemann equations, and that the value of the derivative is easily obtained from a polar differentiation formula.
3. Show that in polar coordinates, Laplaces equation becomes
    Φ_rr + (1/r) Φ_r + (1/r²) Φ_θθ = 0.
Exercise 8.20
Determine which of the following functions are the real parts of an analytic function.
1. u(x, y) = x³ − y³
2. u(x, y) = sinh x cos y + x
3. u(r, θ) = rⁿ cos(nθ)
Exercise 8.21
Consider steady, incompressible, inviscid, irrotational flow governed by the Laplace equation. De-
termine the form of the velocity potential and stream function contours for the complex potentials
1. Φ(z) = φ(x, y) + ıψ(x, y) = log z + ı log z
2. Φ(z) = log(z − 1) + log(z + 1)
Plot and describe the features of the flows you are considering.
Exercise 8.22
1. Classify all the singularities (removable, poles, isolated essential, branch points, non-isolated
essential) of the following functions in the extended complex plane
(a) z/(z² + 1)
(b) 1/sin z
(c) log(1 + z²)
(d) z sin(1/z)
(e) tan⁻¹(z)/(z sinh²(πz))
2. Construct functions that have the following zeros or singularities:
(a) a simple zero at z = ı and an isolated essential singularity at z = 1.
(b) a removable singularity at z = 3, a pole of order 6 at z = −ı and an essential singularity at z = ∞.
8.7 Hints
Complex Derivatives
Hint 8.1
Hint 8.2
Start with the Cauchy-Riemann equation and then differentiate with respect to x.
Hint 8.3
Read Example 8.1.3 and use Result 8.1.1.
Hint 8.4
Use Result 8.1.1.
Hint 8.5
Take the logarithm of the equation to get a linear equation.
Cauchy-Riemann Equations
Hint 8.6
Hint 8.7
Hint 8.8
For the first part use the result of Exercise 8.3.
Hint 8.9
Use the Cauchy-Riemann equations.
Hint 8.10
Hint 8.11
To evaluate u_x(0, 0), etc., use the definition of differentiation. Try to find f'(z) with the definition of complex differentiation. Consider ∆z = r e^{ıθ}.
Hint 8.12
To evaluate u_x(0, 0), etc., use the definition of differentiation. Try to find f'(z) with the definition of complex differentiation. Consider ∆z = r e^{ıθ}.
Hint 8.13
Hint 8.14
Hint 8.15
Hint 8.16
Hint 8.17
Hint 8.18
Hint 8.19
Hint 8.20
Hint 8.21
Hint 8.22
CONTINUE
8.8 Solutions
Complex Derivatives
Solution 8.1
1. We consider L'Hospital's rule,

    lim_{z→z₀} f(z)/g(z) = f'(z₀)/g'(z₀).

We start with the right side and show that it is equal to the left side. First we apply the definition of complex differentiation:

    f'(z₀)/g'(z₀) = [lim_{ε→0} (f(z₀ + ε) − f(z₀))/ε] / [lim_{δ→0} (g(z₀ + δ) − g(z₀))/δ]

Since both of the limits exist, we may take the limits with ε = δ. Using f(z₀) = g(z₀) = 0,

    f'(z₀)/g'(z₀) = lim_{ε→0} f(z₀ + ε)/g(z₀ + ε)
    f'(z₀)/g'(z₀) = lim_{z→z₀} f(z)/g(z)
This proves LHospitals rule.
2.

    lim_{z→ı} (1 + z²)/(2 + 2z⁶) = [2z/(12z⁵)]_{z=ı} = 1/6

    lim_{z→ıπ} sinh(z)/(e^z + 1) = [cosh(z)/e^z]_{z=ıπ} = (−1)/(−1) = 1
Solution 8.2
We start with the Cauchy-Riemann equation and then differentiate with respect to x:

    φ_x = −ıφ_y
    φ_xx = −ıφ_yx

We interchange the order of differentiation:

    (φ_x)_x = −ı(φ_x)_y
    (f')_x = −ı(f')_y

Since f'(z) satisfies the Cauchy-Riemann equation and its partial derivatives exist and are continuous, it is analytic.
Solution 8.3
We calculate the complex derivative in the coordinate directions:

    df/dz = (∂(r e^{ıθ})/∂r)⁻¹ ∂φ/∂r = e^{−ıθ} ∂φ/∂r,
    df/dz = (∂(r e^{ıθ})/∂θ)⁻¹ ∂φ/∂θ = −(ı/r) e^{−ıθ} ∂φ/∂θ.
Solution 8.4
1. Consider f(x, y) = sin x cosh y − ı cos x sinh y. The derivatives in the x and y directions are

    ∂f/∂x = cos x cosh y + ı sin x sinh y
    −ı ∂f/∂y = −ı sin x sinh y − cos x cosh y

These derivatives exist and are everywhere continuous. We equate the expressions to get a set of two equations:

    cos x cosh y = −cos x cosh y,    sin x sinh y = −sin x sinh y
    cos x cosh y = 0,    sin x sinh y = 0
    x = π/2 + nπ and (x = mπ or y = 0)

The function may be differentiable only at the points

    x = π/2 + nπ,    y = 0.

Thus the function is nowhere analytic.
2. Consider f(x, y) = x² − y² + x + ı(2xy − y). The derivatives in the x and y directions are

    ∂f/∂x = 2x + 1 + ı2y
    −ı ∂f/∂y = −ı(−2y + ı(2x − 1)) = 2x − 1 + ı2y

These derivatives exist and are everywhere continuous. We equate the expressions to get a set of two equations:

    2x + 1 = 2x − 1,    2y = 2y.

Since this set of equations has no solutions, there are no points at which the function is differentiable. The function is nowhere analytic.
Solution 8.5
We take the logarithm of the functional equation to obtain a linear equation:

    log f(z₁ + z₂) = log f(z₁) + log f(z₂).

The analytic solutions of g(z₁ + z₂) = g(z₁) + g(z₂) are of the form g(z) = cz. Thus f(z) has the solutions

    f(z) = e^{cz},

where c is any complex constant. We can write this constant in terms of f'(0). We differentiate the original equation with respect to z₁ and then substitute z₁ = 0:

    f'(z₁ + z₂) = f'(z₁) f(z₂)
    f'(z₂) = f'(0) f(z₂)
    f'(z) = f'(0) f(z)

We substitute in the form of the solution: c e^{cz} = f'(0) e^{cz}, so c = f'(0) and f(z) = exp(f'(0) z).
Cauchy-Riemann Equations
Solution 8.6
Constant Real Part. First assume that f(z) has constant real part. We solve the Cauchy-Riemann equations to determine the imaginary part:

    u_x = v_y,    u_y = −v_x
    v_x = 0,    v_y = 0

We integrate the first equation to obtain v = g(y), where g(y) is an arbitrary function. Then we substitute this into the second equation to determine g(y):

    g'(y) = 0
    g(y) = b

We see that the imaginary part of f(z) is a constant and conclude that f(z) is constant.
Constant Imaginary Part. Next assume that f(z) has constant imaginary part. We solve the Cauchy-Riemann equations to determine the real part:

    u_x = v_y,    u_y = −v_x
    u_x = 0,    u_y = 0

We integrate the first equation to obtain u = g(y), where g(y) is an arbitrary function. Then we substitute this into the second equation to determine g(y):

    g'(y) = 0
    g(y) = b

We see that the real part of f(z) is a constant and conclude that f(z) is constant.
Constant Modulus. Finally assume that f(z) has constant modulus:

    |f(z)| = constant
    √(u² + v²) = constant
    u² + v² = constant

We differentiate this equation with respect to x and y:

    u u_x + v v_x = 0,    u u_y + v v_y = 0

This homogeneous system has non-trivial solutions for u and v only if the coefficient matrix is singular. (The trivial solution u = v = 0 is the constant function f(z) = 0.) We set the determinant of the matrix to zero:

    u_x v_y − u_y v_x = 0
We use the Cauchy-Riemann equations to write this in terms of u_x and u_y:

    u_x² + u_y² = 0
    u_x = u_y = 0

Since its partial derivatives vanish, u is a constant. From the Cauchy-Riemann equations we see that the partial derivatives of v vanish as well, so it is constant. We conclude that f(z) is a constant.
Constant Modulus. Here is another method for the constant modulus case. We solve the Cauchy-Riemann equations in polar form to determine the argument of f(z) = R(x, y) e^{ıΘ(x,y)}. Since the function has constant modulus R, the partial derivatives of R vanish:

    R_x = R Θ_y,    R_y = −R Θ_x
    0 = R Θ_y,    0 = −R Θ_x

The equations are satisfied for R = 0. For this case, f(z) = 0. We consider nonzero R:

    Θ_y = 0,    Θ_x = 0

We see that the argument of f(z) is a constant and conclude that f(z) is constant.
Solution 8.7
First we verify that the Cauchy-Riemann equations are satisfied for z ≠ 0. Note that the form

    f_x = −ıf_y

is equivalent to the Cauchy-Riemann equations

    u_x = v_y,    u_y = −v_x.

Since f(z) = e^{−z⁻⁴} is an analytic function of z for z ≠ 0, the Cauchy-Riemann equations are satisfied there. Now we verify that they also hold at z = 0 by computing the partial derivatives with their definitions:

    f_x(0, 0) = lim_{x→0} (f(x, 0) − f(0, 0))/x = lim_{x→0} e^{−x⁻⁴}/x = 0

    f_y(0, 0) = lim_{y→0} (f(0, y) − f(0, 0))/y = lim_{y→0} e^{−y⁻⁴}/y = 0
Let ∆z = r e^{ıθ}, that is, we approach the origin at an angle of θ:

    f'(0) = lim_{r→0} f(r e^{ıθ})/(r e^{ıθ}) = lim_{r→0} e^{−r⁻⁴ e^{−ı4θ}}/(r e^{ıθ})

For most values of θ the limit does not exist. Consider θ = π/4:

    f'(0) = lim_{r→0} e^{r⁻⁴}/(r e^{ıπ/4}) = ∞
Because the limit does not exist, the function is not differentiable at z = 0. Recall that satisfying
the Cauchy-Riemann equations is a necessary, but not a sufficient condition for differentiability.
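The blow-up along θ = π/4 is easy to observe numerically. A small numpy sketch (the radii are chosen small enough to show growth but large enough to avoid floating-point overflow):

    import numpy as np

    # Approach z = 0 along theta = pi/4, where -z**-4 = +r**-4.
    for r in [0.5, 0.4, 0.3]:
        z = r * np.exp(1j * np.pi / 4)
        print(r, abs(np.exp(-z**-4) / z))   # grows without bound as r -> 0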
Solution 8.8
1. We find the Cauchy-Riemann equations for f(z) = R(r, θ) e^{ıΘ(r,θ)}.
From Exercise 8.3 we know that the complex derivative in the polar coordinate directions is

    d/dz = e^{−ıθ} ∂/∂r = −(ı/r) e^{−ıθ} ∂/∂θ.

We equate the derivatives in the two directions:

    e^{−ıθ} ∂/∂r (R e^{ıΘ}) = −(ı/r) e^{−ıθ} ∂/∂θ (R e^{ıΘ})
    (R_r + ıR Θ_r) e^{ıΘ} = −(ı/r)(R_θ + ıR Θ_θ) e^{ıΘ}

We divide by e^{ıΘ} and equate the real and imaginary components to obtain the Cauchy-Riemann equations:

    R_r = (R/r) Θ_θ,    (1/r) R_θ = −R Θ_r

2. We find the Cauchy-Riemann equations for f(z) = R(x, y) e^{ıΘ(x,y)}. We equate the derivatives in the x and y directions:

    ∂/∂x (R e^{ıΘ}) = −ı ∂/∂y (R e^{ıΘ})
    (R_x + ıR Θ_x) e^{ıΘ} = −ı(R_y + ıR Θ_y) e^{ıΘ}

We divide by e^{ıΘ} and equate the real and imaginary components to obtain the Cauchy-Riemann equations:

    R_x = R Θ_y,    R_y = −R Θ_x
Solution 8.9
1. A necessary condition for analyticity in an open set is that the Cauchy-Riemann equations are satisfied in that set. We write e^{z̄} in Cartesian form:

    e^{z̄} = e^{x−ıy} = e^x cos y − ı e^x sin y.

Now we determine where u = e^x cos y and v = −e^x sin y satisfy the Cauchy-Riemann equations:

    u_x = v_y,    u_y = −v_x
    e^x cos y = −e^x cos y,    −e^x sin y = e^x sin y
    cos y = 0,    sin y = 0
    y = π/2 + mπ,    y = nπ

Thus we see that the Cauchy-Riemann equations are not satisfied anywhere. e^{z̄} is nowhere analytic.
2. Since f(z) = u + ıv is analytic, u and v satisfy the Cauchy-Riemann equations and their first partial derivatives are continuous.
We define g(z) ≡ f̄(z̄) = μ(x, y) + ıν(x, y) = u(x, −y) − ıv(x, −y). Now we see if μ and ν satisfy the Cauchy-Riemann equations:

    μ_x = ν_y,    μ_y = −ν_x
    (u(x, −y))_x = (−v(x, −y))_y,    (u(x, −y))_y = −(−v(x, −y))_x
    u_x(x, −y) = v_y(x, −y),    u_y(x, −y) = −v_x(x, −y)

Thus we see that the Cauchy-Riemann equations for μ and ν are satisfied if and only if the Cauchy-Riemann equations for u and v are satisfied. The continuity of the first partial derivatives of u and v implies the same of μ and ν. Thus f̄(z̄) is analytic.
Solution 8.10
1. The necessary condition for a function f (z) = u + v to be differentiable at a point is that the
Cauchy-Riemann equations hold and the first partial derivatives of u and v are continuous at
that point.
(a)

    f(z) = x³ + y³

The Cauchy-Riemann equations are

    u_x = v_y and u_y = −v_x
    3x² = 0 and 3y² = 0
    x = 0 and y = 0
The first partial derivatives are continuous. Thus we see that the function is differentiable
only at the point z = 0.
(b)

    f(z) = (x − 1)/((x − 1)² + y²) − ı y/((x − 1)² + y²)

The Cauchy-Riemann equations are

    u_x = v_y and u_y = −v_x
    (y² − (x − 1)²)/((x − 1)² + y²)² = (y² − (x − 1)²)/((x − 1)² + y²)²
    −2(x − 1)y/((x − 1)² + y²)² = −2(x − 1)y/((x − 1)² + y²)²

The Cauchy-Riemann equations are each identities. The first partial derivatives are continuous everywhere except the point x = 1, y = 0. Thus the function is differentiable everywhere except z = 1.
2. (a) The function is not differentiable in any open set. Thus the function is nowhere analytic.
(b) The function is differentiable everywhere except z = 1. Thus the function is analytic
everywhere except z = 1.
(a) We consider v = x² − y². The Laplacian is

    v_xx + v_yy = 2 − 2 = 0

The function is harmonic in the complex plane, so it is the imaginary part of some analytic function. By inspection, we see that this function is

    ız² + c = −2xy + c + ı(x² − y²),

where c is a real constant. We can also find the function by solving the Cauchy-Riemann equations:

    u_x = v_y and u_y = −v_x
    u_x = −2y and u_y = −2x
    u = −2xy + g(y)

Here g(y) is a function of integration. We substitute this into the second Cauchy-Riemann equation to determine g(y):

    u_y = −2x
    −2x + g'(y) = −2x
    g'(y) = 0
    g(y) = c
    u = −2xy + c
    f(z) = −2xy + c + ı(x² − y²)
    f(z) = ız² + c
(b) We consider v = 3x²y. The Laplacian is

    v_xx + v_yy = 6y ≠ 0.

The function is not harmonic. It is not the imaginary part of any analytic function.
Solution 8.11
We write the real and imaginary parts of f(z) = u + ıv:

    u = { x^{4/3} y^{5/3}/(x² + y²)   for z ≠ 0,      v = { x^{5/3} y^{4/3}/(x² + y²)   for z ≠ 0,
          0                           for z = 0. }          0                           for z = 0. }

The Cauchy-Riemann equations are

    u_x = v_y,    u_y = −v_x.
We calculate the partial derivatives of u and v at the point x = y = 0 using the definition of differentiation:

    u_x(0, 0) = lim_{∆x→0} (u(∆x, 0) − u(0, 0))/∆x = lim_{∆x→0} (0 − 0)/∆x = 0
    v_x(0, 0) = lim_{∆x→0} (v(∆x, 0) − v(0, 0))/∆x = 0
    u_y(0, 0) = lim_{∆y→0} (u(0, ∆y) − u(0, 0))/∆y = 0
    v_y(0, 0) = lim_{∆y→0} (v(0, ∆y) − v(0, 0))/∆y = 0

Since u_x(0, 0) = u_y(0, 0) = v_x(0, 0) = v_y(0, 0) = 0, the Cauchy-Riemann equations are satisfied.
f (z) is not analytic at the point z = 0. We show this by calculating the derivative there.
We let ∆z = r e^{ıθ}, that is, we approach the origin at an angle of θ. Then x = r cos θ and y = r sin θ:

    f'(0) = lim_{r→0} f(r e^{ıθ})/(r e^{ıθ})
          = lim_{r→0} [r³ (cos^{4/3}θ sin^{5/3}θ + ı cos^{5/3}θ sin^{4/3}θ)/r²] / (r e^{ıθ})
          = lim_{r→0} (cos^{4/3}θ sin^{5/3}θ + ı cos^{5/3}θ sin^{4/3}θ)/e^{ıθ}

The value of the limit depends on θ and is not a constant. Thus this limit does not exist. The function is not differentiable at z = 0.
Solution 8.12
We write the real and imaginary parts of f(z) = u + ıv:

    u = { (x³ − y³)/(x² + y²)   for z ≠ 0,      v = { (x³ + y³)/(x² + y²)   for z ≠ 0,
          0                     for z = 0. }          0                     for z = 0. }

The Cauchy-Riemann equations are

    u_x = v_y,    u_y = −v_x.
    u_x(0, 0) = lim_{∆x→0} (u(∆x, 0) − u(0, 0))/∆x = lim_{∆x→0} ∆x/∆x = 1,
    v_x(0, 0) = lim_{∆x→0} (v(∆x, 0) − v(0, 0))/∆x = lim_{∆x→0} ∆x/∆x = 1,
    u_y(0, 0) = lim_{∆y→0} (u(0, ∆y) − u(0, 0))/∆y = lim_{∆y→0} (−∆y)/∆y = −1,
    v_y(0, 0) = lim_{∆y→0} (v(0, ∆y) − v(0, 0))/∆y = lim_{∆y→0} ∆y/∆y = 1.

We see that u_x = v_y and u_y = −v_x at z = 0: the Cauchy-Riemann equations are satisfied there. However, the limit of f(z)/z as z → 0 depends on the direction of approach. Along the real axis, y = 0, f(z)/z = x(1 + ı)/x = 1 + ı; along the line y = x, f(z)/z = ıx/(x(1 + ı)) = (1 + ı)/2. Since the two values differ, lim_{z→0} f(z)/z does not exist and f is not differentiable at z = 0.
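The direction-dependence of f(z)/z can be seen numerically. A small numpy sketch (the sample points t are arbitrary):

    import numpy as np

    def f(z):
        x, y = z.real, z.imag
        return (x**3 * (1 + 1j) - y**3 * (1 - 1j)) / (x**2 + y**2)

    for t in [1e-2, 1e-4, 1e-6]:
        print(f(t + 0j) / t, f(t + 1j * t) / (t + 1j * t))
    # Along y = 0 the ratio is 1+i; along y = x it is (1+i)/2.
    # The two disagree, so the limit does not exist.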
Solution 8.13
We show that the logarithm log z = φ(r, θ) = Log r + ıθ satisfies the Cauchy-Riemann equations:

    φ_r = −(ı/r) φ_θ
    1/r = −(ı/r) ı
    1/r = 1/r

Since the logarithm satisfies the Cauchy-Riemann equations and the first partial derivatives are continuous for z ≠ 0, the logarithm is analytic for z ≠ 0.
Now we compute the derivative:

    d/dz log z = e^{−ıθ} ∂/∂r (Log r + ıθ)
               = e^{−ıθ} (1/r)
               = 1/z
Solution 8.14
The complex derivative in the coordinate directions is

    d/dz = e^{−ıθ} ∂/∂r = −(ı/r) e^{−ıθ} ∂/∂θ.

We substitute f = u + ıv into this identity to obtain the Cauchy-Riemann equation in polar coordinates:

    e^{−ıθ} ∂f/∂r = −(ı/r) e^{−ıθ} ∂f/∂θ
    ∂f/∂r = −(ı/r) ∂f/∂θ
    u_r + ıv_r = −(ı/r)(u_θ + ıv_θ)

We equate the real and imaginary parts:

    u_r = (1/r) v_θ,    v_r = −(1/r) u_θ
    u_r = (1/r) v_θ,    u_θ = −r v_r
Solution 8.15
Since w is analytic, u and v satisfy the Cauchy-Riemann equations,

    u_x = v_y and u_y = −v_x.

Using the chain rule we can write the derivatives with respect to x and y in terms of u and v:

    ∂/∂x = u_x ∂/∂u + v_x ∂/∂v
    ∂/∂y = u_y ∂/∂u + v_y ∂/∂v

Now we examine Φ_x − ıΦ_y:

    Φ_x − ıΦ_y = u_x Ψ_u + v_x Ψ_v − ı(u_y Ψ_u + v_y Ψ_v)
    Φ_x − ıΦ_y = (u_x − ıu_y) Ψ_u + (v_x − ıv_y) Ψ_v
    Φ_x − ıΦ_y = (u_x + ıv_x) Ψ_u − ı(u_x + ıv_x) Ψ_v

Recall that w' = u_x + ıv_x = v_y − ıu_y. Thus

    Φ_x − ıΦ_y = (dw/dz)(Ψ_u − ıΨ_v),

and we see that

    Ψ_u − ıΨ_v = (dw/dz)⁻¹ (Φ_x − ıΦ_y).

We write this in operator notation:

    ∂/∂u − ı ∂/∂v = (dw/dz)⁻¹ (∂/∂x − ı ∂/∂y)

The complex conjugate of this relation is

    ∂/∂u + ı ∂/∂v = (conj(dw/dz))⁻¹ (∂/∂x + ı ∂/∂y)

Now we apply both these operators to Ψ = Φ. Since the mixed partial derivatives are continuous, the cross terms cancel:

    (∂/∂u + ı ∂/∂v)(∂/∂u − ı ∂/∂v)Ψ = ∂²Ψ/∂u² + ∂²Ψ/∂v²
    = (conj(dw/dz))⁻¹ (∂/∂x + ı ∂/∂y)[(dw/dz)⁻¹ (∂/∂x − ı ∂/∂y)Φ]
    = (conj(dw/dz))⁻¹ [(∂/∂x + ı ∂/∂y)(dw/dz)⁻¹] (∂Φ/∂x − ı ∂Φ/∂y)
      + (conj(dw/dz))⁻¹ (dw/dz)⁻¹ (∂/∂x + ı ∂/∂y)(∂/∂x − ı ∂/∂y)Φ

(w')⁻¹ is an analytic function. Recall that for analytic functions f, f' = f_x = −ıf_y, so that f_x + ıf_y = 0. Thus the first term vanishes and

    ∂²Ψ/∂u² + ∂²Ψ/∂v² = |dw/dz|⁻² (∂²Φ/∂x² + ∂²Φ/∂y²)
Solution 8.16
1. We consider

    f(z) = log|z| + ı arg(z) = log r + ıθ.

The Cauchy-Riemann equations in polar coordinates are

    u_r = (1/r) v_θ,    u_θ = −r v_r.

We calculate the derivatives:

    u_r = 1/r,    (1/r) v_θ = 1/r
    u_θ = 0,    −r v_r = 0

Since the Cauchy-Riemann equations are satisfied and the partial derivatives are continuous, f(z) is analytic in |z| > 0, |arg(z)| < π. The complex derivative in terms of polar coordinates is

    d/dz = e^{−ıθ} ∂/∂r = −(ı/r) e^{−ıθ} ∂/∂θ.

We use this to differentiate f(z):

    df/dz = e^{−ıθ} ∂/∂r [log r + ıθ] = e^{−ıθ} (1/r) = 1/z

2. Next we consider

    f(z) = √|z| e^{ı arg(z)/2} = √r e^{ıθ/2}.

The Cauchy-Riemann equations for polar coordinates and the polar form f(z) = R(r, θ) e^{ıΘ(r,θ)} are

    R_r = (R/r) Θ_θ,    (1/r) R_θ = −R Θ_r.

We calculate the derivatives for R = √r, Θ = θ/2:

    R_r = 1/(2√r),    (R/r) Θ_θ = 1/(2√r)
    (1/r) R_θ = 0,    −R Θ_r = 0

Since the Cauchy-Riemann equations are satisfied and the partial derivatives are continuous, f(z) is analytic in |z| > 0, |arg(z)| < π. We use the complex derivative in terms of polar coordinates to differentiate f(z):

    df/dz = e^{−ıθ} ∂/∂r [√r e^{ıθ/2}] = e^{−ıθ} e^{ıθ/2} (1/(2√r)) = 1/(2√r e^{ıθ/2}) = 1/(2√z)
Solution 8.17
1. We consider the function

    u = x Log r − y arctan(x, y) = r cos θ Log r − rθ sin θ.

We compute the Laplacian:

    ∆u = (1/r) ∂/∂r (r ∂u/∂r) + (1/r²) ∂²u/∂θ²
       = (1/r)(2 cos θ + Log r cos θ − θ sin θ) + (1/r)(−2 cos θ − Log r cos θ + θ sin θ)
       = 0

The function u is harmonic. We find the harmonic conjugate v by solving the Cauchy-Riemann equations:

    v_r = −(1/r) u_θ,    v_θ = r u_r
    v_r = sin θ(1 + Log r) + θ cos θ,    v_θ = r(cos θ(1 + Log r) − θ sin θ)

We integrate the first equation with respect to r to determine v to within the constant of integration g(θ):

    v = r(sin θ Log r + θ cos θ) + g(θ)

We differentiate this expression with respect to θ:

    v_θ = r(cos θ(1 + Log r) − θ sin θ) + g'(θ)

We compare this to the second Cauchy-Riemann equation to see that g'(θ) = 0. Thus g(θ) = c. We have determined the harmonic conjugate:

    v = r(sin θ Log r + θ cos θ) + c

On the positive real axis, (θ = 0), the function has the value

    f(z = r) = r Log r + ıc.
    f(z) = z log z + ıc
2. We consider the function

    u = Arg(z) = θ.

We compute the Laplacian:

    ∆u = (1/r) ∂/∂r (r ∂u/∂r) + (1/r²) ∂²u/∂θ² = 0

The function u is harmonic. We find the harmonic conjugate v by solving the Cauchy-Riemann equations:

    v_r = −(1/r) u_θ,    v_θ = r u_r
    v_r = −1/r,    v_θ = 0

We integrate the first equation with respect to r to determine v to within the constant of integration g(θ):

    v = −Log r + g(θ)

We differentiate this expression with respect to θ:

    v_θ = g'(θ)

We compare this to the second Cauchy-Riemann equation to see that g'(θ) = 0. Thus g(θ) = c. We have determined the harmonic conjugate:

    v = −Log r + c
    f(z) = θ + ı(−Log r + c)

On the positive real axis, (θ = 0), the function has the value

    f(z = r) = −ı Log r + ıc
    f(z) = −ı log z + ıc
3. We consider the function

    u = rⁿ cos(nθ).

We compute the Laplacian:

    ∆u = (1/r) ∂/∂r (r ∂u/∂r) + (1/r²) ∂²u/∂θ²
       = (1/r) ∂/∂r (n rⁿ cos(nθ)) − n² r^{n−2} cos(nθ)
       = n² r^{n−2} cos(nθ) − n² r^{n−2} cos(nθ)
       = 0

The function u is harmonic. We find the harmonic conjugate v by solving the Cauchy-Riemann equations:

    v_r = −(1/r) u_θ,    v_θ = r u_r
    v_r = n r^{n−1} sin(nθ),    v_θ = n rⁿ cos(nθ)

We integrate the first equation with respect to r to determine v to within the constant of integration g(θ):

    v = rⁿ sin(nθ) + g(θ)

We differentiate this expression with respect to θ:

    v_θ = n rⁿ cos(nθ) + g'(θ)

We compare this to the second Cauchy-Riemann equation to see that g'(θ) = 0. Thus g(θ) = c. We have determined the harmonic conjugate:

    v = rⁿ sin(nθ) + c
    f(z) = zⁿ + ıc

4. We consider the function

    u = y/r² = sin θ/r,

which is harmonic. We solve the Cauchy-Riemann equations,

    v_r = −(1/r) u_θ = −cos θ/r²,    v_θ = r u_r = −sin θ/r,

and obtain

    v = cos θ/r + c.
The corresponding analytic function is

    f(z) = sin θ/r + ı cos θ/r + ıc.

On the positive real axis, (θ = 0), the function has the value

    f(z = r) = ı/r + ıc.
    f(z) = ı/z + ıc
Solution 8.18
1. We calculate the first partial derivatives of u = (x − y)² and v = 2(x + y):

    u_x = 2(x − y)
    u_y = −2(x − y)
    v_x = 2
    v_y = 2

We substitute these into the Cauchy-Riemann equations:

    u_x = v_y,    u_y = −v_x
    2(x − y) = 2,    −2(x − y) = −2
    y = x − 1

Since the Cauchy-Riemann equations are satisfied along the line y = x − 1 and the partial derivatives are continuous, the function f(z) is differentiable there. Since the function is not differentiable in a neighborhood of any point, it is nowhere analytic.
2. We consider f(z) = e^{x²−y²}(cos(2xy) + ı sin(2xy)), with u = e^{x²−y²} cos(2xy) and v = e^{x²−y²} sin(2xy). Since the Cauchy-Riemann equations, u_x = v_y and u_y = −v_x, are satisfied everywhere and the partial derivatives are continuous, f(z) is everywhere differentiable. Since f(z) is differentiable in a neighborhood of every point, it is analytic in the complex plane. (f(z) is entire.)
Now to evaluate the derivative. The complex derivative is the derivative in any direction. We choose the x direction:

    f'(z) = u_x + ıv_x
    f'(z) = 2 e^{x²−y²} (x cos(2xy) − y sin(2xy)) + ı2 e^{x²−y²} (y cos(2xy) + x sin(2xy))
    f'(z) = 2 e^{x²−y²} ((x + ıy) cos(2xy) + (−y + ıx) sin(2xy))
Finding the derivative is easier if we first write f(z) in terms of the complex variable z and use complex differentiation:

    f(z) = e^{x²−y²} (cos(2xy) + ı sin(2xy))
    f(z) = e^{x²−y²} e^{ı2xy}
    f(z) = e^{(x+ıy)²}
    f(z) = e^{z²}
    f'(z) = 2z e^{z²}
Solution 8.19
1. Assume that the Cauchy-Riemann equations in Cartesian coordinates
    u_x = v_y,    u_y = −v_x
are satisfied and these partial derivatives are continuous at a point z. We write the derivatives
in polar coordinates in terms of derivatives in Cartesian coordinates to verify the Cauchy-
Riemann equations in polar coordinates. First we calculate the derivatives.
    x = r cos θ,    y = r sin θ
    w_r = (∂x/∂r) w_x + (∂y/∂r) w_y = cos θ w_x + sin θ w_y
    w_θ = (∂x/∂θ) w_x + (∂y/∂θ) w_y = −r sin θ w_x + r cos θ w_y

Then we verify the Cauchy-Riemann equations in polar coordinates:

    u_r = cos θ u_x + sin θ u_y
        = cos θ v_y − sin θ v_x
        = (1/r) v_θ

    (1/r) u_θ = −sin θ u_x + cos θ u_y
              = −sin θ v_y − cos θ v_x
              = −v_r
This proves that the Cauchy-Riemann equations in Cartesian coordinates hold only if the
Cauchy-Riemann equations in polar coordinates hold. (Given that the partial derivatives are
continuous.) Next we prove the converse.
Assume that the Cauchy-Riemann equations in polar coordinates
    u_r = (1/r) v_θ,    (1/r) u_θ = −v_r
r r
are satisfied and these partial derivatives are continuous at a point z. We write the derivatives
in Cartesian coordinates in terms of derivatives in polar coordinates to verify the Cauchy-
Riemann equations in Cartesian coordinates. First we calculate the derivatives.
    r = √(x² + y²),    θ = arctan(x, y)
    w_x = (∂r/∂x) w_r + (∂θ/∂x) w_θ = (x/r) w_r − (y/r²) w_θ
    w_y = (∂r/∂y) w_r + (∂θ/∂y) w_θ = (y/r) w_r + (x/r²) w_θ

Then we verify the Cauchy-Riemann equations in Cartesian coordinates:

    u_x = (x/r) u_r − (y/r²) u_θ
        = (x/r²) v_θ + (y/r) v_r
        = v_y

    u_y = (y/r) u_r + (x/r²) u_θ
        = (y/r²) v_θ − (x/r) v_r
        = −v_x
This proves that the Cauchy-Riemann equations in polar coordinates hold only if the Cauchy-
Riemann equations in Cartesian coordinates hold. We have demonstrated the equivalence of
the two forms.
2. We verify that Log z is analytic for r > 0 and −π < θ < π using the polar form of the Cauchy-Riemann equations:

    Log z = ln r + ıθ
    u_r = (1/r) v_θ,    (1/r) u_θ = −v_r
    1/r = (1/r) 1,    (1/r) 0 = −0

Since the Cauchy-Riemann equations are satisfied and the partial derivatives are continuous for r > 0, Log z is analytic there. We calculate the value of the derivative using the polar differentiation formulas:

    d/dz Log z = e^{−ıθ} ∂/∂r (ln r + ıθ) = e^{−ıθ} (1/r) = 1/z
    d/dz Log z = −(ı/z) ∂/∂θ (ln r + ıθ) = −(ı/z) ı = 1/z
3. Let {x_i} denote rectangular coordinates in two dimensions and let {ξ_i} be an orthogonal coordinate system. The distance metric coefficients h_i are defined

    h_i = √((∂x₁/∂ξ_i)² + (∂x₂/∂ξ_i)²).

The Laplacian is

    ∇²u = (1/(h₁h₂)) [∂/∂ξ₁ ((h₂/h₁) ∂u/∂ξ₁) + ∂/∂ξ₂ ((h₁/h₂) ∂u/∂ξ₂)].

First we calculate the distance metric coefficients in polar coordinates:

    h_r = √((∂x/∂r)² + (∂y/∂r)²) = √(cos²θ + sin²θ) = 1
    h_θ = √((∂x/∂θ)² + (∂y/∂θ)²) = √(r² sin²θ + r² cos²θ) = r

In polar coordinates, Laplace's equation is

    Φ_rr + (1/r) Φ_r + (1/r²) Φ_θθ = 0.
Solution 8.20
1. We compute the Laplacian of u(x, y) = x³ − y³:

    ∇²u = 6x − 6y

Since the Laplacian does not vanish identically, u is not harmonic in any open set, and thus it is not the real part of an analytic function.
2. We compute the Laplacian of u(x, y) = sinh x cos y + x:

    ∇²u = sinh x cos y − sinh x cos y = 0

Since u is harmonic, it is the real part of an analytic function. We determine v by solving the Cauchy-Riemann equations:

    v_x = −u_y,    v_y = u_x
    v_x = sinh x sin y,    v_y = cosh x cos y + 1

We integrate the first equation:

    v = cosh x sin y + g(y)

We substitute this into the second Cauchy-Riemann equation. This will determine v up to an additive constant:

    v_y = cosh x cos y + 1
    cosh x cos y + g'(y) = cosh x cos y + 1
    g'(y) = 1
    g(y) = y + a
    v = cosh x sin y + y + a
    f(z) = sinh x cos y + x + ı(cosh x sin y + y + a)
    f(z) = sinh z + z + ıa

3. The Laplacian of u(r, θ) = rⁿ cos(nθ) vanishes (see Solution 8.17). Since u is harmonic, it is the real part of an analytic function. We determine v by solving the Cauchy-Riemann equations:

    v_r = −(1/r) u_θ,    v_θ = r u_r
    v_r = n r^{n−1} sin(nθ),    v_θ = n rⁿ cos(nθ)
    v = rⁿ sin(nθ) + g(θ)
We substitute this into the second Cauchy-Riemann equation. This will determine v up to an additive constant:

    v_θ = n rⁿ cos(nθ)
    n rⁿ cos(nθ) + g'(θ) = n rⁿ cos(nθ)
    g'(θ) = 0
    g(θ) = a
    v = rⁿ sin(nθ) + a
    f(z) = rⁿ cos(nθ) + ı(rⁿ sin(nθ) + a)

Here a is a real constant. We write the function in terms of z:

    f(z) = zⁿ + ıa
Solution 8.21
1. We find the velocity potential φ and stream function ψ:

    Φ(z) = log z + ı log z
    Φ(z) = ln r + ıθ + ı(ln r + ıθ)
    φ = ln r − θ,    ψ = ln r + θ

A branch of these is plotted in Figure 8.7.

Figure 8.7: The velocity potential φ and stream function ψ for Φ(z) = log z + ı log z.

Figure 8.8: Streamlines for ψ = ln r + θ.
2. We find the velocity potential φ and stream function ψ for Φ(z) = log(z − 1) + log(z + 1):

    φ = ln|z − 1| + ln|z + 1| = ln|z² − 1|,    ψ = arg(z − 1) + arg(z + 1)

The velocity potential and a branch of the stream function are plotted in Figure 8.10.

Figure 8.10: The velocity potential φ and stream function ψ for Φ(z) = log(z − 1) + log(z + 1).

The stream lines, arg(z − 1) + arg(z + 1) = c, are plotted in Figure 8.11.

Figure 8.12: Velocity field and velocity direction field for φ = ln|z² − 1|.
Solution 8.22
1. (a) We factor the denominator to see that there are first order poles at z = ±ı:

    z/(z² + 1) = z/((z − ı)(z + ı))
Since the function behaves like 1/z at infinity, it is analytic there.
(b) The denominator of 1/sin z has first order zeros at z = nπ, n ∈ Z. Thus the function has first order poles at these locations. Now we examine the point at infinity with the change of variables z = 1/ζ:

    1/sin z = 1/sin(1/ζ) = 2ı/(e^{ı/ζ} − e^{−ı/ζ})

We see that the point at infinity is a singularity of the function. Since the denominator grows exponentially, there is no multiplicative factor of ζⁿ that will make the function analytic at ζ = 0. We conclude that the point at infinity is an essential singularity. Since there is no deleted neighborhood of the point at infinity that does not contain first order poles at the locations z = nπ, the point at infinity is a non-isolated singularity.
(c)

    log(1 + z²) = log(z + ı) + log(z − ı)

There are branch points at z = ±ı. Since the argument of the logarithm is unbounded as z → ∞, there is a branch point at infinity as well. Branch points are non-isolated singularities.
(d)

    z sin(1/z) = (z/(2ı)) (e^{ı/z} − e^{−ı/z})

The point z = 0 is a singularity. Since the function grows exponentially at z = 0, there is no multiplicative factor of zⁿ that will make the function analytic. Thus z = 0 is an essential singularity.
There are no other singularities in the finite complex plane. We examine the point at infinity:

    z sin(1/z) = (1/ζ) sin(ζ)

The point at infinity is a singularity. We take the limit ζ → 0 to demonstrate that it is a removable singularity:

    lim_{ζ→0} sin(ζ)/ζ = lim_{ζ→0} cos(ζ)/1 = 1
(e)

    tan⁻¹(z)/(z sinh²(πz)) = ı log((ı + z)/(ı − z))/(2z sinh²(πz))

There are branch points at z = ±ı due to the logarithm. These are non-isolated singularities. Note that sinh(πz) has first order zeros at z = ın, n ∈ Z. The arctangent has a first order zero at z = 0. Thus there is a second order pole at z = 0. There are second order poles at z = ın, n ∈ Z∖{0} due to the hyperbolic sine. Since the hyperbolic sine has an essential singularity at infinity, the function has an essential singularity at infinity as well. The point at infinity is a non-isolated singularity because there is no neighborhood of infinity that does not contain second order poles.
2. (a) (z − ı) e^{1/(z−1)} has a simple zero at z = ı and an isolated essential singularity at z = 1.
(b)

    sin(z − 3)/((z − 3)(z + ı)⁶)

has a removable singularity at z = 3, a pole of order 6 at z = −ı and an essential singularity at z = ∞.
Chapter 9
Analytic Continuation
For every complex problem, there is a solution that is simple, neat, and wrong.
- H. L. Mencken
9.1 Analytic Continuation
Suppose there is a function, f₁(z), analytic in the domain D₁ and another analytic function, f₂(z), analytic in the domain D₂. (See Figure 9.1.)

Figure 9.1: Two overlapping domains D₁ and D₂ in the complex plane.

If the two domains overlap and f₁(z) = f₂(z) in the overlap region D₁ ∩ D₂, then f₂(z) is called an analytic continuation of f₁(z). This is an appropriate name since f₂(z) continues the definition of f₁(z) outside of its original domain of definition D₁. We can define a function f(z) that is analytic in the union of the domains D₁ ∪ D₂. On the domain D₁ we have f(z) = f₁(z) and f(z) = f₂(z) on
D2 . f1 (z) and f2 (z) are called function elements. There is an analytic continuation even if the two
domains only share an arc and not a two dimensional region.
With more overlapping domains D3 , D4 , . . . we could perhaps extend f1 (z) to more of the complex
plane. Sometimes it is impossible to extend a function beyond the boundary of a domain. This is
known as a natural boundary. If a function f1 (z) is analytically continued to a domain Dn along
two different paths, (See Figure 9.2.), then the two analytic continuations are identical as long as
the paths do not enclose a branch point of the function. This is the uniqueness theorem of analytic
continuation.
Consider an analytic function f(z) defined in the domain D. Suppose that f(z) = 0 on the arc AB, (see Figure 9.3). Then f(z) = 0 in all of D.
Consider a point ζ on AB. The Taylor series expansion of f(z) about the point z = ζ converges in a circle C at least up to the boundary of D. The derivative of f(z) at the point z = ζ is

    f'(ζ) = lim_{∆z→0} (f(ζ + ∆z) − f(ζ))/∆z
Figure 9.2: Analytic continuation along two paths from D₁ to Dₙ.

Figure 9.3: The domain D with the arc AB and the circle C.
If ∆z is in the direction of the arc, then f'(ζ) vanishes as well as all higher derivatives: f'(ζ) = f''(ζ) = f'''(ζ) = ··· = 0. Thus we see that f(z) = 0 inside C. By taking Taylor series expansions about points on AB or inside of C we see that f(z) = 0 in D.
To prove Result 9.1.1, we define the analytic function g(z) = f1 (z) f2 (z). Since g(z) vanishes
in the region or on the arc, then g(z) = 0 and hence f1 (z) = f2 (z) for all points in D.
Result 9.1.2 Consider analytic functions f₁(z) and f₂(z) defined on the domains D₁ and D₂, respectively. Suppose that D₁ ∩ D₂ is a region or an arc and that f₁(z) = f₂(z) for all z ∈ D₁ ∩ D₂. (See Figure 9.4.) Then the function

    f(z) = { f₁(z)   for z ∈ D₁,
             f₂(z)   for z ∈ D₂, }

is analytic in D₁ ∪ D₂.
Figure 9.4: The intersection of the two domains is a region or an arc.
9.2 Analytic Continuation of Sums
Example 9.2.1 Consider the function

    f₁(z) = Σ_{n=0}^∞ zⁿ.

The sum converges uniformly for D₁ = |z| ≤ r < 1. Since the derivative also converges in this domain, the function is analytic there.
Figure 9.5: Domain of Convergence for Σ_{n=0}^∞ zⁿ.
Inside the domain of convergence the sum equals 1/(1 − z), which is analytic everywhere in the z plane except the point z = 1. Analytic continuation tells us that there is a function that is analytic on the union of the two domains. Here, the domain is the entire z plane except the point z = 1 and the function is

    f(z) = 1/(1 − z).

1/(1 − z) is said to be an analytic continuation of Σ_{n=0}^∞ zⁿ.
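The relationship between the sum and its continuation can be illustrated numerically. A small sketch (the sample points are arbitrary): inside the unit disk the partial sums approach 1/(1 − z); outside, the sum diverges while 1/(1 − z) remains well defined.

    z = 0.5 + 0.3j                       # a point inside the unit disk
    partial = sum(z**n for n in range(60))
    print(partial, 1 / (1 - z))          # the partial sums converge to 1/(1-z)

    z = 2.0 + 0.0j                       # outside the disk the sum diverges,
    print(1 / (1 - z))                   # but the continuation is still defined: -1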
9.3 Analytic Functions Defined in Terms of Real-Valued Functions
Example 9.3.1 Consider an analytic function whose value on the real line, y = 0, is

    f(z = x) = x e^x sin x.

Its analytic continuation to the complex plane is

    f(z) = z e^z sin z.
Example 9.3.2 Consider u = e^{−x}(x sin y − y cos y). Find v such that f(z) = u + ıv is analytic.
From the Cauchy-Riemann equations,

    ∂v/∂y = ∂u/∂x = e^{−x} sin y − x e^{−x} sin y + y e^{−x} cos y
    ∂v/∂x = −∂u/∂y = e^{−x} cos y − x e^{−x} cos y − y e^{−x} sin y

Integrating the first equation with respect to y yields

    v = e^{−x}(y sin y + x cos y) + F(x),

where F(x) is an arbitrary function of x. Substituting this expression for v into the equation for ∂v/∂x shows that F'(x) = 0. Thus v = e^{−x}(y sin y + x cos y) + c.
Example 9.3.3 Find f (z) in the previous example. (Up to the additive constant.)
Method 1

    f(z) = u + ıv
         = e^{−x}(x sin y − y cos y) + ı e^{−x}(y sin y + x cos y)

We write the sine and cosine in terms of exponentials, sin y = (e^{ıy} − e^{−ıy})/(2ı) and cos y = (e^{ıy} + e^{−ıy})/2, and collect terms:

    f(z) = ı(x + ıy) e^{−(x+ıy)}
         = ız e^{−z}
Method 2 f(z) = f(x + ıy) = u(x, y) + ıv(x, y) is an analytic function.
On the real axis, y = 0, f(z) is

    f(z = x) = u(x, 0) + ıv(x, 0)
             = e^{−x}(x sin 0 − 0 cos 0) + ı e^{−x}(0 sin 0 + x cos 0)
             = ıx e^{−x}

Suppose there is an analytic continuation of f(z) into the complex plane. If such a continuation, f(z), exists, then it must be equal to f(z = x) on the real axis. An obvious choice for the analytic continuation is

    f(z) = u(z, 0) + ıv(z, 0)

since this is clearly equal to u(x, 0) + ıv(x, 0) when z is real. Thus we obtain

    f(z) = ız e^{−z}
Example 9.3.4 Consider f(z) = u(x, y) + ıv(x, y). Show that f'(z) = u_x(z, 0) − ıu_y(z, 0).

    f'(z) = u_x + ıv_x
          = u_x − ıu_y
    f'(z = x) = u_x(x, 0) − ıu_y(x, 0)

Now f'(z = x) is defined on the real line. An analytic continuation of f'(z = x) into the complex plane is

    f'(z) = u_x(z, 0) − ıu_y(z, 0).
Example 9.3.5 Again consider the problem of finding f(z) given that u(x, y) = e^{−x}(x sin y − y cos y). Now we can use the result of the previous example to do this problem.

    u_x(x, y) = ∂u/∂x = e^{−x} sin y − x e^{−x} sin y + y e^{−x} cos y
    u_y(x, y) = ∂u/∂y = x e^{−x} cos y + y e^{−x} sin y − e^{−x} cos y

    f'(z) = u_x(z, 0) − ıu_y(z, 0)
          = −ız e^{−z} + ı e^{−z}

Integration yields f(z) = ız e^{−z} + c.
Example 9.3.6 Consider the analytic function f(z) = u(x, y) + ıv(x, y) with u = sin x cos x (cosh² y + sinh² y) and v = (cos² x − sin² x) cosh y sinh y. On the real line, y = 0, f(z) is

    f(z = x) = u(x, 0) + ıv(x, 0)
             = cos x cosh² 0 sin x + cos x sin x sinh² 0 + ı(cos² x − sin² x) cosh 0 sinh 0
             = cos x sin x

Now we know the definition of f(z) on the real line. We would like to find an analytic continuation of f(z) into the complex plane. An obvious choice for f(z) is

    f(z) = sin(2z)/2.

We can check this with the derivative. Recall that

    f'(z) = u_x + ıv_x
          = u_x − ıu_y
    f'(z = x) = cos² x − sin² x
    f'(z = x) = cos(2x)
    f'(z) = cos(2z)

Integration recovers f(z) = sin(2z)/2, consistent with the choice above.
Example 9.3.7 Consider u = r(log r cos θ − θ sin θ). The Laplacian in polar coordinates is

    ∆ = (1/r) ∂/∂r (r ∂/∂r) + (1/r²) ∂²/∂θ².
We calculate the partial derivatives of u:

    ∂u/∂r = cos θ + log r cos θ − θ sin θ
    r ∂u/∂r = r cos θ + r log r cos θ − rθ sin θ
    ∂/∂r (r ∂u/∂r) = 2 cos θ + log r cos θ − θ sin θ
    (1/r) ∂/∂r (r ∂u/∂r) = (1/r)(2 cos θ + log r cos θ − θ sin θ)

    ∂u/∂θ = −r(θ cos θ + sin θ + log r sin θ)
    ∂²u/∂θ² = −r(2 cos θ + log r cos θ − θ sin θ)
    (1/r²) ∂²u/∂θ² = −(1/r)(2 cos θ + log r cos θ − θ sin θ)

From the above we see that

    ∆u = (1/r) ∂/∂r (r ∂u/∂r) + (1/r²) ∂²u/∂θ² = 0.

Therefore u is harmonic and is the real part of some analytic function. Solving the Cauchy-Riemann equations for the harmonic conjugate gives

    f(z) = u + ıv
         = r(log r cos θ − θ sin θ) + ır(θ cos θ + log r sin θ) + const
f(z) is an analytic function. On the line θ = 0, f(z) is f(z = r) = r log r + const.
We know that

    df/dz = e^{−ıθ} ∂f/∂r.

If f(z) = u(r, θ) + ıv(r, θ) then

    df/dz = e^{−ıθ}(u_r + ıv_r)

From the Cauchy-Riemann equations, we have v_r = −u_θ/r:

    df/dz = e^{−ıθ}(u_r − ı u_θ/r)
    f'(z = r) = u_r(r, 0) − ı u_θ(r, 0)/r

The analytic continuation of f'(z) into the complex plane is

    f'(z) = u_r(z, 0) − (ı/z) u_θ(z, 0)
          = log z + 1

Integration yields f(z) = z log z + const.
9.3.2 Analytic Functions Defined in Terms of Their Real or Imaginary Parts
Consider an analytic function: f(z) = u(x, y) + ıv(x, y). We differentiate this expression and apply the Cauchy-Riemann equation v_x = −u_y:

    f'(z) = u_x(x, y) − ıu_y(x, y). (9.1)

Now consider the function of a complex variable, g(ζ) = u_x(x, ζ) − ıu_y(x, ζ), with ζ = ξ + ıψ. This function is analytic where f(ζ) is analytic. To show this we first verify that the derivatives in the ξ and ψ directions are equal:

    ∂g/∂ξ = u_xy(x, ξ + ıψ) − ıu_yy(x, ξ + ıψ)
    −ı ∂g/∂ψ = −ı(ıu_xy(x, ξ + ıψ) + u_yy(x, ξ + ıψ)) = u_xy(x, ξ + ıψ) − ıu_yy(x, ξ + ıψ)

Since these partial derivatives are equal and continuous, g(ζ) is analytic. We evaluate the function g(ζ) at ζ = −ıx. (Substitute y = −ıx into Equation 9.1.)

    f'(2x) = u_x(x, −ıx) − ıu_y(x, −ıx)

If the expression is non-singular, then this defines the analytic function, f'(z), on the real axis. With the substitution x = z/2, the analytic continuation to the complex plane is

    f'(z) = u_x(z/2, −ız/2) − ıu_y(z/2, −ız/2).

Note that (d/dz) 2u(z/2, −ız/2) = u_x(z/2, −ız/2) − ıu_y(z/2, −ız/2). We integrate the equation to obtain:

    f(z) = 2u(z/2, −ız/2) + c.

We know that the real part of an analytic function determines that function to within an additive constant. Assuming that the above expression is non-singular, we have found a formula for writing an analytic function in terms of its real part. With the same method, we can find how to write an analytic function in terms of its imaginary part, v.
We can also derive formulas if u and v are expressed in polar coordinates:

    f(z) = u(r, θ) + ıv(r, θ).
Result 9.3.2 If f(z) = u(x, y) + ıv(x, y) is analytic and the expressions are non-singular, then

    f(z) = 2u(z/2, −ız/2) + const (9.2)
    f(z) = ı2v(z/2, −ız/2) + const. (9.3)

If f(z) = u(r, θ) + ıv(r, θ) is analytic and the expressions are non-singular, then

    f(z) = 2u(z^{1/2}, −(ı/2) log z) + const (9.4)
    f(z) = ı2v(z^{1/2}, −(ı/2) log z) + const. (9.5)
Example 9.3.12 Consider the problem of finding f(z) given that u(x, y) = e^{−x}(x sin y − y cos y).

    f(z) = 2u(z/2, −ız/2) + c
         = 2 e^{−z/2} ((z/2) sin(−ız/2) + (ız/2) cos(−ız/2)) + c
         = z e^{−z/2} (sin(−ız/2) + ı cos(−ız/2)) + c
         = ız e^{−z/2} (cosh(z/2) − sinh(z/2)) + c
         = ız e^{−z/2} e^{−z/2} + c
         = ız e^{−z} + c
Example 9.3.14 Again consider the logarithm, this time written in terms of polar coordinates:

    Log z = Log r + ıθ

We try to construct the analytic function from its real part using Equation 9.4:

    f(z) = 2u(z^{1/2}, −(ı/2) log z) + c
         = 2 Log(z^{1/2}) + c
         = Log z + c

With this method we recover the analytic function.
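Equation 9.2 can be tested symbolically on Example 9.3.12. A sympy sketch (rewriting the trigonometric functions as exponentials so the simplifier can finish):

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    u = sp.exp(-x) * (x * sp.sin(y) - y * sp.cos(y))

    # Equation 9.2: f(z) = 2 u(z/2, -i z/2) + const
    f = 2 * u.subs({x: z / 2, y: -sp.I * z / 2})
    # Compare with the known answer i z e^{-z} from Example 9.3.12.
    print(sp.simplify(f.rewrite(sp.exp) - sp.I * z * sp.exp(-z)))   # 0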
9.4 Exercises
Exercise 9.1
Consider two functions, f(x, y) and g(x, y). They are said to be functionally dependent if there is an h(g) such that

    f(x, y) = h(g(x, y)).

f and g will be functionally dependent if and only if their Jacobian vanishes.
If f and g are functionally dependent, then the derivatives of f are

    f_x = h'(g) g_x
    f_y = h'(g) g_y.

Thus we have

    ∂(f, g)/∂(x, y) = | f_x  f_y ; g_x  g_y | = f_x g_y − f_y g_x = h'(g) g_x g_y − h'(g) g_y g_x = 0.

If the Jacobian of f and g vanishes, then

    f_x g_y − f_y g_x = 0.

This is a first order partial differential equation for f that has the general solution f(x, y) = h(g(x, y)).
Prove that an analytic function u(x, y) + ıv(x, y) can be written in terms of a function of a complex variable, f(z) = u(x, y) + ıv(x, y).
Exercise 9.2
Which of the following functions are the real part of an analytic function? For those that are, find the harmonic conjugate, v(x, y), and find the analytic function f(z) = u(x, y) + ıv(x, y) as a function of z.
1. x³ − 3xy² − 2xy + y
2. e^x sinh y
3. e^x (sin x cos y cosh y − cos x sin y sinh y)
Exercise 9.3
For an analytic function, f(z) = u(r, θ) + ıv(r, θ), prove that under suitable restrictions:

    f(z) = 2u(z^{1/2}, −(ı/2) log z) + const.
9.5 Hints
Hint 9.1
Show that u(x, y) + ıv(x, y) is functionally dependent on x + ıy so that you can write f(z) = f(x + ıy) = u(x, y) + ıv(x, y).
Hint 9.2
Hint 9.3
Check out the derivation of Equation 9.2.
9.6 Solutions
Solution 9.1
u(x, y) + ıv(x, y) is functionally dependent on z = x + ıy if and only if

    ∂(u + ıv, x + ıy)/∂(x, y) = 0.

    ∂(u + ıv, x + ıy)/∂(x, y) = | u_x + ıv_x   u_y + ıv_y ; 1   ı |
                              = −v_x − u_y + ı(u_x − v_y)
                              = 0

Since u and v satisfy the Cauchy-Riemann equations, the Jacobian vanishes. Thus we see that u(x, y) + ıv(x, y) is functionally dependent on x + ıy so we can write

    f(z) = f(x + ıy) = u(x, y) + ıv(x, y).
Solution 9.2
1. Consider u(x, y) = x³ − 3xy² − 2xy + y. The Laplacian of this function is

    ∆u ≡ u_xx + u_yy = 6x − 6x = 0

Since the function is harmonic, it is the real part of an analytic function. Clearly the analytic function is of the form

    az³ + bz² + cz + ıd,

with a, b and c complex-valued constants and d a real constant. Substituting z = x + ıy and expanding products yields

    a(x³ + ı3x²y − 3xy² − ıy³) + b(x² + ı2xy − y²) + c(x + ıy) + ıd.

Matching the real part requires a = 1, b = ı, c = −ı:

    f(z) = z³ + ız² − ız + ıd.

The harmonic conjugate is the imaginary part,

    v(x, y) = 3x²y − y³ + x² − y² − x + d.

We can also do this problem with analytic continuation. The derivatives of u are

    u_x = 3x² − 3y² − 2y,
    u_y = −6xy − 2x + 1.

The derivative of f(z) is

    f'(z) = u_x − ıu_y = 3x² − 3y² − 2y + ı(6xy + 2x − 1).

On the real axis we have

    f'(z = x) = 3x² + ı(2x − 1).

Using analytic continuation, we see that

    f'(z) = 3z² + ı(2z − 1).

Integration yields

    f(z) = z³ + ız² − ız + const
2. Consider u(x, y) = e^x sinh y. The Laplacian of this function is

    ∆u = e^x sinh y + e^x sinh y = 2 e^x sinh y.

Since the function is not harmonic, it is not the real part of an analytic function.
3. Consider u(x, y) = e^x(sin x cos y cosh y − cos x sin y sinh y). The Laplacian of the function is

    ∆u = ∂/∂x (e^x(sin x cos y cosh y − cos x sin y sinh y + cos x cos y cosh y + sin x sin y sinh y))
       + ∂/∂y (e^x(−sin x sin y cosh y − cos x cos y sinh y + sin x cos y sinh y − cos x sin y cosh y))
       = 2 e^x(cos x cos y cosh y + sin x sin y sinh y) − 2 e^x(cos x cos y cosh y + sin x sin y sinh y)
       = 0.

Thus u is the real part of an analytic function. The derivative of the analytic function is

    f'(z) = u_x + ıv_x = u_x − ıu_y.

On the real axis this reduces to f'(z = x) = e^x(sin x + cos x), and the analytic continuation is f'(z) = e^z(sin z + cos z). Integration yields

    f(z) = e^z sin z + ıc,

where c is a real constant. We find the harmonic conjugate of u by taking the imaginary part of f:

    v(x, y) = e^x(sin x sin y cosh y + cos x cos y sinh y) + c
Solution 9.3
We consider the analytic function: f(z) = u(r, θ) + ıv(r, θ). Recall that the complex derivative in terms of polar coordinates is

    d/dz = e^{−ıθ} ∂/∂r = −(ı/r) e^{−ıθ} ∂/∂θ.

The Cauchy-Riemann equations are

    u_r = (1/r) v_θ,    v_r = −(1/r) u_θ.

We differentiate f(z) and use the partial derivative in r for the right side:

    f'(z) = e^{−ıθ}(u_r + ıv_r)

We use the second Cauchy-Riemann equation to eliminate v_r:

    f'(z) = e^{−ıθ}(u_r(r, θ) − (ı/r) u_θ(r, θ)). (9.6)

Now consider the right side as a function of the complex variable θ = ξ + ıψ. This function is analytic where f(ζ) is analytic: it is a simple calculus exercise to show that the complex derivative in the ξ direction and the complex derivative in the ψ direction are equal. Since these partial derivatives are equal and continuous, the function is analytic. We evaluate it at θ = −ı log r. (Substitute θ = −ı log r into Equation 9.6.)

    f'(r e^{ı(−ı log r)}) = e^{−ı(−ı log r)} (u_r(r, −ı log r) − (ı/r) u_θ(r, −ı log r))
    r f'(r²) = u_r(r, −ı log r) − (ı/r) u_θ(r, −ı log r)

If the expression is non-singular, then it defines the analytic function, f'(z), on a curve. The analytic continuation to the complex plane is

    z f'(z²) = u_r(z, −ı log z) − (ı/z) u_θ(z, −ı log z).

We integrate to obtain an expression for f(z²). Note that

    (d/dz) u(z, −ı log z) = u_r(z, −ı log z) − (ı/z) u_θ(z, −ı log z)

and (d/dz) f(z²) = 2z f'(z²). Thus

    (1/2) f(z²) = u(z, −ı log z) + const.

We make a change of variables and solve for f(z):

    f(z) = 2u(z^{1/2}, −(ı/2) log z) + const.

Assuming that the above expression is non-singular, we have found a formula for writing the analytic function in terms of its real part, u(r, θ). With the same method, we can find how to write an analytic function in terms of its imaginary part, v(r, θ).
Chapter 10
Contour Integration and the Cauchy-Goursat Theorem
Between two evils, I always pick the one I never tried before.
- Mae West
10.1 Line Integrals
Limit Sum Definition. First we develop a limit sum definition of a line integral. Consider a curve C in the Cartesian plane joining the points (a₀, b₀) and (a₁, b₁). We partition the curve into n segments with the points (x₀, y₀), . . . , (xₙ, yₙ) where the first and last points are at the endpoints of the curve. We define the differences, ∆xₖ = xₖ₊₁ − xₖ and ∆yₖ = yₖ₊₁ − yₖ, and let (ξₖ, ψₖ) be points on the curve between (xₖ, yₖ) and (xₖ₊₁, yₖ₊₁). This is shown pictorially in Figure 10.1.
Figure 10.1: A curve whose endpoints are (a₀, b₀) and (a₁, b₁), partitioned into n segments.
Consider the sum

    lim_{n→∞} Σ_{k=0}^{n−1} (P(ξₖ, ψₖ) ∆xₖ + Q(ξₖ, ψₖ) ∆yₖ), denoted ∫_C (P(x, y) dx + Q(x, y) dy).

This is a line integral along the curve C. The value of the line integral depends on the functions P(x, y) and Q(x, y), the endpoints of the curve and the curve C. We can also write a line integral in vector notation,

    ∫_C f(x) · dx,

where x = (x, y) and f(x) = (P(x, y), Q(x, y)).
Example 10.1.1 Consider the line integral

    ∫_C x² dx + (x + y) dy,

where C is the semi-circle from (1, 0) to (−1, 0) in the upper half plane. We parameterize the curve with x = cos t, y = sin t for 0 ≤ t ≤ π:

    ∫_C x² dx + (x + y) dy = ∫_0^π (cos² t (−sin t) + (cos t + sin t) cos t) dt
                           = π/2 − 2/3
10.2 Contour Integrals
Consider a contour C in the complex plane. As with the line integral, we partition the contour with points z₀, . . . , zₙ, set ∆zₖ = zₖ₊₁ − zₖ, choose points ζₖ on the contour between zₖ and zₖ₊₁, and form the sum

    Σ_{k=0}^{n−1} f(ζₖ) ∆zₖ,

where f is a continuous function on the contour. In the limit as each of the ∆zₖ approach zero the value of the sum, (if the limit exists), is denoted by

    ∫_C f(z) dz.
Further, we can write a contour integral in terms of two real-valued line integrals. Let f(z) = u(x, y) + ıv(x, y).

    ∫_C f(z) dz = ∫_C (u(x, y) + ıv(x, y))(dx + ı dy)
    ∫_C f(z) dz = ∫_C (u(x, y) dx − v(x, y) dy) + ı ∫_C (v(x, y) dx + u(x, y) dy) (10.2)
Evaluation. Let the contour C be parametrized by z = z(t) for t₀ ≤ t ≤ t₁. Then the differential on the contour is dz = z'(t) dt. Using the parameterization we can evaluate a contour integral in terms of a definite integral:

    ∫_C f(z) dz = ∫_{t₀}^{t₁} f(z(t)) z'(t) dt
Example 10.2.1 Let C be the positively oriented unit circle about the origin in the complex plane. Evaluate:
1. ∫_C z dz
2. ∫_C (1/z) dz
3. ∫_C (1/z) |dz|
We parameterize the circle with z = e^{ıθ}, dz = ı e^{ıθ} dθ.
1.

    ∫_C z dz = ∫_0^{2π} e^{ıθ} ı e^{ıθ} dθ
             = [e^{ı2θ}/2]_0^{2π}
             = (1/2) e^{ı4π} − (1/2) e^{ı0}
             = 0

2.

    ∫_C (1/z) dz = ∫_0^{2π} e^{−ıθ} ı e^{ıθ} dθ = ı ∫_0^{2π} dθ = ı2π

3. Since θ increases along the contour,

    |dz| = |ı e^{ıθ} dθ| = |dθ| = dθ,

so

    ∫_C (1/z) |dz| = ∫_0^{2π} e^{−ıθ} dθ = [ı e^{−ıθ}]_0^{2π} = 0.
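These contour integrals are easy to confirm with numerical quadrature. A numpy sketch using the same parameterization z = e^{ıθ} (the trapezoid rule and the number of sample points are arbitrary choices):

    import numpy as np

    theta = np.linspace(0, 2 * np.pi, 100001)
    z = np.exp(1j * theta)
    dz = 1j * z                       # dz = i e^{i theta} d(theta)

    print(np.trapz(z * dz, theta))    # integral of z dz: approximately 0
    print(np.trapz(dz / z, theta))    # integral of dz/z: approximately 2*pi*i
    print(np.trapz(1 / z, theta))     # integral of (1/z)|dz|: approximately 0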
10.2.1 Maximum Modulus Integral Bound
The absolute value of a real integral obeys the inequality

    |∫_a^b f(x) dx| ≤ ∫_a^b |f(x)| |dx| ≤ (b − a) max_{a≤x≤b} |f(x)|.

Now we prove the analogous result for the modulus of a contour integral:

    |∫_C f(z) dz| = |lim_{∆z→0} Σ_{k=0}^{n−1} f(ζₖ) ∆zₖ|
                  ≤ lim_{∆z→0} Σ_{k=0}^{n−1} |f(ζₖ)| |∆zₖ|
                  = ∫_C |f(z)| |dz|
                  ≤ ∫_C max_{z∈C} |f(z)| |dz|
                  = max_{z∈C} |f(z)| ∫_C |dz|
                  = max_{z∈C} |f(z)| × (length of C)
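The bound is tight for f(z) = 1/z on the unit circle, where |∫_C f dz| = 2π equals max|f| times the length of C. A small numpy check:

    import numpy as np

    theta = np.linspace(0, 2 * np.pi, 20001)
    z = np.exp(1j * theta)
    integral = np.trapz((1 / z) * 1j * z, theta)
    print(abs(integral))      # ~6.2832 = 2 pi
    print(1.0 * 2 * np.pi)    # bound: max|f| * length(C) = 1 * 2 pi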
If we assume that f'(z) is continuous, we can apply Green's Theorem to the integral of f(z) on ∂D:

    ∫_{∂D} f(z) dz = ∫_{∂D} (f dx + ıf dy) = ∫_D (ıf_x − f_y) dx dy

Since f(z) is analytic, it satisfies the Cauchy-Riemann equation f_x = −ıf_y. The integrand in the area integral, ıf_x − f_y, is zero. Thus the contour integral vanishes:

    ∫_{∂D} f(z) dz = 0

This is known as Cauchy's Theorem. The assumption that f'(z) is continuous is not necessary, but it makes the proof much simpler because we can use Green's Theorem. If we remove this restriction the result is known as the Cauchy-Goursat Theorem. The proof of this result is omitted.
Result 10.3.1 The Cauchy-Goursat Theorem. If f(z) is analytic in a compact, closed, connected domain D then the integral of f(z) on the boundary of the domain vanishes:

    ∮_{∂D} f(z) dz = Σ_k ∮_{C_k} f(z) dz = 0

Here the set of contours {C_k} make up the positively oriented boundary ∂D of the domain D.
As a special case of the Cauchy-Goursat theorem we can consider a simply-connected region.
For this the boundary is a Jordan curve. We can state the theorem in terms of this curve instead of
referring to the boundary.
Example 10.3.1 Let C be the unit circle about the origin with positive orientation. In Example 10.2.1 we calculated that

    ∫_C z dz = 0.

Now we can evaluate the integral without parameterizing the curve. We simply note that the integrand is analytic inside and on the circle, which is simple and closed. By the Cauchy-Goursat Theorem, the integral vanishes.
We cannot apply the Cauchy-Goursat theorem to evaluate

    ∫_C (1/z) dz = ı2π,

as the integrand is not analytic at z = 0.
Example 10.3.2 Consider the domain D = {z | |z| > 1}. The boundary of the domain is the unit circle with negative orientation. f(z) = 1/z is analytic on D and its boundary. However ∫_{∂D} f(z) dz does not vanish and we cannot apply the Cauchy-Goursat Theorem. This is because the domain is not compact.
This implies that the integrals along C1 and C2 are equal.
    ∫_{C₁} f(z) dz = ∫_{C₂} f(z) dz
Thus contour integrals on simply connected domains are independent of path. This result does not
hold for multiply connected domains.
Deforming Contours. Consider two simple, closed, positively oriented contours, C1 and C2 . Let
C2 lie completely within C1 . If f (z) is analytic on and between C1 and C2 then the integrals of f (z)
along C1 and C2 are equal.
    ∫_{C₁} f(z) dz = ∫_{C₂} f(z) dz
Again, this is a direct consequence of the Cauchy-Goursat Theorem. Let D be the domain on and between C₁ and C₂. By the Cauchy-Goursat Theorem the integral along the boundary of D vanishes:

    ∫_{C₁} f(z) dz + ∫_{−C₂} f(z) dz = 0
    ∫_{C₁} f(z) dz = ∫_{C₂} f(z) dz
By following this line of reasoning, we see that we can deform a contour C without changing the value of ∫_C f(z) dz as long as we stay on the domain where f(z) is analytic.
Now assume that f(z) = u + ıv is continuous in a domain Ω and that its integral vanishes on all simple, closed contours in Ω. Let the simple, closed contour C be the boundary of D, which is contained in the domain Ω:

    ∮_C f(z) dz = ∮_C (u + ıv)(dx + ı dy)
                = ∮_C u dx − v dy + ı ∮_C v dx + u dy
                = ∫_D (−v_x − u_y) dx dy + ı ∫_D (u_x − v_y) dx dy
                = 0

Since the two integrands are continuous and vanish for all C in Ω, we conclude that the integrands are identically zero. This implies that the Cauchy-Riemann equations,

    u_x = v_y,    u_y = −v_x,

are satisfied. The same argument can be written more compactly:

    ∮_C f(z) dz = ∮_C (f dx + ıf dy) = ∫_D (ıf_x − f_y) dx dy

Since the integrand, ıf_x − f_y, is continuous and vanishes for all C in Ω, we conclude that the integrand is identically zero. This implies that the Cauchy-Riemann equation,

    f_x = −ıf_y,

is satisfied. Thus, if f(z) is continuous in a domain Ω and ∮_C f(z) dz = 0 for all possible simple, closed contours C in the domain, then f(z) is analytic in Ω. (This is Morera's Theorem.)
We will prove existence later by writing an indefinite integral as a contour integral. We briefly
consider uniqueness of the indefinite integral here. Let F (z) and G(z) be integrals of f (z). Then
F'(z) − G'(z) = f(z) − f(z) = 0. Although we do not prove it, it certainly makes sense that F(z) − G(z) is a constant on each connected component of the domain. Indefinite integrals are unique up to an additive constant.
Integrals of analytic functions have all the nice properties of integrals of functions of a real
variable. All the formulas from integral tables, including things like integration by parts, carry over
directly.
First consider real-valued line integrals. A primitive Φ of (P dx + Q dy) satisfies

    dΦ = P dx + Q dy.

A necessary and sufficient condition for the existence of a primitive is that P_y = Q_x. The definite integral can be evaluated in terms of the primitive:

    ∫_{(a,b)}^{(c,d)} (P dx + Q dy) = Φ(c, d) − Φ(a, b)

Now consider a contour integral, ∫_C f(z) dz = ∫_C (f dx + ıf dy). A primitive Φ satisfies

    dΦ = f dx + ıf dy.

How do we find the primitive Φ that satisfies Φ_x = f and Φ_y = ıf? Note that choosing Φ(x, y) = F(z), where F(z) is an anti-derivative of f(z), F'(z) = f(z), does the trick. We express the complex derivative as partial derivatives in the coordinate directions to show this:

    F'(z) = f(z) = F_x = −ıF_y

From this we see that Φ_x = f and Φ_y = ıf, so Φ(x, y) = F(z) is a primitive. Since we can evaluate the line integral of (f dx + ıf dy),

    ∫_{(a,b)}^{(c,d)} (f dx + ıf dy) = Φ(c, d) − Φ(a, b),

we can evaluate a definite integral of f in terms of its anti-derivative.
10.8 Fundamental Theorem of Calculus via Complex Calculus
Now we will prove the analogous property for functions of a complex variable. Let

    F(z) = ∫_a^z f(ζ) dζ.

We compute the derivative:

    F'(z) = lim_{∆z→0} (F(z + ∆z) − F(z))/∆z
          = lim_{∆z→0} (1/∆z) (∫_a^{z+∆z} f(ζ) dζ − ∫_a^z f(ζ) dζ)
          = lim_{∆z→0} (1/∆z) ∫_z^{z+∆z} f(ζ) dζ

The integral is independent of path. We choose a straight line connecting z and z + ∆z. We add and subtract ∆z f(z) = ∫_z^{z+∆z} f(z) dζ from the expression for F'(z):

    F'(z) = lim_{∆z→0} (1/∆z) (∆z f(z) + ∫_z^{z+∆z} (f(ζ) − f(z)) dζ)
          = f(z) + lim_{∆z→0} (1/∆z) ∫_z^{z+∆z} (f(ζ) − f(z)) dζ

Since f(z) is analytic, it is continuous, and

    lim_{ζ→z} (f(ζ) − f(z)) = 0.

By the maximum modulus integral bound,

    |(1/∆z) ∫_z^{z+∆z} (f(ζ) − f(z)) dζ| ≤ (1/|∆z|) |∆z| max_{ζ∈[z...z+∆z]} |f(ζ) − f(z)| → 0.

Thus F'(z) = f(z).
This result demonstrates the existence of the indefinite integral. We will use this to prove the Fundamental Theorem of Calculus for functions of a complex variable: if F'(z) = f(z) in a simply connected domain containing a contour C from z_a to z_b, then

    ∫_C f(z) dz = F(z_b) − F(z_a).

The indefinite integral G(z) = ∫_{z_a}^z f(ζ) dζ differs from F(z) by a constant, and evaluating G at z_b gives the result. This proves the Fundamental Theorem of Calculus for functions of a complex variable.
Example. We evaluate ∮_C 1/(z−a) dz, where C is any closed contour that goes around the point z = a once in the positive direction. We use the Fundamental Theorem of Calculus to evaluate the integral. We start at a point on the contour z − a = r e^{ıθ}. When we traverse the contour once in the positive direction we end at the point z − a = r e^{ı(θ+2π)}.
∮_C 1/(z−a) dz = [log(z−a)]_{z−a = r e^{ıθ}}^{z−a = r e^{ı(θ+2π)}}
= Log r + ı(θ + 2π) − (Log r + ıθ)
= ı2π
10.9 Exercises
Exercise 10.1
C is the arc corresponding to the unit semi-circle, |z| = 1, ℑ(z) ≥ 0, directed from z = −1 to z = 1. Evaluate
1. ∫_C z² dz
2. ∫_C |z²| dz
3. ∫_C z² |dz|
4. ∫_C |z²| |dz|
Exercise 10.2
Evaluate
∫_{−∞}^{∞} e^{−(a x² + b x)} dx,
where a and b are complex constants with ℜ(a) > 0.
Exercise 10.3
Evaluate
2 ∫_0^∞ e^{−a x²} cos(ωx) dx, and 2 ∫_0^∞ x e^{−a x²} sin(ωx) dx,
where ℜ(a) > 0 and ω ∈ R.
Exercise 10.4
Use an admissible parameterization to evaluate
∫_C (z − z₀)ⁿ dz, n ∈ Z,
for the contours treated in the solution: (1) the circle |z − z₀| = 1 traversed once in the positive direction; (2) the circle |z − z₀ − 2| = 1 traversed once in the positive direction; (3) for n = −1, a closed curve z = z₀ + r(θ) e^{ıθ}, θ ∈ [0 … 4π], with r(θ) > 0, which winds around z₀ twice.
Exercise 10.5
1. Use bounding arguments to show that
lim_{R→∞} ∫_{C_R} (z + Log z)/(z³ + 1) dz = 0,
where C_R is the circle of radius R centered at the origin.
2. Place a bound on
| ∫_C Log z dz |,
where C is the right half of the circle |z| = 2, from z = −ı2 to z = ı2.
3. Deduce that
| ∫_C (z² − 1)/(z² + 1) dz | ≤ πR (R² + 1)/(R² − 1),
where C is a semicircle of radius R > 1 centered at the origin.
Exercise 10.6
Let C denote the entire positively oriented boundary of the half disk 0 ≤ r ≤ 1, 0 ≤ θ ≤ π in the upper half plane. Consider the branch
f(z) = √r e^{ıθ/2}, −π/2 < θ < 3π/2,
of the multi-valued function z^{1/2}. Show by separate parametric evaluation of the semi-circle and the two radii constituting the boundary that
∫_C f(z) dz = 0.
Exercise 10.7
Evaluate the following contour integrals using anti-derivatives and justify your approach for each.
1. ∫_C (z³ + z⁻³) dz, where C is the line segment from z₁ = 1 + ı to z₂ = ı.
2. ∫_C sin² z cos z dz, where C is a right-handed spiral from z₁ = π to z₂ = ıπ.
3. ∫_C z^ı dz = (1 + e^{−π})(1 − ı)/2, with
z^ı = e^{ı Log z}, −π < Arg z < π.
C joins z₁ = −1 and z₂ = 1, lying above the real axis except at the end points. (Hint: redefine z^ı so that it remains unchanged above the real axis and is defined continuously on the real axis.)
10.10 Hints
Hint 10.1
Hint 10.2
Let C be the parallelogram in the complex plane with corners at ±R and ±R + b/(2a). Consider the integral of e^{−a z²} on this contour. Take the limit as R → ∞.
Hint 10.3
Extend the range of integration to (−∞ … ∞). Use e^{ıωx} = cos(ωx) + ı sin(ωx) and the result of Exercise 10.2.
Hint 10.4
Hint 10.5
Hint 10.6
Hint 10.7
10.11 Solutions
Solution 10.1
We parameterize the path with z = e^{ıθ}, with θ ranging from π to 0.
dz = ı e^{ıθ} dθ
|dz| = |ı e^{ıθ} dθ| = |dθ| = −dθ
1.
∫_C z² dz = ∫_π^0 e^{ı2θ} ı e^{ıθ} dθ
= ∫_π^0 ı e^{ı3θ} dθ
= [ (1/3) e^{ı3θ} ]_π^0
= (1/3)(e^{ı0} − e^{ı3π})
= (1/3)(1 − (−1))
= 2/3
2.
∫_C |z²| dz = ∫_π^0 |e^{ı2θ}| ı e^{ıθ} dθ
= ∫_π^0 ı e^{ıθ} dθ
= [e^{ıθ}]_π^0
= 1 − (−1)
= 2
3.
∫_C z² |dz| = ∫_π^0 e^{ı2θ} |ı e^{ıθ} dθ|
= −∫_π^0 e^{ı2θ} dθ
= [ (ı/2) e^{ı2θ} ]_π^0
= (ı/2)(1 − 1)
= 0
4.
∫_C |z²| |dz| = ∫_π^0 |e^{ı2θ}| |ı e^{ıθ} dθ|
= ∫_π^0 (−dθ)
= [−θ]_π^0
= π
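A short numerical sanity check of these four values (not from the original text; the parameterization and point count are illustrative choices):

    import numpy as np

    # Semi-circle z = exp(i*theta), theta from pi to 0, so dz = i e^{i theta} dtheta
    # and |dz| = -dtheta.
    def trap(y, x):
        # composite trapezoidal rule
        return np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))

    theta = np.linspace(np.pi, 0.0, 100001)
    z = np.exp(1j * theta)
    print(trap(z**2 * 1j * z, theta))           # int_C z^2 dz      ~ 2/3
    print(trap(np.abs(z**2) * 1j * z, theta))   # int_C |z^2| dz    ~ 2
    print(trap(z**2, -theta))                   # int_C z^2 |dz|    ~ 0
    print(trap(np.abs(z**2), -theta))           # int_C |z^2| |dz|  ~ pi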
Solution 10.2
I = ∫_{−∞}^{∞} e^{−(a x² + b x)} dx
Consider the parallelogram in the complex plane with corners at ±R and ±R + b/(2a). The integral of e^{−a z²} on this contour vanishes as it is an entire function. We relate the integral along one side of the parallelogram to the integrals along the other three sides.
∫_{−R+b/(2a)}^{R+b/(2a)} e^{−a z²} dz = ( ∫_{−R}^{−R+b/(2a)} + ∫_{−R}^{R} + ∫_{R}^{R+b/(2a)} ) e^{−a z²} dz.
The first and third integrals on the right side vanish as R → ∞ because the integrand vanishes and the lengths of the paths of integration are finite. Taking the limit as R → ∞ we have
∫_{−∞+b/(2a)}^{∞+b/(2a)} e^{−a z²} dz ≡ ∫_{−∞}^{∞} e^{−a(x + b/(2a))²} dx = ∫_{−∞}^{∞} e^{−a x²} dx.
Now we have
I = e^{b²/(4a)} ∫_{−∞}^{∞} e^{−a x²} dx.
We make the change of variables ξ = √a x.
I = e^{b²/(4a)} (1/√a) ∫_{−∞}^{∞} e^{−ξ²} dξ
∫_{−∞}^{∞} e^{−(a x² + b x)} dx = √(π/a) e^{b²/(4a)}
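A quick numerical check of the final formula (a sketch, not from the text; real a > 0 and real b are chosen for simplicity):

    import numpy as np

    a, b = 1.7, 0.9
    x = np.linspace(-30.0, 30.0, 600001)   # integrand is negligible beyond this
    dx = x[1] - x[0]
    val = np.sum(np.exp(-(a * x**2 + b * x))) * dx
    print(val, np.sqrt(np.pi / a) * np.exp(b**2 / (4 * a)))   # agree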
Solution 10.3
Consider
I = 2 ∫_0^∞ e^{−a x²} cos(ωx) dx.
Since the integrand is an even function,
I = ∫_{−∞}^{∞} e^{−a x²} cos(ωx) dx.
Since e^{−a x²} sin(ωx) is an odd function,
I = ∫_{−∞}^{∞} e^{−a x²} e^{ıωx} dx.
This is the integral of Exercise 10.2 with b = −ıω, so
I = √(π/a) e^{−ω²/(4a)}.
Consider
I = 2 ∫_0^∞ x e^{−a x²} sin(ωx) dx.
Since the integrand is an even function,
I = ∫_{−∞}^{∞} x e^{−a x²} sin(ωx) dx.
Since x e^{−a x²} cos(ωx) is an odd function,
I = −ı ∫_{−∞}^{∞} x e^{−a x²} e^{ıωx} dx.
Differentiating the previous result with respect to ω gives
I = (ω/(2a)) √(π/a) e^{−ω²/(4a)}.
Solution 10.4
1. We parameterize the contour and do the integration.
z − z₀ = e^{ıθ}, θ ∈ [0 … 2π)
∫_C (z−z₀)ⁿ dz = ∫_0^{2π} e^{ınθ} ı e^{ıθ} dθ
= [ e^{ı(n+1)θ}/(n+1) ]_0^{2π} for n ≠ −1; [ıθ]_0^{2π} for n = −1
= 0 for n ≠ −1; ı2π for n = −1
2. We parameterize the contour and do the integration.
z − z₀ = 2 + e^{ıθ}, θ ∈ [0 … 2π)
∫_C (z−z₀)ⁿ dz = ∫_0^{2π} (2 + e^{ıθ})ⁿ ı e^{ıθ} dθ
= [ (2+e^{ıθ})^{n+1}/(n+1) ]_0^{2π} for n ≠ −1; [ log(2 + e^{ıθ}) ]_0^{2π} for n = −1
= 0
3. We parameterize the contour and do the integration.
z = r(θ) e^{ıθ}, θ ∈ [0 … 4π]
∫_C z⁻¹ dz = ∫_0^{4π} (1/(r(θ) e^{ıθ})) (r′(θ) + ır(θ)) e^{ıθ} dθ
= ∫_0^{4π} ( r′(θ)/r(θ) + ı ) dθ
= [ log(r(θ)) + ıθ ]_0^{4π}
Since r(θ) does not vanish, the argument of r(θ) does not change in traversing the contour and thus the logarithmic term has the same value at the beginning and end of the path.
∫_C z⁻¹ dz = ı4π
This answer is twice what we found in part 1 because the contour goes around the origin twice.
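The result of part 1 is easy to confirm numerically; the following Python sketch (illustrative, not from the text) evaluates the unit-circle integral for several n:

    import numpy as np

    theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
    dtheta = 2.0 * np.pi / len(theta)
    for n in (-3, -2, -1, 0, 1, 2):
        integrand = np.exp(1j * n * theta) * 1j * np.exp(1j * theta)
        print(n, np.sum(integrand) * dtheta)   # i*2*pi for n = -1, else ~ 0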
Solution 10.5
1. We parameterize the contour with z = R e^{ıθ} and bound the modulus of the integral.
| ∫_{C_R} (z + Log z)/(z³+1) dz | ≤ ∫_{C_R} | (z + Log z)/(z³+1) | |dz|
≤ ∫_0^{2π} ( (R + ln R + π)/(R³ − 1) ) R dθ
= 2πR (R + ln R + π)/(R³ − 1)
lim_{R→∞} 2πR (R + ln R + π)/(R³ − 1) = 0
2. We parameterize the contour and bound the modulus of the integral.
z = 2 e^{ıθ}, θ ∈ [−π/2 … π/2]
| ∫_C Log z dz | ≤ ∫_C |Log z| |dz|
= ∫_{−π/2}^{π/2} |ln 2 + ıθ| 2 dθ
≤ ∫_{−π/2}^{π/2} 2 (ln 2 + |θ|) dθ
= 4 ∫_0^{π/2} (ln 2 + θ) dθ
= π (π + 4 ln 2)/2
3. We parameterize the contour and bound the modulus of the integral.
z = R e^{ıθ}, θ ∈ [θ₀ … θ₀ + π]
| ∫_C (z²−1)/(z²+1) dz | ≤ ∫_C | (z²−1)/(z²+1) | |dz|
= ∫_{θ₀}^{θ₀+π} | (R² e^{ı2θ} − 1)/(R² e^{ı2θ} + 1) | |R dθ|
≤ ∫_{θ₀}^{θ₀+π} ( (R²+1)/(R²−1) ) R dθ
= πR (R²+1)/(R²−1)
Solution 10.6
∫_C f(z) dz = ∫_0^1 √r dr + ∫_0^π e^{ıθ/2} ı e^{ıθ} dθ + ∫_1^0 ı√r (−dr)
= 2/3 + (−2/3 − ı2/3) + ı2/3
= 0
The Cauchy-Goursat theorem does not apply because the function is not analytic at z = 0, a point on the boundary.
Solution 10.7
1.
∫_C (z³ + z⁻³) dz = [ z⁴/4 − 1/(2z²) ]_{1+ı}^{ı}
= 3/4 − (−1 + ı/4)
= 7/4 − ı/4
In this example, the anti-derivative is single-valued.
2.
∫_C sin² z cos z dz = [ sin³ z / 3 ]_{π}^{ıπ}
= (1/3)( sin³(ıπ) − sin³(π) )
= −ı sinh³(π)/3
Again the anti-derivative is single-valued.
3. We choose the branch of z^ı with −π/2 < arg(z) < 3π/2. This matches the principal value of z^ı above the real axis and is defined continuously on the path of integration.
∫_C z^ı dz = [ z^{1+ı}/(1+ı) ]_{e^{ıπ}}^{e^{ı0}}
= ((1−ı)/2) [ e^{(1+ı) log z} ]_{e^{ıπ}}^{e^{ı0}}
= ((1−ı)/2) ( e^0 − e^{(1+ı)ıπ} )
= ((1−ı)/2) ( 1 + e^{−π} )
= (1 + e^{−π})(1 − ı)/2
Chapter 11
Cauchy's Integral Formula
If I were founding a university I would begin with a smoking room; next a dormitory; and then a decent reading room and a library. After that, if I still had more money that I couldn't use, I would hire a professor and get some text books.
- Stephen Leacock
Here the set of contours {C_k} make up the positively oriented boundary ∂D of the domain D. More generally, we have
f^{(n)}(z) = (n!/(ı2π)) ∮_{∂D} f(ζ)/(ζ−z)^{n+1} dζ = (n!/(ı2π)) Σ_k ∮_{C_k} f(ζ)/(ζ−z)^{n+1} dζ. (11.2)
Cauchy's Formula shows that the value of f(z) and all its derivatives in a domain are determined by the value of f(z) on the boundary of the domain. Consider the first formula of the result, Equation 11.1. We deform the contour to a circle of radius δ about the point ζ = z.
∮_C f(ζ)/(ζ−z) dζ = ∮_{C_δ} f(ζ)/(ζ−z) dζ
= ∮_{C_δ} f(z)/(ζ−z) dζ + ∮_{C_δ} (f(ζ) − f(z))/(ζ−z) dζ
= ı2π f(z) + ∮_{C_δ} (f(ζ) − f(z))/(ζ−z) dζ
The remaining integral along C_δ vanishes as δ → 0 because f(ζ) is continuous. We demonstrate this with the maximum modulus integral bound. The length of the path of integration is 2πδ.
lim_{δ→0} | ∮_{C_δ} (f(ζ) − f(z))/(ζ−z) dζ | ≤ lim_{δ→0} ( 2πδ (1/δ) max_{|ζ−z|=δ} |f(ζ) − f(z)| )
= lim_{δ→0} 2π max_{|ζ−z|=δ} |f(ζ) − f(z)|
= 0
We derive the second formula, Equation 11.2, from the first by differentiating with respect to z. Note that the integral converges uniformly for z in any closed subset of the interior of C. Thus we can differentiate with respect to z and interchange the order of differentiation and integration.
f^{(n)}(z) = (1/(ı2π)) dⁿ/dzⁿ ∮_C f(ζ)/(ζ−z) dζ
= (1/(ı2π)) ∮_C ∂ⁿ/∂zⁿ ( f(ζ)/(ζ−z) ) dζ
= (n!/(ı2π)) ∮_C f(ζ)/(ζ−z)^{n+1} dζ
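The generalized formula is easy to test numerically. The following Python sketch (an illustration, not from the text; f = exp and the unit circle are arbitrary choices) recovers the first few derivatives of e^z at a point:

    import numpy as np
    from math import factorial

    z0 = 0.3 + 0.1j
    t = np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False)
    zeta = z0 + np.exp(1j * t)                 # unit circle about z0
    dzeta = 1j * np.exp(1j * t) * (2.0 * np.pi / len(t))

    for n in range(4):
        deriv = factorial(n) / (2j * np.pi) * np.sum(
            np.exp(zeta) / (zeta - z0)**(n + 1) * dzeta)
        print(n, deriv, np.exp(z0))            # every derivative of exp is exp(z0)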
Example 11.1.1 Consider the following integrals where C is the positive contour on the unit circle. For the third integral, the point z = −1 is removed from the contour.
1. ∮_C sin(cos(z⁵)) dz
2. ∮_C 1/((z−3)(3z−1)) dz
3. ∫_C √z dz
1. Since sin(cos(z⁵)) is an analytic function inside the unit circle,
∮_C sin(cos(z⁵)) dz = 0.
2. 1/((z−3)(3z−1)) has first order poles at z = 3 and z = 1/3. Since z = 3 is outside the contour, only the singularity at z = 1/3 will contribute to the value of the integral. We will evaluate this integral using the Cauchy integral formula.
∮_C 1/((z−3)(3z−1)) dz = ı2π ( 1/((1/3 − 3) 3) ) = −ıπ/4
3. Since the curve is not closed, we cannot apply the Cauchy integral formula. Note that √z is single-valued and analytic in the complex plane with a branch cut on the negative real axis.
Thus we use the Fundamental Theorem of Calculus.
∫_C √z dz = [ (2/3) z^{3/2} ]_{e^{−ıπ}}^{e^{ıπ}}
= (2/3)( e^{ı3π/2} − e^{−ı3π/2} )
= (2/3)( −ı − ı )
= −ı4/3
Liouville's Theorem. Consider a function f(z) that is analytic and bounded, (|f(z)| ≤ M), in the complex plane. From Cauchy's inequality,
|f′(z)| ≤ M/r
for any positive r. By taking r → ∞, we see that f′(z) is identically zero for all z. Thus f(z) is a constant.
The Fundamental Theorem of Algebra. We will prove that every polynomial of degree n ≥ 1 has exactly n roots, counting multiplicities. First we demonstrate that each such polynomial has at least one root. Suppose that an nth degree polynomial p(z) has no roots. Let the lower bound on the modulus of p(z) be 0 < m ≤ |p(z)|. The function f(z) = 1/p(z) is analytic, (f′(z) = −p′(z)/p²(z)), and bounded, (|f(z)| ≤ 1/m), in the extended complex plane. Using Liouville's theorem we conclude that f(z) and hence p(z) are constants, which yields a contradiction. Therefore every such polynomial p(z) must have at least one root.
Now we show that we can factor the root out of the polynomial. Let
p(z) = Σ_{k=0}^n p_k z^k.
We note that
(zⁿ − cⁿ) = (z − c) Σ_{k=0}^{n−1} c^{n−1−k} z^k.
Suppose that the nth degree polynomial p(z) has a root at z = c. Then
p(z) = p(z) − p(c) = Σ_{k=0}^n p_k (z^k − c^k) = (z − c) q(z).
Here q(z) is a polynomial of degree n − 1. By induction, we see that p(z) has exactly n roots.
Gauss' Mean Value Theorem. Writing Cauchy's integral formula with the circular contour ζ = z + r e^{ıθ},
f(z) = (1/(2π)) ∫_0^{2π} f(z + r e^{ıθ}) dθ,
we see that f(z) is the average value of f(ζ) on the circle of radius r about the point z.
Extremum Modulus Theorem. Let f(z) be analytic in a closed, connected domain, D. The extreme values of the modulus of the function must occur on the boundary. If |f(z)| has an interior extremum, then the function is a constant. We will show this with proof by contradiction. Assume that |f(z)| has an interior maximum at the point z = c. This means that there exists a neighborhood of the point z = c for which |f(z)| ≤ |f(c)|. Choose an ε so that the set |z − c| ≤ ε lies inside this neighborhood. First we use Gauss' mean value theorem.
f(c) = (1/(2π)) ∫_0^{2π} f(c + ε e^{ıθ}) dθ
We get an upper bound on |f(c)| with the maximum modulus integral bound.
|f(c)| ≤ (1/(2π)) ∫_0^{2π} |f(c + ε e^{ıθ})| dθ
If |f(z)| < |f(c)| for any point on |z − c| = ε, then the continuity of f(z) implies that |f(z)| < |f(c)| in a neighborhood of that point, which would make the value of the integral of |f(z)| strictly less than |f(c)|. Thus we conclude that |f(z)| = |f(c)| for all |z − c| = ε. Since we can repeat the above procedure for any circle of radius smaller than ε, |f(z)| = |f(c)| for all |z − c| ≤ ε, i.e. all the points in the disk of radius ε about z = c are also maxima. By recursively repeating this procedure for points in this disk, we see that |f(z)| = |f(c)| for all z ∈ D. This implies that f(z) is a constant in the domain. By reversing the inequalities in the above method we see that the minimum modulus of f(z) must also occur on the boundary.
Result 11.2.1 The Argument Theorem. Let f(z) be analytic inside and on C except for isolated poles inside the contour. Let f(z) be nonzero on C.
(1/(ı2π)) ∮_C f′(z)/f(z) dz = N − P
Here N is the number of zeros and P the number of poles, counting multiplicities, of f(z) inside C.
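A quick numerical illustration (a sketch, not from the text; the rational function below is an arbitrary choice): f(z) = (z − 0.2)(z + 0.3)²/(z − 0.1) has N = 3 zeros and P = 1 pole inside the unit circle, so the integral should equal 2.

    import numpy as np

    t = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
    z = np.exp(1j * t)
    dz = 1j * z * (2.0 * np.pi / len(t))

    # Logarithmic derivative f'/f, computed from the factorization.
    logderiv = 1.0 / (z - 0.2) + 2.0 / (z + 0.3) - 1.0 / (z - 0.1)
    print(np.sum(logderiv * dz) / (2j * np.pi))   # ~ 2 = N - P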
First we will simplify the problem and consider a function f(z) that has one zero or one pole. Let f(z) be analytic and nonzero inside and on A except for a zero of order n at z = a. Then we can write f(z) = (z−a)ⁿ g(z) where g(z) is analytic and nonzero inside and on A. The integral of f′(z)/f(z) along A is
(1/(ı2π)) ∮_A f′(z)/f(z) dz = (1/(ı2π)) ∮_A d/dz (log(f(z))) dz
= (1/(ı2π)) ∮_A d/dz ( log((z−a)ⁿ) + log(g(z)) ) dz
= (1/(ı2π)) ∮_A d/dz ( log((z−a)ⁿ) ) dz
= (1/(ı2π)) ∮_A n/(z−a) dz
= n
Now let f(z) be analytic and nonzero inside and on B except for a pole of order p at z = b. Then we can write f(z) = g(z)/(z−b)^p where g(z) is analytic and nonzero inside and on B. The integral of f′(z)/f(z) along B is
(1/(ı2π)) ∮_B f′(z)/f(z) dz = (1/(ı2π)) ∮_B d/dz (log(f(z))) dz
= (1/(ı2π)) ∮_B d/dz ( log((z−b)^{−p}) + log(g(z)) ) dz
= (1/(ı2π)) ∮_B d/dz ( log((z−b)^{−p}) ) dz
= (1/(ı2π)) ∮_B −p/(z−b) dz
= −p
Now consider a function f(z) that is analytic inside and on the contour C except for isolated poles at the points b₁, …, b_p. Let f(z) be nonzero except at the isolated points a₁, …, a_n. Let the contours A_k, k = 1, …, n, be simple, positive contours which contain the zero at a_k but no other poles or zeros of f(z). Likewise, let the contours B_k, k = 1, …, p, be simple, positive contours which contain the pole at b_k but no other poles or zeros of f(z). (See Figure 11.1.) By deforming the contour we obtain
∮_C f′(z)/f(z) dz = Σ_{j=1}^n ∮_{A_j} f′(z)/f(z) dz + Σ_{k=1}^p ∮_{B_k} f′(z)/f(z) dz.
From the above two computations we see that (1/(ı2π)) ∮_C f′(z)/f(z) dz = N − P.
[Figure 11.1: The contours A_k around the zeros and B_k around the poles, inside C.]
11.3 Rouché's Theorem
Result 11.3.1 Rouché's Theorem. Let f(z) and g(z) be analytic inside and on a simple, closed contour C. If |f(z)| > |g(z)| on C then f(z) and f(z) + g(z) have the same number of zeros inside C and no zeros on C.
First note that since |f(z)| > |g(z)| on C, f(z) is nonzero on C. The inequality implies that |f(z) + g(z)| > 0 on C so f(z) + g(z) has no zeros on C. We will count the number of zeros of f(z) and f(z) + g(z) using the Argument Theorem, (Result 11.2.1). The number of zeros N of f(z) inside the contour is
N = (1/(ı2π)) ∮_C f′(z)/f(z) dz.
Now consider the number of zeros M of f(z) + g(z). We introduce the function h(z) = g(z)/f(z).
M = (1/(ı2π)) ∮_C (f′(z) + g′(z))/(f(z) + g(z)) dz
= (1/(ı2π)) ∮_C (f′(z) + f′(z)h(z) + f(z)h′(z))/(f(z) + f(z)h(z)) dz
= (1/(ı2π)) ∮_C f′(z)/f(z) dz + (1/(ı2π)) ∮_C h′(z)/(1 + h(z)) dz
= N + (1/(ı2π)) [log(1 + h(z))]_C
= N
(Note that since |h(z)| < 1 on C, ℜ(1 + h(z)) > 0 on C and the value of log(1 + h(z)) does not change in traversing the contour.) This demonstrates that f(z) and f(z) + g(z) have the same number of zeros inside C and proves the result.
11.4 Exercises
Exercise 11.1
What is [arg(sin z)]_C, the change in the argument of sin z, where C is the unit circle?
Exercise 11.2
Let C be the circle of radius 2 centered about the origin and oriented in the positive direction. Evaluate the following integrals:
1. ∮_C sin z/(z² + 5) dz
2. ∮_C z/(z² + 1) dz
3. ∮_C (z² + 1)/z dz
Exercise 11.3
Let f(z) be analytic and bounded (i.e. |f(z)| < M) for |z| > R, but not necessarily analytic for |z| ≤ R. Let the points α and β lie inside the circle |z| = R. Evaluate
∮_C f(z)/((z − α)(z − β)) dz
where C is any closed contour outside |z| = R, containing the circle |z| = R. [Hint: consider the circle at infinity.] Now suppose that in addition f(z) is analytic everywhere. Deduce that f(α) = f(β).
Exercise 11.4
Using Rouché's theorem show that all the roots of the equation p(z) = z⁶ − 5z² + 10 = 0 lie in the annulus 1 < |z| < 2.
Exercise 11.5
Evaluate as a function of t
ω = (1/(ı2π)) ∮_C e^{zt}/(z²(z² + a²)) dz,
where C is any positively oriented contour surrounding the circle |z| = a.
Exercise 11.6
Consider C₁, (the positively oriented circle |z| = 4), and C₂, (the positively oriented boundary of the square whose sides lie along the lines x = ±1, y = ±1). Explain why
∫_{C₁} f(z) dz = ∫_{C₂} f(z) dz
for the functions f(z) = 1/(3z² + 1) and f(z) = z/(1 − e^z).
Exercise 11.7
Show that if f(z) is of the form
f(z) = α_k/z^k + α_{k−1}/z^{k−1} + ⋯ + α₁/z + g(z), k ≥ 1,
where g is analytic inside and on C, (the positive circle |z| = 1), then
∮_C f(z) dz = ı2π α₁.
Exercise 11.8
Show that if f(z) is analytic within and on a simple closed contour C and z₀ is not on C then
∮_C f′(z)/(z − z₀) dz = ∮_C f(z)/(z − z₀)² dz.
Note that z₀ may be either inside or outside of C.
Exercise 11.9
If C is the positive circle z = e^{ıθ} show that for any real constant a,
∮_C e^{az}/z dz = ı2π
and hence
∫_0^π e^{a cos θ} cos(a sin θ) dθ = π.
Exercise 11.10
Use Cauchy-Goursat, the generalized Cauchy integral formula, and suitable extensions to multiply-connected domains to evaluate the following integrals. Be sure to justify your approach in each case.
1. ∫_C z/(z³ − 9) dz, where C is the positively oriented rectangle whose sides lie along x = ±5, y = ±3.
2. ∫_C sin z/(z²(z − 4)) dz, where C is the positively oriented circle |z| = 2.
3. ∫_C (z³ + z + ı) sin z/(z⁴ + ız³) dz, where C is the positively oriented circle |z| = π.
4. ∫_C e^{zt}/(z²(z + 1)) dz, where C is any positive simple closed contour surrounding |z| = 1.
Exercise 11.11
Use Liouville's theorem to prove the following:
1. If f(z) is entire with ℜ(f(z)) ≤ M for all z then f(z) is constant.
2. If f(z) is entire with |f^{(5)}(z)| ≤ M for all z then f(z) is a polynomial of degree at most five.
Exercise 11.12
Find all functions f(z) analytic in the domain D: |z| < R that satisfy f(0) = e^{ı} and |f(z)| ≤ 1 for all z in D.
Exercise 11.13
Let f(z) = Σ_{k=0}^∞ k⁴ (z/4)^k and evaluate the following contour integrals, providing justification in each case:
1. ∫_C cos(z) f(z) dz, where C is the positive circle |z − 1| = 1.
2. ∫_C f(z)/z³ dz, where C is the positive circle |z| = π.
11.5 Hints
Hint 11.1
Use the argument theorem.
Hint 11.2
Hint 11.3
To evaluate the integral, consider the circle at infinity.
Hint 11.4
Hint 11.5
Hint 11.6
Hint 11.7
Hint 11.8
Hint 11.9
Hint 11.10
Hint 11.11
Hint 11.12
Hint 11.13
11.6 Solutions
Solution 11.1
Let f(z) be analytic inside and on the contour C. Let f(z) be nonzero on the contour. The argument theorem states that
(1/(ı2π)) ∮_C f′(z)/f(z) dz = N − P,
where N is the number of zeros and P is the number of poles, (counting multiplicities), of f(z) inside C. The theorem is aptly named, as
(1/(ı2π)) ∮_C f′(z)/f(z) dz = (1/(ı2π)) [log(f(z))]_C
= (1/(ı2π)) [log |f(z)| + ı arg(f(z))]_C
= (1/(2π)) [arg(f(z))]_C.
Thus we could write the argument theorem as
(1/(ı2π)) ∮_C f′(z)/f(z) dz = (1/(2π)) [arg(f(z))]_C = N − P.
Since sin z has a single zero and no poles inside the unit circle, we have
(1/(2π)) [arg(sin(z))]_C = 1 − 0
[arg(sin(z))]_C = 2π
Solution 11.2
1. The singularities of sin z/(z² + 5), at z = ±ı√5, lie outside |z| = 2, so the integrand is analytic inside and on C. By the Cauchy-Goursat theorem,
∮_C sin z/(z² + 5) dz = 0.
2. Both singularities of z/(z² + 1), at z = ±ı, lie inside C. With partial fractions and the Cauchy integral formula,
∮_C z/(z² + 1) dz = ∮_C z/((z−ı)(z+ı)) dz = ı2π ( [z/(z+ı)]_{z=ı} + [z/(z−ı)]_{z=−ı} ) = ı2π (1/2 + 1/2) = ı2π.
3.
∮_C (z² + 1)/z dz = ∮_C ( z + 1/z ) dz
= ∮_C z dz + ∮_C 1/z dz
= 0 + ı2π
= ı2π
Solution 11.3
Let C be the circle of radius r, (r > R), centered at the origin. We get an upper bound on the integral with the Maximum Modulus Integral Bound, (Result 10.2.1).
| ∮_C f(z)/((z−α)(z−β)) dz | ≤ 2πr max_{|z|=r} | f(z)/((z−α)(z−β)) | ≤ 2πr M/((r−|α|)(r−|β|))
By taking the limit as r → ∞ we see that the modulus of the integral is bounded above by zero. Thus the integral vanishes.
Now we assume that f(z) is analytic and evaluate the integral with Cauchy's Integral Formula. (We assume that α ≠ β.)
∮_C f(z)/((z−α)(z−β)) dz = 0
∮_C f(z)/((z−α)(α−β)) dz + ∮_C f(z)/((β−α)(z−β)) dz = 0
ı2π f(α)/(α−β) + ı2π f(β)/(β−α) = 0
f(α) = f(β)
Solution 11.4
Consider the circle |z| = 2. On this circle:
|z⁶| = 64
|−5z² + 10| ≤ |−5z²| + |10| = 30
Since |−5z² + 10| < |z⁶| on |z| = 2, p(z) has the same number of roots as z⁶ in |z| < 2. p(z) has 6 roots in |z| < 2.
Consider the circle |z| = 1. On this circle:
|10| = 10
|z⁶ − 5z²| ≤ |z⁶| + |−5z²| = 6
Since |z⁶ − 5z²| < |10| on |z| = 1, p(z) has the same number of roots as 10 in |z| < 1. p(z) has no roots in |z| < 1.
On the unit circle,
|p(z)| ≥ |10| − |z⁶| − |5z²| = 4.
Thus p(z) has no roots on the unit circle.
We conclude that p(z) has exactly 6 roots in 1 < |z| < 2.
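The conclusion is easy to check numerically (a sketch, not from the text):

    import numpy as np

    # All roots of p(z) = z^6 - 5 z^2 + 10 should satisfy 1 < |z| < 2.
    roots = np.roots([1, 0, 0, 0, -5, 0, 10])
    print(np.sort(np.abs(roots)))   # all moduli strictly between 1 and 2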
Solution 11.5
We evaluate the integral with Cauchy's Integral Formula.
ω = (1/(ı2π)) ∮_C e^{zt}/(z²(z² + a²)) dz
= (1/(ı2π)) ∮_C ( e^{zt}/(a²z²) + ı e^{zt}/(2a³(z − ıa)) − ı e^{zt}/(2a³(z + ıa)) ) dz
= [ d/dz ( e^{zt}/a² ) ]_{z=0} + ı e^{ıat}/(2a³) − ı e^{−ıat}/(2a³)
= t/a² − sin(at)/a³
ω = (at − sin(at))/a³
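A numerical sanity check of this closed form (a sketch, not from the text; the radius and parameter values are arbitrary):

    import numpy as np

    a, tt = 1.3, 0.7
    s = np.linspace(0.0, 2.0 * np.pi, 8192, endpoint=False)
    z = 2 * a * np.exp(1j * s)             # circle of radius 2|a| encloses all poles
    dz = 1j * z * (2.0 * np.pi / len(s))
    omega = np.sum(np.exp(z * tt) / (z**2 * (z**2 + a**2)) * dz) / (2j * np.pi)
    print(omega, (a * tt - np.sin(a * tt)) / a**3)   # agree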
Solution 11.6
1. We factor the denominator of the integrand.
1/(3z² + 1) = 1/( 3 (z − ı√3/3)(z + ı√3/3) )
There are two first order poles which could contribute to the value of an integral on a closed path. Both poles lie inside both contours. (See Figure 11.2.) We see that C₁ can be continuously deformed to C₂ on the domain where the integrand is analytic. Thus the integrals have the same value.
[Figure 11.2: The contours and the singularities of 1/(3z² + 1).]
2. The integrand z/(1 − e^z) has singularities where e^z = 1, i.e. at z = ı2πn for nonzero integers n; the singularity at z = 0 is removable. These singularities lie outside both contours. (See Figure 11.3.)
[Figure 11.3: The contours and the singularities of z/(1 − e^z).]
C₁ can be continuously deformed to C₂ on the domain where the integrand is analytic. Thus the integrals have the same value.
Solution 11.7
First we write the integral of f(z) as a sum of integrals.
∮_C f(z) dz = ∮_C ( α_k/z^k + α_{k−1}/z^{k−1} + ⋯ + α₁/z + g(z) ) dz
= ∮_C α_k/z^k dz + ∮_C α_{k−1}/z^{k−1} dz + ⋯ + ∮_C α₁/z dz + ∮_C g(z) dz
The integral of g(z) vanishes by the Cauchy-Goursat theorem. We evaluate the integral of α₁/z with Cauchy's integral formula.
∮_C α₁/z dz = ı2π α₁
We evaluate the remaining α_n/z^n terms with anti-derivatives. Each of these integrals vanishes.
∮_C f(z) dz = [ −α_k/((k−1)z^{k−1}) ]_C + ⋯ + [ −α₂/z ]_C + ı2π α₁
= ı2π α₁
Solution 11.8
We evaluate the integrals with the Cauchy integral formula. (z₀ is required to not be on C so the integrals exist.)
∮_C f′(z)/(z − z₀) dz = ı2π f′(z₀) if z₀ is inside C; 0 if z₀ is outside C
∮_C f(z)/(z − z₀)² dz = (ı2π/1!) f′(z₀) if z₀ is inside C; 0 if z₀ is outside C
Thus the two integrals are equal in either case.
Solution 11.9
First we evaluate the integral using the Cauchy Integral Formula.
∮_C e^{az}/z dz = ı2π [e^{az}]_{z=0} = ı2π
Next we parameterize the path of integration. We use the periodicity of the cosine and sine to simplify the integral.
∮_C e^{az}/z dz = ı2π
∫_0^{2π} ( e^{a e^{ıθ}}/e^{ıθ} ) ı e^{ıθ} dθ = ı2π
∫_0^{2π} e^{a(cos θ + ı sin θ)} dθ = 2π
∫_0^{2π} e^{a cos θ} ( cos(a sin θ) + ı sin(a sin θ) ) dθ = 2π
∫_0^{2π} e^{a cos θ} cos(a sin θ) dθ = 2π
∫_0^π e^{a cos θ} cos(a sin θ) dθ = π
Solution 11.10
1. We factor the integrand to see that there are singularities at the cube roots of 9.
z/(z³ − 9) = z/( (z − 9^{1/3})(z − 9^{1/3} e^{ı2π/3})(z − 9^{1/3} e^{−ı2π/3}) )
Let C₁, C₂ and C₃ be contours around z = 9^{1/3}, z = 9^{1/3} e^{ı2π/3} and z = 9^{1/3} e^{−ı2π/3}. See Figure 11.4. Let D be the domain between C, C₁, C₂ and C₃, i.e. the boundary of D is the union of C and the oppositely oriented C₁, C₂ and C₃. Since the integrand is analytic in D, the integral along the boundary of D vanishes.
∫_{∂D} z/(z³−9) dz = ∫_C z/(z³−9) dz + ∫_{−C₁} z/(z³−9) dz + ∫_{−C₂} z/(z³−9) dz + ∫_{−C₃} z/(z³−9) dz = 0
From this we see that the integral along C is equal to the sum of the integrals along C₁, C₂ and C₃. (We could also see this by deforming C onto C₁, C₂ and C₃.)
∫_C z/(z³−9) dz = ∫_{C₁} z/(z³−9) dz + ∫_{C₂} z/(z³−9) dz + ∫_{C₃} z/(z³−9) dz
We use the Cauchy Integral Formula to evaluate the integrals along C₁, C₂ and C₃.
∫_C z/(z³−9) dz = ı2π [ z/((z − 9^{1/3} e^{ı2π/3})(z − 9^{1/3} e^{−ı2π/3})) ]_{z=9^{1/3}}
+ ı2π [ z/((z − 9^{1/3})(z − 9^{1/3} e^{−ı2π/3})) ]_{z=9^{1/3} e^{ı2π/3}}
+ ı2π [ z/((z − 9^{1/3})(z − 9^{1/3} e^{ı2π/3})) ]_{z=9^{1/3} e^{−ı2π/3}}
= ı2π 3^{−5/3} ( 1 + e^{−ı2π/3} + e^{ı2π/3} )
= 0
[Figure 11.4: The contours C, C₁, C₂, C₃ for z/(z³ − 9).]
2. The integrand has singularities at z = 0 and z = 4. Only the singularity at z = 0 lies inside the contour. We use the Cauchy Integral Formula to evaluate the integral.
∫_C sin z/(z²(z − 4)) dz = ı2π [ d/dz ( sin z/(z − 4) ) ]_{z=0}
= ı2π [ cos z/(z − 4) − sin z/(z − 4)² ]_{z=0}
= −ıπ/2
3. We factor the integrand to see that there are singularities at z = 0 and z = −ı.
∫_C (z³ + z + ı) sin z/(z⁴ + ız³) dz = ∫_C (z³ + z + ı) sin z/( z³(z + ı) ) dz
Let C₁ and C₂ be contours around z = 0 and z = −ı. See Figure 11.5. Let D be the domain between C, C₁ and C₂, i.e. the boundary of D is the union of C and the oppositely oriented C₁ and C₂. Since the integrand is analytic in D, the integral along the boundary of D vanishes.
∫_{∂D} = ∫_C + ∫_{−C₁} + ∫_{−C₂} = 0
From this we see that the integral along C is equal to the sum of the integrals along C₁ and C₂. (We could also see this by deforming C onto C₁ and C₂.)
∫_C = ∫_{C₁} + ∫_{C₂}
We use the Cauchy Integral Formula to evaluate the integrals along C₁ and C₂.
[Figure 11.5: The contours C, C₁, C₂ for (z³ + z + ı) sin z/(z⁴ + ız³).]
4. Let C₁ and C₂ be contours around z = 0 and z = −1. See Figure 11.6. We deform C onto C₁ and C₂.
∫_C = ∫_{C₁} + ∫_{C₂}
We use the Cauchy Integral Formula to evaluate the integrals along C₁ and C₂.
∫_C e^{zt}/(z²(z+1)) dz = ∫_{C₁} e^{zt}/(z²(z+1)) dz + ∫_{C₂} e^{zt}/(z²(z+1)) dz
= ı2π [ e^{zt}/z² ]_{z=−1} + ı2π [ d/dz ( e^{zt}/(z+1) ) ]_{z=0}
= ı2π e^{−t} + ı2π [ t e^{zt}/(z+1) − e^{zt}/(z+1)² ]_{z=0}
= ı2π ( e^{−t} + t − 1 )
[Figure 11.6: The contours C, C₁, C₂ for e^{zt}/(z²(z+1)).]
Solution 11.11
Liouville's Theorem states that if f(z) is analytic and bounded in the complex plane then f(z) is a constant.
1. Since f(z) is analytic, e^{f(z)} is analytic. The modulus of e^{f(z)} is bounded:
| e^{f(z)} | = e^{ℜ(f(z))} ≤ e^M.
By Liouville's Theorem we conclude that e^{f(z)} is constant and hence f(z) is constant.
2. We know that f(z) is entire and |f^{(5)}(z)| is bounded in the complex plane. Since f(z) is analytic, so is f^{(5)}(z). We apply Liouville's Theorem to f^{(5)}(z) to conclude that it is a constant. Then we integrate to determine the form of f(z):
f(z) = c₅z⁵ + c₄z⁴ + c₃z³ + c₂z² + c₁z + c₀.
Here 5! c₅ is the value of f^{(5)}(z) and c₄ through c₀ are constants of integration. We see that f(z) is a polynomial of degree at most five.
Solution 11.12
For this problem we will use the Extremum Modulus Theorem: Let f(z) be analytic in a closed, connected domain, D. The extreme values of the modulus of the function must occur on the boundary. If |f(z)| has an interior extremum, then the function is a constant.
Since |f(z)| ≤ 1 and |f(z)| attains an interior extremum, |f(0)| = |e^{ı}| = 1, we conclude that f(z) is a constant on D. Since we know the value at z = 0, we know that f(z) = e^{ı}.
Solution 11.13
First we determine the radius of convergence of the series with the ratio test.
R = lim_{k→∞} (k⁴/4^k)/((k+1)⁴/4^{k+1})
= 4 lim_{k→∞} k⁴/(k+1)⁴
= 4 lim_{k→∞} 4!/4!   (applying L'Hospital's rule four times)
= 4
Thus f(z) is analytic in |z| < 4.
1. The integrand cos(z) f(z) is analytic inside and on the contour, since the circle |z − 1| = 1 lies within the domain of analyticity |z| < 4. The integral vanishes by the Cauchy-Goursat theorem.
2. The circle |z| = π lies inside |z| < 4, so we use the generalized Cauchy integral formula:
∫_C f(z)/z³ dz = (ı2π/2!) f″(0) = ı2π · (coefficient of z² in f(z)) = ı2π (2⁴/4²) = ı2π.
Chapter 12
Series and Convergence
- Niels Bohr
Convergence of Sequences. The infinite sequence {a_n} is said to converge if
lim_{n→∞} a_n = a
for some constant a. If the limit does not exist, then the sequence diverges. Recall the definition of the limit in the above formula: For any ε > 0 there exists an N ∈ Z such that |a − a_n| < ε for all n > N.
Example 12.1.1 The sequence {sin(n)} is divergent. The sequence is bounded above and below,
but boundedness does not imply convergence.
Cauchy Convergence Criterion. Note that there is something a little fishy about the above definition. We should be able to say if a sequence converges without first finding the constant to which it converges. We fix this problem with the Cauchy convergence criterion. A sequence {a_n} converges if and only if for any ε > 0 there exists an N such that |a_n − a_m| < ε for all n, m > N. The Cauchy convergence criterion is equivalent to the definition we had before. For some problems it is handier to use. Now we don't need to know the limit of a sequence to show that it converges.
Convergence of Series. The series Σ_{n=1}^∞ a_n converges if the sequence of partial sums, S_N = Σ_{n=0}^{N−1} a_n, converges. That is,
lim_{N→∞} S_N = lim_{N→∞} Σ_{n=0}^{N−1} a_n = constant.
If the limit does not exist, then the series diverges. A necessary condition for the convergence of a series is that
lim_{n→∞} a_n = 0.
(See Exercise 12.1.) Otherwise the sequence of partial sums would not converge.
Example 12.1.2 The series Σ_{n=0}^∞ (−1)ⁿ = 1 − 1 + 1 − 1 + ⋯ is divergent because the sequence of partial sums, {S_N} = 1, 0, 1, 0, 1, 0, … is divergent.
P
TailPof a Series. An infinite series, Pn=0 an , converges or diverges with its tail. That is, for fixed
N , n=0 an converges if and only if n=N an converges. This is because the sum of the first N
terms of a series is just a number. Adding or subtracting a number to a series does not change its
convergence.
Absolute Convergence. The series Σ_{n=0}^∞ a_n converges absolutely if Σ_{n=0}^∞ |a_n| converges. Absolute convergence implies convergence. If a series is convergent, but not absolutely convergent, then it is said to be conditionally convergent.
The terms of an absolutely convergent series can be rearranged in any order and the series will still converge to the same sum. This is not true of conditionally convergent series. Rearranging the terms of a conditionally convergent series may change the sum. In fact, the terms of a conditionally convergent series may be rearranged to obtain any desired sum. An example is the alternating harmonic series,
1 − 1/2 + 1/3 − 1/4 + ⋯ .
Finite Series and Residuals. Consider the series f(z) = Σ_{n=0}^∞ a_n(z). We will denote the sum of the first N terms in the series as
S_N(z) = Σ_{n=0}^{N−1} a_n(z).
We will denote the residual after N terms as
R_N(z) ≡ f(z) − S_N(z) = Σ_{n=N}^∞ a_n(z).
Example 12.1.3 The Geometric Series. Consider the geometric series¹ Σ_{n=0}^∞ zⁿ.
¹ The series is so named because the terms grow or decay geometrically. Each term in the series is a constant times the previous term.
The series clearly diverges for |z| ≥ 1 since the terms do not vanish as n → ∞. Consider the partial sum, S_N(z) ≡ Σ_{n=0}^{N−1} zⁿ, for |z| < 1.
(1 − z) S_N(z) = (1 − z) Σ_{n=0}^{N−1} zⁿ
= Σ_{n=0}^{N−1} zⁿ − Σ_{n=1}^{N} zⁿ
= (1 + z + ⋯ + z^{N−1}) − (z + z² + ⋯ + z^N)
= 1 − z^N
Σ_{n=0}^{N−1} zⁿ = (1 − z^N)/(1 − z) → 1/(1 − z) as N → ∞.
The limit of the partial sums is 1/(1 − z).
Σ_{n=0}^∞ zⁿ = 1/(1 − z) for |z| < 1
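A two-line numerical check (a sketch, not from the text; the sample point is arbitrary):

    import numpy as np

    z = 0.5 + 0.3j                        # |z| < 1
    partial = np.cumsum(z ** np.arange(60))
    print(partial[-1], 1.0 / (1.0 - z))   # agree to machine precision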
Example 12.1.4 The Harmonic Series. Consider the harmonic series Σ_{n=1}^∞ 1/n^α. The series is absolutely convergent for ℜ(α) > 1 and absolutely divergent for ℜ(α) ≤ 1, (see Exercise 12.8). The Riemann zeta function ζ(α) is defined as the sum of the harmonic series,
ζ(α) = Σ_{n=1}^∞ 1/n^α.
Again, the series is absolutely convergent for ℜ(α) > 1 and absolutely divergent for ℜ(α) ≤ 1.
Result 12.1.1 The series of positive terms Σ a_n converges if there exists a convergent series Σ b_n such that a_n ≤ b_n for all n. Similarly, Σ a_n diverges if there exists a divergent series Σ b_n such that a_n ≥ b_n for all n.
Then by comparing this series to the geometric series,
Σ_{n=1}^∞ 1/2ⁿ = 1,
we see that it is convergent.
Integral Test.
Result 12.1.2 If the coefficients a_n of a series Σ_{n=0}^∞ a_n are monotonically decreasing and can be extended to a monotonically decreasing function of the continuous variable x,
a(x) = a_n for integer x = n ≥ 0,
then the series converges or diverges with the integral
∫_0^∞ a(x) dx.
Example 12.1.5 Consider the series Σ_{n=1}^∞ 1/n². Define the functions s_l(x) and s_r(x), (left and right),
s_l(x) = 1/(⌈x⌉)², s_r(x) = 1/(⌊x⌋)².
Recall that ⌊x⌋ is the greatest integer function, the greatest integer which is less than or equal to x, and ⌈x⌉ is the least integer function, the least integer greater than or equal to x. We can express the series as integrals of these functions:
Σ_{n=1}^∞ 1/n² = ∫_0^∞ s_l(x) dx = ∫_1^∞ s_r(x) dx.
In Figure 12.1 these functions are plotted against y = 1/x². From the graph, it is clear that we can obtain a lower and upper bound for the series:
∫_1^∞ 1/x² dx ≤ Σ_{n=1}^∞ 1/n² ≤ 1 + ∫_1^∞ 1/x² dx
1 ≤ Σ_{n=1}^∞ 1/n² ≤ 2
[Figure 12.1: Upper and lower bounds to Σ_{n=1}^∞ 1/n².]
In general, we have
∫_m^∞ a(x) dx ≤ Σ_{n=m}^∞ a_n ≤ a_m + ∫_m^∞ a(x) dx.
Thus we see that the sum converges or diverges with the integral.
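The general bound is easy to check numerically (an illustrative sketch, not from the text; the tail is truncated at a large index):

    import numpy as np

    # Bounds on the tail of sum 1/n^2 from n = m:
    # int_m^inf dx/x^2 = 1/m  <=  tail  <=  1/m^2 + 1/m.
    m = 10
    n = np.arange(m, 2_000_000, dtype=float)
    tail = np.sum(1.0 / n**2)
    print(1.0 / m, tail, 1.0 / m**2 + 1.0 / m)   # lower < tail < upper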
The Ratio Test.
Result 12.1.3 The series Σ a_n converges absolutely if
lim_{n→∞} |a_{n+1}/a_n| < 1.
If the limit is greater than unity, then the series diverges. If the limit is unity, the test fails.
If the limit is greater than unity, then the terms are eventually increasing with n. Since the terms do not vanish, the sum is divergent. If the limit is less than unity, then there exists some N such that
|a_{n+1}/a_n| ≤ r < 1 for all n ≥ N.
From this we can show that Σ_{n=0}^∞ a_n is absolutely convergent by comparing it to the geometric series.
Σ_{n=N}^∞ |a_n| ≤ |a_N| Σ_{n=0}^∞ rⁿ
= |a_N| 1/(1 − r)
Example. For the series Σ_{n=1}^∞ 1/n! the ratio test gives
lim_{n→∞} |a_{n+1}/a_n| = lim_{n→∞} n!/(n+1)! = lim_{n→∞} 1/(n+1) = 0,
so the series is absolutely convergent. For the series Σ_{n=1}^∞ 1/n² the test fails:
lim_{n→∞} |a_{n+1}/a_n| = lim_{n→∞} n²/(n² + 2n + 1)
= lim_{n→∞} 1/(1 + 2/n + 1/n²)
= 1
The Root Test.
Result 12.1.4 The series Σ a_n converges absolutely if
lim_{n→∞} |a_n|^{1/n} < 1.
If the limit is greater than unity, then the series diverges. If the limit is unity, the test fails. More generally, we can test that
lim sup |a_n|^{1/n} < 1.
If the limit is greater than unity, then the terms in the series do not vanish as n → ∞. This implies that the sum does not converge. If the limit is less than unity, then there exists some N such that
|a_n|^{1/n} ≤ r < 1 for all n ≥ N.
We bound the tail of the series,
Σ_{n=N}^∞ |a_n| = Σ_{n=N}^∞ ( |a_n|^{1/n} )ⁿ
≤ Σ_{n=N}^∞ rⁿ
= r^N/(1 − r),
so Σ_{n=0}^∞ a_n is absolutely convergent.
Example. Consider the series Σ_{n=1}^∞ n^a bⁿ, where a and b are real constants. We use the root test to check for absolute convergence.
lim_{n→∞} |n^a bⁿ|^{1/n} < 1
|b| lim_{n→∞} n^{a/n} < 1
|b| < 1
Thus we see that the series converges absolutely for |b| < 1. Note that the value of a does not affect the absolute convergence.
Example. Consider the absolutely convergent series
Σ_{n=1}^∞ 1/n².
We apply the root test.
lim_{n→∞} |a_n|^{1/n} = lim_{n→∞} (1/n²)^{1/n}
= lim_{n→∞} n^{−2/n}
= lim_{n→∞} e^{−(2/n) ln n}
= e^0
= 1
Since the limit is unity, the root test fails for this convergent series.
Raabe's Test
Result 12.1.5 The series Σ a_n converges absolutely if
lim_{n→∞} n ( 1 − |a_{n+1}/a_n| ) > 1.
If the limit is less than unity, then the series diverges or converges conditionally. If the limit is unity, the test fails.
Gauss' Test
Result 12.1.6 Consider the series Σ a_n. If
|a_{n+1}/a_n| = 1 − L/n + b_n/n²
where b_n is bounded then the series converges absolutely if L > 1. Otherwise the series diverges or converges conditionally.
12.2 Uniform Convergence
A series of functions Σ a_n(z) converges pointwise to f(z) in a domain if the sequence of partial sums converges,
lim_{N→∞} S_N(z) = f(z),
for all z in the domain. Note that the rate of convergence, i.e. the number of terms, N(z), required for the absolute error to be less than ε, is a function of z.
Uniform Convergence. Consider a series Σ_{n=1}^∞ a_n(z) that is convergent in some domain. If the rate of convergence is independent of z then the series is said to be uniformly convergent. Stating this a little more mathematically, the series is uniformly convergent in the domain if for any given ε > 0 there exists an N, independent of z, such that
|f(z) − S_N(z)| = | f(z) − Σ_{n=1}^N a_n(z) | < ε
for all z in the domain.
Dirichlet Test. Consider a sequence of monotone decreasing, positive constants c_n with limit zero. If all the partial sums of a_n(z) are bounded in some closed domain, that is
| Σ_{n=1}^N a_n(z) | < constant
for all N, then Σ_{n=1}^∞ c_n a_n(z) is uniformly convergent in that closed domain. Note that the Dirichlet test does not imply that the series is absolutely convergent.
Example 12.2.2 Consider the series Σ_{n=1}^∞ sin(nx)/n. We cannot use the Weierstrass M-test to determine if the series is uniformly convergent on an interval. While it is easy to bound the terms with |sin(nx)/n| ≤ 1/n, the sum
Σ_{n=1}^∞ 1/n
does not converge. Thus we will try the Dirichlet test. Consider the sum Σ_{n=1}^{N−1} sin(nx). This sum can be evaluated in closed form. (See Exercise 12.9.)
Σ_{n=1}^{N−1} sin(nx) = 0 for x = 2πk; ( cos(x/2) − cos((N − 1/2)x) )/( 2 sin(x/2) ) for x ≠ 2πk
The partial sums have infinite discontinuities at x = 2πk, k ∈ Z. The partial sums are bounded on any closed interval that does not contain an integer multiple of 2π. By the Dirichlet test, the sum Σ_{n=1}^∞ sin(nx)/n is uniformly convergent on any such closed interval. The series may not be uniformly convergent in neighborhoods of x = 2πk.
Example 12.2.3 Again consider Σ_{n=1}^∞ sin(nx)/n. In Example 12.2.2 we showed that the convergence is uniform in any closed interval that does not contain an integer multiple of 2π. In Figure 12.2 is a plot of the first 10 and then 50 terms in the series and finally the function to which the series converges. We see that the function has jump discontinuities at x = 2πk and is continuous on any closed interval not containing one of those points.
[Figure 12.2: Ten, fifty and all the terms of Σ_{n=1}^∞ sin(nx)/n.]
Domain of Convergence of a Power Series. Consider the series Σ_{n=0}^∞ a_n zⁿ. Let the series converge at some point z₀. Then |a_n z₀ⁿ| is bounded by some constant A for all n, so
|a_n zⁿ| = |a_n z₀ⁿ| |z/z₀|ⁿ < A |z/z₀|ⁿ.
This comparison test shows that the series converges absolutely for all z satisfying |z| < |z₀|.
Suppose that the series diverges at some point z₁. Then the series could not converge for any |z| > |z₁| since this would imply convergence at z₁. Thus there exists some circle in the z plane such that the power series converges absolutely inside the circle and diverges outside the circle.
Applying the ratio test, the series converges absolutely for
|z| < lim_{n→∞} |a_n|/|a_{n+1}|.
Result 12.3.2 Ratio formula. The radius of convergence of the power series
Σ_{n=0}^∞ a_n zⁿ
is
R = lim_{n→∞} |a_n|/|a_{n+1}|
when the limit exists.
Cauchy-Hadamard formula. The radius of convergence of the power series is
R = 1/( lim sup ⁿ√|a_n| ).
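A numerical illustration of the ratio formula (a sketch, not from the text; the coefficient sequence is an arbitrary choice with known radius 2):

    import numpy as np

    # For a_n = n^2 / 2^n, the exact radius of convergence is 2:
    # |a_n / a_{n+1}| = 2 (n/(n+1))^2 -> 2.
    n = np.arange(1, 2000, dtype=float)
    a = n**2 / 2.0**n
    print((a[:-1] / a[1:])[-1])   # close to 2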
Absolute Convergence of Power Series. Consider a power series
f(z) = Σ_{n=0}^∞ a_n zⁿ
that converges for z = z₀. Let M be the value of the greatest term, max_n |a_n z₀ⁿ|. Consider any point z such that |z| < |z₀|. We can bound the residual of Σ_{n=0}^∞ |a_n zⁿ|,
R_N(z) = Σ_{n=N}^∞ |a_n zⁿ|
= Σ_{n=N}^∞ |a_n z₀ⁿ| |z/z₀|ⁿ
≤ M Σ_{n=N}^∞ |z/z₀|ⁿ.
Since |z/z₀| < 1, this is the tail of a convergent geometric series and the residual vanishes as N → ∞. Thus the power series is absolutely convergent for |z| < |z₀|.
3. Again we apply the ratio test, here to determine the radius of convergence of Σ_{n=1}^∞ n! z^{n!}:
lim_{n→∞} | (n+1)! z^{(n+1)!} / (n! z^{n!}) | < 1
lim_{n→∞} (n+1) |z|^{(n+1)! − n!} < 1
lim_{n→∞} (n+1) |z|^{n·n!} < 1
The limit is zero for |z| < 1 and infinite for |z| > 1, so the series converges absolutely for |z| < 1.
Uniform Convergence of Power Series. Consider a power series Σ_{n=0}^∞ a_n zⁿ that converges in the disk |z| < r₀. The sum converges absolutely for z in the closed disk, |z| ≤ r < r₀. Since |a_n zⁿ| ≤ |a_n rⁿ| and Σ_{n=0}^∞ |a_n rⁿ| converges, the power series is uniformly convergent in |z| ≤ r < r₀.
Example 12.3.2 Consider the series log(1 − z) = −Σ_{n=1}^∞ zⁿ/n. This series converges for |z| ≤ 1, z ≠ 1. Is the series uniformly convergent in this domain? The residual after N terms, R_N, is
R_N(z) = Σ_{n=N+1}^∞ zⁿ/n.
We can get a lower bound on the absolute value of the residual for real, positive z.
|R_N(x)| = Σ_{n=N+1}^∞ xⁿ/n
≥ ∫_{N+1}^∞ x^ξ/ξ dξ
= −Ei( (N+1) ln x )
The exponential integral function, Ei(z), is defined
Ei(z) = −∫_{−z}^∞ e^{−t}/t dt.
The exponential integral function is plotted in Figure 12.3. Since Ei(z) diverges as z → 0, by choosing x sufficiently close to 1 the residual can be made arbitrarily large. Thus this series is not uniformly convergent in the domain |z| ≤ 1, z ≠ 1. The series is uniformly convergent for |z| ≤ r < 1.
[Figure 12.3: The exponential integral function, Ei(z).]
Analyticity. Recall that a sufficient condition for the analyticity of a function f(z) in a domain is that ∮_C f(z) dz = 0 for all simple, closed contours in the domain.
Consider a power series f(z) = Σ_{n=0}^∞ a_n zⁿ that is uniformly convergent in |z| ≤ r. If C is any simple, closed contour in the domain then ∮_C f(z) dz exists. Expanding f(z) into a finite series and a residual,
∮_C f(z) dz = ∮_C ( S_N(z) + R_N(z) ) dz.
Since the series is uniformly convergent, for any given ε > 0 there exists an N such that |R_N| < ε for all z in |z| ≤ r. Let L be the length of the contour C.
| ∮_C R_N(z) dz | ≤ εL → 0 as N → ∞
∮_C f(z) dz = lim_{N→∞} ∮_C ( Σ_{n=0}^{N−1} a_n zⁿ + R_N(z) ) dz
= ∮_C Σ_{n=0}^∞ a_n zⁿ dz
= Σ_{n=0}^∞ a_n ∮_C zⁿ dz
= 0
Thus f(z) is analytic for |z| < r.
Since the series is uniformly convergent in the closed disk, for any given ε > 0, there exists an N such that
|R_N(z)| < ε for all |z| ≤ r.
We bound the absolute value of the integral of R_N(z):
| ∫_C R_N(z) dz | ≤ ∫_C |R_N(z)| |dz| < εL → 0 as N → ∞.
Thus
∫_C f(z) dz = lim_{N→∞} ∫_C Σ_{n=0}^N a_n zⁿ dz
= lim_{N→∞} Σ_{n=0}^N a_n ∫_C zⁿ dz
= Σ_{n=0}^∞ a_n ∫_C zⁿ dz.
In the domain of uniform convergence of a series we can interchange the order of summation and a limit process. That is,
lim_{z→z₀} Σ_{n=0}^∞ a_n(z) = Σ_{n=0}^∞ lim_{z→z₀} a_n(z).
We can do this because the rate of convergence does not depend on z. Since differentiation is a limit process,
d/dz f(z) = lim_{h→0} ( f(z+h) − f(z) )/h,
we would expect that we could differentiate a uniformly convergent series.
Since we showed that a uniformly convergent power series is equal to an analytic function, we can differentiate a power series in its domain of uniform convergence.
Example 12.4.1 Differentiating a Series. Consider the series from Example 12.3.2,
log(1 − z) = −Σ_{n=1}^∞ zⁿ/n.
We differentiate this to obtain the geometric series:
−1/(1 − z) = −Σ_{n=1}^∞ z^{n−1}
1/(1 − z) = Σ_{n=0}^∞ zⁿ
The geometric series is convergent for |z| < 1 and uniformly convergent for |z| ≤ r < 1. Note that the domain of convergence is different than the series for log(1 − z). The geometric series does not converge for |z| = 1, z ≠ 1. However, the domain of uniform convergence has remained the same.
Proof of Taylor's Theorem. Let's see why Result 12.5.1 is true. Consider a function f(z) that is analytic in |z| < R. (Considering z₀ ≠ 0 is only trivially more general as we can introduce the change of variables ζ = z − z₀.) According to Cauchy's Integral Formula, (Result ??),
f(z) = (1/(ı2π)) ∮_C f(ζ)/(ζ − z) dζ, (12.3)
where C is a positive, simple, closed contour in 0 < |ζ − z| < R that goes once around z. We take this contour to be the circle about the origin of radius r where |z| < r < R. (See Figure 12.4.)
We expand 1/(ζ − z) in a geometric series,
1/(ζ − z) = (1/ζ)/(1 − z/ζ)
= (1/ζ) Σ_{n=0}^∞ (z/ζ)ⁿ, for |z| < |ζ|
= Σ_{n=0}^∞ zⁿ/ζ^{n+1}, for |z| < |ζ|.
We substitute this series into Equation 12.3.
f(z) = (1/(ı2π)) ∮_C Σ_{n=0}^∞ f(ζ) zⁿ/ζ^{n+1} dζ
[Figure 12.4: The contour C, the circle |ζ| = r with |z| < r < R.]
Since the series converges uniformly on the contour, we can interchange the order of integration and summation:
f(z) = Σ_{n=0}^∞ ( (1/(ı2π)) ∮_C f(ζ)/ζ^{n+1} dζ ) zⁿ.
Now we have derived Equation 12.2. To obtain Equation 12.1, we apply Cauchy's Integral Formula:
f(z) = Σ_{n=0}^∞ ( f^{(n)}(0)/n! ) zⁿ.
Example 12.5.1 Consider the Taylor series expansion of 1/(1 − z) about z = 0. Previously, we showed that this function is the sum of the geometric series Σ_{n=0}^∞ zⁿ and we used the ratio test to show that the series converged absolutely for |z| < 1. Now we find the series using Taylor's theorem. Since the nearest singularity of the function is at z = 1, the radius of convergence of the series is 1. The coefficients in the series are
a_n = (1/n!) [ dⁿ/dzⁿ ( 1/(1−z) ) ]_{z=0}
= (1/n!) [ n!/(1−z)^{n+1} ]_{z=0}
= 1
Thus we have
1/(1 − z) = Σ_{n=0}^∞ zⁿ, for |z| < 1.
12.5.1 Newton's Binomial Formula.
lim_{x→∞} (1 + 1/x)^x = e
For |z| < 1,
1/(1 + z) = 1 + (−1 choose 1) z + (−1 choose 2) z² + (−1 choose 3) z³ + ⋯
= 1 − z + z² − z³ + ⋯
Example 12.5.4 Find the first few terms in the Taylor series expansion of
1/√(z² + 5z + 6)
about the origin. We factor the denominator and then apply Newton's binomial formula.
1/√(z² + 5z + 6) = 1/( √(z+3) √(z+2) )
= 1/( √3 √(1 + z/3) √2 √(1 + z/2) )
= (1/√6) ( 1 + (−1/2 choose 1)(z/3) + (−1/2 choose 2)(z/3)² + ⋯ )( 1 + (−1/2 choose 1)(z/2) + (−1/2 choose 2)(z/2)² + ⋯ )
= (1/√6) ( 1 − z/6 + z²/24 + ⋯ )( 1 − z/4 + 3z²/32 + ⋯ )
= (1/√6) ( 1 − (5/12) z + (17/96) z² + ⋯ )
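These coefficients can be checked numerically by extracting Taylor coefficients with an FFT on a small circle (an illustrative sketch, not from the text):

    import numpy as np

    t = np.linspace(0, 2 * np.pi, 512, endpoint=False)
    z = 0.1 * np.exp(1j * t)                 # small circle about the origin
    f = 1.0 / np.sqrt(z**2 + 5 * z + 6)
    c = np.fft.fft(f) / len(t)               # c[n] ~ a_n * 0.1^n
    a = c[:3] / 0.1 ** np.arange(3)
    print(a.real)   # ~ [1/sqrt(6), -5/(12 sqrt(6)), 17/(96 sqrt(6))]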
Result 12.6.1 Let f(z) be single-valued and analytic in the annulus R₁ < |z − z₀| < R₂. For points in the annulus, the function has the convergent Laurent series
f(z) = Σ_{n=−∞}^∞ a_n (z − z₀)ⁿ,
where
a_n = (1/(ı2π)) ∮_C f(z)/(z − z₀)^{n+1} dz
and C is a positively oriented, closed contour around z₀ lying in the annulus.
To derive this result, consider a function f(ζ) that is analytic in the annulus R₁ < |ζ| < R₂. Consider any point z in the annulus. Let C₁ be a circle of radius r₁ with R₁ < r₁ < |z|. Let C₂ be a circle of radius r₂ with |z| < r₂ < R₂. Let C_z be a circle around z, lying entirely between C₁ and C₂. (See Figure 12.5 for an illustration.)
Consider the integral of f(ζ)/(ζ − z) around the C₂ contour. Since the only singularities of f(ζ)/(ζ − z) occur at ζ = z and at points outside the annulus,
∮_{C₂} f(ζ)/(ζ − z) dζ = ∮_{C_z} f(ζ)/(ζ − z) dζ + ∮_{C₁} f(ζ)/(ζ − z) dζ.
By Cauchy's Integral Formula, the integral around C_z is
∮_{C_z} f(ζ)/(ζ − z) dζ = ı2π f(z).
This gives us an expression for f(z):
f(z) = (1/(ı2π)) ∮_{C₂} f(ζ)/(ζ − z) dζ − (1/(ı2π)) ∮_{C₁} f(ζ)/(ζ − z) dζ.
On C₂ we have |z| < |ζ|, and on C₁ we have |ζ| < |z|. We expand 1/(ζ − z) accordingly:
1/(ζ − z) = (1/ζ)/(1 − z/ζ)
= (1/ζ) Σ_{n=0}^∞ (z/ζ)ⁿ, for |z| < |ζ|
= Σ_{n=0}^∞ zⁿ/ζ^{n+1}, for |z| < |ζ|;
1/(ζ − z) = −(1/z)/(1 − ζ/z)
= −(1/z) Σ_{n=0}^∞ (ζ/z)ⁿ, for |ζ| < |z|
= −Σ_{n=0}^∞ ζⁿ/z^{n+1}, for |ζ| < |z|
= −Σ_{n=−∞}^{−1} zⁿ/ζ^{n+1}, for |ζ| < |z|.
Since the sums converge uniformly, we can interchange the order of integration and summation.
f(z) = (1/(ı2π)) Σ_{n=0}^∞ ∮_{C₂} f(ζ) zⁿ/ζ^{n+1} dζ + (1/(ı2π)) Σ_{n=−∞}^{−1} ∮_{C₁} f(ζ) zⁿ/ζ^{n+1} dζ
Since the only singularities of the integrands lie outside of the annulus, the C₁ and C₂ contours can be deformed to any positive, closed contour C that lies in the annulus and encloses the origin. (See Figure 12.5.) Finally, we combine the two integrals to obtain the desired result.
f(z) = Σ_{n=−∞}^∞ ( (1/(ı2π)) ∮_C f(ζ)/ζ^{n+1} dζ ) zⁿ
[Figure 12.5: Left: the contours C₁, C_z and C₂ in the annulus R₁ < |z| < R₂. Right: the deformed contour C.]
For |z| > 1, we expand about infinity instead:
1/(1 + z) = (1/z)/(1 + 1/z)
= (1/z)( 1 − z⁻¹ + z⁻² − ⋯ )
= z⁻¹ − z⁻² + z⁻³ − ⋯
12.7 Exercises
12.7.1 Series of Constants
Exercise 12.1
Show that if Σ a_n converges then lim_{n→∞} a_n = 0. That is, lim_{n→∞} a_n = 0 is a necessary condition for the convergence of the series.
Exercise 12.2
Answer the following questions true or false. Justify your answers.
1. There exists a sequence which converges to both 1 and −1.
2. There exists a sequence {a_n} such that a_n > 1 for all n and lim_{n→∞} a_n = 1.
3. There exists a divergent geometric series whose terms converge.
4. There exists a sequence whose even terms are greater than 1, whose odd terms are less than 1 and that converges to 1.
5. There exists a divergent series of non-negative terms, Σ_{n=0}^∞ a_n, such that a_n < (1/2)ⁿ.
6. There exists a convergent sequence, {a_n}, such that lim_{n→∞} (a_{n+1} − a_n) ≠ 0.
7. There exists a divergent sequence, {a_n}, such that lim_{n→∞} |a_n| = 2.
8. There exist divergent series, Σ a_n and Σ b_n, such that Σ (a_n + b_n) is convergent.
9. There exist 2 different series of nonzero terms that have the same sum.
10. There exists a series of nonzero terms that converges to zero.
11. There exists a series with an infinite number of non-real terms which converges to a real number.
12. There exists a convergent series Σ a_n with lim_{n→∞} |a_{n+1}/a_n| = 1.
13. There exists a divergent series Σ a_n with lim_{n→∞} |a_{n+1}/a_n| = 1.
14. There exists a convergent series Σ a_n with lim_{n→∞} ⁿ√|a_n| = 1.
15. There exists a divergent series Σ a_n with lim_{n→∞} ⁿ√|a_n| = 1.
16. There exists a convergent series of non-negative terms, Σ a_n, for which Σ a_n² diverges.
17. There exists a convergent series of non-negative terms, Σ a_n, for which Σ √a_n diverges.
18. There exists a convergent series, Σ a_n, for which Σ |a_n| diverges.
19. There exists a power series Σ a_n (z − z₀)ⁿ which converges for z = 0 and z = 3 but diverges for z = 2.
20. There exists a power series Σ a_n (z − z₀)ⁿ which converges for z = 0 and z = ı2 but diverges for z = 2.
Exercise 12.3
Determine if the following series converge.
1. Σ_{n=2}^∞ 1/(n ln n)
2. Σ_{n=2}^∞ 1/ln(nⁿ)
3. Σ_{n=2}^∞ ln( ⁿ√(ln n) )
4. Σ_{n=10}^∞ 1/( n (ln n) (ln(ln n)) )
5. Σ_{n=1}^∞ ln(2ⁿ)/( ln(3ⁿ) + 1 )
6. Σ_{n=0}^∞ 1/ln(n + 20)
7. Σ_{n=0}^∞ (4ⁿ + 1)/(3ⁿ − 2)
8. Σ_{n=0}^∞ (Log 2)ⁿ
9. Σ_{n=2}^∞ (n² − 1)/(n⁴ − 1)
10. Σ_{n=2}^∞ n²/(ln n)ⁿ
11. Σ_{n=2}^∞ (−1)ⁿ ln(1/n)
12. Σ_{n=2}^∞ (n!)²/(2n)!
13. Σ_{n=2}^∞ (3ⁿ + 4ⁿ + 5)/(5ⁿ − 4ⁿ − 3)
14. Σ_{n=2}^∞ n!/(ln n)ⁿ
15. Σ_{n=2}^∞ eⁿ/ln(n!)
16. Σ_{n=1}^∞ (n!)²/(n²)!
17. Σ_{n=1}^∞ (n⁸ + 4n⁴ + 8)/(3n⁹ − n⁵ + 9n)
18. Σ_{n=1}^∞ ( 1/n − 1/(n+1) )
19. Σ_{n=1}^∞ cos(nπ)/n
20. Σ_{n=2}^∞ ln n / n^{11/10}
Exercise 12.4
Show that the alternating harmonic series,
Σ_{n=1}^∞ (−1)^{n+1}/n = 1 − 1/2 + 1/3 − 1/4 + ⋯,
is convergent.
Exercise 12.5
Show that the harmonic series, Σ_{n=1}^∞ 1/n, is divergent, using the Cauchy convergence criterion.
Exercise 12.6
The alternating harmonic series has the sum:
Σ_{n=1}^∞ (−1)^{n+1}/n = ln(2).
Show that the terms in this series can be rearranged to sum to π.
Exercise 12.7
Is the series
Σ_{n=1}^∞ n!/nⁿ
convergent?
Exercise 12.8
Show that the harmonic series,
Σ_{n=1}^∞ 1/n^α = 1 + 1/2^α + 1/3^α + ⋯,
converges absolutely for ℜ(α) > 1 and diverges absolutely for ℜ(α) ≤ 1.
Exercise 12.9
Evaluate Σ_{n=1}^{N−1} sin(nx).
Exercise 12.10
Evaluate
Σ_{k=1}^n k z^k and Σ_{k=1}^n k² z^k
for z ≠ 1.
Exercise 12.11
Which of the following series converge? Find the sum of those that do.
1. 1/2 + 1/6 + 1/12 + 1/20 + ⋯
2. 1 + (−1) + 1 + (−1) + ⋯
3. Σ_{n=1}^∞ 1/( 2^{n−1} 3ⁿ 5^{n+1} )
Exercise 12.12
Evaluate the following sum.
Σ_{k₁=0}^∞ Σ_{k₂=k₁}^∞ ⋯ Σ_{k_n=k_{n−1}}^∞ 1/2^{k_n}
Exercise 12.14
Find the circle of convergence of the following series.
1. z + (α − β) z²/2! + (α − β)(α − 2β) z³/3! + (α − β)(α − 2β)(α − 3β) z⁴/4! + ⋯
2. Σ_{n=1}^∞ (n/2ⁿ) (z − ı)ⁿ
3. Σ_{n=1}^∞ nⁿ zⁿ
4. Σ_{n=1}^∞ (n!/nⁿ) zⁿ
5. Σ_{n=1}^∞ (3 + (−1)ⁿ)ⁿ zⁿ
6. Σ_{n=1}^∞ (n + αⁿ) zⁿ (|α| > 1)
Exercise 12.15
Find the circle of convergence of the following series:
1. Σ_{k=0}^∞ k z^k
2. Σ_{k=1}^∞ k^k z^k
3. Σ_{k=1}^∞ (k!/k^k) z^k
4. Σ_{k=0}^∞ (z + 5)^{2k} (k + 1)²
5. Σ_{k=0}^∞ (k + 2^k) z^k
Exercise 12.16
Using the geometric series, show that
1/(1 − z)² = Σ_{n=0}^∞ (n+1) zⁿ, for |z| < 1,
and
log(1 − z) = −Σ_{n=1}^∞ zⁿ/n, for |z| < 1.
Exercise 12.17
Find the Taylor series expansion about z = 0 of 1/(1 + z²). Determine the radius of convergence of the series from the singularities of the function. Determine the radius of convergence with the ratio test.
Exercise 12.18
Use two methods to find the Taylor series expansion of log(1 + z) about z = 0 and determine the circle of convergence. First directly apply Taylor's theorem, then differentiate a geometric series.
Exercise 12.19
Let f(z) = (1 + z)^α be the branch for which f(0) = 1. Find its Taylor series expansion about z = 0. What is the radius of convergence of the series? (α is an arbitrary complex number.)
Exercise 12.20
Find the Taylor series expansions about the point z = 1 for the following functions. What are the radii of convergence?
1. 1/z
2. Log z
3. 1/z²
4. z Log z − z
Exercise 12.21
Find the Taylor series expansion about the point z = 0 for e^z. What is the radius of convergence? Use this to find the Taylor series expansions of cos z and sin z about z = 0.
Exercise 12.22
Find the Taylor series expansion about the point z = π for the cosine and sine.
Exercise 12.23
Sum the following series.
1. Σ_{n=0}^∞ (ln 2)ⁿ/n!
2. Σ_{n=0}^∞ (n+1)(n+2)/2ⁿ
3. Σ_{n=0}^∞ (−1)ⁿ/n!
4. Σ_{n=0}^∞ (−1)ⁿ π^{2n+1}/(2n+1)!
5. Σ_{n=0}^∞ (−1)ⁿ π^{2n}/(2n)!
6. Σ_{n=0}^∞ (−π)ⁿ/(2n)!
Exercise 12.24
1. Find the first three terms in the following Taylor series and state the convergence properties for the following.
(a) e^z around z₀ = 0
(b) (1+z)/(1−z) around z₀ = ı
(c) e^z/(z−1) around z₀ = 0
It may be convenient to use the Cauchy product of two Taylor series.
2. Consider a function f(z) analytic for |z − z₀| < R. Show that the series obtained by differentiating the Taylor series for f(z) termwise is actually the Taylor series for f′(z) and hence argue that this series converges uniformly to f′(z) for |z − z₀| ≤ ρ < R.
3. Find the Taylor series for
1/(1 − z)³
by appropriate differentiation of the geometric series and state the radius of convergence.
4. Consider the branch of f(z) = (z + 1)^ı corresponding to f(0) = 1. Find the Taylor series expansion about z₀ = 0 and state the radius of convergence.
Exercise 12.26
Obtain the Laurent expansion of
f(z) = 1/( (z+1)(z+2) )
centered on z = 0 for the three regions:
1. |z| < 1
2. 1 < |z| < 2
3. 2 < |z|
Exercise 12.27
By comparing the Laurent expansion of (z + 1/z)^m, m ∈ Z⁺, with the binomial expansion of this quantity, show that
∫_0^{2π} (cos θ)^m cos(nθ) dθ = (π/2^{m−1}) (m choose (m−n)/2) for −m ≤ n ≤ m and m − n even; 0 otherwise.
Exercise 12.28
The function f(z) is analytic in the entire z-plane, including ∞, except at the point z = ı/2, where it has a simple pole, and at z = 2, where it has a pole of order 2. In addition
∮_{|z|=1} f(z) dz = ı2π, ∮_{|z|=3} f(z) dz = 0, ∮_{|z|=3} (z − 1) f(z) dz = 0.
Find f(z) subject to these constraints.
Exercise 12.29
Let f(z) = Σ_{k=1}^∞ k³ (z/3)^k. Compute each of the following, giving justification in each case. The contours are circles of radius one about the origin.
1. ∫_{|z|=1} e^{ız} f(z) dz
2. ∫_{|z|=1} f(z)/z⁴ dz
3. ∫_{|z|=1} f(z) e^{ız}/z² dz
Exercise 12.30
1. Expand f(z) = 1/( z(1 − z) ) in Laurent series that converge in the following domains:
12.8 Hints
Hint 12.1
Use the Cauchy convergence criterion for series. In particular, consider |S_{N+1} − S_N|.
Hint 12.2
CONTINUE
Hint 12.3
1. Σ_{n=2}^∞ 1/(n ln n) — Use the integral test.
2. Σ_{n=2}^∞ 1/ln(nⁿ) — Simplify the summand.
3. Σ_{n=2}^∞ ln( ⁿ√(ln n) ) — Simplify the summand. Use the comparison test.
4. Σ_{n=10}^∞ 1/( n (ln n) (ln(ln n)) ) — Use the integral test.
5. Σ_{n=1}^∞ ln(2ⁿ)/( ln(3ⁿ) + 1 ) — Show that the terms in the sum do not vanish as n → ∞.
6. Σ_{n=0}^∞ 1/ln(n + 20) — Shift the indices.
7. Σ_{n=0}^∞ (4ⁿ + 1)/(3ⁿ − 2) — Show that the terms in the sum do not vanish as n → ∞.
8. Σ_{n=0}^∞ (Log 2)ⁿ — This is a geometric series.
9. Σ_{n=2}^∞ (n² − 1)/(n⁴ − 1) — Simplify the integrand. Use the comparison test.
10. Σ_{n=2}^∞ n²/(ln n)ⁿ
11. Σ_{n=2}^∞ (−1)ⁿ ln(1/n)
12. Σ_{n=2}^∞ (n!)²/(2n)!
13. Σ_{n=2}^∞ (3ⁿ + 4ⁿ + 5)/(5ⁿ − 4ⁿ − 3)
14. Σ_{n=2}^∞ n!/(ln n)ⁿ
15. Σ_{n=2}^∞ eⁿ/ln(n!)
16. Σ_{n=1}^∞ (n!)²/(n²)!
17. Σ_{n=1}^∞ (n⁸ + 4n⁴ + 8)/(3n⁹ − n⁵ + 9n)
18. Σ_{n=1}^∞ ( 1/n − 1/(n+1) )
19. Σ_{n=1}^∞ cos(nπ)/n
20. Σ_{n=2}^∞ ln n / n^{11/10} — Use the integral test.
Hint 12.4
Group the terms.
1 − 1/2 = 1/2
1/3 − 1/4 = 1/12
1/5 − 1/6 = 1/30
⋯
Hint 12.5
Show that
|S_{2n} − S_n| > 1/2.
Hint 12.6
The alternating harmonic series is conditionally convergent. Let {a_n} and {b_n} be the positive and negative terms in the sum, respectively, ordered in decreasing magnitude. Note that both Σ_{n=1}^∞ a_n and Σ_{n=1}^∞ b_n are divergent. Devise a method for alternately taking terms from {a_n} and {b_n}.
Hint 12.7
Use the ratio test.
Hint 12.8
Use the integral test.
Hint 12.9
Note that sin(nx) = ℑ(e^{ınx}). This substitution will yield a finite geometric series.
Hint 12.10
Let S_n be the sum. Consider S_n − zS_n. Use the finite geometric sum.
Hint 12.11
1. The summand is a rational function. Find the first few partial sums.
2.
3. This a geometric series.
Hint 12.12
CONTINUE
Hint 12.13
CONTINUE
1. Σ_{n=0}^∞ zⁿ/(z + 3)ⁿ
2. Σ_{n=2}^∞ Log z/ln n
3. Σ_{n=1}^∞ z/n
4. Σ_{n=1}^∞ (z + 2)²/n²
5. Σ_{n=1}^∞ (z − e)ⁿ/nⁿ
6. Σ_{n=1}^∞ z^{2n}/(2nz)
7. Σ_{n=0}^∞ z^{n!}/(n!)²
8. Σ_{n=0}^∞ z^{ln(n!)}/n!
9. Σ_{n=0}^∞ (z − π)^{2n+1} nπ/n!
10. Σ_{n=0}^∞ ln n/zⁿ
Hint 12.14
Hint 12.15
CONTINUE
Hint 12.16
Differentiate the geometric series. Integrate the geometric series.
Hint 12.17
The Taylor series is a geometric series.
Hint 12.18
Hint 12.19
Hint 12.20
1.
1/z = 1/( 1 + (z − 1) )
The right side is the sum of a geometric series.
Hint 12.21
Evaluate the derivatives of e^z at z = 0. Use Taylor's Theorem, (Result 12.5.1). Write the cosine and sine in terms of the exponential function.
Hint 12.22
cos z = −cos(z − π)
sin z = −sin(z − π)
Hint 12.23
CONTINUE
Hint 12.24
CONTINUE
Hint 12.25
Hint 12.26
Hint 12.27
Hint 12.28
Hint 12.29
Hint 12.30
CONTINUE
12.9 Solutions
Solution 12.1
Σ_{n=0}^∞ a_n converges only if the partial sums, S_n, are a Cauchy sequence:
∀ ε > 0 ∃ N s.t. m, n > N ⟹ |S_m − S_n| < ε.
In particular, we can consider m = n + 1:
∀ ε > 0 ∃ N s.t. n > N ⟹ |S_{n+1} − S_n| < ε.
Now we note that S_{n+1} − S_n = a_n.
∀ ε > 0 ∃ N s.t. n > N ⟹ |a_n| < ε
This is exactly the statement that lim_{n→∞} a_n = 0.
Solution 12.2
CONTINUE
Solution 12.3
1. Σ_{n=2}^∞ 1/(n ln n): Since this is a series of positive, monotone decreasing terms, the sum converges or diverges with the integral,
∫_2^∞ 1/(x ln x) dx = ∫_{ln 2}^∞ 1/ξ dξ.
Since the integral diverges, the series also diverges.
2.
Σ_{n=2}^∞ 1/ln(nⁿ) = Σ_{n=2}^∞ 1/(n ln n)
By part 1, the sum diverges.
3.
Σ_{n=2}^∞ ln( ⁿ√(ln n) ) = Σ_{n=2}^∞ (1/n) ln(ln n) ≥ Σ_{n=16}^∞ 1/n
(the inequality holds once ln(ln n) ≥ 1). The sum is divergent by the comparison test.
4. Σ_{n=10}^∞ 1/( n (ln n) (ln(ln n)) ): Since this is a series of positive, monotone decreasing terms, the sum converges or diverges with the integral,
∫_{10}^∞ 1/( x ln x ln(ln x) ) dx = ∫_{ln 10}^∞ 1/( y ln y ) dy = ∫_{ln(ln 10)}^∞ 1/z dz.
Since the integral diverges, the series also diverges.
5.
Σ_{n=1}^∞ ln(2ⁿ)/( ln(3ⁿ) + 1 ) = Σ_{n=1}^∞ n ln 2/( n ln 3 + 1 ) = Σ_{n=1}^∞ ln 2/( ln 3 + 1/n )
Since the terms in the sum do not vanish as n → ∞, the series is divergent.
6.
Σ_{n=0}^∞ 1/ln(n + 20) = Σ_{n=20}^∞ 1/ln n ≥ Σ_{n=20}^∞ 1/n
The series diverges by comparison with the harmonic series.
7. Σ_{n=0}^∞ (4ⁿ + 1)/(3ⁿ − 2): Since the terms in the sum do not vanish as n → ∞, the series is divergent.
8. Σ_{n=0}^∞ (Log 2)ⁿ: This is a geometric series. Since |Log 2| < 1, the series converges.
11. Σ_{n=2}^∞ (−1)ⁿ ln(1/n): The series diverges because the terms do not vanish as n → ∞.
12.
Σ_{n=2}^∞ (n!)²/(2n)! = Σ_{n=2}^∞ (1)(2)⋯(n)/( (n+1)(n+2)⋯(2n) ) < Σ_{n=2}^∞ (1/2)ⁿ
The series converges by comparison with a geometric series.
13. Σ_{n=2}^∞ (3ⁿ + 4ⁿ + 5)/(5ⁿ − 4ⁿ − 3): We use the root test to check for convergence.
lim_{n→∞} |a_n|^{1/n} = lim_{n→∞} ( (3ⁿ + 4ⁿ + 5)/(5ⁿ − 4ⁿ − 3) )^{1/n}
= lim_{n→∞} (4/5) ( ( (3/4)ⁿ + 1 + 5/4ⁿ )/( 1 − (4/5)ⁿ − 3/5ⁿ ) )^{1/n}
= 4/5
< 1
The series converges absolutely.
14. We will use the comparison test.
Σ_{n=2}^∞ n!/(ln n)ⁿ ≥ Σ_{n=2}^∞ (n/2)^{n/2}/(ln n)ⁿ = Σ_{n=2}^∞ ( √(n/2)/ln n )ⁿ
Since the terms in the series on the right side do not vanish as n → ∞, the series is divergent.
15. We will use the comparison test.
Σ_{n=2}^∞ eⁿ/ln(n!) ≥ Σ_{n=2}^∞ eⁿ/ln(nⁿ) = Σ_{n=2}^∞ eⁿ/(n ln n)
Since the terms in the series on the right side do not vanish as n → ∞, the series is divergent.
16. Σ_{n=1}^∞ (n!)²/(n²)!: We apply the ratio test.
lim_{n→∞} |a_{n+1}/a_n| = lim_{n→∞} ((n+1)!)² (n²)!/( ((n+1)²)! (n!)² )
= lim_{n→∞} (n+1)²/( (n²+1)(n²+2)⋯(n²+2n+1) )
= 0
The series is convergent.
17.
Σ_{n=1}^∞ (n⁸ + 4n⁴ + 8)/(3n⁹ − n⁵ + 9n) = Σ_{n=1}^∞ (1/n) (1 + 4n⁻⁴ + 8n⁻⁸)/(3 − n⁻⁴ + 9n⁻⁸)
> (1/4) Σ_{n=1}^∞ 1/n
The series diverges by comparison with the harmonic series.
Solution 12.4
Σ_{n=1}^∞ (−1)^{n+1}/n = Σ_{n=1}^∞ ( 1/(2n−1) − 1/(2n) )
= Σ_{n=1}^∞ 1/( (2n−1)(2n) )
< Σ_{n=1}^∞ 1/(2n²)
= (1/2) Σ_{n=1}^∞ 1/n²
= π²/12
The grouped series has positive terms and bounded partial sums, so it converges; hence the alternating harmonic series is convergent.
Solution 12.5
Since
|S_{2n} − S_n| = Σ_{j=n}^{2n−1} 1/j
≥ Σ_{j=n}^{2n−1} 1/(2n−1)
= n/(2n−1)
> 1/2,
the partial sums are not a Cauchy sequence. The harmonic series is divergent.
Solution 12.6
The alternating harmonic series is conditionally convergent. That is, the sum is convergent but not absolutely convergent. Let {a_n} and {b_n} be the positive and negative terms in the sum, respectively, ordered in decreasing magnitude. Note that both Σ_{n=1}^∞ a_n and Σ_{n=1}^∞ b_n are divergent. Otherwise the alternating harmonic series would be absolutely convergent.
To sum the terms in the series to π we repeat the following two steps indefinitely:
1. Take successive terms from {a_n} until the running sum exceeds π.
2. Take successive terms from {b_n} until the running sum is less than π.
Each of these steps can always be accomplished because the sums, Σ_{n=1}^∞ a_n and Σ_{n=1}^∞ b_n, are both divergent. Hence the tails of the series are divergent. No matter how many terms we take, the remaining terms in each series are divergent. In each step a finite, nonzero number of terms from the respective series is taken. Thus all the terms will be used. Since the terms in each series vanish as n → ∞, the running sum converges to π.
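The rearrangement procedure is simple to simulate (an illustrative Python sketch, not from the text; convergence is slow, as expected for a conditionally convergent series):

    import numpy as np

    # Rearrange the alternating harmonic series to sum to pi: take positive
    # terms 1, 1/3, 1/5, ... while below pi, negative terms -1/2, -1/4, ...
    # while above pi.
    target = np.pi
    pos, neg = 1, 2           # next odd and even denominators
    s = 0.0
    for _ in range(100000):
        if s <= target:
            s += 1.0 / pos
            pos += 2
        else:
            s -= 1.0 / neg
            neg += 2
    print(s)                  # -> 3.14159... (slowly)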
Solution 12.7
Applying the ratio test,
lim_{n→∞} |a_{n+1}/a_n| = lim_{n→∞} (n+1)! nⁿ/( n! (n+1)^{n+1} )
= lim_{n→∞} nⁿ/(n+1)ⁿ
= lim_{n→∞} ( n/(n+1) )ⁿ
= 1/e
< 1,
we see that the series is absolutely convergent.
Solution 12.8
The harmonic series,
Σ_{n=1}^∞ 1/n^α = 1 + 1/2^α + 1/3^α + ⋯,
converges or diverges absolutely with the integral,
∫_1^∞ 1/|x^α| dx = ∫_1^∞ 1/x^{ℜ(α)} dx = [ln x]_1^∞ for ℜ(α) = 1; [ x^{1−ℜ(α)}/(1−ℜ(α)) ]_1^∞ for ℜ(α) ≠ 1.
The integral converges only for ℜ(α) > 1. Thus the harmonic series converges absolutely for ℜ(α) > 1 and diverges absolutely for ℜ(α) ≤ 1.
Solution 12.9
Σ_{n=1}^{N−1} sin(nx) = Σ_{n=0}^{N−1} sin(nx)
= Σ_{n=0}^{N−1} ℑ(e^{ınx})
= ℑ( Σ_{n=0}^{N−1} (e^{ıx})ⁿ )
= ℑ(N) for x = 2πk; ℑ( (1 − e^{ıNx})/(1 − e^{ıx}) ) for x ≠ 2πk
= 0 for x = 2πk; ℑ( (e^{−ıx/2} − e^{ı(N−1/2)x})/(e^{−ıx/2} − e^{ıx/2}) ) for x ≠ 2πk
= 0 for x = 2πk; ℑ( (e^{−ıx/2} − e^{ı(N−1/2)x})/(−ı2 sin(x/2)) ) for x ≠ 2πk
= 0 for x = 2πk; ℜ( (e^{−ıx/2} − e^{ı(N−1/2)x})/(2 sin(x/2)) ) for x ≠ 2πk
Σ_{n=1}^{N−1} sin(nx) = 0 for x = 2πk; ( cos(x/2) − cos((N−1/2)x) )/( 2 sin(x/2) ) for x ≠ 2πk
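A quick numerical check of this closed form (a sketch, not from the text; x and N are arbitrary):

    import numpy as np

    x, N = 0.9, 25
    lhs = np.sum(np.sin(np.arange(1, N) * x))
    rhs = (np.cos(x / 2) - np.cos((N - 0.5) * x)) / (2 * np.sin(x / 2))
    print(lhs, rhs)   # equal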
Solution 12.10
Let
S_n = Σ_{k=1}^n k z^k.
S_n − zS_n = Σ_{k=1}^n k z^k − Σ_{k=1}^n k z^{k+1}
= Σ_{k=1}^n k z^k − Σ_{k=2}^{n+1} (k−1) z^k
= Σ_{k=1}^n z^k − n z^{n+1}
= (z − z^{n+1})/(1 − z) − n z^{n+1}
Σ_{k=1}^n k z^k = z( 1 − (n+1) zⁿ + n z^{n+1} )/(1 − z)²
Let
S_n = Σ_{k=1}^n k² z^k.
S_n − zS_n = Σ_{k=1}^n ( k² − (k−1)² ) z^k − n² z^{n+1}
= 2 Σ_{k=1}^n k z^k − Σ_{k=1}^n z^k − n² z^{n+1}
= 2 z( 1 − (n+1) zⁿ + n z^{n+1} )/(1 − z)² − (z − z^{n+1})/(1 − z) − n² z^{n+1}
Σ_{k=1}^n k² z^k = z( 1 + z − zⁿ( 1 + z + n(n(z−1) − 2)(z−1) ) )/(1 − z)³
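Both closed forms are easy to verify numerically (an illustrative sketch, not from the text; z and n are arbitrary):

    import numpy as np

    z, n = 0.6 + 0.2j, 15
    k = np.arange(1, n + 1)
    s1 = np.sum(k * z**k)
    s2 = np.sum(k**2 * z**k)
    f1 = z * (1 - (n + 1) * z**n + n * z**(n + 1)) / (1 - z)**2
    f2 = z * (1 + z - z**n * (1 + z + n * (n * (z - 1) - 2) * (z - 1))) / (1 - z)**3
    print(abs(s1 - f1), abs(s2 - f2))   # both ~ 0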
Solution 12.11
1.
Σ_{n=1}^∞ a_n = 1/2 + 1/6 + 1/12 + 1/20 + ⋯
We conjecture that the terms in the sum are rational functions of the summation index. That is, a_n = 1/p(n) where p(n) is a polynomial. We use divided differences to determine the order of the polynomial.
2 6 12 20
 4 6 8
  2 2
We see that the polynomial is second order: p(n) = an² + bn + c. We solve for the coefficients.
a + b + c = 2
4a + 2b + c = 6
9a + 3b + c = 12
p(n) = n² + n
We examine the first few partial sums.
S₁ = 1/2
S₂ = 2/3
S₃ = 3/4
S₄ = 4/5
We conjecture that S_n = n/(n+1). We prove this with induction. The base case is n = 1: S₁ = 1/(1+1) = 1/2. Now we assume the induction hypothesis and calculate S_{n+1}.
S_{n+1} = S_n + a_{n+1}
= n/(n+1) + 1/( (n+1)² + (n+1) )
= (n+1)/(n+2)
This proves the induction hypothesis. We calculate the limit of the partial sums to evaluate the series.
Σ_{n=1}^∞ 1/(n² + n) = lim_{n→∞} n/(n+1) = 1
2.
Σ_{n=0}^∞ (−1)ⁿ = 1 + (−1) + 1 + (−1) + ⋯
Since the terms in the series do not vanish as n → ∞, the series is divergent.
3.
Σ_{n=1}^∞ 1/( 2^{n−1} 3ⁿ 5^{n+1} ) = (1/75) Σ_{n=1}^∞ (1/30)^{n−1} = (1/75) · 1/(1 − 1/30) = 2/145
CONTINUE
Solution 12.12
The innermost sum is a geometric series.
Σ_{k_n = k_{n−1}}^∞ 1/2^{k_n} = (1/2^{k_{n−1}}) · 1/(1 − 1/2) = 2^{1 − k_{n−1}}
This gives a relation between n nested sums and n − 1 nested sums. We evaluate the n nested sums by induction.
Σ_{k₁=0}^∞ Σ_{k₂=k₁}^∞ ⋯ Σ_{k_n=k_{n−1}}^∞ 1/2^{k_n} = 2ⁿ
Solution 12.13
CONTINUE.
1. Σ_{n=0}^∞ zⁿ/(z + 3)ⁿ
2. Σ_{n=2}^∞ Log z/ln n
3. Σ_{n=1}^∞ z/n
4. Σ_{n=1}^∞ (z + 2)²/n²
5. Σ_{n=1}^∞ (z − e)ⁿ/nⁿ
6. Σ_{n=1}^∞ z^{2n}/(2nz)
7. Σ_{n=0}^∞ z^{n!}/(n!)²
8. Σ_{n=0}^∞ z^{ln(n!)}/n!
9. Σ_{n=0}^∞ (z − π)^{2n+1} nπ/n!
10. Σ_{n=0}^∞ ln n/zⁿ
Solution 12.14
1. We assume that 6= 0. We determine the radius of convergence with the ratio test.
an
R = lim
n an+1
( ) ( (n 1))/n!
= lim
n ( ) ( n)/(n + 1)!
n+1
= lim
n n
1
=
||
361
2. By the ratio test formula, the radius of absolute convergence is
n/2n
R = lim
n (n + 1)/2n+1
n
= 2 lim
n n + 1
=2
1
R= p
limn |n/2n |
n
2
=
limn n n
=2
1
R= p
lim sup n |an |
1
= p
lim sup n |nn |
1
=
lim sup n
=0
n!/nn
R = lim n+1
n (n + 1)!/(n + 1)
(n + 1)n
= lim
n nn
n
n+1
= lim
n n
n
n+1
= exp lim ln
n n
n+1
= exp lim n ln
n n
ln(n + 1) ln(n)
= exp lim
n 1/n
1/(n + 1) 1/n
= exp lim
n 1/n2
n
= exp lim
n n + 1
1
=e
362
5. By the Cauchy-Hadamard formula, the radius of absolute convergence is
1
R= p n
lim sup | (3 + (1)n ) |
n
1
=
lim sup (3 + (1)n )
1
=
4
Thus the series converges absolutely for |z| < 1/4.
1
R= p
lim sup |n + n |
n
1
= p
lim sup || |1 + n/n |
n
1
=
||
Solution 12.15
1.
X
kz k
k=0
2.
X
kk z k
k=1
1
R= p
lim sup k |k k |
1
=
lim sup k
=0
3.
X k! k
z
kk
k=1
363
We determine the radius of convergence with the ratio formula.
k!/k k
R = lim (k+1)
k (k + 1)!/(k + 1)
(k + 1)k
= lim
k kk
k+1
= exp lim k ln
k k
ln(k + 1) ln(k)
= exp lim
k 1/k
1/(k + 1) 1/k
= exp lim
k 1/k 2
k
= exp lim
k k + 1
= exp(1)
=e
The series converges absolutely for |z| < e.
4.
X
(z + 5)2k (k + 1)2
k=0
We use the ratio formula to determine the domain of convergence.
(z + 5)2(k+1) (k + 2)2
lim <1
k (z + 5)2k (k + 1)2
(k + 2)2
|z + 5|2 lim <1
k (k + 1)2
2(k + 2)
|z + 5|2 lim <1
k 2(k + 1)
2
|z + 5|2 lim <1
k 2
|z + 5|2 < 1
5.
X
(k + 2k )z k
k=0
We determine the radius of convergence with the Cauchy-Hadamard formula.
1
R= p
lim sup |k + 2k |
k
1
= p
lim sup 2 |1 + k/2k |
k
1
=
2
The series converges for |z| < 1/2.
Solution 12.16
The geometric series is
1 X
= zn.
1 z n=0
364
This series is uniformly convergent in the domain, |z| r < 1. Differentiating this equation yields,
1 X
= nz n1
(1 z)2 n=1
X
= (n + 1)z n for |z| < 1.
n=0
Solution 12.17
1 X
2 n
X
(1)n z 2n
= z =
1 + z2 n=0 n=0
1 1
The function 1+z 2 = (1z)(1+z) has singularities at z = . Thus the radius of convergence is 1.
Now we use the ratio test to corroborate that the radius of convergence is 1.
an+1 (z)
lim <1
n an (z)
(1)n+1 z 2(n+1)
lim <1
n (1)n z 2n
lim z 2 < 1
n
|z| < 1
Solution 12.18
Method 1.
2
z2
d z d
log(1 + z) = [log(1 + z)]z=0 + log(1 + z) + 2
log(1 + z) +
dz z=0 1! dz z=0 2!
z2 z3
1 z 1 2
=0+ + 2
+ 3
+
1 + z z=0 1! (1 + z) z=0 2! (1 + z) z=0 3!
z2 z3 z4
=z + +
2 3 4
X zn
= (1)n+1
n=1
n
We integrate this equation to get the series for log(1 + z) in the domain |z| < 1.
X z n+1n
X zn
log(1 + z) = (1) = (1)n+1
n=0
n + 1 n=1 n
365
We calculate the radius of convergence with the ratio test.
= lim (n + 1) = 1
an
R = lim
n an+1 n n
Solution 12.19
The Taylor series expansion of f (z) about z = 0 is
X f (n) (0) n
f (z) = z .
n=0
n!
If is not a non-negative integer, then all of the terms in the series are non-zero.
Qn1
k=0 ( k) n
X
(1 + z) = z
n=0
n!
The radius of convergence of the series is the distance to the nearest singularity of (1 + z) . This
occurs at z = 1. Thus the series converges for |z| < 1. We can corroborate this with the ratio test.
The radius of convergence is
Q
n1
k=0 ( k) /n!
n+1
R = lim Qn
= n
lim
= 1.
n (
k=0 ( k)) /(n + 1)! n
If we use the binomial coefficient, we can write the series in a compact form.
Qn1
( k)
k=0
n n!
X n
(1 + z) = z
n=0
n
Solution 12.20
1. We find the series for 1/z by writing it in terms of z 1 and using the geometric series.
1 1
=
z 1 + (z 1)
1 X
= (1)n (z 1)n for |z 1| < 1
z n=0
366
Since the nearest singularity is at z = 0, the radius of convergence is 1. The series converges
absolutely for |z 1| < 1. We could also determine the radius of convergence with the Cauchy-
Hadamard formula.
1
R= p
n
|an |
lim sup
1
= p
lim sup n |(1)n |
=1
The series we derived for 1/z is uniformly convergent for |z 1| r < 1. We can integrate
the series in this domain.
Z
zX
Log z = (1)n ( 1)n d
1 n=0
X Z z
n
= (1) ( 1)n d
n=0 1
X (z 1)n+1
= (1)n
n=0
n+1
X (1)n1 (z 1)n
Log z = for |z 1| < 1
n=1
n
3. The series we derived for 1/z is uniformly convergent for |z 1| r < 1. We can differentiate
the series in this domain.
1 d 1
2
=
z dz z
d X
= (1)n (z 1)n
dz n=0
X
= (1)n+1 n(z 1)n1
n=1
1 X
= (1)n (n + 1)(z 1)n for |z 1| < 1
z2 n=0
The series we derived for Log z is uniformly convergent for |z 1| r < 1. We can integrate
367
the series in this domain.
Z z
z Log z z = = 1 + Log d
1
zX
(1)n1 ( 1)n
Z
= 1 + d
1 n=1
n
X (1)n1 (z 1)n+1
= 1 +
n=1
n(n + 1)
X (1)n (z 1)n
z Log z z = 1 + for |z 1| < 1
n=2
n(n 1)
Solution 12.21
We evaluate the derivatives of ez at z = 0. Then we use Taylors Theorem, (Result 12.5.1).
dn z
e = ez
dz n
dn z
z
n
e = e =1
dz
z=0
X zn
ez =
n=0
n!
Since the exponential function has no singularities in the finite complex plane, the radius of conver-
gence is infinite.
We find the Taylor series for the cosine and sine by writing them in terms of the exponential
function.
ez + ez
cos z =
2
!
1 X (z)n X (z)n
= +
2 n=0 n! n=0
n!
X (z)n
=
n=0
n!
even n
X (1)n z 2n
cos z =
n=0
(2n)!
ez ez
sin z =
2
!
1 X (z)n X (z)n
=
2 n=0 n! n=0
n!
X (z)n
=
n=0
n!
odd n
X (1)n z 2n+1
sin z =
n=0
(2n + 1)!
368
Solution 12.22
cos z = cos(z )
X (1)n (z )2n
=
n=0
(2n)!
X (1)n+1 (z )2n
=
n=0
(2n)!
sin z = sin(z )
X (1)n (z )2n+1
=
n=0
(2n + 1)!
X (1)n+1 (z )2n+1
=
n=0
(2n + 1)!
Solution 12.23
CONTINUE
Solution 12.24
1. (a)
f (z) = ez
f (0) = 1
f 0 (0) = 1
f 00 (0) = 1
z2
ez = 1 z + + O z3
2
Since ez is entire, the Taylor series converges in the complex plane.
(b)
1+z
f (z) = , f () =
1z
2
f 0 (z) = , f 0 () =
(1 z)2
4
f 00 (z) = , f 00 () = 1 +
(1 z)3
1+z 1 +
(z )2 + O (z )3
= + (z ) +
1z 2
Since the nearest
singularity, (at z = 1), is a distance of 2 from
z0 = , the radius of
convergence is 2. The series converges absolutely for |z | < 2.
(c)
ez z2
3
1 + z + z2 + O z3
= 1+z+ +O z
z1 2
5 2
= 1 2z z + O z 3
2
Since the nearest singularity, (at z = 1), is a distance of 1 from z0 = 0, the radius of
convergence is 1. The series converges absolutely for |z| < 1.
369
2. Since f (z) is analytic in |z z0 | < R, its Taylor series converges absolutely on this domain.
X f (n) (z0 )z n
f (z) =
n=0
n!
The Taylor series converges uniformly on any closed sub-domain of |z z0 | < R. We consider
the sub-domain |z z0 | < R. On the domain of uniform convergence we can interchange
differentiation and summation.
d X f (n) (z0 )z n
f 0 (z) =
dz n=0 n!
X nf (n) (z0 )z n1
f 0 (z) =
n=1
n!
X f (n+1) (z0 )z n
f 0 (z) =
n=0
n!
Note that this is the Taylor series that we could obtain directly for f 0 (z). Since f (z) is analytic
on |z z0 | < R so is f 0 (z).
X f (n+1) (z0 )z n
f 0 (z) =
n=0
n!
3.
1 d2 1 1
=
(1 z)3 dz 2 2 1 z
1 d2 X n
= z
2 dz 2 n=0
1X
= n(n 1)z n2
2 n=2
1X
= (n + 2)(n + 1)z n
2 n=0
370
The radius of convergence of the series is the distance to the nearest singularity of (1 + z) .
This occurs at z = 1. Thus the series converges for |z| < 1. We can corroborate this with
the ratio test. We compute the radius of convergence.
Q
n1
( k) /n!
k=0
= lim n + 1 = 1
R = lim Qn
n n
k=0 ( k)) /(n + 1)!
n (
X
(1 + z) = zn
n=0
n
Solution 12.25
For |z| < 1:
1
=
z 1 + z
X
= (z)n
n=0
1 1 1
=
z z (1 /z)
1 X n
=
z n=0 z
0
1 X n n
= z
z n=
0
X
= ()n z n1
n=
1
X
= ()n+1 z n
n=
Solution 12.26
We expand the function in partial fractions.
1 1 1
f (z) = =
(z + 1)(z + 2) z+1 z+2
371
The Taylor series about z = 0 for 1/(z + 1) is
1 1
=
1+z 1 (z)
X
= (z)n , for |z| < 1
n=0
X
= (1)n z n , for |z| < 1
n=0
To find the expansions in the three regions, we just choose the appropriate series.
1.
1 1
f (z) =
1+z 2+z
X X (1)n n
= (1)n z n z , for |z| < 1
n=0 n=0
2n+1
X 1
= (1)n 1 n+1 z n , for |z| < 1
n=0
2
372
X 2n+1 1 n
f (z) = (1)n z , for |z| < 1
n=0
2n+1
2.
1 1
f (z) =
1+z 2+z
1
X X (1)n n
f (z) = (1)n+1 z n z , for 1 < |z| < 2
n= n=0
2n+1
3.
1 1
f (z) =
1+z 2+z
1 1
X X (1)n+1 n
= (1)n+1 z n z , for 2 < |z|
n= n=
2n+1
1
X 2n+1 1 n
f (z) = (1)n+1 z , for 2 < |z|
n=
2n+1
Solution 12.27
Laurent Series. We assume that m is a non-negative integer and that n is an integer. The Laurent
series about the point z = 0 of m
1
f (z) = z +
z
is
X
f (z) = an z n
n=
where I
1 f (z)
an = dz
2 C z n+1
and C is a contour going around the origin once in the positive direction. We manipulate the
coefficient integral into the desired form.
(z + 1/z)m
I
1
an = dz
2 C z n+1
Z 2
1 (e + e )m
= e d
2 0 e(n+1)
Z 2
1
= 2m cosm en d
2 0
2m1 2
Z
= cosm (cos(n) sin(n)) d
0
2
2m1
Z
= cosm cos(n) d
0
373
Binomial Series. Now we find the binomial series expansion of f (z).
m m n
1 X m 1
z+ = z mn
z n=0
n z
m
X m m2n
= z
n=0
n
m
X m
= zn
n=m
(m n)/2
mn even
P
The coefficients in the series f (z) = n= an z n are
(
m
(mn)/2 m n m and m n even
an =
0 otherwise
By equating the coefficients found by the two methods, we evaluate the desired integral.
(
2 m
m n m and m n even
Z
m 2m1 (mn)/2
(cos ) cos(n) d =
0 0 otherwise
Solution 12.28
First we write f (z) in the form
g(z)
f (z) = .
(z /2)(z 2)2
g(z) is an entire function which grows no faster that z 3 at infinity. By expanding g(z) in a Taylor
series about the origin, we see that it is a polynomial of degree no greater than 3.
z 3 + z 2 + z +
f (z) =
(z /2)(z 2)2
Since f (z) is a rational function we expand it in partial fractions to obtain a form that is convenient
to integrate.
a b c
f (z) = + + +d
z /2 z 2 (z 2)2
We use the value of the integrals of f (z) to determine the constants, a, b, c and d.
I
a b c
+ + + d dz = 2
|z|=1 z /2 z 2 (z 2)2
2a = 2
a=1
I
1 b c
+ + +d dz = 0
|z|=3 z /2 z 2 (z 2)2
2(1 + b) = 0
b = 1
Note that by applying the second constraint, we can change the third constraint to
I
zf (z) dz = 0.
|z|=3
374
I
1 1 c
z + +d dz = 0
|z|=3 z /2 z 2 (z 2)2
(z /2) + /2 (z 2) + 2 c(z 2) + 2c
I
+ dz = 0
|z|=3 z /2 z2 (z 2)2
2 2+c =0
2
c=2
2
Thus we see that the function is
1 1 2 /2
f (z) = + + d,
z /2 z 2 (z 2)2
where d is an arbitrary constant. We can also write the function in the form:
dz 3 + 15 8
f (z) = .
4(z /2)(z 2)2
Complete Laurent Series. We find the complete Laurent series about z = 0 for each of the
terms in the partial fraction expansion of f (z).
1 2
=
z /2 1 + 2z
X
= 2 (2z)n , for | 2z| < 1
n=0
X
= (2)n+1 z n , for |z| < 1/2
n=0
1 1/z
=
z /2 1 /(2z)
1 X n
= , for |/(2z)| < 1
z n=0 2z
n
X
= z n1 , for |z| < 2
n=0
2
1 n1
X
= zn, for |z| < 2
n=
2
1
X
= (2)n+1 z n , for |z| < 2
n=
1 1/2
=
z2 1 z/2
1 X z n
= , for |z/2| < 1
2 n=0 2
X zn
= n+1
, for |z| < 2
n=0
2
375
1 1/z
=
z2 1 2/z
n
1X 2
= , for |2/z| < 1
z n=0 z
X
= 2n z n1 , for |z| > 2
n=0
1
X
= 2n1 z n , for |z| > 2
n=
2 /2 1
= (2 /2) (1 z/2)2
(z 2)2 4
4 X 2 z n
= , for |z/2| < 1
8 n=0 n 2
4 X
= (1)n (n + 1)(1)n 2n z n , for |z| < 2
8 n=0
4 X n+1 n
= z , for |z| < 2
8 n=0 2n
2
2 /2 2 /2 2
= 1
(z 2)2 z2 z
n
2 /2 X 2 2
= , for |2/z| < 1
z 2 n=0 n z
X
= (2 /2) (1)n (n + 1)(1)n 2n z n2 , for |z| > 2
n=0
2
X
= (2 /2) (n 1)2n2 z n , for |z| > 2
n=
2
X n+1 n
= (2 /2) z , for |z| > 2
n=
2n+2
We take the appropriate combination of these series to find the Laurent series expansions in the
regions: |z| < 1/2, 1/2 < |z| < 2 and 2 < |z|. For |z| < 1/2, we have
X X zn 4 X n+1 n
f (z) = (2)n+1 z n + + z +d
n=0 n=0
2n+1 8 n=0 2n
X 1 4n+1
f (z) = (2)n+1 + n+1 + n
zn + d
n=0
2 8 2
X 1 4
f (z) = (2)n+1 + 1+ (n + 1) z n + d, for |z| < 1/2
n=0
2n+1 4
376
For 1/2 < |z| < 2, we have
1
X X zn 4 X n+1 n
f (z) = (2)n+1 z n + + z +d
n= n=0
2n+1 8 n=0 2n
1
X
n+1 n
X 1 4
f (z) = (2) z + n+1
1+ (n + 1) z n + d, for 1/2 < |z| < 2
n= n=0
2 4
Solution 12.29
The radius of convergence of the series for f (z) is
k 3 /3k 3
= 3 lim k
R = lim 3 k+1 3
= 3.
n (k + 1) /3 n (k + 1)
Solution 12.30
1. (a)
1 1 1
= +
z(1 z) z 1z
1 X n
= + z , for 0 < |z| < 1
z n=0
1 X
= + zn, for 0 < |z| < 1
z n=1
377
(b)
1 1 1
= +
z(1 z) z 1z
1 1 1
=
z z 1 1/z
n
1 1X 1
= , for |z| > 1
z z n=0 z
1 X n
= z , for |z| > 1
z n=1
X
= zn, for |z| > 1
n=2
(c)
1 1 1
= +
z(1 z) z 1z
1 1
= +
(z + 1) 1 2 (z + 1)
1 1 1 1
= , for |z + 1| > 1 and |z + 1| > 2
(z + 1) 1 1/(z + 1) (z + 1) 1 2/(z + 1)
1 X 1 1 X 2n
= n
, for |z + 1| > 1 and |z + 1| > 2
(z + 1) n=0 (z + 1) (z + 1) n=0 (z + 1)n
1 X 1 2n
= , for |z + 1| > 2
(z + 1) n=0 (z + 1)n
X 1 2n
= , for |z + 1| > 2
n=1
(z + 1)n+1
X
1 2n1 (z + 1)n ,
= for |z + 1| > 2
n=2
We look for an annulus about z = 1 containing the point z = where f (z) is analytic. The
z = 1 are a distance of 1 from z = 1; the singularities
singularities at at z = 1 are at
a distance of 5. Since f (z) is analytic in the domain 1 < |z 1| < 5 there is a convergent
Laurent series in that domain.
378
Chapter 13
Man will occasionally stumble over the truth, but most of the time he will pick himself up and
continue on.
- Winston Churchill
We will find that many integrals on closed contours may be evaluated in terms of the residues of a
function. We first define residues and then prove the Residue Theorem.
1
The residue of f (z) at z = z0 is the coefficient of the zz0
term:
Res(f (z), z0 ) = a1 .
Example 13.1.1 In Example 8.4.5 we showed that f (z) = z/ sin z has first order poles at z = n,
379
C B
Residue Theorem. We can evaluate many integrals in terms of the residues of a function. Sup-
pose f (z) has only one singularity, (at z = z0 ), inside the simple, closed, positively oriented contour
C. f (z) has a convergent Laurent seriesRin some deleted disk about z0 . We deform C to lie in the
disk. See Figure 13.1. We now evaluate C f (z) dz by deforming the contour and using the Laurent
series expansion of the function.
Z Z
f (z) dz = f (z) dz
C B
Z
X
= an (z z0 )n dz
B n=
r e(+2)
(z z0 )n+1
r e(+2)
X
= an + a1 [log(z z0 )]r e
n=
n+1 r e
n6=1
= a1 2
Z
f (z) dz = 2 Res(f (z), z0 )
C
Now assume that f (z) has n singularities at {z1 , . . . , zn }. We deform C to n contours C1 , . . . , Cn
which enclose the singularities and lie in deleted disks about R the singularities in which f (z) has
convergent Laurent series. See Figure 13.2. We evaluate C f (z) dz by deforming the contour.
Z n Z
X n
X
f (z) dz = f (z) dz = 2 Res(f (z), zk )
C k=1 Ck k=1
Now instead let f (z) be analytic outside and on C except for isolated singularities at {n } in the
domain outside C Rand perhaps an isolated singularity at infinity. Let a be any point in the interior
of C. To evaluate C f (z) dz we make the change of variables = 1/(z a). This maps the contour
C to C 0 . (Note that C 0 is negatively oriented.) All the points outside C are mapped to points inside
C 0 and vice versa. We can then evaluate the integral in terms of the singularities inside C 0 .
380
C2
C1 C3
C
Figure 13.2: Deform the contour n contours which enclose the n singularities.
a C
1
I I
1
f (z) dz = f +a d
C C0 2
I
1 1
= 2
f + a dz
C 0 z z
X 1 1 1 1 1
= 2 Res f +a , + 2 Res f + a ,0 .
n
z2 z n a z2 z
Here the set of contours {Ck } make up the positively oriented boundary D
of the domain D. If the boundary of the domain is a single contour C then
the formula simplifies.
I X
f (z) dz = 2 Res(f (z), zn )
C n
381
Example 13.1.2 Consider
Z
1 sin z
dz
2 C z(z 1)
where C is the positively oriented circle of radius 2 centered at the origin. Since the integrand is
single-valued with only isolated singularities, the Residue Theorem applies. The value of the integral
is the sum of the residues from singularities inside the contour.
The only places that the integrand could have singularities are z = 0 and z = 1. Since
sin z cos z
lim = lim = 1,
z0 z z0 1
sin z sin z
Res , z = 1 = lim (z 1) = sin(1).
z(z 1) z1 z(z 1)
There is only one singular point with a residue inside the path of integration. The residue at
this point is sin(1). Thus the value of the integral is
Z
1 sin z
dz = sin(1)
2 C z(z 1)
Z
cot z coth z
dz
C z3
where C is the unit circle about the origin in the positive direction.
The integrand is
sin z has zeros at n. sinh z has zeros at n. Thus the only pole inside the contour of integration
is at z = 0. Since sin z and sinh z both have simple zeros at z = 0,
382
the integrand has a pole of order 5 at the origin. The residue at z = 0 is
1 d4 1 d4
5 cot z coth z
z 2 cot z coth z
lim z = lim
z0 4! dz 4 z 3 z0 4! dz 4
1 2 4
= lim 24 cot(z) coth(z)csc(z) 32z coth(z)csc(z)
4! z0
4 4
16z cos(2z) coth(z)csc(z) + 22z 2 cot(z) coth(z)csc(z)
5 2
+ 2z 2 cos(3z) coth(z)csc(z) + 24 cot(z) coth(z)csch(z)
2 2 2 2
+ 24csc(z) csch(z) 48z cot(z)csc(z) csch(z)
2 2 2 2
48z coth(z)csc(z) csch(z) + 24z 2 cot(z) coth(z)csc(z) csch(z)
4 2 4 2
+ 16z 2 csc(z) csch(z) + 8z 2 cos(2z)csc(z) csch(z)
4 4
32z cot(z)csch(z) 16z cosh(2z) cot(z)csch(z)
4 2 4
+ 22z 2 cot(z) coth(z)csch(z) + 16z 2 csc(z) csch(z)
2 2 4 2 5
+ 8z cosh(2z)csc(z) csch(z) + 2z cosh(3z) cot(z)csch(z)
1 56
=
4! 15
7
=
45
Since taking the fourth derivative of z 2 cot z coth z really sucks, we would like a more elegant way
of finding the residue. We expand the functions in the integrand in Taylor series about the origin.
2 4
2 4
cos z cosh z 1 z2 + z24 1 + z2 + z24 +
= 3 z5 3 z5
z 3 sin z sinh z z 3 z z6 + 120 z + z6 + 120
+
z4
1 6 +
= 1 1
z3 z + z6
2
36 + 60 +
4
1 1 z6 +
=
z 5 1 z904 +
z4 z4
1
= 5 1 + 1+ +
z 6 90
1 7
= 5 1 z4 +
z 45
1 7 1
= 5 +
z 45 z
7
Thus we see that the residue is 45 . Now we can evaluate the integral.
Z
cot z coth z 14
3
dz =
C z 45
383
For integrals on ( . . . ),
Z Z b
f (x) dx lim f (x) dx.
a, b a
R1 1
Example 13.2.1 1 x
dx is divergent. We show this with the definition of improper integrals.
Z 1 Z Z 1
1 1 1
dx = lim+ dx + lim+ dx
1 x 0 1 x 0 x
1
= lim [ln |x|]1 + lim [ln |x|]
0+ 0+
= lim+ ln lim+ ln
0 0
We could make the integral have any value we pleased by choosing = c. 1
Z Z 1
1
lim + dx = lim (ln ln(c)) = ln c
0+ 1 c x 0+
The Cauchy principal value is obtained by approaching the singularity symmetrically. The principal
value of the integral may exist when the integral diverges. If the integral exists, it is equal to the
principal value of the integral. R1
The Cauchy principal value of 1 x1 dx is defined
Z 1 Z Z 1
1 1 1
dx lim+ dx + dx
1 x 0 1 x x
1
= lim [log |x|]1 [log |x|]
0+
= lim+ (log | | log ||)
0
= 0.
R
(Another notation for the principal value of an integral is PV f (x) dx.) Since the limits of integra-
tion approach zero symmetrically, the two halves of the integral cancel. If the limits of integration
approached zero independently, (the definition of the integral), then the two halves would both
diverge.
1 Thismay remind you of conditionally convergent series. You can rearrange the terms to make the series sum to
any number.
384
R x
Example 13.2.2 x2 +1
dx is divergent. We show this with the definition of improper integrals.
Z Z b
x x
dx = lim dx
x2 + 1 a, b a x2+1
b
1
= lim ln(x2 + 1)
a, b 2
2 a
1 b +1
= lim ln
2 a, b a2 + 1
The integral diverges because a and b approach infinity independently. Now consider what would
happen if a and b were not independent. If they approached zero symmetrically, a = b, then the
value of the integral would be zero.
b2 + 1
1
lim ln =0
2 b b2 + 1
We could make the integral have any value we pleased by choosing a = cb.
R
We can assign a meaning to divergent integrals of the form
f (x) dx with the Cauchy principal
value. The Cauchy principal value of the integral is defined
Z Z a
f (x) dx = lim f (x) dx.
a a
Z Z a
x x
dx = lim dx
x2 + 1 a a x2+1
a
1 2
= lim ln x + 1
a 2
a
= 0.
385
Result 13.2.1 Cauchy Principal Value. If f (x) is continuous on (a, b)
except at the point x0 (a, b) then the integral of f (x) is defined
Z b Z x0 Z b
f (x) dx = lim+ f (x) dx + lim+ f (x) dx.
a 0 a 0 x0 +
The principal value of the integral may exist when the integral diverges. If the
integral exists, it is equal to the principal value of the integral.
R
Example 13.2.3 Clearly
x dx diverges, however the Cauchy principal value exists.
Z 2
x
x dx = lim a=0
a 2 a
In general, if f (x) is an odd function with no singularities on the finite real axis then
Z
f (x) dx = 0.
386
C
z0
We choose the branch of the logarithm with a branch cut on the positive real axis and arg log z
(0, 2).
= lim+ log e(2) 1 log (e 1)
0
= lim+ log 1 i + O(2 ) 1 log 1 + i + O(2 ) 1
0
= lim+ log i + O(2 ) log i + O(2 )
0
= lim+ Log + O(2 ) + arg + O(2 ) Log + O(2 ) arg + O(2 )
0
3
=
2 2
=
Thus we obtain
Z
1 0
for r < 1,
dz = for r = 1,
Cr z1
2 for r > 1.
In the above example we evaluated the contour integral by parameterizing the contour. This
approach is only feasible when the integrand is simple. We would like to use the residue theorem
to more easily evaluate the principal value of the integral. But before we do that, we will need a
preliminary result.
Result 13.3.1 Let f (z) have a first order pole at z = z0 and let (z z0 )f (z)
be analytic in some neighborhood of z0 . Let the contour C be a circular arc
from z0 + e to z0 + e . (We assume that > and < 2.)
Z
lim+ f (z) dz = ( ) Res(f (z), z0 )
0 C
The contour is shown in Figure 13.4. (See Exercise 13.9 for a proof of this
result.)
387
Cp
We can calculate the integral along C using Result 13.3.1. Note that as 0+ , the contour
becomes a semi-circle, a circular arc of radians.
Z
1 1
lim+ dz = Res , 1 =
0 C z1 z1
Now we can write the principal value of the integral along C in terms of the two known integrals.
Z Z Z
1 1 1
dz = dz dz
C z1 Ci z1 C z1
= 2
=
In the previous example, we formed an indented contour that included the first order pole. You
can show that if we had indented the contour to exclude the pole, we would obtain the same result.
(See Exercise 13.11.)
We can extend the residue theorem to principal values of integrals. (See Exercise 13.10.)
Result 13.3.2 Residue Theorem for Principal Values. Let f (z) be an-
alytic inside and on a simple, closed, positive contour C, except for isolated
singularities at z1 , . . . , zm inside the contour and first order poles at 1 , . . . , n
on the contour. Further, let the contour be C 1 at the locations of these first
order poles. (i.e., the contour does not have a corner at any of the first order
poles.) Then the principal value of the integral of f (z) along C is
Z Xm n
X
f (z) dz = 2 Res(f (z), zj ) + Res(f (z), j ).
C j=1 j=1
388
13.4 Integrals on the Real Axis
Example 13.4.1 We wish to evaluate the integral
Z
1
2
dx.
x + 1
Now we will evaluate the integral using contour integration. Let CR be the semicircular arc from R
to R in the upper half plane. Let C be the union of CR and the interval [R, R].
We can evaluate the integral along C with the residue theorem. The integrand has first order
poles at z = . For R > 1, we have
Z
1 1
2
dz = 2 Res ,
C z +1 z2 + 1
1
= 2
2
= .
Now we examine the integral along CR . We use the maximum modulus integral bound to show that
the value of the integral vanishes as R .
Z
1 1
2
dz R max 2
CR z + 1 zCR z + 1
1
= R 2
R 1
0 as R .
We would get the same result by closing the path of integration in the lower half plane. Note that
in this case the closed contour would be in the negative direction.
If you are really observant, you may have noticed that we did something a little funny in evalu-
ating Z
1
2+1
dx.
x
The definition of this improper integral is
Z Z 0 Z b
1 1 1
2+1
dx = lim 2+1
dx+ = lim 2+1
dx.
x a+ a x b+ 0 x
389
In the above example we instead computed
Z R
1
lim dx.
R+ R x2 +1
Note that for some integrands, the former and latter are not the same. Consider the integral of
x
x2 +1 .
Z Z 0 Z b
x x x
dx = lim dx + lim dx
x2 + 1 a+ a x2+1 b+ 0 x2+1
1 1
= lim log |a2 + 1| + lim log |b2 + 1|
a+ 2 b+ 2
Note that the limits do not exist and hence the integral diverges. We get a different result if the
limits of integration approach infinity symmetrically.
Z R
x 1
lim 2
dx = lim (log |R2 + 1| log |R2 + 1|)
R+ R x +1 R+ 2
=0
(Note that the integrand is an odd function, so the integral from R to R is zero.) We call this the
principal value of the integral and denote it by writing PV in front of the integral sign or putting
a dash through the integral.
Z Z Z R
PV f (x) dx f (x) dx lim f (x) dx
R+ R
The principal value of an integral may exist when the integral diverges. If the integral does
converge, then it is equal to its principal value.
We can use the method of Example 13.4.1 to evaluate the principal value of integrals of functions
that vanish fast enough at infinity.
390
Result 13.4.1 Let f (z) be analytic except for isolated singularities, with only
first order poles on the real axis. Let CR be the semi-circle from R to R in
the upper half plane. If
lim R max |f (z)| = 0
R zCR
then Z m n
X X
f (x) dx = 2 Res (f (z), zk ) + Res(f (z), xk )
k=1 k=1
where z1 , . . . zm are the singularities of f (z) in the upper half plane and
x1 , . . . , xn are the first order poles on the real axis.
Now let CR be the semi-circle from R to R in the lower half plane. If
lim R max |f (z)| = 0
R zCR
then
Z m
X n
X
f (x) dx = 2 Res (f (z), zk ) Res(f (z), xk )
k=1 k=1
where z1 , . . . zm are the singularities of f (z) in the lower half plane and
x1 , . . . , xn are the first order poles on the real axis.
This result is proved in Exercise 13.13. Of course we can use this result to evaluate the integrals
of the form
Z
f (z) dz,
0
We evaluate these integrals by closing the path of integration in the lower or upper half plane and
using techniques of contour integration.
Consider the integral
Z /2
eR sin d.
0
391
Z /2 Z /2
eR sin d eR2/ d
0 0
h i/2
= eR2/
2R 0
R
= (e 1)
2R
2R
0 as R
We can use this to prove the following Result 13.5.1. (See Exercise 13.17.)
vanishes as R .
WeRcan use Jordans Lemma and the Residue Theorem to evaluate many Fourier integrals. Con-
sider f (x) ex dx, where is a positive real number. Let f (z) be analytic except for isolated
singularities, with only first order poles on the real axis. Let C be the contour from R to R on
the real axis and then back to R along a semi-circle in the upper half plane. If R is large enough
so that C encloses all the singularities of f (z) in the upper half plane then
Z m
X n
X
f (z) ez dz = 2 Res(f (z) ez , zk ) + Res(f (z) ez , xk )
C k=1 k=1
where z1 , . . . zm are the singularities of f (z) in the upper half plane and x1 , . . . , xn are the first order
poles on the real axis. If f (z) vanishes as |z| then the integral on CR vanishes as R by
Jordans Lemma.
Z m
X n
X
f (x) ex dx = 2 Res(f (z) ez , zk ) + Res(f (z) ez , xk )
k=1 k=1
For negative we close the path of integration in the lower half plane. Note that the contour is
then in the negative direction.
392
Result 13.5.2 Fourier Integrals. Let f (z) be analytic except for isolated
singularities, with only first order poles on the real axis. Suppose that f (z)
vanishes as |z| . If is a positive real number then
Z Xm n
X
x z
f (x) e dx = 2 Res(f (z) e , zk ) + Res(f (z) ez , xk )
k=1 k=1
where z1 , . . . zm are the singularities of f (z) in the upper half plane and
x1 , . . . , xn are the first order poles on the real axis. If is a negative real
number then
Z Xm n
X
f (x) ex dx = 2 Res(f (z) ez , zk ) Res(f (z) ez , xk )
k=1 k=1
where z1 , . . . zm are the singularities of f (z) in the lower half plane and
x1 , . . . , xn are the first order poles on the real axis.
If f (x) is even/odd then we can evaluate the cosine/sine integral with the method we developed for
Fourier integrals.
Let f (z) be analytic except for isolated singularities, with only first order poles on the real axis.
Suppose that f (x) is an even function and that f (z) vanishes as |z| . We consider real > 0.
Z
1
Z
f (x) cos(x) dx = f (x) cos(x) dx
0 2
1
Z
f (x) sin(x) dx = 0.
2
Thus Z
1
Z
f (x) cos(x) dx = f (x) ex dx
0 2
Now we apply Result 13.5.2.
Z m n
X X
f (x) cos(x) dx = Res(f (z) ez , zk ) + Res(f (z) ez , xk )
0 2
k=1 k=1
where z1 , . . . zm are the singularities of f (z) in the upper half plane and x1 , . . . , xn are the first order
poles on the real axis.
If f (x) is an odd function, we note that f (x) cos(x) is an odd function to obtain the analogous
result for Fourier sine integrals.
393
Result 13.6.1 Fourier Cosine and Sine Integrals. Let f (z) be analytic
except for isolated singularities, with only first order poles on the real axis.
Suppose that f (x) is an even function and that f (z) vanishes as |z| . We
consider real > 0.
Z m n
X
z X
f (x) cos(x) dx = Res(f (z) e , zk ) + Res(f (z) ez , xk )
0 k=1
2 k=1
where z1 , . . . zm are the singularities of f (z) in the upper half plane and
x1 , . . . , xn are the first order poles on the real axis. If f (x) is an odd function
then,
Z n
X
z X
f (x) sin(x) dx = Res(f (z) e , k ) + Res(f (z) ez , xk )
0 k=1
2 k=1
where 1 , . . . are the singularities of f (z) in the lower half plane and
x1 , . . . , xn are the first order poles on the real axis.
Now suppose that f (x) is neither even nor odd. We can evaluate integrals of the form:
Z Z
f (x) cos(x) dx and f (x) sin(x) dx
z a
f (z) = |z| > 0, 0 < arg z < 2
z+1
394
CR
Figure 13.6:
exp[a(log r + i0)] ra
f (r e0 ) = =
r+1 r+1
a e2a
exp[a(log r + 2)] r
f (r e2 ) = = .
r+1 r+1
I
f (z) dz = 2 Res(f (z), 1)
C
R R
ra ra e2a
Z Z Z Z
dr + f (z) dz dr + f (z) dz = 2 Res(f (z), 1)
r+1 CR r+1 C
The residue is
We bound the integrals along C and CR with the maximum modulus integral bound.
a
1a
Z
2
f (z) dz = 2
C
1 1
Z a
R1a
R
f (z) dz 2R = 2
CR R1 R1
Since 0 < a < 1, the values of the integrals tend to zero as 0 and R . Thus we have
ra ea
Z
dr = 2
0 r+1 1 e2a
xa
Z
dx =
0 x+1 sin a
395
Result 13.7.1 Integrals from Zero to Infinity. Let f (z) be a single-valued
analytic function with only isolated singularities and no singularities on the
positive, real axis, [0, ). Let a 6 Z. If the integrals exist then,
Z Xn
f (x) dx = Res (f (z) log z, zk ) ,
0 k=1
Z n
2 X
xa f (x) dx = Res (z a f (z), zk ) ,
0 1 e2a k=1
Z n n
1 X
2
X
f (x) log x dx = Res f (z) log z, zk + Res (f (z) log z, zk ) ,
0 2 k=1 k=1
Z n
a 2 X
x f (x) log x dx = 2a
Res (z a f (z) log z, zk )
0 1e k=1
n
2a X
+ Res (z a f (z), zk ) ,
sin2 (a) k=1
n
!
m
Z
2 X
xa f (x) logm x dx = m Res (z a f (z), zk ) ,
0 a 1 e2a k=1
where z1 , . . . , zn are the singularities of f (z) and there is a branch cut on the
positive real axis with 0 < arg(z) < 2.
we put a branch cut on the positive real axis and noted that the value of the integrand below the
branch cut is a constant multiple of the value of the function above the branch cut. This enabled
us to evaluate the real integral with contour integration. In this section we will use other kinds of
symmetry to evaluate integrals. We will discover that periodicity of the integrand will produce this
symmetry.
396
where n N, n 2. We can evaluate this integral using Result 13.7.1.
Z n1
1 X log z (1+2k)/n
n
dx = Res , e
0 1+x 1 + zn
k=0
n1
(z e(1+2k)/n ) log z
X
= lim
ze(1+2k)/n 1 + zn
k=0
n1
log z + (z e(1+2k)/n )/z
X
= lim
ze(1+2k)/n nz n1
k=0
n1
X (1 + 2k)/n
=
n e(1+2k)(n1)/n
k=0
n1
X
= (1 + 2k) e2k/n
n2 e(n1)/n k=0
n1
2 e/n X
= k e2k/n
n2
k=1
2 e/n n
= 2 2/n
n e 1
=
n sin(/n)
This is a bit grungy. To find a spiffier way to evaluate the integral we note that if we write the
integrand as a function of r and , it is periodic in with period 2/n.
1 1
=
1 + zn 1 + rn en
The integrand along the rays = 2/n, 4/n, 6/n, . . . has the same value as the integrand on the
real axis. Consider the contour C that is the boundary of the wedge 0 < r < R, 0 < < 2/n.
There is one singularity inside the contour. We evaluate the residue there.
z e/n
1
Res , e/n = lim
1 + zn ze/n 1 + zn
1
= lim
ze/n nz n1
e/n
=
n
2 e/n
Z
1
dz =
C 1 + zn n
397
We parametrize the contour to evaluate the desired integral.
0
2 e/n
Z Z
1 1
dx + e2/n dx =
0 1 + xn 1+x
n n
Z /n
1 2 e
dx =
0 1 + xn n(1 e2/n )
Z
1
n
dx =
0 1+x n sin(/n)
= log(1) log(1)
= .
ex+ + ex
cosh(x + ) = = cosh(x).
2
Consider the box contour C that is the boundary of the region R < x < R, 0 < y < . The only
singularity of the integrand inside the contour is a first order pole at z = /2. We evaluate the
integral along C with the residue theorem.
I
1 1
dz = 2 Res ,
C cosh z cosh z 2
z /2
= 2 lim
z/2 cosh z
1
= 2 lim
z/2 sinh z
= 2
398
The value of the integrand on the top of the box is the negative of its value on the bottom. We take
the limit as R .
Z Z
1 1
dx + dx = 2
cosh x cosh x
Z
1
dx =
cosh x
z z 1 z + z 1 dz
sin = , cos = , dz = e d, d =
2 2 z
We write f (a) as an integral along C, the positively oriented unit circle |z| = 1.
I I
1/(z) 2/a
f (a) = 1
dz = 2
dz
C 1 + a(z z )/(2) C z + (2/a)z 1
Because |a| < 1, the second root is outside the unit circle.
1 + 1 a2
|z2 | = > 1.
|a|
Since |z1 z2 | = 1, |z1 | < 1. Thus the pole at z1 is inside the contour and the pole at z2 is outside.
We evaluate the contour integral with the residue theorem.
I
2/a
f (a) = 2
dz
C z + (2/a)z 1
2/a
= 2
z1 z2
1
= 2
1 a2
399
2
f (a) =
1 a2
Complex-Valued a. We note that the integral converges except for real-valued a satisfying
|a| 1. On any closed subset of C \ {a R | |a| 1} the integral is uniformly convergent. Thus
except for the values {a R | |a| 1}, we can differentiate the integral with respect to a. f (a) is
analytic in the complex plane except for the set of points on the real axis: a ( . . . 1] and
a [1 . . . ). The value of the analytic function f (a) on the real axis for the interval (1 . . . 1) is
2
f (a) = .
1 a2
By analytic continuation we see that the value of f (a) in the complex plane is the branch of the
function
2
f (a) =
(1 a2 )1/2
where f (a) is positive, real-valued for a (1 . . . 1) and there are branch cuts on the real axis on
the intervals: ( . . . 1] and [1 . . . ).
(z n) cos(z)
Res( cot(z), n) = lim
zn sin(z)
cos(z) (z n) sin(z)
= lim
zn cos(z)
=1
cos z = cos x cosh y sin x sinh y and sin z = sin x cosh y + cos x sinh y.
400
First we bound the modulus of cot(z).
cos x cosh y sin x sinh y
| cot(z)| =
sin x cosh y + cos x sinh y
s
cos2 x cosh2 y + sin2 x sinh2 y
=
sin2 x cosh2 y + cos2 x sinh2 y
s
cosh2 y
sinh2 y
= | coth(y)|
The hyperbolic cotangent, coth(y), has a simple pole at y = 0 and tends to 1 as y .
Along the top and bottom of Cn , (z = x(n+1/2)), we bound the modulus of g(z) = cot(z).
| cot(z)| coth((n + 1/2))
Along the left and right sides of Cn , (z = (n + 1/2) + y), the modulus of the function is bounded
by a constant.
cos((n + 1/2)) cosh(y) sin((n + 1/2)) sinh(y)
|g((n + 1/2) + y)| =
sin((n + 1/2)) cosh(y) + cos((n + 1/2)) sinh(y)
= | tanh(y)|
Thus the modulus of cot(z) can be bounded by a constant M on Cn .
Let f (z) be analytic except for isolated singularities. Consider the integral,
I
cot(z)f (z) dz.
Cn
Note that if
lim |zf (z)| = 0,
|z|
then I
lim cot(z)f (z) dz = 0.
n Cn
This implies that the sum of all residues of cot(z)f (z) is zero. Suppose further that f (z) is
analytic at z = n Z. The residues of cot(z)f (z) at z = n are f (n). This means
X
f (n) = ( sum of the residues of cot(z)f (z) at the poles of f (z) ).
n=
Result 13.10.1 If
lim |zf (z)| = 0,
|z|
then the sum of all the residues of cot(z)f (z) is zero. If in addition f (z) is
analytic at z = n Z then
X
f (n) = ( sum of the residues of cot(z)f (z) at the poles of f (z) ).
n=
401
Example 13.10.1 Consider the sum
X 1
, a 6 Z.
n=
(n + a)2
X 1 2
= 2
n=
(n + a)2 sin (a)
1 1 X (1)n
= 2z
sin z z n=1
n2 2 z 2
1 X (1)n (1)n
=
z n=1 n z n + z
402
13.11 Exercises
The Residue Theorem
Exercise 13.1
Evaluate the following closed contour integrals using Cauchys residue theorem.
Z
dz
1. 2
, where C is the contour parameterized by r = 2 cos(2), 0 2.
C z 1
ez
Z
2. 2
dz, where C is the positive circle |z| = 3.
C z (z 2)(z + 5)
Z
3. e1/z sin(1/z) dz, where C is the positive circle |z| = 1.
C
Exercise 13.2
Derive Cauchys integral formula from Cauchys residue theorem.
Exercise 13.3
Calculate the residues of the following functions at each of the poles in the finite part of the plane.
1
1.
z 4 a4
sin z
2.
z2
1 + z2
3.
z(z 1)2
ez
4.
z 2 + a2
(1 cos z)2
5.
z7
Exercise 13.4
Let f (z) have a pole of order n at z = z0 . Prove the Residue Formula:
dn1
1 n
Res(f (z), z0 ) = lim [(z z 0 ) f (z)] .
zz0 (n 1)! dz n1
Exercise 13.5
Consider the function
z4
f (z) = .
z2 +1
Classify the singularities of f (z) in the extended complex plane. Calculate the residue at each pole
and at infinity. Find the Laurent series expansions and their domains of convergence about the
points z = 0, z = and z = .
Exercise 13.6
Let P (z) be a polynomial none of whose roots lie on the closed contour . Show that
P 0 (z)
Z
1
dz = number of roots of P (z) which lie inside .
2 P (z)
403
Hint: From the fundamental theorem of algebra, it is always possible to factor P (z) in the form
P (z) = (z z1 )(z z2 ) (z zn ). Using this form of P (z) the integrand P 0 (z)/P (z) reduces to a
very simple expression.
Exercise 13.7
Find the value of
ez
I
dz
C (z ) tan z
where C is the positively-oriented circle
1. |z| = 2
2. |z| = 4
Evaluate Z 1
1
lim+ dx
0 1 x
and Z 1
1
lim dx.
0 1 x
The integral exists for arbitrarily close to zero, but diverges when = 0. Plot the real and
imaginary part of the integrand. If one were to assign meaning to the integral for = 0, what would
the value of the integral be?
Exercise 13.8
Do the principal values of the following integrals exist?
R1
1. 1 x12 dx,
R1 1
2. 1 x3
dx,
R1f (x)
3. 1 x3
dx.
404
Exercise 13.10
Let f (z) be analytic inside and on a simple, closed, positive contour C, except for isolated singu-
larities at z1 , . . . , zm inside the contour and first order poles at 1 , . . . , n on the contour. Further,
let the contour be C 1 at the locations of these first order poles. (i.e., the contour does not have a
corner at any of the first order poles.) Show that the principal value of the integral of f (z) along C
is
Z m
X n
X
f (z) dz = 2 Res(f (z), zj ) + Res(f (z), j ).
C j=1 j=1
Exercise 13.11
Let C be the unit circle. Evaluate
Z
1
dz
C z1
by indenting the contour to exclude the first order pole at z = 1.
Exercise 13.13
Prove Result 13.4.1.
Exercise 13.14
Evaluate
Z
2x
.
x2 +x+1
Exercise 13.15
Use contour integration to evaluate the integrals
Z
dx
1. ,
1 + x4
x2 dx
Z
2. ,
(1 + x2 )2
Z
cos(x)
3. dx.
1 + x2
Exercise 13.16
Evaluate by contour integration
x6
Z
dx.
0 (x4 + 1)2
Fourier Integrals
405
Exercise 13.17
Suppose that f (z) vanishes as |z| . If is a (positive / negative) real number and CR is a
semi-circle of radius R in the (upper / lower) half plane then show that the integral
Z
f (z) ez dz
CR
vanishes as R .
Exercise 13.18
Evaluate by contour integration
Z
cos 2x
dx.
x
Exercise 13.20
Evaluate
1 cos x
Z
dx.
x2
Exercise 13.21
Evaluate
Z
sin(x)
dx.
0 x(1 x2 )
Exercise 13.23
By methods of contour integration find
Z
dx
0 x2 + 5x + 6
R
[ Recall the trick of considering
f (z) log z dz with a suitably chosen contour and branch for
log z. ]
Exercise 13.24
Show that
xa
Z
a
dx = for 1 < <(a) < 1.
0 (x + 1)2 sin(a)
406
From this derive that
log2 x 2
Z Z
log x
dx = 0, dx = .
0 (x + 1)2 0 (x + 1)2 3
Exercise 13.25
Consider the integral
xa
Z
I(a) = dx.
0 1 + x2
1. For what values of a does the integral exist?
2. Evaluate the integral. Show that
I(a) =
2 cos(a/2)
Exercise 13.26
Let f (z) be a single-valued analytic function with only isolated singularities and no singularities
on the positive real axis, [0, ). Give sufficient conditions on f (x) for absolute convergence of the
integral Z
xa f (x) dx.
0
Assume that a is not an integer. Evaluate the integral by considering the integral of z a f (z) on a
suitable contour. (Consider the branch of z a on which 1a = 1.)
Exercise 13.27
Using the solution to Exercise 13.26, evaluate
Z
xa f (x) log x dx,
0
and Z
xa f (x) logm x dx,
0
where m is a positive integer.
Exercise 13.28
Using the solution to Exercise 13.26, evaluate
Z
f (x) dx,
0
Exercise 13.29
Let f (z) be an analytic function with only isolated singularities and no singularities on the positive
real axis, [0, ). Give sufficient conditions on f (x) for absolute convergence of the integral
Z
f (x) log x dx
0
407
Exercise 13.30
For what values of a does the following integral exist?
xa
Z
dx.
0 1 + x4
Exercise 13.31
By considering the integral of f (z) = z 1/2 log z/(z + 1)2 on a suitable contour, show that
x1/2 log x x1/2
Z Z
dx = , dx = .
0 (x + 1)2 0 (x + 1)2 2
Exploiting Symmetry
Exercise 13.32
Evaluate by contour integration, the principal value integral
Z
eax
I(a) = dx
ex ex
for a real and |a| < 1. [Hint: Consider the contour that is the boundary of the box, R < x < R,
0 < y < , but indented around z = 0 and z = .
Exercise 13.33
Evaluate the following integrals.
Z
dx
1. ,
0 (1 + x2 )2
Z
dx
2. .
0 1 + x3
Exercise 13.34
Find the value of the integral I
Z
dx
I=
0 1 + x6
by considering the contour integral Z
dz
1 + z6
with an appropriately chosen contour .
Exercise 13.35
2
Let C be the boundary of the sector 0 < r < R, 0 < < /4. By integrating ez on C and letting
R show that Z Z Z
1 2
2
cos(x ) dx = 2
sin(x ) dx = ex dx.
0 0 2 0
Exercise 13.36
Evaluate
Z
x
dx
sinh x
using contour integration.
408
Exercise 13.37
Show that
eax
Z
x
dx = for 0 < a < 1.
e +1 sin(a)
Use this to derive that
Z
cosh(bx)
dx = for 1 < b < 1.
cosh x cos(b/2)
Exercise 13.38
Using techniques of contour integration find for real a and b:
Z
d
F (a, b) = 2
0 (a + b cos )
What are the restrictions on a and b if any? Can the result be applied for complex a, b? How?
Exercise 13.39
Show that
Z
cos x
dx = /2
ex + ex e + e/2
[ Hint: Begin by considering the integral of ez /(ez + ez ) around a rectangle with vertices: R,
R + .]
Exercise 13.41
Use contour integration to evaluate the integrals
Z 2
d
1. ,
0 2 + sin()
Z
cos(n)
2. 2
d for |a| < 1, n Z0+ .
1 2a cos() + a
Exercise 13.42
By integration around the unit circle, suitably indented, show that
Z
cos(n) sin(n)
d = .
0 cos cos sin
Exercise 13.43
Evaluate
1
x2
Z
dx.
0 (1 + x2 ) 1 x2
Infinite Sums
409
Exercise 13.44
Evaluate
X 1
4
.
n=1
n
Exercise 13.45
Sum the following series using contour integration:
X 1
n 2 2
n=
410
13.12 Hints
The Residue Theorem
Hint 13.1
Hint 13.2
Hint 13.3
Hint 13.4
Substitute the Laurent series into the formula and simplify.
Hint 13.5
Use that the sum of all residues of the function in the extended complex plane is zero in calculating
the residue at infinity. To obtain the Laurent series expansion about z = , write the function as
a proper rational function, (numerator has a lower degree than the denominator) and expand in
partial fractions.
Hint 13.6
Hint 13.7
Hint 13.9
For the third part, does the integrand have a term that behaves like 1/x2 ?
Hint 13.11
Use the result of Exercise 13.9.
Hint 13.12
Look at Example 13.3.2.
Hint 13.14
Close the path of integration in the upper or lower half plane with a semi-circle. Use the maximum
modulus integral bound, (Result 10.2.1), to show that the integral along the semi-circle vanishes.
411
Hint 13.15
Make the change of variables x = 1/.
Hint 13.16
Use Result 13.4.1.
Hint 13.17
Fourier Integrals
Hint 13.18
Use
Z
eR sin d < .
0 R
Hint 13.19
Hint 13.21
Show that
Z
1 cos x 1 ex
Z
2
dx = dx.
x x2
Hint 13.22
Show that
ex
Z Z
sin(x)
dx = dx.
0 x(1 x2 ) 2 x(1 x2 )
Hint 13.24
Hint 13.25
Note that
Z 1
xa dx
0
412
Hint 13.26
Hint 13.27
Consider the integral of z a f (z) on the contour in Figure 13.11.
Hint 13.28
Differentiate with respect to a.
Hint 13.29
Take the limit as a 0. Use LHospitals rule. To corroborate the result, consider the integral of
f (z) log z on an appropriate contour.
Hint 13.30
Consider the integral of f (z) log2 z on the contour in Figure 13.11.
Hint 13.31
Consider the integral of
za
f (z) =
1 + z4
on the boundary of the region < r < R, 0 < < /2. Take the limits as 0 and R .
Hint 13.32
Consider the branch of f (z) = z 1/2 log z/(z + 1)2 with a branch cut on the positive real axis and
0 < arg z < 2. Integrate this function on the contour in Figure 13.11.
Exploiting Symmetry
Hint 13.33
Hint 13.34
For the second part, consider the integral along the boundary of the region, 0 < r < R, 0 < < 2/3.
Hint 13.35
Hint 13.36
To show that the integral on the quarter-circle vanishes as R establish the inequality,
4
cos 2 1 , 0 .
4
Hint 13.37
Consider the box contour C this is the boundary of the rectangle, R x R, 0 y . The
value of the integral is 2 /2.
Hint 13.38
Consider the rectangular contour with corners at R and R + 2. Let R .
Hint 13.39
Hint 13.40
413
Definite Integrals Involving Sine and Cosine
Hint 13.41
Hint 13.42
Hint 13.43
Hint 13.44
Make the changes of variables x = sin and then z = e .
Infinite Sums
Hint 13.45
Use Result 13.10.1.
Hint 13.46
414
1
-1 1
-1
13.13 Solutions
Solution 13.2
1. We consider
Z
dz
C z21
Z
dz 1 1
= 2 Res , z = 1 + Res , z = 1
C z2 1 z2 1 z2 1
!
1 1
= 2 +
z + 1 z=1 z 1 z=1
=0
ez
Z
dz,
C z 2 (z 2)(z + 5)
where C is the positive circle |z| = 3. There is a second order pole at z = 0, and first order
poles at z = 2 and z = 5. The poles at z = 0 and z = 2 lie inside the contour. We evaluate
415
the integral with Cauchys residue theorem.
ez ez
Z
2
dz = 2 Res , z = 0
C z (z 2)(z + 5) z 2 (z 2)(z + 5)
ez
+ Res , z = 2
z 2 (z 2)(z + 5)
ez ez
d
= 2 + 2
dz (z 2)(z + 5) z=0 z (z + 5) z=2
ez ez
d
= 2 + 2
dz (z 2)(z + 5) z=0 z (z + 5) z=2
!
z 2 + (7 2)z 5 12 ez
1 5
= 2 + e2
(z 2)2 (z + 5)2 58 116
z=0
3 1 5
= 2 + + e2
25 20 58 116
5 1 6 1 5
= + cos 2 sin 2 + + cos 2 + sin 2
10 58 29 25 29 58
Solution 13.3
If f () is analytic in a compact, closed, connected domain D and z is a point in the interior of D
then Cauchys integral formula states
I
n! f ()
f (n) (z) = d.
2 D ( z)n+1
To corroborate this, we evaluate the integral with Cauchys residue theorem. There is a pole of order
n + 1 at the point = z.
n! 2 dn
I
n! f ()
n+1
d. = n
f ()
2 D ( z) 2 n! d
=z
= f (n) (z)
Solution 13.4
1.
1 1
=
z 4 a4 (z a)(z + a)(z a)(z + a)
416
There are first order poles at z = a and z = a. We calculate the residues there.
1 1 1
Res ,z = a = = 3
z 4 a4 (z + a)(z a)(z + a) z=a 4a
1 1 1
Res , z = a = = 3
z 4 a4 (z a)(z a)(z + a) z=a 4a
1 1
Res 4 4
, z = a = = 3
z a (z a)(z + a)(z + a) z=a
4a
1 1
Res 4 4
, z = a = = 3
z a (z a)(z + a)(z a) z=a
4a
2.
sin z
z2
Since denominator has a second order zero at z = 0 and the numerator has a first order zero
there, the function has a first order pole at z = 0. We calculate the residue there.
sin z sin z
Res , z = 0 = lim
z2 z0 z
cos z
= lim
z0 1
=1
3.
1 + z2
z(z 1)2
1 + z2 1 + z 2
Res , z = 0 = =1
z(z 1)2 (z 1)2 z=0
1 + z2 d 1 + z 2
Res ,z = 1 =
z(z 1)2 dz z z=1
1
= 1 2
z z=1
=0
4. ez / z 2 + a2 has first order poles at z = a. We calculate the residues there.
ez ez ea
Res , z = a = =
z 2 + a2 z + a z=a
2a
z z
ea
e e
Res , z = a = =
z 2 + a2 z a z=a 2a
(1cos z)2
5. Since 1 cos z has a second order zero at z = 0, z7 has a third order pole at that point.
417
We find the residue by expanding the function in a Laurent series.
(1 cos z)2 z2 z4 2
7 6
=z 1 1 + +O z
z7 2 24
2
2
z4
z
= z 7 + O z6
2 24
4
z6
z
= z 7 + O z8
4 24
1 1
= 3 + O(z)
4z 24z
The residue at z = 0 is 1/24.
Solution 13.5
Since f (z) has an isolated pole of order n at z = z0 , it has a Laurent series that is convergent in a
deleted neighborhood about that point. We substitute this Laurent series into the Residue Formula
to verify it.
dn1
1 n
Res(f (z), z0 ) = lim [(z z0 ) f (z)]
zz0 (n 1)! dz n1
" #!
1 dn1 n
X
k
= lim (z z0 ) ak (z z0 )
zz0 (n 1)! dz n1
k=n
" #!
1 dn1 X k
= lim akn (z z0 )
zz0 (n 1)! dz n1
k=0
!
1 X k!
= lim akn (z z0 )kn+1
zz0 (n 1)! (k n + 1)!
k=n1
!
1 X (k + n 1)! k
= lim ak1 (z z0 )
zz0 (n 1)! k!
k=0
1 (n 1)!
= a1
(n 1)! 0!
= a1
Solution 13.6
Classify Singularities.
z4 z4
f (z) = = .
z2+1 (z )(z + )
There are first order poles at z = . Since the function behaves like z 2 at infinity, there is a second
order pole there. To see this more slowly, we can make the substitution z = 1/ and examine the
point = 0.
4
1 1 1
f = 2 = 2 = 2
+1 + 4 (1 + 2 )
f (1/) has a second order pole at = 0, which implies that f (z) has a second order pole at infinity.
Residues. The residues at z = are,
4
z4
z
Res 2
, = lim = ,
z +1 z z + 2
418
z4 z4
Res 2
, = lim = .
z +1 z z 2
The residue at infinity is
1 1
Res(f (z), ) = Res f , = 0
2
4
1
= Res , = 0
2 2 + 1
4
= Res , = 0
1 + 2
Here we could use the residue formula, but its easier to find the Laurent expansion.
!
X
4 n 2n
= Res (1) , = 0
n=0
=0
We could also calculate the residue at infinity by recalling that the sum of all residues of this function
in the extended complex plane is zero.
+ + Res(f (z), ) = 0
2 2
Res(f (z), ) = 0
Laurent Series about z = 0. Since the nearest singularities are at z = , the Taylor series
will converge in the disk |z| < 1.
z4 1
2
= z4
z +1 1 (z)2
X
4
=z (z 2 )n
n=0
X
= z4 (1)n z 2n
n=0
X
= (1)n z 2n
n=2
This geometric series converges for | z 2 | < 1, or |z| < 1. The series expansion of the function is
z4 X
= (1)n z 2n for |z| < 1
z 2 + 1 n=2
Laurent Series about z = . We expand f (z) in partial fractions. First we write the function
as a proper rational function, (i.e. the numerator has lower degree than the denominator). By
polynomial division, we see that
1
f (z) = z 2 1 + 2 .
z +1
Now we expand the last term in partial fractions.
/2 /2
f (z) = z 2 1 + +
z z+
419
Since the nearest singularity is at z = , the Laurent series will converge in the annulus 0 < |z | <
2.
z 2 1 = ((z ) + )2 1
= (z )2 + 2(z ) 2
/2 /2
=
z+ 2 + (z )
1/4
=
1 (z )/2
n
1 X (z )
=
4 n=0 2
1 X n
= (z )n
4 n=0 2n
This geometric series converges for |(z )/2| < 1, or |z | < 2. The series expansion of f (z) is
/2 1 X n
f (z) = 2 + 2(z ) + (z )2 + (z )n .
z 4 n=0 2n
z4 /2 2 1 X n
= 2 + 2(z ) + (z ) + (z )n for |z | < 2
z2 + 1 z 4 n=0 2n
Laurent Series about z = . Since the nearest singularities are at z = , the Laurent series
will converge in the annulus 1 < |z| < .
z4 z2
=
z2 +1 1 + 1/z 2
n
2
X 1
=z 2
n=0
z
0
X
= (1)n z 2(n+1)
n=
1
X
= (1)n+1 z 2n
n=
This geometric series converges for | 1/z 2 | < 1, or |z| > 1. The series expansion of f (z) is
1
z4 X
= (1)n+1 z 2n for 1 < |z| <
z 2 + 1 n=
Solution 13.7
Method 1: Residue Theorem. We factor P (z). Let m be the number of roots, counting
multiplicities, that lie inside the contour . We find a simple expression for P 0 (z)/P (z).
n
Y
P (z) = c (z zk )
k=1
n Y
X n
P 0 (z) = c (z zj )
k=1 j=1
j6=k
420
Pn Qn
c j=1 (z zj )
P 0 (z) k=1
j6=k
= Qn
P (z) c k=1 (z zk )
Qn
n j=1 (z zj )
j6=k
X
= Qn
j=1 (z zj )
k=1
n
X 1
=
z zk
k=1
=m
QMethod
n
2: Fundamental Theorem of Calculus. We factor the polynomial, P (z) =
c k=1 (z zk ). Let m be the number of roots, counting multiplicities, that lie inside the contour .
P 0 (z)
Z
1 1
dz = [log P (z)]C
2 P (z) 2
" n
#
1 Y
= log (z zk )
2
k=1
" n #C
1 X
= log(z zk )
2
k=1 C
The value of the logarithm changes by 2 for the terms in which zk is inside the contour. Its value
does not change for the terms in which zk is outside the contour.
" #
1 X
= log(z zk )
2
zk inside C
1 X
= 2
2
zk inside
=m
Solution 13.8
1.
ez ez cos z
I I
dz = dz
C (z ) tan z C (z ) sin z
421
theorem.
ez cos z ez cos z
I
dz = 2 Res ,z = 0
C (z ) sin z (z ) sin z
ez cos z
= 2 lim z
z=0 (z ) sin z
z
= 2 lim
z=0 sin z
1
= 2 lim
z=0 cos z
= 2
ez
I
dz = 2
C (z ) tan z
2. The integrand has a first order poles at z = 0, and a second order pole at z = inside the
contour. The value of the integral is 2 times the sum of the residues at these points. From
the previous part we know that residue at z = 0.
ez cos z
1
Res ,z = 0 =
(z ) sin z
ez cos z ez cos z
Res , z = = lim (z + )
(z ) sin z z (z ) sin z
e (1) z+
= lim
2 z sin z
e 1
= lim
2 z cos z
e
=
2
We find the residue at z = by finding the first few terms in the Laurent series of the integrand.
ez cos z e + e (z ) + O (z )2 1 + O (z )2
=
(z ) sin z (z ) ((z ) + O ((z )3 ))
e e (z ) + O (z )2
=
(z )2 + O ((z )4 )
e e
(z)2 + z + O(1)
=
1 + O ((z )2 )
e e
2
= + + O(1) 1 + O (z )
(z )2 z
e e
= + + O(1)
(z )2 z
422
The integral is
ez cos z ez cos z ez cos z
I
dz = 2 Res , z = + Res ,z = 0
C (z ) sin z (z ) sin z (z ) sin z
z
e cos z
+ Res ,z =
(z ) sin z
e
1
= 2 + e
2
ez
I
dz = 2 e 2 e
C (z ) tan z
1
= lim+ [log |x|]1 + lim+ [log |x|]
0 0
= lim+ log lim+ log
0 0
and Z 1
1
lim dx = .
0 1 x
423
Figure 13.8: The real and imaginary part of the integrand for several values of .
The integral exists for arbitrarily close to zero, but diverges when = 0. The real part of the
integrand is an odd function with two humps that get thinner and taller with decreasing . The
imaginary part of the integrand is an even function with a hump that gets thinner and taller with
decreasing . (See Figure 13.8.)
1 x 1
< = 2 , = = 2
x x + 2 x x + 2
Note that Z 1
1
< dx + as 0+
0 x
and Z 0
1
< dx as 0 .
1 x
However,
Z 1
1
lim < dx = 0
0 1 x
because the two integrals above cancel each other.
Now note that when = 0, the integrand is real. Of course the integral doesnt converge for this
case, but if we could assign some value to
Z 1
1
dx
1 x
Solution 13.10
1.
Z 1 Z Z 1
1 1 1
2
dx = lim+ 2
dx + 2
dx
1 x 0 1 x x
1 !
1 1
= lim+ +
0 x 1 x
1 1
= lim+ 11+
0
424
2.
Z 1 Z Z 1
1 1 1
3
dx = lim 3
dx + 3
dx
1 x 0+ 1 x x
1 !
1 1
= lim+ 2 + 2
0 2x 1 2x
1 1 1 1
= lim+ + + 2
0 2()2 2(1)2 2(1)2 2
=0
z = z0 + e , (, ).
425
CONTINUE
Solution 13.12
Let Ci be the contour that is indented with circular arcs or radius at each of the first order poles
on C so as to enclose these poles. Let A1 , . . . , An be these circular arcs of radius centered at the
points 1 , . . . , n . Let Cp be the contour, (not necessarily connected), obtained by subtracting each
of the Aj s from Ci .
Since the curve is C 1 , (or continuously differentiable), at each of the first order poles on C, the
Aj s becomes semi-circles as 0+ . Thus
Z
f (z) dz = Res(f (z), j ) for j = 1, . . . , n.
Aj
Z m
X n
X
f (z) dz = 2 Res(f (z), zj ) + Res(f (z), j ).
C j=1 j=1
Solution 13.13
Consider Z
1
dz
C z1
where C is the unit circle. Let Cp be the circular arc of radius 1 that starts and ends a distance of
from z = 1. Let C be the negative, circular arc of radius with center at z = 1 that joins the
endpoints of Cp . Let Ci , be the union of Cp and C . (Cp stands for Principal value Contour; Ci
stands for Indented Contour.) Ci is an indented contour that avoids the first order pole at z = 1.
Figure 13.9 shows the three contours.
Note that the principal value of the integral is
Z Z
1 1
dz = lim+ dz.
C z1 0 Cp z 1
We can calculate the integral along Ci with Cauchys theorem. The integrand is analytic inside the
contour. Z
1
dz = 0
Ci z 1
We can calculate the integral along C using Result 13.3.1. Note that as 0+ , the contour
becomes a semi-circle, a circular arc of radians in the negative direction.
Z
1 1
lim+ dz = Res , 1 =
0 C z 1 z1
426
Now we can write the principal value of the integral along C in terms of the two known integrals.
Z Z Z
1 1 1
dz = dz dz
C z1 Ci z1 C z1
= 0 ()
=
Solution 13.14
1. First we note that the integrand is an even function and extend the domain of integration.
x2 x2
Z Z
1
dx = dx
0 (x2 + 1)(x2 + 4) 2 (x2 + 1)(x2 + 4)
Next we close the path of integration in the upper half plane. Consider the integral along the
boundary of the domain 0 < r < R, 0 < < .
z2 z2
Z Z
1 1
dz = dz
2 C (z + 1)(z 2 + 4)
2 2 C (z )(z + )(z 2)(z + 2)
z2
1
= 2 Res , z =
2 (z 2 + 1)(z 2 + 4)
z2
+ Res , z = 2
(z 2 + 1)(z 2 + 4)
z2 z2
= +
(z + )(z 2 + 4) z= (z 2 + 1)(z + 2) z=2
=
6 3
=
6
R RR R
Let CR be the circular arc portion of the contour. C = R + CR . We show that the integral
along CR vanishes as R with the maximum modulus bound.
z2 z2
Z
2 2
dz R max 2
2
CR (z + 1)(z + 4) zCR (z + 1)(z + 4)
R2
= R
(R2
1)(R2 4)
0 as R
We take the limit as R to evaluate the integral along the real axis.
1 R x2
Z
lim dx =
R 2 R (x2 + 1)(x2 + 4) 6
Z
x2
dx =
0 (x2 + 1)(x2 + 4) 6
2. We close the path of integration in the upper half plane. Consider the integral along the
427
boundary of the domain 0 < r < R, 0 < < .
Z Z
dz dz
2 + a2
=
C (z + b) C (z + b a)(z + b + a)
1
= 2 Res , z = b + a
(z + b a)(z + b + a)
1
= 2
z + b + a z=b+a
=
a
R RR R
Let CR be the circular arc portion of the contour. C = R + CR . We show that the integral
along CR vanishes as R with the maximum modulus bound.
Z
dz
R max
1
2 2 2 2
CR (z + b) + a zCR (z + b) + a
1
= R
(R b)2 + a2
0 as R
We take the limit as R to evaluate the integral along the real axis.
Z R
dx
lim =
R R (x + b)2 + a2 a
Z
dx
2 + a2
=
(x + b) a
Solution 13.15
Let CR be the semicircular arc from R to R in the upper half plane. Let C be the union of CR and
the interval [R, R]. We can evaluate the principal value of the integral along C with Result 13.3.2.
Z m
X n
X
f (x) dx = 2 Res (f (z), zk ) + Res(f (z), xk )
C k=1 k=1
0 as R .
If we close the path of integration in the lower half plane, the contour will be in the negative direction.
Z m
X n
X
f (x) dx = 2 Res (f (z), zk ) Res(f (z), xk )
k=1 k=1
428
Solution 13.16
We consider
Z
2x
dx.
x2 +x+1
With the change of variables x = 1/, this becomes
Z
2 1
1
d,
2 + 1 + 1 2
Z
2 1
2
d
+ + 1
There are first order poles at = 0 and = 1/2 3/2. We close the path of integration in
the upper half plane with a semi-circle. Since the integrand decays like 3 the integrand along the
semi-circle vanishes as the radius tends to infinity. The value of the integral is thus
!
2z 1 2z 1
1 3
Res 2
, z = 0 + 2 Res 2
,z = +
z +z+1 z +z+1 2 2
2z 1
2
lim + 2 lim
z0 z2 + z + 1 z(1+ 3)/2 z + (1 + 3)/2
Z
2x 2
dx =
x2 + x + 1 3
Solution 13.17
1. Consider
Z
1
dx.
x4 +1
1
The integrand z 4 +1 is analytic on the real axis and has isolated singularities at the points
z e/4
1
Res 4
, e/4 = lim
z +1 ze/4 z4 + 1
1
= lim
ze/4 4z 3
1 3/4
= e
4
1
= ,
4 2
429
1 1
Res 4
, e3/4 =
z +1 4(e3/4 )3
1 /4
= e
4
1
= ,
4 2
We evaluate the integral with the residue theorem.
Z
1 1 1
4
dx = 2 +
x + 1 4 2 4 2
Z
1
dx =
x4 + 1 2
2. Now consider
x2
Z
dx.
(x2 + 1)2
The integrand is analytic on the real axis and has second order poles at z = . Since the
integrand decays sufficiently fast at infinity,
z2 R2
lim R max 2 = lim R =0
R zCR (z + 1)2 R (R2 1)2
z2 z2
d 2
Res ,z = = lim (z ) 2
(z 2 + 1)2 z dz (z + 1)2
2
d z
= lim
z dz (z + )2
(z + )2 2z z 2 2(z + )
= lim
z (z + )4
=
4
x2
Z
dx =
(x2 + 1)2 2
3. Since
sin(x)
1 + x2
is an odd function,
ex
Z Z
cos(x)
dx = dx
1 + x2 1 + x2
Since ez /(1 + z 2 ) is analytic except for simple poles at z = and the integrand decays
sufficiently fast in the upper half plane,
z
e
= lim R 1
lim R max =0
R zCR 1 + z 2 R R2 1
430
we can apply Result 13.4.1.
ex ez
Z
dx = 2 Res ,z =
1 + x2 (z )(z + )
e1
= 2
2
Z
cos(x)
dx =
1 + x2 e
Solution 13.18
Consider the function
z6
f (z) = .
(z 4 + 1)2
y 6
(y 4 + 1)2
x6
.
(x4 + 1)2
Thus to evaluate the real integral we consider the path of integration, C, which starts at the origin,
follows the real axis to R, follows a circular path to R and then follows the imaginary axis back
down to theorigin. f (z) has second order poles at the fourth roots of 1: (1 )/ 2. Of these
only (1 + )/ 2 lies inside the path of integration. We evaluate the contour integral with the Residue
Theorem. For R > 1:
z6 z6
Z
/4
dz = 2 Res ,z = e
C (z 4 + 1)2 (z 4 + 1)2
z6
d /4 2
= 2 lim (z e ) 4
ze/4 dz (z + 1)2
z6
d
= 2 lim
ze/4 dz (z e3/4 )2 (z e5/4 )2 (z e7/4 )2
z6
= 2 lim
ze/4 (z e3/4 )2 (z e5/4 )2 (z e7/4 )2
!
6 2 2 2
z z e3/4 z e5/4 z e7/4
!
6 2 2 2 2 2
= 2
(2)(4)(2) 1 + 2 2 + 2 2
3
= 2 (1 ) 2
32
3
= (1 + )
8 2
The integral along the circular part of the contour, CR , vanishes as R . We demonstrate this
431
with the maximum modulus integral bound.
z6 z6
Z
R
dz max
4 2 4 zCR (z 4 + 1)2
CR (z + 1)
R R6
=
4 (R 1)2
4
0 as R
Taking the limit R , we have:
Z Z 0
x6 (y)6 3
4 + 1)2
dx + 4 + 1)2
dy = (1 + )
0 (x ((y) 8 2
Z Z
x6 y6 3
4 + 1)2
dx + 4 + 1)2
dy = (1 + )
0 (x 0 (y 8 2
Z
x6 3
(1 + ) dx = (1 + )
0 (x4 + 1)2 8 2
Z 6
x 3
4 + 1)2
dx =
0 (x 8 2
Fourier Integrals
Solution 13.19
We know that
Z
eR sin d < .
0 R
First take the case that is positive and the semi-circle is in the upper half plane.
Z Z
z z
f (z) e dz e dz max |f (z)|
zCR
CR CR
Z
R e
e R e d max |f (z)|
0 zCR
Z
R sin
=R e d max |f (z)|
0 zCR
<R max |f (z)|
R zCR
= max |f (z)|
zCR
0 as R
The procedure is almost the same for negative .
Solution 13.20
First we write the integral in terms of Fourier integrals.
Z Z Z
cos 2x e2x e2x
dx = dx + dx
x 2(x ) 2(x )
1
Note that 2(z) vanishes as |z| . We close the former Fourier integral in the upper half plane
and the latter in the lower half plane. There is a first order pole at z = in the upper half plane.
Z
e2x e2z
dx = 2 Res , z =
2(x ) 2(z )
e2
= 2
2
432
There are no singularities in the lower half plane.
Z
e2x
dx = 0
2(x )
Let CR be the semicircular arc in the upper half plane from R to R. Let C be the closed contour
that is the union of CR and the real interval [R, R]. If we close the path of integration with a
semicircular arc in the upper half plane, we have
Z Z z
ez
Z
sin x e
dx = lim dz dz ,
x R C z CR z
Solution 13.22
Note that (1 cos x)/x2 has a removable singularity at x = 0. The integral decays like x12 at infinity,
so the integral exists. Since (sin x)/x2 is a odd function with a simple pole at x = 0, the principal
value of its integral vanishes.
Z
sin x
2
dx = 0
x
Z Z Z
1 cos x 1 cos x sin x 1 ex
dx = dx = dx
x2 x2 x2
433
Let CR be the semi-circle of radius R in the upper half plane. Since
1 ez
lim R max = lim R 2 = 0
R zCR z 2 R R2
the integral along CR vanishes as R .
1 ez
Z
dz 0 as R
CR z2
We can apply Result 13.4.1.
Z
1 ex 1 ez 1 ez ez
2
dx = Res 2
, z = 0 = lim = lim
x z z0 z z0 1
1 cos x
Z
dx =
x2
Solution 13.23
Consider
Z
sin(x)
dx.
0 x(1 x2 )
Note that the integrand has removable singularities at the points x = 0, 1 and is an even function.
Z
1 sin(x)
Z
sin(x)
dx = dx.
0 x(1 x2 ) 2 x(1 x2 )
cos(x)
Note that is an odd function with first order poles at x = 0, 1.
x(1 x2 )
Z
cos(x)
dx = 0
x(1 x2 )
Z Z
sin(x) ex
dx = dx.
0 x(1 x2 ) 2 x(1 x2 )
Let CR be the semi-circle of radius R in the upper half plane. Since
ez
1
lim R max
2
= lim R
2
=0
R zCR z(1 z ) R R(R 1)
the integral along CR vanishes as R .
ez
Z
2
dz 0 as R
CR z(1 z )
434
Contour Integration and Branch Cuts
Solution 13.24
Let C be the boundary of the region < r < R, 0 < < . Choose the branch of the logarithm with
a branch cut on the negative imaginary axis and the angle range /2 < < 3/2. We consider
the integral of log2 z/(1 + z 2 ) on this contour.
log2 z log2 z
I
dz = 2 Res ,z =
C 1 + z2 1 + z2
log2 z
= 2 lim
z z +
(/2)2
= 2
2
3
=
4
Let CR be the semi-circle from R to R in the upper half plane. We show that the integral along
CR vanishes as R with the maximum modulus integral bound.
log2 z 2
Z
R max log z
dz
CR 1 + z2 zCR 1 + z 2
ln2 R + 2 ln R + 2
R
R2 1
0 as R
Let C be the semi-circle from to in the upper half plane. We show that the integral along C
vanishes as 0 with the maximum modulus integral bound.
log2 z log2 z
Z
dz max
C 1 + z2 zC 1 + z 2
ln2 2 ln + 2
1 2
0 as 0
log2 z 3
I
2
dz =
C 1+z 4
Z 2 Z 0
ln r (ln r + )2 3
2
dr + 2
dr =
0 1+r 1+r 4
2 Z Z
3
Z
ln x ln x 2 1
2 dx + 2 dx = dx (13.1)
0 1 + x2 0 1 + x2 0 1 + x2 4
the integral of 1/(1 + z 2 ) along CR vanishes as R . We evaluate the integral with the Residue
435
CR
Theorem.
2
Z Z
1 1
2 dx = dx
0 1 + x2 2 1 + x
2
2
1
= 2 Res , z =
2 1 + z2
1
= 3 lim
z z +
3
=
2
ln2 x 3
Z Z
ln x
2 dx + 2 dx =
0 1 + x2 0 1 + x2 4
We equate the real and imaginary parts to solve for the desired integrals.
ln2 x 3
Z
dx =
0 1 + x2 8
Z
ln x
dx = 0
0 1 + x2
Solution 13.25
We consider the branch of the function
log z
f (z) =
z 2 + 5z + 6
with a branch cut on the real axis and 0 < arg(z) < 2.
Let C and CR denote the circles of radius and R where < 1 < R. C is negatively oriented;
CR is positively oriented. Consider the closed contour, C, that is traced by a point moving from
to R above the branch cut, next around CR back to R, then below the cut to , and finally around
C back to . (See Figure 13.11.)
We can evaluate the integral of f (z) along C with the residue theorem. For R > 3, there are
436
first order poles inside the path of integration at z = 2 and z = 3.
Z
log z log z log z
2
dz = 2 Res , z = 2 + Res , z = 3
C z + 5z + 6 z 2 + 5z + 6 z 2 + 5z + 6
log z log z
= 2 lim + lim
z2 z + 3 z3 z + 2
log(2) log(3)
= 2 +
1 1
= 2 (log(2) + log(3) )
2
= 2 log
3
In the limit as 0, the integral along C vanishes. We demonstrate this with the maximum
modulus theorem.
Z
log z
2 max
log z
2
dz 2
C z + 5z + 6 zC z + 5z + 6
2 log
2
6 5 2
0 as 0
In the limit as R , the integral along CR vanishes. We again demonstrate this with the
maximum modulus theorem.
Z
log z log z
2
dz 2R max
CR z + 5z + 6
zCR z 2 + 5z + 6
log R + 2
2R 2
R 5R 6
0 as R
Solution 13.26
We consider the integral
xa
Z
I(a) = dx.
0 (x + 1)2
To examine convergence, we split the domain of integration.
1
xa xa xa
Z Z Z
dx = dx + dx
0 (x + 1)2 0 (x + 1)2 1 (x + 1)2
437
First we work with the integral on (0 . . . 1).
Z 1 Z 1
xa xa
dx (x + 1)2 |dx|
2
0 (x + 1)
0
Z 1
x<(a)
= 2
dx
0 (x + 1)
Z 1
x<(a) dx
0
e2||
2
(1 )2
0 as 0
The integral on CR vanishes as R .
za za
Z
2
dz 2R max
CR (z + 1) zCR (z + 1)2
R e2||
2R
(R 1)2
0 as R
438
Above the branch cut, (z = r e0 ), the integrand is
ra
f (r e0 ) = .
(r + 1)2
e2a ra
f (r e2 ) = .
(r + 1)2
The right side has a removable singularity at a = 0. We use analytic continuation to extend the
answer to a = 0.
(
a
xa for 1 < <(a) < 1, a 6= 0
Z
sin(a)
I(a) = dx =
0 (x + 1)2 1 for a = 0
We can derive the last two integrals by differentiating this formula with respect to a and taking
the limit a 0.
Z a Z a
x log x x log2 x
I 0 (a) = dx, I 00
(a) = dx
0 (x + 1)2 0 (x + 1)2
Z Z
log x log2 x
I 0 (0) = dx, I 00
(0) = dx
0 (x + 1)2 0 (x + 1)2
We can find I 0 (0) and I 00 (0) either by differentiating the expression for I(a) or by finding the first
few terms in the Taylor series expansion of I(a) about a = 0. The latter approach is a little easier.
X I (n) (0) n
I(a) = a
n=0
n!
a
I(a) =
sin(a)
a
=
a (a)3 /6 + O(a5 )
1
=
1 (a)2 /6 + O(a4 )
2 a2
=1+ + O(a4 )
6
439
Z
0 log x
I (0) = dx = 0
0 (x + 1)2
log2 x 2
Z
I 00 (0) = dx =
0 (x + 1)2 3
Solution 13.27
1. We consider the integral
xa
Z
I(a) = dx.
0 1 + x2
To examine convergence, we split the domain of integration.
Z Z 1 Z
xa xa xa
2
dx = 2
dx + dx
0 1+x 0 1+x 1 1 + x2
First we work with the integral on (0 . . . 1).
Z 1 Z 1
xa xa
2
dx
1 + x2 |dx|
0 1+x
0
Z 1 <(a)
x
= 2
dx
0 1+x
Z 1
x<(a) dx
0
This integral converges for <(a) > 1.
Next we work with the integral on (1 . . . ).
Z Z
xa xa
dx 1 + x2 |dx|
1 1 + x2 1
Z <(a)
x
= dx
1 + x2
Z1
x<(a)2 dx
1
This integral converges for <(a) < 1.
Thus we see that the integral defining I(a) converges in the strip, 1 < <(a) < 1. The integral
converges uniformly in any closed subset of this domain. Uniform convergence means that we
can differentiate the integral with respect to a and interchange the order of integration and
differentiation. Z a
0 x log x
I (a) = dx
0 1 + x2
Thus we see that I(a) is analytic for 1 < <(a) < 1.
2. For 1 < <(a) < 1 and a 6= 0, z a is multi-valued. Consider the branch of the function
f (z) = z a /(1 + z 2 ) with a branch cut on the positive real axis and 0 < arg(z) < 2. We
integrate along the contour in Figure 13.11.
The integral on C vanishes are 0. We show this with the maximum modulus integral
bound. First we write z a in modulus-argument form, where z = e and a = + .
z a = ea log z
= e(+)(log +)
= e log +( log +)
= a e e( log +)
440
CR
Figure 13.11:
Z
za
a
z
dz 2 max
C 1 + z 2 zC 1 + z 2
e2||
2
1 2
0 as 0
za
Z a
z
2
dz 2R max
CR 1+z zCR 1 + z 2
R e2||
2R
R2 1
0 as R
ra
f (r e0 ) = .
1 + r2
e2a ra
f (r e2 ) = .
1 + r2
441
Now we use the residue theorem.
Z Z 0 2a a
ra
a a
e r z z
dr + dr = 2 Res , + Res ,
0 1 + r2 1+r
2 1 + z2 1 + z2
Z
xa za za
1 e2a
dx = 2 lim + lim
0 1 + x2 z z + z z
Z a
a/2
ea3/2
2a
x e
1e dx = 2 +
0 1 + x2 2 2
Z a a/2 a3/2
x e e
dx =
0 1 + x2 1 e2a
Z
xa ea/2 (1 ea )
2
dx =
0 1+x (1 + ea )(1 ea )
Z
xa
dx = a/2
0 1 + x2 e + ea/2
Z a
x
2
dx = for 1 < <(a) < 1, a 6= 0
0 1+x 2 cos(a/2)
3. We can derive the last two integrals by differentiating this formula with respect to a and taking
the limit a 0.
Z a Z a
x log x x log2 x
I 0 (a) = dx, I 00
(a) = dx
0 1 + x2 0 1 + x2
Z Z
log x log2 x
I 0 (0) = dx, I 00
(0) = dx
0 1 + x2 0 1 + x2
We can find I 0 (0) and I 00 (0) either by differentiating the expression for I(a) or by finding the
first few terms in the Taylor series expansion of I(a) about a = 0. The latter approach is a
little easier.
X I (n) (0) n
I(a) = a
n=0
n!
I(a) =
2 cos(a/2)
1
=
2 1 (a/2)2 /2 + O(a4 )
1 + (a/2)2 /2 + O(a4 )
=
2
3 /8 2
= + a + O(a4 )
2 2
Z
0 log x
I (0) = dx = 0
0 1 + x2
log2 x 3
Z
I 00 (0) = 2
dx =
0 1+x 8
442
Solution 13.28
Convergence. If xa f (x) x as x 0 for some > 1 then the integral
Z 1
xa f (x) dx
0
will converge absolutely. If xa f (x) x as x for some < 1 then the integral
Z
xa f (x)
1
will converge absolutely. These are sufficient conditions for the absolute convergence of
Z
xa f (x) dx.
0
Contour Integration. We put a branch cut on the positive real axis and choose 0 < arg(z) <
2. We consider the integral of z a f (z) on the contour in Figure 13.11. Let the singularities of f (z)
occur at z1 , . . . , zn . By the residue theorem,
Z n
X
z a f (z) dz = 2 Res (z a f (z), zk ) .
C k=1
1
On the circle of radius , the integrand is o( ). Since the length of C is 2, the integral on
C vanishes as 0. On the circle of radius R, the integrand is o(R1 ). Since the length of CR is
2R, the integral on CR vanishes as R .
The value of the integrand below the branch cut, z = x e2 , is
f (x e2 ) = xa e2a f (x)
In the limit as 0 and R we have
Z Z 0 n
X
xa f (x) dx + xa e2a f (x) dx = 2 Res (z a f (z), zk ) .
0 k=1
Z n
2 X
xa f (x) dx = Res (z a f (z), zk ) .
0 1 e2a
k=1
Solution 13.29
In the interval of uniform convergence of th integral, we can differentiate the formula
Z n
2 X
xa f (x) dx = Res (z a f (z), zk ) ,
0 1 e2a
k=1
n n
2 a X
Z
2 X
xa f (x) log x dx = Res (z a
f (z) log z, z k ) , + Res (z a f (z), zk ) ,
0 1 e2a
k=1
sin2 (a) k=1
443
Solution 13.30
Taking the limit as a 0 Z in the solution of Exercise 13.26 yields
Z Pn a
k=1 Res (z f (z), zk )
f (x) dx = 2 lim
0 a0 1 e2a
The numerator vanishes because the sum of all residues of z n f (z) is zero. Thus we can use
LHospitals rule. Z Pn a
k=1 Res (z f (z) log z, zk )
f (x) dx = 2 lim
0 a0 2 e2a
Z n
X
f (x) dx = Res (f (z) log z, zk )
0 k=1
This suggests that we could have derived the result directly by considering the integral of f (z) log z
on the contour in Figure 13.11. We put a branch cut on the positive real axis and choose the branch
arg z = 0. Recall that we have assumed that f (z) has only isolated singularities and no singularities
on the positive real axis, [0, ). By the residue theorem,
Z n
X
f (z) log z dz = 2 Res (f (z) log z, z = zk ) .
C k=1
By assuming that f (z) z as z 0 where > 1 the integral on C will vanish as 0. By
assuming that f (z) z as z where < 1 the integral on CR will vanish as R . The
value of the integrand below the branch cut, z = x e2 is f (x)(log x + 2). Taking the limit as
0 and R , we have
Z Z 0 n
X
f (x) log x dx + f (x)(log x + 2) dx = 2 Res (f (z) log z, zk ) .
0 k=1
Solution 13.31
Consider the integral of f (z) log2 z on the contour in Figure 13.11. We put a branch cut on the
positive real axis and choose the branch 0 < arg z < 2. Let z1 , . . . zn be the singularities of f (z).
By the residue theorem,
Z n
X
f (z) log2 z dz = 2 Res f (z) log2 z, zk .
C k=1
If f (z) z as z 0 for some > 1 then the integral on C will vanish as 0. f (z) z as
z for some < 1 then the integral on CR will vanish as R . Below the branch cut the
integrand is f (x)(log x + 2)2 . Thus we have
Z Z 0 n
X
f (x) log2 x dx + f (x)(log2 x + 4 log x 4 2 ) dx = 2 Res f (z) log2 z, zk .
0 k=1
Z Z n
X
f (x) log x dx + 4 2 Res f (z) log2 z, zk .
4 f (x) dx = 2
0 0 k=1
Z n n
1 X X
Res f (z) log2 z, zk +
f (x) log x dx = Res (f (z) log z, zk )
0 2
k=1 k=1
444
CR
za
Figure 13.12: Possible path of integration for f (z) = 1+z 4
Solution 13.32
Convergence. We consider
xa
Z
dx.
0 1 + x4
Since the integrand behaves like xa near x = 0 we must have <(a) > 1. Since the integrand behaves
like xa4 at infinity we must have <(a 4) < 1. The integral converges for 1 < <(a) < 3.
Contour Integration. The function
za
f (z) =
1 + z4
has first order poles at z = (1 )/ 2 and a branch point at z = 0. We could evaluate the real
integral by putting a branch cut on the positive real axis with 0 < arg(z) < 2 and integrating f (z)
on the contour in Figure 13.12.
Integrating on this contour would work because the value of the integrand below the branch cut
is a constant times the value of the integrand above the branch cut. After demonstrating that the
integrals along C and CR vanish in the limits as 0 and R we would see that the value of
the integral is a constant times the sum of the residues at the four poles. However, this is not the
only, (and not the best), contour that can be used to evaluate the real integral. Consider the value
of the integral on the line arg(z) = .
ra ea
f (r e ) =
1 + r4 e4
ra
f (x) = .
1 + r4
Thus any of the contours in Figure 13.13 can be used to evaluate the real integral. The only difference
is how many residues we have to calculate. Thus we choose the first contour in Figure 13.13. We put
a branch cut on the negative real axis and choose the branch < arg(z) < to satisfy f (1) = 1.
We evaluate the integral along C with the Residue Theorem.
za za
Z
1+
dz = 2 Res ,z =
C 1 + z4 1 + z4 2
|z a | = |(r e )+ | = r e .
445
CR CR CR
C C C
za
Figure 13.13: Possible Paths of Integration for f (z) = 1+z 4
The integral on C vanishes as 0. We demonstrate this with the maximum modulus integral
bound.
za
Z a
z
4
dz
max
C 1 + z 2 zC 1 + z 4
e||/2
2 1 4
0 as 0
za
Z a
R z
dz max
4 2 zCR 1 + z 4
CR 1 + z
R R e||/2
2 R4 1
0 as R
(x e/2 )a xa ea/2
= .
1 + (x e/2 )4 1 + x4
446
Solution 13.33
Consider the branch of f (z) = z 1/2 log z/(z + 1)2 with a branch cut on the positive real axis and
0 < arg z < 2. We integrate this function on the contour in Figure 13.11.
We use the maximum modulus integral bound to show that the integral on C vanishes as 0.
Z
z 1/2 log z
1/2
z log z
dz 2 max
C (z + 1)2 (z + 1)2
C
1/2 (2 log )
= 2
(1 )2
0 as 0
z 1/2 log z
Z 1/2
z log z
dz 2R max
CR (z + 1)2 CR (z + 1)2
R1/2 (log R + 2)
= 2R
(R 1)2
0 as R
x1/2 log x
f (x e0 ) = .
(x + 1)2
x1/2 (log x + )
f (x e2 ) = .
(x + 1)2
x1/2 log x x1/2
Z Z
d 1/2
2 dx + 2 dx = 2 lim (z log z)
0 (x + 1)2 0 (x + 1)2 z1 dz
x1/2 log x x1/2
Z Z
1 1/2 1
2 dx + 2 dx = 2 lim z log z + z 1/2
0 (x + 1)2 0 (x + 1)2 z1 2 z
x1/2 log x x1/2
Z Z
1
2 dx + 2 dx = 2 ()()
0 (x + 1)2 0 (x + 1)2 2
x1/2 log x x1/2
Z Z
2 dx + 2 dx = 2 + 2
0 (x + 1)2 0 (x + 1)2
Equating real and imaginary parts,
x1/2 log x x1/2
Z Z
dx = , dx = .
0 (x + 1)2 0 (x + 1)2 2
Exploiting Symmetry
447
Solution 13.34
Convergence. The integrand,
eaz eaz
= ,
ez ez 2 sinh(z)
has first order poles at z = n, n Z. To study convergence, we split the domain of integration.
Z Z 1 Z 1 Z
= + +
1 1
448
eax ea(x+ z eaz (z ) eaz
Z Z
x x
dx + x
dz = lim + lim
e e ex+ e z0 2 sinh(z) z 2 sinh(z)
Z ax az az
e e +az e e +a(z ) eaz
az
(1 + ea ) x x
dx = lim + lim
e e z0 2 cosh(z) z 2 cosh(z)
Z ax a
e 1 e
(1 + ea ) x ex
dx = +
e 2 2
Z ax a
e (1 e )
x ex
dx =
e 2(1 + ea )
Z
eax (ea/2 ea/2 )
x x
dx =
e e 2 ea/2 + ea/2
Z
eax a
x x
dx = tan
e e 2 2
Solution 13.35
1.
Z Z
dx 1 dx
2 = 2
0 (1 + x2 ) 2 (1 + x2 )
We apply Result 13.4.1 to the integral on the real axis. First we verify that the integrand
vanishes fast enough in the upper half plane.
! !
1 1
lim R max = lim R =0
R zCR (1 + z 2 )2 R (R2 1)
2
d 1
= 2 lim
z dz (z + )2
2
= 2 lim
z (z + )3
=
2
Z
dx
2 =
0 (1 + x2 ) 4
2. We wish to evaluate
Z
dx
.
0 x3 + 1
Let the contour C be the boundary of the region 0 < r < R, 0 < < 2/3. We factor the
denominator of the integrand to see that the contour encloses the simple pole at e/3 for
R > 1.
z 3 + 1 = (z e/3 )(z + 1)(z e/3 )
449
We calculate the residue at that point.
1 /3 /3 1
Res ,z = e = lim (z e ) 3
z3 + 1 ze/3 z +1
1
= lim
ze/3 (z + 1)(z e/3 )
1
= /3
(e +1)(e/3 e/3 )
e/3
=
3
We use the residue theorem to evaluate the integral.
2 e/3
I
dz
3
=
C z +1 3
We show that the integral along CR vanishes as R with the maximum modulus integral
bound. Z
dz 2R 1
3 R3 1 0 as R
3
CR z + 1
We take R and solve for the desired integral.
Z dx 2 e/3
/3
1+ e =
x3 + 1 3
Z 0
dx 2
3+1
=
0 x 3 3
Solution 13.36
Method 1: Semi-Circle Contour. We wish to evaluate the integral
Z
dx
I= .
0 1 + x6
We note that the integrand is an even function and express I as an integral over the whole real axis.
1 dx
Z
I=
2 1 + x6
Now we will evaluate the integral using contour integration. We close the path of integration in the
upper half plane. Let R be the semicircular arc from R to R in the upper half plane. Let be
the union of R and the interval [R, R]. (See Figure 13.14.)
We can evaluate the integral along with the residue theorem. The integrand has first order
poles at z = e(1+2k)/6 , k = 0, 1, 2, 3, 4, 5. Three of these poles are in the upper half plane. For
R > 1, we have
Z 2
1 X 1 (1+2k)/6
6
dz = 2 Res , e
z +1 z6 + 1
k=0
2
X z e(1+2k)/6
= 2 lim
ze(1+2k)/6 z6 + 1
k=0
450
Figure 13.14: The semi-circle contour.
451
Figure 13.15: The wedge contour.
Method 2: Wedge Contour. Consider the contour , which starts at the origin, goes to the
point R along the real axis, then to the point R e/3 along a circle of radius R and then back to the
origin along the ray = /3. (See Figure 13.15.)
We can evaluate the integral along with the residue theorem. The integrand has one first order
pole inside the contour at z = e/6 . For R > 1, we have
Z
1 1
6
dz = 2 Res 6
, e/6
z +1 z +1
z e/6
= 2 lim
ze/6 z6 + 1
1
= 2 lim
ze/6 6z 5
5/6
= e
3
= e/3
3
Now we examine the integral along the circular arc, R . We use the maximum modulus integral
bound to show that the value of the integral vanishes as R .
Z
1 R 1
dz
max
R z6 + 1 3 zR z 6 + 1
R 1
=
3 R6 1
0 as R .
452
Figure 13.16: cos(2) and 1 4
Solution 13.37
First note that
4
cos(2) 1 , 0 .
4
These two functions are plotted in Figure 13.16. To prove this inequality analytically, note that the
two functions are equal at the endpoints of the interval and that cos(2) is concave downward on
the interval,
d2
cos(2) = 4 cos(2) 0 for 0 ,
d2 4
while 1 4/ is linear.
Let CR be the quarter circle of radius R from = 0 to = /4. The integral along this contour
vanishes as R .
Z Z /4
z 2 (R e )2
R e d
e dz e
CR 0
Z /4
2
R eR cos(2) d
0
Z /4
2
R eR (14/)
d
0
h 2
i/4
= R 2 eR (14/)
4R 0
R2
= 1e
4R
0 as R
Let C be the boundary of the domain 0 < r < R, 0 < < /4. Since the integrand is analytic
inside C the integral along C is zero. Taking the limit as R , the integral from r = 0 to
along = 0 is equal to the integral from r = 0 to along = /4.
Z Z 1+ 2
2 2 x 1+
ex dx = e dx
0 0 2
Z Z
2 1+ 2
ex dx = ex dx
0 2 0
453
1+
Z Z
2
ex dx = cos(x2 ) sin(x2 ) dx
0 2 0
Z Z Z Z Z
2 1
ex dx = 2
cos(x ) dx + 2
sin(x ) dx + 2
cos(x ) dx 2
sin(x ) dx
0 2 0 0 2 0 0
We equate the imaginary part of this equation to see that the integrals of cos(x2 ) and sin(x2 ) are
equal.
Z Z
cos(x2 ) dx = sin(x2 ) dx
0 0
The real part of the equation then gives us the desired identity.
Z Z Z
1 2
cos(x2 ) dx = sin(x2 ) dx = ex dx
0 0 2 0
Solution 13.38
Consider the box contour C that is the boundary of the rectangle R x R, 0 y . There
is a removable singularity at z = 0 and a first order pole at z = . By the residue theorem,
Z
z z
dz = Res ,
C sinh z sinh z
z(z )
= lim
z sinh z
2z
= lim
z cosh z
= 2
x + x +
= .
sinh(x + ) sinh x
Note that Z
1
dx = 0
sinh x
as there is a first order pole at x = 0 and the integrand is odd.
2
Z
x
dx =
sinh x 2
454
Solution 13.39
First we evaluate
eax
Z
dx.
ex+1
Consider the rectangular contour in the positive direction with corners at R and R + 2. With
the maximum modulus integral bound we see that the integrals on the vertical sides of the contour
vanish as R .
Z
R+2 eaz eaR
dz 2 0 as R
ez +1 eR 1
R
Z
R eaz eaR
dz 2 0 as R
z
R+2 e +1 1 eR
In the limit as R tends to infinity, the integral on the rectangular contour is the sum of the integrals
along the top and bottom sides.
Z ax Z a(x+2)
eaz e e
Z
dz = dx + dx
C ez +1 e x +1
e x+2 +1
Z ax
eaz e
Z
2a
z +1
dz = (1 e ) x +1
dx
C e e
The only singularity of the integrand inside the contour is a first order pole at z = . We use the
residue theorem to evaluate the integral.
eaz
az
e
Z
z
dz = 2 Res z ,
C e +1 e +1
(z ) eaz
= 2 lim
z ez +1
a(z ) eaz + eaz
= 2 lim
z ez
a
= 2 e
We equate the two results for the value of the contour integral.
Z ax
2a e
(1 e ) x +1
dx = 2 ea
e
Z ax
e 2
dx = a
e x +1 e ea
Z ax
e
x +1
dx =
e sin(a)
455
Now we set b = 2a 1.
ebx
Z
dx = = for 1 < b < 1
cosh x sin((b + 1)/2) cos(b/2)
ebx
Z
dx = for 1 < b < 1
cosh x cos(b/2)
Adding these two equations and dividing by 2 yields the desired result.
Z
cosh(bx)
dx = for 1 < b < 1
cosh x cos(b/2)
Solution 13.40
Real-Valued Parameters. For b = 0, the integral has the value: /a2 . If b is nonzero, then we
can write the integral as
1
Z
d
F (a, b) = 2 .
b 0 (a/b + cos )2
If 1 c 1 then the integrand has a double pole on the path of integration. The integral diverges.
Otherwise the integral exists. To evaluate the integral, we extend the range of integration to (0..2)
and make the change of variables, z = e to integrate along the unit circle in the complex plane.
Z 2
1 d
G(c) =
2 0 (c + cos )2
z + z 1 dz
cos = , d = .
2 z
Z
1 dz/(z)
G(c) = 1 )/2)2
2C (c + (z + z
Z
z
= 2 dz
(2cz + z 2 + 1)2
ZC
z
= 2 dz
(z + c + c2 1)2 (z + c c2 1)2
C
If c > 1, then c c2 1 is outside the unit circle and c + c2 1 is inside the unit circle.
The integrand has a second order pole inside the path of integration. We evaluate the integral with
456
the residue theorem.
z p
G(c) = 22 Res 2
, z = c + c 1
(z + c + c2 1)2 (z + c c2 1)2
d z
= 4 lim
zc+ c 1 dz (z + c +
2 c2 1)2
1 2z
= 4 lim
zc+ c2 1 (z + c + c2 1)2 (z + c + c2 1)3
c + c2 1 z
= 4 lim
zc+ c2 1 (z + c + c2 1)3
2c
= 4
(2 c2 1)3
c
=p
(c 1)3
2
If c < 1, then c c2 1 is inside the unit circle and c + c2 1 is outside the unit circle.
z p
G(c) = 22 Res 2
, z = c c 1
(z + c + c2 1)2 (z + c c2 1)2
d z
= 4 lim
zc c 12 dz (z + c c2 1)2
1 2z
= 4 lim
zc c2 1 (z + c c2 1)2 (z + c c2 1)3
c c2 1 z
= 4 lim
zc c2 1 (z + c c2 1)3
2c
= 4
(2 c2 1)3
c
= p
(c 1)3
2
= a
for a/b > 1,
(a2 b2 )3
F (a, b) = a for a/b < 1,
(a2 b2 )3
is divergent for 1 a/b 1.
for complex c. Except for real-valued c between 1 and 1, the integral converges uniformly. We can
457
interchange differentiation and integration. The derivative of G(c) is
Z
d d
G0 (c) =
dc (c + cos )2
Z 0
2
= 3
d
0 (c + cos )
Thus we see that G(c) is analytic in the complex plane with a cut on the real axis from 1 to 1.
The value of the function on the positive real axis for c > 1 is
c
G(c) = p .
(c2 1)3
We use analytic continuation to determine G(c) for complex c. By inspection we see that G(c) is
the branch of
c
,
(c2 1)3/2
with a branch cut on the real axis from 1 to 1 and which is real-valued and positive for real c > 1.
Using F (a, b) = G(c)/b2 we can determine F for complex-valued a and b.
Solution 13.41
First note that
ex
Z Z
cos x
dx = dx
ex + ex ex + ex
x
since sin x/(e + ex ) is an odd function. For the function
ez
f (z) =
ez + ez
we have
ex ex
f (x + ) = x
= e x = e f (x).
ex+ +e e + ex
Thus we consider the integral
ez
Z
dz
C ez + ez
where C is the box contour with corners at R and R + . We can evaluate this integral with
the residue theorem. We can write the integrand as
ez
.
2 cosh z
We see that the integrand has first order poles at z = (n + 1/2). The only pole inside the path of
integration is at z = /2.
ez ez
Z
z
dz = 2 Res , z =
z
C e +e ez + ez 2
(z /2) ez
= 2 lim
z/2 ez + ez
e +(z /2) ez
z
= 2 lim
z/2 ez ez
e/2
= 2
e/2 e/2
= e/2
458
The integrals along the vertical sides of the box vanish as R .
Z
R+ ez ez
dz max
ez + ez z[R...R+] ez + ez
R
1
max R+y
Ry
y[0...] e + e
1
max R R2y
y[0...] e + e
1
=
2 sinh R
0 as R
Z +
ex ez
Z
dx + dz = e/2
ex + ex + ez + ez
Z
ex
(1 + e ) x x
dx = e/2
e + e
Z
ex
x
dx = /2
x
e + e e + e/2
Finally we have,
Z
cos x
dx = /2 .
ex + ex e + e/2
Solution 13.42
1. To evaluate the integral we make the change of variables z = e . The path of integration in
the complex plane is the positively oriented unit circle.
Z Z
d 1 dz
=
1 + sin2 C 1 (z z 1 )2 /4 z
Z
4z
= 4 6z 2 + 1
dz
C z
Z
4z
= dz
C z1 2 z1+ 2 z+1 2 z+1+ 2
There are first order poles at z = 1 2. The poles at z = 1 + 2 and z = 1 2 are
459
inside the path of integration. We evaluate the integral with Cauchys Residue Formula.
Z
4z 4z
4 2
dz = 2 Res , z = 1 + 2
C z 6z + 1 z 4 6z 2 + 1
4z
+ Res ,z = 1 2
z 4 6z 2 + 1
z
= 8
z1 2 z1+ 2 z+1+ 2
z=1+! 2
z
+
z1 2 z+1 2 z+1+ 2
z=1 2
1 1
= 8
8 2 8 2
= 2
460
2. First consider the case a = 0.
(
for n Z+
Z
0
cos(n) d =
2 for n = 0
sin(n)
1 2a cos + a2
is an even function,
en
Z Z
cos(n)
d = d
1 2a cos + a2 1 2a cos + a2
Let C be the positively oriented unit circle about the origin. We parametrize this contour.
z = e , dz = e d, ( . . . )
We write the integrand and the differential d in terms of z. Then we evaluate the integral
with the Residue theorem.
Z
en zn
I
dz
2
d = 2 z
1 2a cos + a C 1 a(z + 1/z) + a
zn
I
= dz
az 2 + (1 + a2 )z a
IC
zn
= 2
dz
a C z (a + 1/a)z + 1
zn
I
= dz
a C (z a)(z 1/a)
zn
= 2 Res ,z = a
a (z a)(z 1/a)
2 an
=
a a 1/a
2an
=
1 a2
We write the value of the integral for |a| < 1 and n Z0+ .
(
Z
cos(n) 2 for a = 0, n = 0
d = 2an
1 2a cos + a2 1a2 otherwise
Solution 13.44
Convergence. We consider the integral
Z
cos(n) sin(n)
I() = d = .
0 cos cos sin
We assume that is real-valued. If is an integer, then the integrand has a second order pole on
the path of integration, the principal value of the integral does not exist. If is real, but not an
integer, then the integrand has a first order pole on the path of integration. The integral diverges,
but its principal value exists.
461
Contour Integration. We will evaluate the integral for real, non-integer .
Z
cos(n)
I() = d
0 cos cos
1 2 cos(n)
Z
= d
2 0 cos cos
Z 2
1 en
= < d
2 0 cos cos
zn
Z
1 dz
I() = <
2 C (z + 1/z)/2 cos z
z n
Z
=< )
dz
C (z e )(z e
zn
= < () Res ,z = e
(z e )(z e )
zn
+ Res ,z = e
(z e )(z e )
zn zn
= < lim + lim
ze z e ze z e
en en
= < +
e e e e
n
n
e e
= <
e e
sin(n)
= <
sin()
Z
cos(n) sin(n)
I() = d = .
0 cos cos sin
Solution 13.45
Consider the integral
1
x2
Z
dx.
0 (1 + x2 ) 1 x2
We make the change of variables x = sin to obtain,
/2
sin2
Z
p cos d
0 (1 + sin2 ) 1 sin2
/2
sin2
Z
d
0 1 + sin2
/2
1 cos(2)
Z
d
0 3 cos(2)
2
1 cos
Z
1
d
4 0 3 cos
462
Now we make the change of variables z = e to obtain a contour integral on the unit circle.
1 (z + 1/z)/2
Z
1
dz
4 C 3 (z + 1/z)/2 z
(z 1)2
Z
dz
4 C z(z 3 + 2 2)(z 3 2 2)
There are two first order poles inside the contour. The value of the integral is
(z 1)2 (z 1)2
2 Res , 0 + Res ,z = 3 2 2
4 z(z 3 + 2 2)(z 3 2 2) z(z 3 + 2 2)(z 3 2 2)
(z 1)2 (z 1)2
lim + lim .
2 z0 (z 3 + 2 2)(z 3 2 2) z32 2 z(z 3 2 2)
Z 1
x2 (2 2)
dx =
2
0 (1 + x ) 1 x
2 4
Infinite Sums
Solution 13.46
From Result 13.10.1 we see that the sum of the residues of cot(z)/z 4 is zero. This function has
simples poles at nonzero integers z = n with residue 1/n4 . There is a fifth order pole at z = 0.
Finding the residue with the formula
1 d4
lim 4 (z cot(z))
4! z0 dz
would be a real pain. After doing the differentiation, we would have to apply LHospitals rule
multiple times. A better way of finding the residue is with the Laurent series expansion of the
function. Note that
1 1
=
sin(z) z (z) /6 + (z)5 /120
3
1 1
=
z 1 (z)2 /6 + (z)4 /120
2 2 !
4 4
2
1 2 2 4 4
= 1+ z z + + z z + + .
z 6 120 6 120
X 1 4
4
=
n=1
n 90
463
Solution 13.47
For this problem we will use the following result: If
then the sum of all the residues of cot(z)f (z) is zero. If in addition, f (z) is analytic at z = n Z
then
X
f (n) = ( sum of the residues of cot(z)f (z) at the poles of f (z) ).
n=
We assume that is not an integer, otherwise the sum is not defined. Consider f (z) = 1/(z 2 2 ).
Since
1
lim z 2
2
= 0,
|z| z
and f (z) is analytic at z = n, n Z, we have
X 1
2 2
= ( sum of the residues of cot(z)f (z) at the poles of f (z) ).
n=
n
464
Part IV
465
Chapter 14
-Tetsuyasu Uekuma
14.1 Notation
A differential equation is an equation involving a function, its derivatives, and independent variables.
If there is only one independent variable, then it is an ordinary differential equation. Identities such
as
d dy dx
f 2 (x) = 2f (x)f 0 (x), and
=1
dx dx dy
are not differential equations.
The order of a differential equation is the order of the highest derivative. The following equations
for y(x) are first, second and third order, respectively.
y 0 = xy 2
y 00 + 3xy 0 + 2y = x2
y 000 = y 00 y
The degree of a differential equation is the highest power of the highest derivative in the equation.
The following equations are first, second and third degree, respectively.
y 0 3y 2 = sin x
(y 00 )2 + 2x cos y = ex
(y 0 )3 + y 5 = 0
A differential equation is homogeneous if it has no terms that are functions of the independent
variable alone. Thus an inhomogeneous equation is one in which there are terms that are functions
of the independent variables alone.
y 00 + xy + y = 0 is a homogeneous equation.
y 0 + y + x2 = 0 is an inhomogeneous equation.
467
16 16
12 12
8 8
4 4
1 2 3 4 1 2 3 4
Figure 14.2: The discrete population of bacteria and a continuous population approximation.
A first order differential equation may be written in terms of differentials. Recall that for the
function y(x) the differential dy is defined dy = y 0 (x) dx. Thus the differential equations
y 0 = x2 y and y 0 + xy 2 = sin(x)
can be denoted:
dy = x2 y dx and dy + xy 2 dx = sin(x) dx.
A solution of a differential equation is a function which when substituted into the equation yields
an identity. For example, y = x ln |x| is a solution of
y
y0 = 1.
x
We verify this by substituting it into the differential equation.
ln |x| + 1 ln |x| = 1
We can also verify that y = c ex is a solution of y 00 y = 0 for any value of the parameter c.
c ex c ex = 0
468
In the discrete problem, the growth of the population is proportional to its number; the popula-
tion doubles every hour. For the continuous problem, we assume that this is true for y(t). We write
this as an equation:
y 0 (t) = y(t).
That is, the rate of change y 0 (t) in the population is proportional to the population y(t), (with
constant of proportionality ). We specify the population at time t = 0 with the initial condition:
y(0) = n0 . Note that y(t) = n0 et satisfies the problem:
Example 14.3.1 The equation y = cx defines family of lines with slope c, passing through the
origin. The equation x2 + y 2 = c2 defines circles of radius c, centered at the origin.
Consider a chicken dropped from a height h. The elevation y of the chicken at time t after its
release is y(t) = h gt2 , where g is the acceleration due to gravity. This is family of functions for
the parameter h.
It turns out that the general solution of any first order differential equation is a one-parameter
family of functions. This is not easy to prove. However, it is easy to verify the converse. We
differentiate Equation 14.1 with respect to x.
Fx + Fy y 0 = 0
(We assume that F has a non-trivial dependence on y, that is Fy 6= 0.) This gives us two equa-
tions involving the independent variable x, the dependent variable y(x) and its derivative and the
parameter c. If we algebraically eliminate c between the two equations, the eliminant will be a first
order differential equation for y(x). Thus we see that every one-parameter family of functions y(x)
satisfies a first order differential equation. This y(x) is the primitive of the differential equation.
Later we will discuss why y(x) is the general solution of the differential equation.
Example 14.3.2 Consider the family of circles of radius c centered about the origin.
x2 + y 2 = c2
469
y = x/y
It is trivial to eliminate the parameter and obtain a differential equation for the family of circles.
x + yy 0 = 0
We can see the geometric meaning in this equation by writing it in the form:
x
y0 = .
y
For a point on the circle, the slope of the tangent y 0 is the negative of the cotangent of the angle
x/y. (See Figure 14.3.)
y 0 = f 0 + cg 0 .
gy 0 g 0 y = gf 0 g 0 f
g0 g0 f
y0 y = f0
g g
Thus we see that y(x) = f (x) + cg(x) satisfies a first order linear differential equation. Later we
will prove the converse: the general solution of a first order linear differential equation has the form:
y(x) = f (x) + cg(x).
We have shown that every one-parameter family of functions satisfies a first order differential
equation. We do not prove it here, but the converse is true as well.
470
Result 14.3.1 Every first order differential equation has a one-parameter
family of solutions y(x) defined by an equation of the form:
F (x, y(x); c) = 0.
This y(x) is called the general solution. If the equation is linear then the
general solution expresses the totality of solutions of the differential equation.
If the equation is nonlinear, there may be other special singular solutions,
which do not depend on a parameter.
This is strictly an existence result. It does not say that the general solution of a first order
differential equation can be determined by some method, it just says that it exists. There is no
method for solving the general first order differential equation. However, there are some special
forms that are soluble. We will devote the rest of this chapter to studying these forms.
In this section we will introduce a few forms of differential equations that we may solve through
integration.
P (x) + Q(y)y 0 = 0
is a separable equation, (because the dependent and independent variables are separated). We can
obtain an implicit solution by integrating with respect to x.
Z Z
dy
P (x) dx + Q(y) dx = c
dx
Z Z
P (x) dx + Q(y) dy = c
Example 14.4.1 Consider the differential equation y 0 = xy 2 . We separate the dependent and
471
independent variables and integrate to find the solution.
dy
= xy 2
dx
y 2 dy = x dx
Z Z
y 2 dy = x dx + c
x2
y 1 = +c
2
1
y= 2
x /2 + c
y0
=1
y y2
y
= ex+c
y1
y
= c ex 1
y1
c ex
y= x
c e 1
1
y=
1 + c ex
du = P dx + Q dy,
then this equation is called exact. The (implicit) solution of the differential equation is
u(x, y) = c,
472
where c is an arbitrary constant. Since the differential of a function, u(x, y), is
u u
du dx + dy,
x y
P and Q are the partial derivatives of u:
u u
P (x, y) = , Q(x, y) = .
x y
du u u dy dy
+ = P (x, y) + Q(x, y) .
dx x y dx dx
Example 14.4.3
dy
x+y =0
dx
is an exact differential equation since
d 1 2 dy
(x + y 2 ) = x + y
dx 2 dx
473
Consider the exact equation,
P + Qy 0 = 0,
with primitive u, where we assume that the functions P and Q are continuously differentiable. Since
the mixed partial derivatives of u are equal,
2u 2u
= ,
xy yx
A sufficient condition for exactness. This necessary condition for exactness is also a sufficient
condition. We demonstrate this by deriving the general solution of (14.2). Assume that P + Qy 0 = 0
is not necessarily exact, but satisfies the condition Py = Qx . If the equation has a primitive,
du u u dy dy
+ = P (x, y) + Q(x, y) ,
dx x y dx dx
then it satisfies
u u
= P, = Q. (14.3)
x y
Integrating the first equation of (14.3), we see that the primitive has the form
Z x
u(x, y) = P (, y) d + f (y),
x0
for some f (y). Now we substitute this form into the second equation of (14.3).
u
= Q(x, y)
y
Z x
Py (, y) d + f 0 (y) = Q(x, y)
x0
is a primitive of the derivative; the equation is exact. The solution of the differential equation is
Z x Z y
P (, y) d + Q(x0 , ) d = c.
x0 y0
Even though there are three arbitrary constants: x0 , y0 and c, the solution is a one-parameter family.
This is because changing x0 or y0 only changes the left side by an additive constant.
474
Result 14.4.2 Any first order differential equation of the first degree can be
written in the form
dy
P (x, y) + Q(x, y) = 0.
dx
This equation is exact if and only if
Py = Qx .
Exercise 14.1
Solve the following differential equations by inspection. That is, group terms into exact derivatives
and then integrate. f (x) and g(x) are known functions.
y 0 (x)
1. y(x) = f (x)
F (, ) = n F (x, y).
475
Setting = 1, (and hence = x, = y), proves Eulers theorem.
dy
P (x, y) + Q(x, y) = 0, (14.4)
dx
is called a homogeneous coefficient equation. They are often referred to simply as homogeneous
equations.
du
P (1, u) + Q(1, u) u + x =0
dx
du
P (1, u) + uQ(1, u) + xQ(1, u) =0
dx
1 Q(1, u) du
+ =0
x P (1, u) + uQ(1, u) dx
Z
1
ln |x| + du = c
u + P (1, u)/Q(1, u)
1
(x, y) =
xP (x, y) + yQ(x, y)
is an integrating factor for the Equation 14.4. The proof of this is left as an exercise for the reader.
(See Exercise 14.2.)
476
Result 14.4.4 Homogeneous Coefficient Differential Equations. If
P (x, y) and Q(x, y) are homogeneous functions of degree n, then the equa-
tion
dy
P (x, y) + Q(x, y) =0
dx
y(x)
is made separable by the change of independent variable u(x) = x
. The
solution is determined by
Z
1 c
du = ln .
u + P (1, u)/Q(1, u) x
Alternatively, the homogeneous equation can be made exact with the integrat-
ing factor
1
(x, y) = .
xP (x, y) + yQ(x, y)
dy
x2 y 2 + xy = 0.
dx
Exercise 14.2
Show that
1
(x, y) =
xP (x, y) + yQ(x, y)
is an integrating factor for the homogeneous equation,
dy
P (x, y) + Q(x, y) = 0.
dx
dy y y 2
=2 + .
dt t t
477
14.5 The First Order, Linear Differential Equation
14.5.1 Homogeneous Equations
The first order, linear, homogeneous equation has the form
dy
+ p(x)y = 0.
dx
Note that if we can find one solution, then any constant times that solution also satisfies the equation.
If fact, all the solutions of this equation differ only by multiplicative constants. We can solve any
equation of this type because it is separable.
y0
= p(x)
y
Z
ln |y| = p(x) dx + c
R
y = e p(x) dx+c
R
y = c e p(x) dx
478
To solve Equation 14.6, we multiply by an integrating factor. Multiplying a differential equation by
its integrating factor changes it to an exact equation. Multiplying Equation 14.6 by the function,
I(x), yields,
dy
I(x) + p(x)I(x)y = f (x)I(x).
dx
In order that I(x) be an integrating factor, it must satisfy
d
I(x) = p(x)I(x).
dx
This is a first order, linear, homogeneous equation with the solution
R
p(x) dx
I(x) = c e .
This is an integrating factor for any constant c. For simplicity we will choose c = 1.
R
To solve Equation 14.6 we multiply by the integrating factor and integrate. Let P (x) = p(x) dx.
dy
eP (x) + p(x) eP (x) y = eP (x) f (x)
dx
d P (x)
e y = eP (x) f (x)
dx Z
y = eP (x) eP (x) f (x) dx + c eP (x)
y yp + c y h
Note that the general solution is the sum of a particular solution, yp , that satisfies y 0 + p(x)y = f (x),
and an arbitrary constant times a homogeneous solution, yh , that satisfies y 0 + p(x)y = 0.
479
10
-1 1
-5
-10
d
yp + p(x)yp = f (x)
dx
d
(uyh ) + p(x)uyh = f (x)
dx
u0 yh + u(yh0 + p(x)yh ) = f (x)
f (x)
u0 =
yh
Z
f (x)
u= dx
yh (x)
For the moment, we will assume that this problem is well-posed. A problem is well-posed R if there is a
unique solution to the differential equation that satisfies the constraint(s). Recall that eP (x) f (x) dx
480
Rx
denotes any integral of eP (x) f (x). For convenience, we choose x0 eP () f () d. The initial condition
requires that Z x0
y(x0 ) = y0 = eP (x0 ) eP () f () d + c eP (x0 ) = c eP (x0 ) .
x0
is continuous since the integral of a piecewise continuous function is continuous. The first derivative
of the solution can be found directly from the differential equation.
y 0 = p(x)y + f (x)
481
8
6
4
2
-1 1 2
With the condition y2 (1) = y1 (1) on the second equation, we demand that the solution be continuous.
The solution to the first equation is y = ex . The solution for the second equation is
Z x
y = ex e d + e1 ex1 = 1 + ex1 + ex .
1
y 0 + sign(x)y = 0, y(1) = 1.
Recall that
1
for x < 0
sign x = 0 for x = 0
1 for x > 0.
482
2
-3 -2 -1 1 2 3
dy
1. + xy = x2n+1 , y(1) = 1, nZ
dx
dy
2. 2xy = 1, y(0) = 1
dx
dy
+ y(x) = ex
dx
satisfies limx+ y(x) = 0. (The case = requires special treatment.) Find the solution for
= = 1 which satisfies y(0) = 1. Sketch this solution for 0 x < for several values of . In
particular, show what happens when 0 and .
483
1
-1 1
-1
1
y0 y = 0, y(0) = 1.
x
The general solution is y = cx. Applying the initial condition demands that 1 = c 0, which cannot
be satisfied. The general solution for various values of c is plotted in Figure 14.7.
1 1
y0 y= , y(0) = 1.
x x
The general solution is
y = 1 + cx.
The initial condition is satisfied for any value of c so there are an infinite number of solutions.
1
y0 + y = 0, y(0) = 1.
x
The general solution is y = xc . Depending on whether c is nonzero, the solution is either singular or
zero at the origin and cannot satisfy the initial condition.
The above problems in which there were either no solutions or an infinite number of solutions
are said to be ill-posed. If there is a unique solution that satisfies the initial condition, the problem
is said to be well-posed. We should have suspected that we would run into trouble in the above
examples as the initial condition was given at a singularity of the coefficient function, p(x) = 1/x.
Consider the problem,
y 0 + p(x)y = f (x), y(x0 ) = y0 .
We assume that f (x) bounded in a neighborhood of x = x0 . The differential equation has the
general solution, Z
P (x)
y=e eP (x) f (x) dx + c eP (x) .
484
If the homogeneous solution, eP (x) , is nonzero and finite at x = x0 , then there is a unique value of
c for which the initial condition is satisfied. If the homogeneous solution vanishes at x = x0 then
either the initial condition cannot be satisfied or the initial condition is satisfied for all values of c.
The homogeneous solution can vanish or be infinite only if P (x) as x x0 . This can occur
only if the coefficient function, p(x), is unbounded at that point.
Result 14.7.1 If the initial condition is given where the homogeneous solution
to a first order, linear differential equation is zero or infinite then the problem
may be ill-posed. This may occur only if the coefficient function, p(x), is
unbounded at that point.
dw
+ p(z)w = 0,
dz
where p(z), a function of a complex variable, is analytic in some domain D. The integrating factor,
Z
I(z) = exp p(z) dz ,
is an analytic function in that domain. As with the case of real variables, multiplying by the
integrating factor and integrating yields the solution,
Z
w(z) = c exp p(z) dz .
dw
+ |z|w = 0.
dz
For the solution to exist, w and hence w0 (z) must be analytic. Since p(z) = |z| is not analytic
anywhere in the complex plane, the equation has no solution.
Any point at which p(z) is analytic is called an ordinary point of the differential equation. Since
the solution is analytic we can expand it in a Taylor series about an ordinary point. The radius
of convergence of the series will be at least the distance to the nearest singularity of p(z) in the
complex plane.
dw 1
w = 0.
dz 1z
c
The general solution is w = 1z . Expanding this solution about the origin,
c X
w= =c zn.
1z n=0
485
The radius of convergence of the series is,
an
R = lim
= 1,
n an+1
1
which is the distance from the origin to the nearest singularity of p(z) = 1z .
We do not need to solve the differential equation to find the Taylor series expansion of the
homogeneous solution. We could substitute a general Taylor series expansion into the differential
equation and solve for the coefficients. Since we can always solve first order equations, this method
is of limited usefulness. However, when we consider higher order equations in which we cannot solve
the equations exactly, this will become an important method.
dw 1
w = 0.
dz 1z
Since
P we know that the solution has a Taylor series expansion about z = 0, we substitute w =
n
a
n=0 n z into the differential equation.
d X X
(1 z) an z n an z n = 0
dz n=0 n=0
X
X
X
nan z n1 nan z n an z n = 0
n=1 n=1 n=0
X
X
X
(n + 1)an+1 z n nan z n an z n = 0
n=0 n=0 n=0
X
((n + 1)an+1 (n + 1)an ) z n = 0.
n=0
Now we equate powers of z to zero. For z n , the equation is (n+1)an+1 (n+1)an = 0, or an+1 = an .
Thus we have that an = a0 for all n 1. The solution is then
X
w = a0 zn,
n=0
486
Exercise 14.7
Find the Taylor series expansion about the origin of the solution to
dw 1
+ w=0
dz 1z
P
with the substitution w = n=0 an z n . What is the radius of convergence of the series? What is the
1
distance to the nearest singularity of 1z ?
dw
+ w = 0, 6= 0.
dz z
This equation has a regular singular point at z = 0. The solution is w = cz . Depending on the
value of , the solution can have three different kinds of behavior.
is a positive integer The solution has a pole at the origin. w is analytic in the annulus, 0 < |z|.
is not an integer. w has a branch point at z = 0. The solution is analytic in the cut annulus
0 < |z| < , 0 < arg z < 0 + 2.
The exponential factor has a removable singularity at z = 0 and is analytic in |z| < r. We
consider the following cases for the z b0 factor:
b0 is a negative integer. Since z b0 is analytic at the origin, the solution to the differential
equation is analytic in the circle |z| < r.
b0 is a positive integer. The solution has a pole of order b0 at the origin and is analytic in the
annulus 0 < |z| < r.
b0 is not an integer. The solution has a branch point at the origin and thus is not single-valued.
The solution is analytic in the cut annulus 0 < |z| < r, 0 < arg z < 0 + 2.
487
Since the exponential factor has a convergent Taylor series in |z| < r, the solution can be
expanded in a series of the form
X
w = z b0 an z n , where a0 6= 0 and b0 = lim z p(z).
z0
n=0
Series of this form are known as Frobenius series. Since we can write the solution as
Z
b0
w = c(z z0 )b0 exp p(z) dz ,
z z0
we see that the Frobenius expansion of the solution will have a radius of convergence at least the
distance to the nearest singularity of p(z).
The radius of convergence of the expansion will be at least the distance to the
nearest singularity of p(z).
Example 14.8.5 We will find the first two nonzero terms in the series solution about z = 0 of the
differential equation,
dw 1
+ w = 0.
dz sin z
First we note that the coefficient function has a simple pole at z = 0 and
z 1
lim = lim = 1.
z0 sin z z0 cos z
Thus we look for a series solution of the form
X
w = z 1 an z n , a0 6= 0.
n=0
The nearest singularities of 1/ sin z in the complex plane are at z = . Thus the radius of
convergence of the series will be at least .
Substituting the first three terms of the expansion into the differential equation,
d 1
(a0 z 1 + a1 + a2 z) + (a0 z 1 + a1 + a2 z) = O(z).
dz sin z
488
4
2 4 6
-2
-4
Figure 14.8: Plot of the exact solution and the two term approximation.
z3
z + O(z 5 ) (a0 z 2 + a2 ) + (a0 z 1 + a1 + a2 z) = O(z 2 )
6
a0
a0 z 1 + a2 + z + a0 z 1 + a1 + a2 z = O(z 2 )
6 a0
a1 + 2a2 + z = O(z 2 )
6
a0 is arbitrary. Equating powers of z,
z0 : a1 = 0.
a0
z1 : 2a2 + = 0.
6
Thus the solution has the expansion,
z
w = a0 z 1 + O(z 2 ).
12
In Figure 14.8 the exact solution is plotted in a solid line and the two term approximation is plotted
in a dashed line. The two term approximation is very good near the point x = 0.
Example 14.8.6 Find the first two nonzero terms in the series expansion about z = 0 of the
solution to
cos z
w0 i w = 0.
z
Since cosz z has a simple pole at z = 0 and limz0 i cos z = i we see that the Frobenius series will
have the form
X
w = zi an z n , a0 6= 0.
n=0
P (1)n z 2n
Recall that cos z has the Taylor expansion n=0 (2n)! . Substituting the Frobenius expansion
into the differential equation yields
! ! !
i1
X
n i
X
n1
X (1)n z 2n i
X
n
z iz an z + z nan z i z an z =0
n=0 n=0 n=0
(2n)! n=0
! !
X
n
X (1)n z 2n X
n
(n + i)an z i an z = 0.
n=0 n=0
(2n)! n=0
489
Equating powers of z,
w0 + zw = 0
w0 z 2 w = 0
w0 + exp(1/z)w = 0
dw
+ z w = 0, 6= 0, 6= 1, 0, 1, 2, . . .
dz
This equation has an irregular singular point at the origin. Solving this equation,
Z
d
exp z dz w = 0
dz
n
(1)n
+1 X
w = c exp z =c z (+1)n .
+1 n=0
n! + 1
If is not an integer, then the solution has a branch point at the origin. If is an integer, < 1,
then the solution has an P essential singularity at the origin. The solution cannot be expanded in a
Frobenius series, w = z n=0 an z n .
Although we will not show it, this result holds for any irregular singular point of the differential
equation. We cannot approximate the solution near an irregular singular point using a Frobenius
expansion.
Now would be a good time to summarize what we have discovered about solutions of first order
differential equations in the complex plane.
490
Result 14.8.3 Consider the first order differential equation
dw
+ p(z)w = 0.
dz
Ordinary Points If p(z) is analytic at z = z0 then z0 is an ordinary point
of the differential
P equation.n The solution can be expanded in the Taylor
series w = n=0 an (z z0 ) . The radius of convergence of the series is at
least the distance to the nearest singularity of p(z) in the complex plane.
Regular Singular Points If p(z) has a simple pole at z = z0 and is analytic
in some annulus 0 < |z z0 | < r then z0 is a regular singular point
of the differential equation. The solution at z0 will either be analytic,
have a pole, or have a branch point.PThe solution can be expanded in
the Frobenius series w = (z z0 ) n
n=0 an (z z0 ) where a0 6= 0 and
= limzz0 (z z0 )p(z). The radius of convergence of the Frobenius series
will be at least the distance to the nearest singularity of p(z).
Irregular Singular Points If the point z = z0 is not an ordinary point
or a regular singular point, then it is an irregular singular point of the
differential equation. The solution cannot be expanded in a Frobenius
series about that point.
Example 14.8.8 Lets examine the behavior of sin z at infinity. We make the substitution z = 1/
and find the Laurent expansion about = 0.
X (1)n
sin(1/) =
n=0
(2n + 1)! (2n+1)
Since sin(1/) has an essential singularity at = 0, sin z has an essential singularity at infinity.
We use the same approach if we want to examine the behavior at infinity of a differential equation.
Starting with the first order differential equation,
dw
+ p(z)w = 0,
dz
we make the substitution
1 d d
z= , = 2 , w(z) = u()
dz d
to obtain
du
2 + p(1/)u = 0
d
du p(1/)
u = 0.
d 2
491
Result 14.8.4 The behavior at infinity of
dw
+ p(z)w = 0
dz
is the same as the behavior at = 0 of
du p(1/)
u = 0.
d 2
du 1 1
2 u=0
d (1/)2 + 9
du 1
2 u=0
d 9 + 1
Since the equation for u has a ordinary point at = 0, z = is a ordinary point of the equation
for w.
492
14.9 Additional Exercises
Exact Equations
Exercise 14.8 (mathematica/ode/first order/exact.nb)
Find the general solution y = y(x) of the equations
dy x2 + xy + y 2
1. = ,
dx x2
2. (4y 3x) dx + (y 2x) dy = 0.
Exercise 14.11
Are the following equations exact? If so, solve them.
dy
y 2 sin t + yf (t) =0 (14.7)
dt
is exact. Solve the differential equation for these f (t).
Initial Conditions
Well-Posed Problems
Exercise 14.14
Find the solutions of
dy
+ Ay = 1 + t2 , t > 0
t
dt
which are bounded at t = 0. Consider all (real) values of A.
493
Equations in the Complex Plane
Exercise 14.15
Classify the singular points of the following first order differential equations, (include the point at
infinity).
1. w0 + sin z
z w =0
2. w0 + 1
z3 w =0
3. w0 + z 1/2 w = 0
Exercise 14.16
Consider the equation
w0 + z 2 w = 0.
The point z = 0 is an irregular singular point of the differential equation. Thus we know that we
cannot expand the solution about z = 0 in a Frobenius series. Try substituting the series solution
X
w = z an z n , a0 6= 0
n=0
494
14.10 Hints
Hint 14.1
d 1
1. dx ln |u| = u
2. d c
dx u = uc1 u0
Hint 14.2
Hint 14.3
The equation is homogeneous. Make the change of variables u = y/t.
Hint 14.4
Make sure you consider the case = 0.
Hint 14.5
Hint 14.6
Hint 14.7
1
The radius of convergence of the series and the distance to the nearest singularity of 1z are not
the same.
Exact Equations
Hint 14.8
1.
2.
Hint 14.9
1. The equation is exact. Determine the primitive u by solving the equations ux = P , uy = Q.
Hint 14.10
1. This equation is separable. Integrate to get the general solution. Apply the initial condition
to determine the constant of integration.
2. Ditto. You will have to numerically solve an equation to determine where the solution is
defined.
Hint 14.11
Hint 14.12
495
Initial Conditions
Well-Posed Problems
Hint 14.14
Hint 14.16
Try to find the value of by substituting the series into the differential equation and equating powers
of z.
496
14.11 Solutions
Solution 14.1
1.
y 0 (x)
= f (x)
y(x)
d
ln |y(x)| = f (x)
dx Z
ln |y(x)| = f (x) dx + c
R
f (x) dx+c
y(x) = e
R
f (x) dx
y(x) = c e
2.
3.
y0 tan x
+y = cos x
cos x cos x
d y
= cos x
dx cos x
y
= sin x + c
cos x
y(x) = sin x cos x + c cos x
Solution 14.2
We consider the homogeneous equation,
dy
P (x, y) + Q(x, y) = 0.
dx
That is, both P and Q are homogeneous of degree n. We hypothesize that multiplying by
1
(x, y) =
xP (x, y) + yQ(x, y)
will make the equation exact. To prove this we use the result that
dy
M (x, y) + N (x, y) =0
dx
is exact if and only if My = Nx .
P
My =
y xP + yQ
Py (xP + yQ) P (xPy + Q + yQy )
=
(xP + yQ)2
497
Q
Nx =
x xP + yQ
Qx (xP + yQ) Q(P + xPx + yQx )
=
(xP + yQ)2
My = N x
Py (xP + yQ) P (xPy + Q + yQy ) = Qx (xP + yQ) Q(P + xPx + yQx )
yPy Q yP Qy = xP Qx xPx Q
xPx Q + yPy Q = xP Qx + yP Qy
(xPx + yPy )Q = P (xQx + yQy )
nP Q = P nQ
Thus the equation is exact. (x, y) is an integrating factor for the homogeneous equation.
Solution 14.3
We note that this is a homogeneous differential equation. The coefficient of dy/dt and the inhomo-
geneity are homogeneous of degree zero.
dy y y 2
=2 + .
dt t t
We make the change of variables u = y/t to obtain a separable equation.
tu0 + u = 2u + u2
u0 1
=
u2 + u t
Now we integrate to solve for u.
u0 1
=
u(u + 1) t
u0 u0 1
=
u u+1 t
ln |u| ln |u + 1| = ln |t| + c
u
ln = ln |ct|
u + 1
u
= ct
u+1
u
= ct
u+1
ct
u=
1 ct
t
u=
ct
t2
y=
ct
Solution 14.4
We consider
1
y0 y = x , x > 0.
x
498
First we find the integrating factor.
Z
1 1
I(x) = exp dx = exp ( ln x) = .
x x
We multiply by the integrating factor and integrate.
1 0 1
y y = x1
x x2
d 1
y = x1
dx x
Z
1
y = x1 dx + c
x
Z
y = x x1 dx + cx
(
x+1
+ cx for 6= 0,
y=
x ln x + cx for = 0.
Solution 14.5
1.
y 0 + xy = x2n+1 , y(1) = 1, nZ
We find the integrating factor.
2
R
x dx
I(x) = e = ex /2
We multiply by the integrating factor and integrate. Since the initial condition is given at
x = 1, we will take the lower bound of integration to be that point.
d x2 /2 2
e y = x2n+1 ex /2
dx Z
x
x2 /2 2 2
y=e 2n+1 e /2 d + c ex /2
1
If n 0 then we can use integration by parts to write the integral as a sum of terms. If n < 0
we can write the integral in terms of the exponential integral function. However, the integral
form above is as nice as any other and we leave the answer in that form.
2.
dy
2xy(x) = 1, y(0) = 1.
dx
We determine the integrating factor and then integrate the equation.
2
R
I(x) = e 2x dx = ex
d x2 2
e y = ex
dx Z
x
2 2 2
y = ex e d + c ex
0
499
We can write the answer in terms of the Error function,
Z x
2 2
erf(x) e d.
0
2
y = ex 1 + erf(x)
2
Solution 14.6
We determine the integrating factor and then integrate the equation.
R
I(x) = e dx = ex
d x
(e y) = e()x
dx Z
y = ex e()x dx + c ex
500
1
4 8 12 16
1 2 3 4 1 2 3 4
This behavior is shown in Figure 14.10. The first graph plots the solutions for = 1/128, 1/64, . . . , 1.
The second graph plots the solutions for = 1, 2, . . . , 128.
Solution 14.7 P dw 1
We substitute w = n=0 an z n into the equation dz + 1z w = 0.
d X 1 X
an z n + an z n = 0
dz n=0 1 z n=0
X
X
(1 z) nan z n1 + an z n = 0
n=1 n=0
X
X
X
(n + 1)an+1 z n nan z n + an z n = 0
n=0 n=0 n=0
X
((n + 1)an+1 (n 1)an ) z n = 0
n=0
501
1
The radius of convergence of the series in infinite. The nearest singularity of 1z is at z = 1. Thus
we see the radius of convergence can be greater than the distance to the nearest singularity of the
coefficient function, p(z).
Exact Equations
Solution 14.8
1.
dy x2 + xy + y 2
=
dx x2
Since the right side is a homogeneous function of order zero, this is a homogeneous differential
equation. We make the change of variables u = y/x and then solve the differential equation
for u.
xu0 + u = 1 + u + u2
du dx
=
1 + u2 x
arctan(u) = ln |x| + c
u = tan(ln(|cx|))
y = x tan(ln(|cx|))
2.
(4y 3x) dx + (y 2x) dy = 0
Since the coefficients are homogeneous functions of order one, this is a homogeneous differential
equation. We make the change of variables u = y/x and then solve the differential equation
for u.
y y
4 3 dx + 2 dy = 0
x x
(4u 3) dx + (u 2)(u dx + x du) = 0
(u2 + 2u 3) dx + x(u 2) du = 0
dx u2
+ du = 0
x (u + 3)(u 1)
dx 5/4 1/4
+ du = 0
x u+3 u1
5 1
ln(x) + ln(u + 3) ln(u 1) = c
4 4
x4 (u + 3)5
=c
u1
x4 (y/x + 3)5
=c
y/x 1
(y + 3x)5
=c
yx
Solution 14.9
1.
(3x2 2xy + 2) dx + (6y 2 x2 + 3) dy = 0
We check if this form of the equation, P dx + Q dy = 0, is exact.
Py = 2x, Qx = 2x
502
Since Py = Qx , the equation is exact. Now we find the primitive u(x, y) which satisfies
ux = P, uy = Q. (14.8)
ux = 3x2 2xy + 2
u = x3 x2 y + 2x + f (y)
We substitute this into the second equation of 14.8 to determine the function of integration
up to an additive constant.
x2 + f 0 (y) = 6y 2 x2 + 3
f 0 (y) = 6y 2 + 3
f (y) = 2y 3 + 3y
x3 x2 y + 2x + 2y 3 + 3y = c
2.
dy ax + by
=
dx bx + cy
(ax + by) dx + (bx + cy) dy = 0
Py = b, Qx = b
Since Py = Qx , the equation is exact. Now we find the primitive u(x, y) which satisfies
ux = P, uy = Q. (14.9)
ux = ax + by
1 2
u = ax + bxy + f (y)
2
We substitute this into the second equation of 14.9 to determine the function of integration
up to an additive constant.
bx + f 0 (y) = bx + cy
f 0 (y) = cy
1
f (y) = cy 2
2
The solution of the differential equation is determined by the implicit equation u = d.
ax2 + 2bxy + cy 2 = d
503
Solution 14.10
Note that since these equations are nonlinear, we cannot predict where the solutions will be defined
from the equation alone.
1. This equation is separable. We integrate to get the general solution.
dy
= (1 2x)y 2
dx
dy
= (1 2x) dx
y2
1
= x x2 + c
y
1
y= 2
x xc
Now we apply the initial condition.
1 1
y(0) = =
c 6
1
y= 2
x x6
1
y=
(x + 2)(x 3)
x dx + y ex dy = 0
x ex dx + y dy = 0
1
(x 1) ex + y 2 = c
p 2
y = 2(c + (1 x) ex )
The function 2(1 x) ex 1 is plotted in Figure 14.11. We see that the argument of the
square root in the solution is non-negative only on an interval about the origin. Because 2(1
x) ex 1 == 0 is a mixed algebraic / transcendental equation, we cannot solve it analytically.
The solution of the differential equation is defined on the interval (1.67835 . . . 0.768039).
Solution 14.11
1. We consider the differential equation,
1 y 9x2 = 1
Py =
y
Qx = (4y x) = 1
x
504
1
-5 -4 -3 -2 -1 1
-1
-2
-3
This equation is exact. It is simplest to solve the equation by rearranging terms to form exact
derivatives.
4yy 0 xy 0 y + 1 9x2 = 0
d 2
2y xy + 1 9x2 = 0
dx
2y 2 xy + x 3x3 + c = 0
1 p
y= x x2 8(c + x 3x3 )
4
Py = (2x + 4y) = 4
y
Qx = (2x 2y) = 2
x
Since Py 6= Qx , this is not an exact equation.
Solution 14.12
Recall that the differential equation
2y sin t = yf 0 (t)
f 0 (t) = 2 sin t
f (t) = 2(a cos t).
505
The First Order, Linear Differential Equation
Solution 14.13
Consider the differential equation
y
y0 + = 0.
sin x
We use Equation 14.5 to determine the solution.
R
1/ sin x dx
y = ce
y = c e ln | tan(x/2)|
x
y = c cot
2
x
y = c cot
2
Initial Conditions
Well-Posed Problems
Solution 14.14
First we write the differential equation in the standard form.
dy A 1
+ y = + t, t>0
dt t t
We determine the integrating factor.
R
A/t dt
I(t) = e = eA ln t = tA
For positive A, the solution is bounded at the origin only for c = 0. For A = 0, there are no bounded
solutions. For negative A, the solution is bounded there for any value of c and thus we have a
one-parameter family of solutions.
In summary, the solutions which are bounded at the origin are:
t2
1
A + A+2 ,
A>0
t2
y = A1 + A+2 + ctA , A < 0, A 6= 2
1
2 + t2 ln t + ct2 , A = 2
506
1. Consider the equation w0 + sinz z w = 0. The point z = 0 is the only point we need to examine
in the finite plane. Since sinz z has a removable singularity at z = 0, there are no singular points
in the finite plane. The substitution z = 1 yields the equation
sin(1/)
u0 u = 0.
Since sin(1/)
has an essential singularity at = 0, the point at infinity is an irregular singular
point of the original differential equation.
2. Consider the equation w0 + z31 1
w = 0. Since z3 has a simple pole at z = 3, the differential
equation has a regular singular point there. Making the substitution z = 1/, w(z) = u()
1
u0 u=0
2 (1/ 3)
1
u0 u = 0.
(1 3)
Since this equation has a simple pole at = 0, the original equation has a regular singular
point at infinity.
3. Consider the equation w0 + z 1/2 w = 0. There is an irregular singular point at z = 0. With the
substitution z = 1/, w(z) = u(),
1/2
u0 u=0
2
u0 5/2 u = 0.
We see that the point at infinity is also an irregular singular point of the original differential
equation.
Solution 14.16
We start with the equation
w0 + z 2 w = 0.
P
Substituting w = z n=0 an z n , a0 6= 0 yields
!
d X X
z
an z n
+ z 2 z an z n = 0
dz n=0 n=0
X
X
X
z 1 an z n + z nan z n1 + z an z n2 = 0
n=0 n=1 n=0
The lowest power of z in the expansion is z 2 . The coefficient of this term is a0 . Equating powers
of z demands that a0 = 0 which contradicts our initial assumption that it was nonzero. Thus we
cannot find a such that the solution can be expanded in the form,
X
w = z an z n , a0 6= 0.
n=0
507
14.12 Quiz
Problem 14.1
What is the general solution of a first order differential equation?
Problem 14.2
Write a statement about the functions P and Q to make the following statement correct.
The first order differential equation
dy
P (x, y) + Q(x, y) =0
dx
is exact if and only if . It is separable if .
Problem 14.3
Derive the general solution of
dy
+ p(x)y = f (x).
dx
Problem 14.4
Solve y 0 = y y 2 .
508
14.13 Quiz Solutions
Solution 14.1
The general solution of a first order differential equation is a one-parameter family of functions which
satisfies the equation.
Solution 14.2
The first order differential equation
dy
P (x, y) + Q(x, y) =0
dx
is exact if and only if Py = Qx . It is separable if P = P (x) and Q = Q(y).
Solution 14.3
dy
+ p(x)y = f (x)
dx
R
We multiply by the integrating factor (x) = exp(P (x)) = exp p(x) dx , and integrate.
dy P (x)
e +p(x)y eP (x) = eP (x) f (x)
dx
d P (x)
ye = eP (x) f (x)
dx Z
y eP (x) = eP (x) f (x) dx + c
Z
y = eP (x) eP (x) f (x) dx + c eP (x)
Solution 14.4
y 0 = y y 2 is separable.
y0 = y y2
y0
=1
y y2
y0 y0
=1
y y1
ln y ln(y 1) = x + c
We do algebraic simplifications and rename the constant of integration to write the solution in a
nice form.
y
= c ex
y1
y = (y 1)c ex
c ex
y=
1 c ex
ex
y= x
e c
1
y=
1 c ex
509
510
Chapter 15
- Niels Bohr
15.1 Introduction
In this chapter we consider first order linear systems of differential equations. That is, we consider
equations of the form,
Initially we will consider the homogeneous problem, x0 (t) = Ax(t). (Later we will find particular
solutions with variation of parameters.) The best way to solve these equations is through the use
of the matrix exponential. Unfortunately, using the matrix exponential requires knowledge of the
Jordan canonical form and matrix functions. Fortunately, we can solve a certain class of problems
using only the concepts of eigenvalues and eigenvectors of a matrix. We present this simple method
in the next section. In the following section we will take a detour into matrix theory to cover Jordan
canonical form and its applications. Then we will be able to solve the general case.
Recall that the single differential equation x0 (t) = Ax has the general solution x = c eAt . Maybe
the system of differential equations
x0 (t) = Ax(t) (15.1)
511
has similiar solutions. Perhaps it has a solution of the form x(t) = et for some constant vector
and some value . Lets substitute this into the differential equation and see what happens.
x0 (t) = Ax(t)
et = A et
A =
We see that if is an eigenvalue of A with eigenvector then x(t) = et satisfies the differential
equation. Since the differential equation is linear, c et is a solution.
Suppose that the n n matrix A has the eigenvalues {k } with a complete set of linearly
independent eigenvectors { k }. Then each of k ek t is a homogeneous solution of Equation 15.1.
We note that each of these solutions is linearly independent. Without any kind of justification I
will tell you that the general solution of the differential equation is a linear combination of these n
linearly independent solutions.
512
10
7.5
2.5
-2.5
-5
-7.5
-10
513
Exercise 15.2 (mathematica/ode/systems/systems.nb)
Find the solution of the following initial value problem. Describe the behavior of the solution as
t .
3 0 2 1
x0 = Ax 1 1 0 x, x(0) = x0 0
2 1 0 0
Exercise 15.3
Use the matrix form of the method of variation of parameters to find the general solution of
3
dx 4 2 t
= x+ 2 , t > 0.
dt 8 4 t
We can define the function to take square matrices as arguments. The function of the square matrix
A is defined in terms of the Taylor series.
X f (n) (0) n
f (A) = A
n=0
n!
(Note that this definition is usually not the most convenient method for computing a function of a
matrix. Use the Jordan canonical form for that.)
514
Generalized Eigenvectors. A vector xk is a generalized eigenvector of rank k if
(A I)k xk = 0 but (A I)k1 xk 6= 0.
Eigenvectors are generalized eigenvectors of rank 1. An n n matrix has n linearly independent
generalized eigenvectors. A chain of generalized eigenvectors generated by the rank m generalized
eigenvector xm is the set: {x1 , x2 , . . . , xm }, where
xk = (A I)xk+1 , for k = m 1, . . . , 1.
One can compute the generalized eigenvectors of a matrix by looping through the following three
steps until all the the Nk are zero:
1. Select the largest k for which Nk is positive. Find a generalized eigenvector xk of rank k which
is linearly independent of all the generalized eigenvectors found thus far.
2. From xk generate the chain of eigenvectors {x1 , x2 , . . . , xk }. Add this chain to the known
generalized eigenvectors.
3. Decrement each positive Nk by one.
The rank of nullspace((A 2I)2 ) is less than 3 as well, so we have to take one more step.
0 0 0
(A 2I)3 = 0 0 0
0 0 0
515
The rank of nullspace((A 2I)3 ) is 3. Thus there are generalized eigenvectors of ranks 1, 2 and 3.
The generalized eigenvector of rank 3 satisfies:
(A 2I)3 x3 = 0
0 0 0
0 0 0 x3 = 0
0 0 0
We choose the solution
1
x3 = 0 .
0
Now to compute the chain generated by x3 .
1
x2 = (A 2I)x3 = 2
3
0
x1 = (A 2I)x2 = 1
1
Thus a set of generalized eigenvectors corresponding to the eigenvalue = 2 are
0 1 1
x1 = 1 , x2 = 2 , x3 = 0 .
1 3 0
Jordan Block. A Jordan block is a square matrix which has the constant, , on the diagonal and
ones on the first super-diagonal:
1 0 0 0
0 1 0 0
0 0 . . . 0 0
.. .. . . .. .. ..
. .
. . . .
0 0 0 . . . 1
0 0 0 0
Jordan Canonical Form. A matrix J is in Jordan canonical form if all the elements are zero
except for Jordan blocks Jk along the diagonal.
J1 0 0 0
..
0 J2
. 0 0
J = ... . . . .. .. ..
. . .
..
0 0 . Jn1 0
0 0 0 Jn
The Jordan canonical form of a matrix is obtained with the similarity transformation:
J = S1 AS,
where S is the matrix of the generalized eigenvectors of A and the generalized eigenvectors are
grouped in chains.
516
Example 15.3.2 Again consider the matrix
1 1 1
A= 2 1 1 .
3 2 4
In Example 15.3.1 we found the generalized eigenvectors of A. We define the matrix with generalized
eigenvectors as columns:
0 1 1
S = 1 2 0 .
1 3 0
We can verify that J = S1 AS.
J = S1 AS
0 3 2 1 1 1 0 1 1
= 0 1 1 2 1 1 1 2 0
1 1 1 3 2 4 1 3 0
2 1 0
= 0 2 1
0 0 2
f (J) = S1 f (A)S,
where S is the matrix of the generalized eigenvectors of A. This gives us a convenient method for
computing functions of matrices.
517
Example 15.3.3 Consider the matrix exponential function eA for our old friend:
1 1 1
A = 2 1 1 .
3 2 4
In Example 15.3.2 we showed that the Jordan canonical form of the matrix is
2 1 0
J = 0 2 1 .
0 0 2
eA = S eJ S1
2 2 2
0 1 1 e e e /2 0 3 2
eA = 1 2 0 0 e2 e2 0 1 1
1 3 0 0 0 e2 1 1 1
0 2 2 e2
eA = 3 1 1
2
5 3 5
x0 (t) = Ax(t)
x(t) = eA(tt0 ) x0 .
518
has the solution Z t
x(t) = eA(tt0 ) x0 + eAt eA f ( ) d.
t0
519
Example 15.4.3 Consider an inhomogeneous system of differential equations.
3
dx 4 2 t
= Ax + f (t) x+ , t > 0.
dt 8 4 t2
The general solution is Z
x(t) = eAt c + eAt eAt f (t) dt.
First we find homogeneous solutions. The characteristic equation for the matrix is
4 2
() =
= 2 = 0
8 4
= 0 is an eigenvalue of multiplicity 2. Thus the Jordan canonical form of the matrix is
0 1
J= .
0 0
Since rank(nullspace(A 0I)) = 1 there is only one eigenvector. A generalized eigenvector of
rank 2 satisfies
(A 0I)2 x2 = 0
0 0
x2 = 0
0 0
We choose
1
x2 =
0
Now we generate the chain from x2 .
4
x1 = (A 0I)x2 =
8
We define the matrix of generalized eigenvectors S.
4 1
S=
8 0
520
We can tidy up the answer a little bit. First we take linear combinations of the homogeneous
solutions to obtain a simpler form.
2 2 Log t + 6t 2t12
1 2t
x(t) = c1 + c2 +
2 4t 1 4 4 Log t + 13t
Then we subtract 2 times the first homogeneous solution from the particular solution.
2 Log t + 6t 2t12
1 2t
x(t) = c1 + c2 +
2 4t 1 4 Log t + 13t
521
15.5 Exercises
Exercise 15.4 (mathematica/ode/systems/systems.nb)
Find the solution of the following initial value problem.
0 2 1 1
x = Ax x, x(0) = x0
5 4 3
Exercise 15.10
1. Consider the system
1 1 1
x0 = Ax = 2 1 1 x. (15.2)
3 2 4
(a) Show that = 2 is an eigenvalue of multiplicity 3 of the coefficient matrix A, and that
there is only one corresponding eigenvector, namely
0
(1) = 1 .
1
(b) Using the information in part (i), write down one solution x(1) (t) of the system (15.2).
There is no other solution of a purely exponential form x = et .
522
(c) To find a second solution use the form x = t e2t + e2t , and find appropriate vectors
and . This gives a solution of the system (15.2) which is independent of the one obtained
in part (ii).
(d) To find a third linearly independent solution use the form x = (t2 /2) e2t +t e2t + e2t .
Show that , and satisfy the equations
The first two equations can be taken to coincide with those obtained in part (iii). Solve
the third equation, and write down a third independent solution of the system (15.2).
2. Consider the system
5 3 2
x0 = Ax = 8 5 4 x. (15.3)
4 3 3
(a) Show that = 1 is an eigenvalue of multiplicity 3 of the coefficient matrix A, and that
there are only two linearly independent eigenvectors, which we may take as
1 0
(1) = 0 , (2) = 2
2 3
(A I) = 0, (A I) = .
Show that the most general solution of the first of these equations is = c1 1 + c2 2 ,
where c1 and c2 are arbitrary constants. Show that, in order to solve the second of these
equations it is necessary to take c1 = c2 . Obtain such a vector , and use it to obtain a
third independent solution of the system (15.3).
1. Find the eigenvalues and associated eigenvectors of A. [HINT: notice that = 1 is a root of
the characteristic polynomial of A.]
2. Use the results from part (a) to construct eAt and therefore the solution to the initial value
problem above.
3. Use the results of part (a) to find the general solution to
dx 1
= Ax.
dt t
523
where
2 0 1
A = 0 2 0
0 1 3
2. Solve
dx
= Ax + g(t), x(0) = 0
dt
using A from part (a).
Exercise 15.13
Let A be an n n matrix of constants. The system
dx 1
= Ax, (15.4)
dt t
is analogous to the Euler equation.
1. Verify that when A is a 2 2 constant matrix, elimination of (15.4) yields a second order Euler
differential equation.
2. Now assume that A is an n n matrix of constants. Show that this system, in analogy with
the Euler equation has solutions of the form x = at where a is a constant vector provided a
and satisfy certain conditions.
3. Based on your experience with the treatment of multiple roots in the solution of constant
coefficient systems, what form will the general solution of (15.4) take if is a multiple eigenvalue
in the eigenvalue problem derived in part (b)?
4. Verify your prediction by deriving the general solution for the system
dx 1 1 0
= x.
dt t 1 1
524
15.6 Hints
Hint 15.1
Hint 15.2
Hint 15.3
Hint 15.4
Hint 15.5
Hint 15.6
Hint 15.7
Hint 15.8
Hint 15.9
Hint 15.10
Hint 15.11
Hint 15.12
Hint 15.13
525
15.7 Solutions
Solution 15.1
We consider an initial value problem.
0 1 5 1
x = Ax x, x(0) = x0
1 3 1
The matrix has the distinct eigenvalues 1 = 1, 2 = 1+. The corresponding eigenvectors
are
2 2+
x1 = , x2 = .
1 1
The general solution of the system of differential equations is
2 (1)t 2 + (1+)t
x = c1 e +c2 e .
1 1
We can take the real and imaginary parts of either of these solution to obtain real-valued solutions.
2 + (1+)t 2 cos(t) sin(t) t cos(t) + 2 sin(t) t
e = e + e
1 cos(t) sin(t)
2 cos(t) sin(t) t cos(t) + 2 sin(t) t
x = c1 e +c2 e
cos(t) sin(t)
Plotted in the phase plane, the solution spirals in to the origin as t increases. Both coordinates tend
to zero as t .
Solution 15.2
We consider an initial value problem.
3 0 2 1
x0 = Ax 1 1 0 x, x(0) = x0 0
2 1 0 0
The matrix has the distinct eigenvalues 1 = 2, 2 = 1 2, 3 = 1 + 2. The
corresponding eigenvectors are
2 2 + 2 2 2
x1 = 2 , x2 = 1 + 2 , x3 = 1 2 .
1 3 3
526
We can take the real and imaginary parts of the second or third solution to obtain two real-valued
solutions.
2 + 2 2 cos( 2t) + 2 sin( 2t) 2 cos( 2t) 2 sin( 2t)
1 + 2 e(1 2)t = cos( 2t) + 2 sin( 2t) et + 2 cos( 2t) + sin( 2t) et
3 3 cos( 2t) 3 sin( 2t)
2 2 cos( 2t) + 2 sin( 2t) 2 cos( 2t) 2 sin( 2t)
x = c1 2 e2t +c2 cos( 2t) + 2 sin( 2t) et +c3 2 cos( 2t) + sin( 2t) e
t
1 3 cos( 2t) 3 sin( 2t)
We apply the initial condition to determine the constants.
2 2 2 c1 1
2 1 2 c2 = 0
1 3 0 c3 0
1 1 5
c1 = , c2 = , c3 =
3 9 9 2
The solution subject to the initial condition is
2 2 cos( 2t) 4 2 sin( 2t)
1 2t 1
x= 2 e + 4 cos( 2t) + 2 sin( 2t) et .
3 6
1 2 cos( 2t) 5 2 sin( 2t)
As t , all coordinates tend to infinity. Plotted in the phase plane, the solution would spiral in
to the origin.
Solution 15.3
Homogeneous Solution, Method 1. We designate the inhomogeneous system of differential
equations
x0 = Ax + g(t).
First we find homogeneous solutions. The characteristic equation for the matrix is
4 2
() =
= 2 = 0
8 4
= 0 is an eigenvalue of multiplicity 2. The eigenvectors satisfy
4 2 1 0
= .
8 4 2 0
Thus we see that there is only one linearly independent eigenvector. We choose
1
= .
2
One homogeneous solution is then
1 0t 1
x1 = e = .
2 2
We look for a second homogeneous solution of the form
x2 = t + .
We substitute this into the homogeneous equation.
x02 = Ax2
= A(t + )
527
We see that and satisfy
A = 0, A = .
We choose to be the eigenvector that we found previously. The equation for is then
4 2 1 1
= .
8 4 2 2
We can write this in matrix notation using the fundamental matrix (t).
1 t c1
xh = (t)c =
2 2t 1/2 c2
y20 = 0.
y2 = c2
y10 = c2 .
y1 = c1 + c2 t
528
The solution for y is then
1 t
y = c1 + c2 .
0 1
We multiply this by C to obtain the homogeneous solution for x.
1 t
xh = c1 + c2
2 2t 1/2
2 log t + 2t1 12 t2
xp =
4 log t + 5t1
2 log t + 2t1 12 t2
1 t
x = c1 + c2 +
2 2t 1/2 4 log t + 5t1
Solution 15.4
We consider an initial value problem.
2 1 1
x0 = Ax x, x(0) = x0
5 4 3
x = eAt x0
= S eJt S1 x0
t
1 1 e 0 1 5 1 1
=
1 5 0 e3t 4 1 1 3
t 3t
1 e +e
=
2 et +5 e3t
1 1 t 1 1 3t
x= e + e
2 1 2 5
529
Solution 15.5
We consider an initial value problem.
1 1 2 2
x0 = Ax 0 2 2 x, x(0) = x0 0
1 1 3 1
The Jordan canonical form of the matrix is
1 0 0
J = 0 2 0 .
0 0 3
The solution of the initial value problem is x = eAt x0 .
x = eAt x0
= S eJt S1 x0
t
0 1 2 e 0 0 1 1 0 2
1
= 2 1 2
0 e2t 0 4 2 4 0
1 0 1 0 0 e3t 2 1 1 2 1
2 e2t
= 2 et +2 e2t
et
0 2
x = 2 et + 2 e2t .
1 0
Solution 15.6
We consider an initial value problem.
1 5 1
x0 = Ax x, x(0) = x0
1 3 1
The Jordan canonical form of the matrix is
1 0
J= .
0 1 +
The solution of the initial value problem is x = eAt x0 .
x = eAt x0
= S eJt S1 x0
(1)t
2 2+ e 0 1 1 2 1
=
1 1 0 e(1+)t 2 1 + 2 1
(cos(t) 3 sin(t)) et
=
(cos(t) sin(t)) et
1 t 3 t
x= e cos(t) e sin(t)
1 1
Solution 15.7
We consider an initial value problem.
3 0 2 1
x0 = Ax 1 1 0 x, x(0) = x0 0
2 1 0 0
530
The Jordan canonical form of the matrix is
2 0 0
J = 0 1 2 0 .
0 0 1 + 2
x = eAt x0
= S eJt S1 x0
2t
e 0 0
6 2 + 2 2 2
1 (1 2)t
= 6 1 + 2 1 2 0 e 0
3
3 3 3 0 0 e(1+ 2)t
2 2 2 1
1
1 52/2 1 22 4 + 2 0
6
1 + 5 2/2 1 + 2 2 4 2 0
2 2 cos( 2t) 4 2 sin( 2t)
1 2t 1
x= 2 e + 4 cos( 2t) + 2 sin( 2t) et .
3 6
1 2 cos( 2t) 5 2 sin( 2t)
Solution 15.8
We consider an initial value problem.
0 1 4 3
x = Ax x, x(0) = x0
4 7 2
Method 1. Find Homogeneous Solutions. The matrix has the double eigenvalue 1 = 2 =
3. There is only one corresponding eigenvector. We compute a chain of generalized eigenvectors.
(A + 3I)2 x2 = 0
0x2 = 0
1
x2 =
0
(A + 3I)x2 = x1
4
x1 =
4
531
Both coordinates tend to zero as t .
Method 2. Use the Exponential Matrix. The Jordan canonical form of the matrix is
3 1
J= .
0 3
x = eAt x0
= S eJt S1 x0
3t
t e3t
1 1/4 e 0 1 3
=
1 0 0 e3t 4 4 2
3 + 4t 3t
x= e .
2 + 4t
Solution 15.9
We consider an initial value problem.
1 0 0 1
x0 = Ax 4 1 0 x, x(0) = x0 2
3 6 2 30
Method 1. Find Homogeneous Solutions. The matrix has the distinct eigenvalues 1 = 1,
2 = 1, 3 = 2. The corresponding eigenvectors are
1 0 0
x1 = 2 , x2 = 1 , x3 = 0 .
5 6 1
As t , the first coordinate vanishes, the second coordinate tends to and the third coordinate
tends to
Method 2. Use the Exponential Matrix. The Jordan canonical form of the matrix is
1 0 0
J = 0 1 0 .
0 0 2
532
The solution of the initial value problem is x = eAt x0 .
x = eAt x0
= S eJt S1 x0
t
1 0 0 e 0 0 1 0 0 1
1
= 2 1
0 0 et 0 2 1 0 2
5 6 1 0 0 e2t 2 7 6 1 30
1 0 0
x = 2 et 4 1 et 11 0 e2t .
5 6 1
Solution 15.10
1. (a) We compute the eigenvalues of the matrix.
1 1 1
1 = 3 + 62 12 + 8 = ( 2)3
() = 2 1
3 2 4
(c) We substitute the form x = t e2t + e2t into the differential equation.
x0 = Ax
e2t +2t e2t +2 e2t = At e2t +A e2t
(A 2I) = 0, (A 2I) =
We already have a solution of the first equation, we need the generalized eigenvector .
Note that is only determined up to a constant times . Thus we look for the solution
533
whose second component vanishes to simplify the algebra.
(A 2I) =
1 1 1 1 0
2 1 1 0 = 1
3 2 2 3 1
1 + 3 = 0, 21 3 = 1, 31 + 23 = 1
1
= 0
1
A second linearly independent solution is
0 1
x(2) = 1 t e2t + 0 e2t .
1 1
(d) To find a third solution we substutite the form x = (t2 /2) e2t +t e2t + e2t into the
differential equation.
x0 = Ax
2(t2 /2) e2t +( + 2)t e2t +( + 2) e2t = A(t2 /2) e2t +At e2t +A e2t
(A 2I) = 0, (A 2I) = , (A 2I) =
We have already solved the first two equations, we need the generalized eigenvector .
Note that is only determined up to a constant times . Thus we look for the solution
whose second component vanishes to simplify the algebra.
(A 2I) =
1 1 1 1 1
2 1 1 0 = 0
3 2 2 3 1
1 + 3 = 1, 21 3 = 0, 31 + 23 = 1
1
= 0
2
A third linearly independent solution is
0 1 1
x(3) = 1 (t2 /2) e2t + 0 t e2t + 0 e2t
1 1 2
534
Thus there are two eigenvectors.
4 3 2 1
8 6 4 2 = 0
4 3 2 3
1 0
(1) = 0 , (2) = 2
2 3
x0 = Ax
et +t et + et = At et +A et
(A I) = 0, (A I) =
The general solution of the first equation is a linear combination of the two solutions we
found in the previous part.
= c1 1 + c2 2
Now we find the generalized eigenvector, . Note that is only determined up to a linear
combination of 1 and 2 . Thus we can take the first two components of to be zero.
4 3 2 0 1 0
8 6 4 0 = c1 0 + c2 2
4 3 2 3 2 3
23 = c1 , 43 = 2c2 , 23 = 2c1 3c2
c1
c1 = c2 , 3 =
2
We see that we must take c1 = c2 in order to obtain a solution. We choose c1 = c2 = 2
A third linearly independent solution of the differential equation is
2 0
x(3) = 4 t et + 0 et .
2 1
Solution 15.11
1. The characteristic polynomial of the matrix is
1 1 1
() = 2 1 1
8 5 3
= (1 )2 (3 ) + 8 10 5(1 ) 2(3 ) 8(1 )
= 3 2 + 4 + 4
= ( + 2)( + 1)( 2)
(A I) = 0.
535
For = 2, we have
(A + 2I) = 0.
3 1 1 1 0
2 3 1 2 = 0
8 5 1 3 0
If we take 3 = 1 then the first two rows give us the system,
3 1 1 1
=
2 3 2 1
which has the solution 1 = 4/7, 2 = 5/7. For the first eigenvector we choose:
4
= 5
7
For = 1, we have
(A + I) = 0.
2 1 1 1 0
2 2 1 2 = 0
8 5 2 3 0
If we take 3 = 1 then the first two rows give us the system,
2 1 1 1
=
2 2 2 1
which has the solution 1 = 3/2, 2 = 2. For the second eigenvector we choose:
3
= 4
2
For = 2, we have
(A + I) = 0.
1 1 1 1 0
2 1 1 2 = 0
8 5 5 3 0
If we take 3 = 1 then the first two rows give us the system,
1 1 1 1
=
2 1 2 1
which has the solution 1 = 0, 2 = 1. For the third eigenvector we choose:
0
= 1
1
536
2. The matrix is diagonalized with the similarity transformation
J = S1 AS,
eA = S eJ S1 .
2t
4 3 0 e 0 0 6 3 3
1
eA = 5 4 1 0 et 0 12 4 4 .
12
7 2 1 0 0 e2t 18 13 1
x = tA c eA log t c,
Solution 15.12
1. The characteristic polynomial of the matrix is
2 0 1
() = 0 2 0
0 1 3
= (2 )2 (3 )
Since rank(nullspace(A 2I)) = 1 there is one eigenvector and one generalized eigenvector of
rank two for = 2. The generalized eigenvector of rank two satisfies
(A 2I)2 2 = 0
0 1 1
0 0 0 2 = 0
0 1 1
537
We choose the solution
0
2 = 1 .
1
The eigenvector for = 2 is
1
1 = (A 2I) 2 = 0 .
0
The eigenvector for = 3 satisfies
(A 3I)2 = 0
1 0 1
0 1 0 = 0
0 1 0
We want to compute eJt so we consider the function f () = et , which has the derivative
f 0 () = t et . Thus we see that
2t
t e2t 0
e
eJt = 0 e2t 0
0 0 e3t
538
The exponential matrix is
eAt = S eJt S1 ,
2t
(1 + t) e2t + e3t e2t + e3t
e
eAt =0 e2t 0 .
0 e + e3t
2t e 3t
x = eAt C.
2. The solution of the inhomogeneous differential equation subject to the initial condition is
Z t
At
x=e 0+e At
eA g( ) d
0
Z t
x = eAt eA g( ) d
0
Solution 15.13
1.
dx 1
= Ax
0 t
dt
x a b x1
t 10 =
x2 c d x2
We differentiate and multiply by t to obtain a second order coupled equation for x1 . We use
(15.4) to eliminate the dependence on x2 .
Thus we see that x1 satisfies a second order, Euler equation. By symmetry we see that x2
satisfies,
t2 x002 + (1 b c)tx02 + (bc ad)x2 = 0.
539
Substituting this into the differential equation yields
(A I) = 0, (A I) =
These equations have solutions because = has generalized eigenvectors of first and second
order.
Note that the change of independent variable = log t, y( ) = x(t), will transform (15.4) into
a constant coefficient system.
dy
= Ay
d
Thus all the methods for solving constant coefficient systems carry over directly to solving
(15.4). In the case of eigenvalues with multiplicity greater than one, we will have solutions of
the form,
2
t , t log t + t , t (log t) + t log t + t , . . . ,
analogous to the form of the solutions for a constant coefficient system,
e , e + e , 2 e + e + e , ....
x2 = at log t + t.
540
Thus a second linearly independent solution is
0 1
x2 = t log t + t.
1 0
By substituting the solution for x1 into (15.5), we obtain an uncoupled equation for x2 .
1
x02 = (c1 t + x2 )
t
1
x02 x2 = c1
t
0
1 c1
x2 =
t t
1
x2 = c1 log t + c2
t
x2 = c1 t log t + c2 t
541
542
Chapter 16
Exercise 16.2
Determine an equation for the integrating factor (x) for Equation 16.1.
Exercise 16.3
Show that
y 00 + xy 0 + y = 0
is exact. Find the solution.
Result 16.2.1 Consider the nth order ordinary differential equation of the
form
dn y dn1 y dy
L[y] = + p n1 (x) + + p 1 (x) + p0 (x)y = f (x). (16.2)
dxn dxn1 dx
If the coefficient functions pn1 (x), . . . , p0 (x) and the inhomogeneity f (x) are
continuous on some interval a < x < b then the differential equation subject
to the conditions,
y(x0 ) = v0 , y 0 (x0 ) = v1 , ... y (n1) (x0 ) = vn1 , a < x0 < b,
has a unique solution on the interval.
543
Exercise 16.4
On what intervals do the following problems have unique solutions?
1. xy 00 + 3y = x
2. x(x 1)y 00 + 3xy 0 + 4y = 2
3. ex y 00 + x2 y 0 + y = tan x
Particular Solutions. Any function, yp , that satisfies the inhomogeneous equation, L[yp ] = f (x),
is called a particular solution or particular integral of the equation. Note that for linear differential
equations the particular solution is not unique. If yp is a particular solution then yp + yh is also a
particular solution where yh is any homogeneous solution.
The general solution to the problem L[y] = f (x) is the sum of a particular solution and a linear
combination of the homogeneous solutions
y = yp + c1 y1 + + cn yn .
Exercise 16.5
Suppose you are able to find three linearly independent particular solutions u1 (x), u2 (x) and u3 (x)
of the second order linear differential equation L[y] = f (x). What is the general solution?
544
Real-Valued Solutions. If the coefficient function and the inhomogeneity in Equation 16.2 are
real-valued, then the general solution can be written in terms of real-valued functions. Let y be any,
homogeneous solution, (perhaps complex-valued). By taking the complex conjugate of the equation
L[y] = 0 we show that y is a homogeneous solution as well.
L[y] = 0
L[y] = 0
y (n) + pn1 y (n1) + + p0 y = 0
(n) (n1)
y + pn1 y + + p0 y = 0
L [y] = 0
For the same reason, if yp is a particular solution, then yp is a particular solution as well.
Since the real and imaginary parts of a function y are linear combinations of y and y,
y + y y y
<(y) = , =(y) = ,
2 2
if y is a homogeneous solution then both <y and =(y) are homogeneous solutions. Likewise, if yp is
a particular solution then <(yp ) is a particular solution.
yp + y p f f
L [<(yp )] = L = + =f
2 2 2
Thus we see that the homogeneous solution, the particular solution and the general solution of a
linear differential equation with real-valued coefficients and inhomogeneity can be written in terms
of real-valued functions.
y1 = y, y2 = y 0 , ,..., yn = y (n1) .
545
The differential equation is equivalent to the system
y10 = y2
y20 = y3
.. ..
.=.
0
yn = f (x) pn1 yn p0 y1 .
The first order system is more useful when numerically solving the differential equation.
y 00 + x2 y 0 + cos x y = sin x.
y10 = y2
y20 = sin x x2 y2 cos x y1 .
Result 16.4.1 Let aij (x), the elements of the matrix A, be differentiable func-
tions of x. Then n
d X
[A(x)] = k [A(x)]
dx k=1
where k [A(x)] is the determinant of the matrix A with the k th row replaced
by the derivative of the k th row.
x2
x
A(x) =
x2 x4
The determinant is x5 x4 thus the derivative of the determinant is 5x4 4x3 . To check the theorem,
d d x x2
[A(x)] =
dx dx x2 x4
1 2x x x2
= 2
+
x x4 2x 4x3
= x4 2x3 + 4x4 2x3
= 5x4 4x3 .
546
16.4.2 The Wronskian of a Set of Functions.
A set of functions {y1 , y2 , . . . , yn } is linearly dependent on an interval if there are constants c1 , . . . , cn
not all zero such that
c1 y1 + c2 y2 + + cn yn = 0 (16.3)
identically on the interval. The set is linearly independent if all of the constants must be zero to
satisfy c1 y1 + cn yn = 0 on the interval.
Consider a set of functions {y1 , y2 , . . . , yn } that are linearly dependent on a given interval and
n 1 times differentiable. There are a set of constants, not all zero, that satisfy equation 16.3
Differentiating equation 16.3 n 1 times gives the equations,
From linear algebra, we know that this equation has a solution for a nonzero constant vector only if
the determinant of the matrix is zero. Here we define the Wronskian ,W (x), of a set of functions.
y1 y2 ... yn
0
y20 yn0
y1 ...
W (x) = .. ..
..
. . . . . .
(n1) (n1) (n1)
y y2 ... yn
1
Thus if a set of functions is linearly dependent on an interval, then the Wronskian is identically zero
on that interval. Alternatively, if the Wronskian is identically zero, then the above matrix equation
has a solution for a nonzero constant vector. This implies that the the set of functions is linearly
dependent.
547
Example 16.4.3 Consider the set {sin x, cos x, ex }. The Wronskian is
sin x
cos x ex
W (x) = cos x sin x ex .
sin x cos x ex
Since the last row is a constant multiple of the first row, the determinant is zero. The functions are
dependent. We could also see this with the identity ex = cos x + sin x.
We note that the all but the last term in this sum is zero. To see this, lets take a look at the first
term.
0
y20 yn0
y1
0
y1 y20 yn0
1 [Y (x)] = .. .. ..
..
. . . .
(n1) (n1) (n1)
y
1 y2 yn
The first two rows in the matrix are identical. Since the rows are dependent, the determinant is
zero.
The last term in the sum is
y1 y2 yn
. .. ..
. ..
. . . . .
n [Y (x)] = (n2) (n2) (n2)
y
1 y 2 y n
y (n) y2
(n)
(n)
yn
1
(n) (n1)
In the last row of this matrix we make the substitution yi = pn1 (x)yi p0 (x)yi .
Recalling that we can add a multiple of a row to another without changing the determinant, we add
p0 (x) times the first row, and p1 (x) times the second row, etc., to the last row. Thus we have the
determinant,
y1 y2 yn
.. .. .. ..
.
0
W (x) =
. . .
(n2) (n2) (n2)
y 1 y 2 y n
p (n1) (n1) (n1)
n1 (x)y1 pn1 (x)y2 pn1 (x)yn
y1 y2 yn
. .. ..
. ..
. . . .
= pn1 (x) (n2) (n2) (n2)
y
1 y2 yn
y (n1) y (n1) y (n1)
1 2 n
548
Thus the Wronskian satisfies the first order differential equation,
Thus regardless of the particular set of solutions that we choose, we can compute their Wronskian
up to a constant factor.
y 00 3y 0 + 2y = 0.
= c e3x .
Since the general solution to the differential equation is a linear combination of the n homogeneous
solutions plus the particular solution
y = yp + c1 y1 + c2 y2 + + cn yn ,
549
From linear algebra we know that this system of equations has a unique solution only if the deter-
minant of the matrix is nonzero. Note that the determinant of the matrix is just the Wronskian
evaluated at x0 . Thus if the Wronskian vanishes at x0 , the initial value problem for the differential
equation either has no solutions or infinitely many solutions. Such problems are said to be ill-posed.
From Abels formula for the Wronskian
Z
W (x) = exp pn1 (x) dx ,
we see that the only way the Wronskian can vanish is if the value of the integral goes to .
y = c1 x + c2 x2 .
We see that the general solution cannot satisfy the initial conditions. If instead we had the initial
conditions y(0) = 0, y 0 (0) = 1, then there would be an infinite number of solutions.
If the Wronskian Z
W (x) = exp pn1 (x) dx
550
16.6 The Fundamental Set of Solutions
Consider a set of linearly independent solutions {u1 , u2 , . . . , un } to an nth order linear homogeneous
differential equation. This is called the fundamental set of solutions at x0 if they satisfy the
relations
u1 (x0 ) = 1 u2 (x0 ) = 0 ... un (x0 ) = 0
u01 (x0 ) = 0 u02 (x0 ) = 1 ... u0n (x0 ) = 0
.. .. .. ..
. . . .
(n1) (n1) (n1)
u1 (x0 ) = 0 u2 (x0 ) = 0 ... un (x0 ) = 1
Knowing the fundamental set of solutions is handy because it makes the task of solving an initial
value problem trivial. Say we are given the initial conditions,
y(x0 ) = v1 , y 0 (x0 ) = v2 , ..., y (n1) (x0 ) = vn .
If the ui s are a fundamental set then the solution that satisfies these constraints is just
y = v1 u1 (x) + v2 u2 (x) + + vn un (x).
Of course in general, a set of solutions is not the fundamental set. If the Wronskian of the solutions
is nonzero and finite we can generate a fundamental set of solutions that are linear combinations of
our original set. Consider the case of a second order equation Let {y1 , y2 } be two linearly independent
solutions. We will generate the fundamental set of solutions, {u1 , u2 }.
u1 c c y1
= 11 12
u2 c21 c22 y2
For {u1 , u2 } to satisfy the relations that define a fundamental set, it must satisfy the matrix equation
u1 (x0 ) u01 (x0 ) y1 (x0 ) y10 (x0 )
c11 c12 1 0
= =
u2 (x0 ) u02 (x0 ) c21 c22 y2 (x0 ) y20 (x0 ) 0 1
1
y1 (x0 ) y10 (x0 )
c11 c12
=
c21 c22 y2 (x0 ) y20 (x0 )
If the Wronskian is non-zero and finite, we can solve for the constants, cij , and thus find the
fundamental set of solutions. To generalize this result to an equation of order n, simply replace all
the 2 2 matrices and vectors of length 2 with n n matrices and vectors of length n. I presented
the case of n = 2 simply to save having to write out all the ellipses involved in the general case. (It
also makes for easier reading.)
Example 16.6.1 Two linearly independent solutions to the differential equation y 00 + y = 0 are
y1 = ex and y2 = ex .
y1 (0) y10 (0)
1
=
y2 (0) y20 (0) 1 i
To find the fundamental set of solutions, {u1 , u2 }, at x = 0 we solve the equation
1
c11 c12 1
=
c21 c22 1
c11 c12 1
=
c21 c22 2 1 1
The fundamental set is
ex + ex ex ex
u1 = , u2 = .
2 2
Using trigonometric identities we can rewrite these as
u1 = cos x, u2 = sin x.
551
Result 16.6.1 The fundamental set of solutions at x = x0 , {u1 , u2 , . . . , un },
to an nth order linear differential equation, satisfy the relations
u1 (x0 ) = 1 u2 (x0 ) = 0 ... un (x0 ) = 0
u01 (x0 ) = 0 u02 (x0 ) = 1 ... u0n (x0 ) = 0
.. .. .. ..
. . . .
(n1) (n1) (n1)
u1 (x0 ) = 0 u2 (x0 ) = 0 . . . un (x0 ) = 1.
If the Wronskian of the solutions is nonzero and finite at the point x0 then you
can generate the fundamental set of solutions from any linearly independent
set of solutions.
Exercise 16.6
Two solutions of y 00 y = 0 are ex and ex . Show that the solutions are independent. Find the
fundamental set of solutions at x = 0.
Example 16.7.1
1 0
L[y] = xy 00 + y +y
x
has the adjoint
d2
d 1
L [y] = [xy] y +y
dx2 dx x
1 1
= xy 00 + 2y 0 y 0 + 2 y + y
x x
00 1 0 1
= xy + 2 y + 1 + 2 y.
x x
Taking the adjoint of L yields
d2
d 1 1
L [y] = [xy] 2 y + 1 + y
dx2 dx x x2
1 1 1
= xy 00 + 2y 0 2 y0 y + 1 + y
x x2 x2
1
= xy 00 + y 0 + y.
x
Thus by taking the adjoint of L , we obtain the original operator.
In general, L = L.
552
Consider L[y] = pn y (n) + + p0 y. If each of the pk is k times continuously differentiable and u
and v are n times continuously differentiable on some interval, then on that interval
d
vL[u] uL [v] = B[u, v]
dx
where B[u, v], the bilinear concomitant, is the bilinear form
n
X X
B[u, v] = (1)j u(k) (pm v)(j) .
m=1 j+k=m1
j0,k0
Example 16.7.2 Verify Lagranges identity for the second order operator, L[y] = p2 y 00 + p1 y 0 + p0 y.
2
d d
vL[u] uL [v] = v(p2 u00 + p1 u0 + p0 u) u (p 2 v) (p 1 v) + p 0 v
dx2 dx
= v(p2 u00 + p1 u0 + p0 u) u(p2 v 00 + (2p2 0 p1 )v 0 + (p2 00 p1 0 + p0 )v)
= u00 p2 v + u0 p1 v + u p2 v 00 + (2p02 + p1 )v 0 + (p002 + p01 )v .
553
16.8 Additional Exercises
Exact Equations
Nature of Solutions
Transformation to a First Order System
The Wronskian
Well-Posed Problems
The Fundamental Set of Solutions
Adjoint Equations
Exercise 16.7
Find the adjoint of the Bessel equation of order ,
x2 y 00 + xy 0 + (x2 2 )y = 0,
(1 x2 )y 00 2xy 0 + ( + 1)y = 0.
Exercise 16.8
Find the adjoint of
x2 y 00 xy 0 + 3y = 0.
554
16.9 Hints
Hint 16.1
Hint 16.2
Hint 16.3
Hint 16.4
Hint 16.5
The difference of any two of the ui s is a homogeneous solution.
Hint 16.6
Exact Equations
Nature of Solutions
Transformation to a First Order System
The Wronskian
Well-Posed Problems
The Fundamental Set of Solutions
Adjoint Equations
Hint 16.7
Hint 16.8
555
16.10 Solutions
Solution 16.1
The second order, linear, homogeneous differential equation is
We equate the coefficients of Equations 16.4 and 16.5 to obtain a set of equations.
In order to eliminate f (x), we differentiate the first equation and substitute in the expression for
f 0 (x) from the second equation. This gives us a necessary condition for Equation 16.4 to be exact:
Now we demonstrate that Equation 16.6 is a sufficient condition for exactness. Suppose that Equa-
tion 16.6 holds. Then we can replace R by Q0 P 00 in the differential equation.
P y 00 + Qy 0 + (Q0 P 00 )y = 0
(P y 0 + (Q P 0 )y)0 = 0
Thus Equation 16.6 is a sufficient condition for exactness. We can integrate to reduce the problem
to a first order differential equation.
P y 0 + (Q P 0 )y = c
Solution 16.2
Suppose that there is an integrating factor (x) that will make
We apply the exactness condition from Exercise 16.1 to obtain a differential equation for the inte-
grating factor.
(P )00 (Q)0 + R = 0
P + 20 P 0 + P 00 0 Q Q0 + R = 0
00
P 00 + (2P 0 Q)0 + (P 00 Q0 + R) = 0
556
Solution 16.3
We consider the differential equation,
y 00 + xy 0 + y = 0.
Since
(1)00 (x)0 + 1 = 0
we see that this is an exact equation. We rearrange terms to form exact derivatives and then
integrate.
(y 0 )0 + (xy)0 = 0
y 0 + xy = c
d h x2 /2 i 2
e y = c ex /2
dx
Z
2 2 2
y = c ex /2 ex /2 dx + d ex /2
Solution 16.4
Consider the initial value problem,
If p(x), q(x) and f (x) are continuous on an interval (a . . . b) with x0 (a . . . b), then the problem
has a unique solution on that interval.
1.
xy 00 + 3y = x
3
y 00 + y = 1
x
Unique solutions exist on the intervals ( . . . 0) and (0 . . . ).
2.
ex y 00 + x2 y 0 + y = tan x
y 00 + x2 ex y 0 + ex y = ex tan x
Unique solutions exist on the intervals (2n1) 2 . . . (2n+1)
2 for n Z.
Solution 16.5
We know that the general solution is
y = yp + c1 y1 + c2 y2 ,
where yp is a particular solution and y1 and y2 are linearly independent homogeneous solutions.
Since yp can be any particular solution, we choose yp = u1 . Now we need to find two homogeneous
557
solutions. Since L[ui ] = f (x), L[u1 u2 ] = L[u2 u3 ] = 0. Finally, we note that since the ui s are
linearly independent, y1 = u1 u2 and y2 = u2 u3 are linearly independent. Thus the general
solution is
y = u1 + c1 (u1 u2 ) + c2 (u2 u3 ).
Solution 16.6
The Wronskian of the solutions is
ex
x
e
W (x) = x = 2.
e ex
ex
u1 c11 c12
=
u2 c21 c22 ex
1 x 1 x
u1 = (e + ex ), u2 = (e ex ).
2 2
The fundamental set of solutions at x = 0 is
Exact Equations
Nature of Solutions
Transformation to a First Order System
The Wronskian
Well-Posed Problems
The Fundamental Set of Solutions
Adjoint Equations
Solution 16.7
1. The Bessel equation of order is
x2 y 00 + xy 0 + (x2 2 )y = 0.
x2 00 + (4x x)0 + (2 1 + x2 2 ) = 0
x2 00 + 3x0 + (1 + x2 2 ) = 0.
558
2. The Legendre equation of order is
(1 x2 )y 00 2xy 0 + ( + 1)y = 0
Solution 16.8
The adjoint of
x2 y 00 xy 0 + 3y = 0
is
d2 2 d
2
(x y) + (xy) + 3y = 0
dx dx
(x2 y 00 + 4xy 0 + 2y) + (xy 0 + y) + 3y = 0
x2 y 00 + 5xy 0 + 6y = 0.
559
16.11 Quiz
Problem 16.1
What is the differential equation whose solution is the two parameter family of curves y = c1 sin(2x+
c2 )?
560
16.12 Quiz Solutions
Solution 16.1
We take the first and second derivative of y = c1 sin(2x + c2 ).
y 0 = 2c1 cos(2x + c2 )
y 00 = 4c1 sin(2x + c2 )
This gives us three equations involving x, y, y 0 , y 00 and the parameters c1 and c2 . We eliminate the
the parameters to obtain the differential equation. Clearly we have,
y 00 + 4y = 0.
561
562
Chapter 17
My new goal in life is to take the meaningless drivel out of human interaction.
-Dave Ozenne
The nth order linear homogeneous differential equation can be written in the form:
In general it is not possible to solve second order and higher linear differential equations. In this
chapter we will examine equations that have special forms which allow us to either reduce the order
of the equation or solve it.
We will find that solving a constant coefficient differential equation is no more difficult than finding
the roots of a polynomial. For notational simplicity, we will first consider second order equations.
Then we will apply the same techniques to higher order equations.
563
Once we have factored the differential equation, we can solve it by solving a series of two first order
d
differential equations. We set u = dx y to obtain a first order equation:
d
u = 0,
dx
y = c1 ex +c2 ex
y = c1 x ex +c2 ex
Example 17.1.1 Consider the differential equation: y 00 + y = 0. To obtain the general solution, we
factor the equation and apply the result in Equation 17.2.
d d
+ y =0
dx dx
y = c1 e +c2 ex .
x
y = c1 + c2 x
564
Substituting the Form of the Solution into the Differential Equation. Note that if we
substitute y = ex into the differential equation (17.1), we will obtain the quadratic polynomial
(17.1.1) for .
y 00 + 2ay 0 + by = 0
2 ex +2a ex +b ex = 0
2 + 2a + b = 0
This gives us a superficially different method for solving constant coefficient equations. We substitute
y = ex into the differential equation. Let and be the roots of the quadratic in . If the roots
are distinct, then the linearly independent solutions are y1 = ex and y2 = ex . If the quadratic has
a double root at = , then the linearly independent solutions are y1 = ex and y2 = x ex .
y 00 3y 0 + 2y = 0.
2 3 + 2 = ( 1)( 2) = 0.
y 00 2y 0 + 4y = 0.
2 2 + 4 = ( 2)2 = 0.
Because the polynomial has a double root, the solutions are e2x and x e2x .
565
Shift Invariance. Note that if u(x) is a solution of a constant coefficient equation, then u(x + c)
is also a solution. This is useful in applying initial or boundary conditions.
y 00 3y 0 + 2y = 0, y(0) = a, y 0 (0) = b.
y = c1 ex +c2 e2x .
c1 + c2 = a, c1 + 2c2 = b.
The solution is
y = (2a b) ex +(b a) e2x .
Now suppose we wish to solve the same differential equation with the boundary conditions y(1) = a
and y 0 (1) = b. All we have to do is shift the solution to the right.
n + an1 n1 + + a1 + a0 = 0.
The corresponding solutions of the differential equation are e(+)x and e()x . Note that the
linear combinations
e(+)x + e()x e(+)x e()x
= ex cos(x), = ex sin(x),
2 2
are real-valued solutions of the differential equation. We could also obtain real-valued solution by
taking the real and imaginary parts of either e(+)x or e()x .
< e(+)x = ex cos(x), = e(+)x = ex sin(x)
y 00 2y 0 + 2y = 0.
2 2 + 2 = ( 1 )( 1 + ) = 0.
y = c1 ex cos x + c2 ex sin x
566
Exercise 17.1
Find the general solution of
y 00 + 2ay 0 + by = 0
for a, b R. There are three distinct forms of the solution depending on the sign of a2 b.
Exercise 17.2
Find the fundamental set of solutions of
y 00 + 2ay 0 + by = 0
at the point x = 0, for a, b R. Use the general solutions obtained in Exercise 17.1.
y 00 + 2ay 0 + by = 0.
The general solution of this differential equation is
ax a2 b x a2 b x
e c 1 e +c 2 e if a2 > b,
y= e ax
c 1 cos( b a 2 x) + c sin( b a2 x)
2 if a2 < b,
ax
e (c1 + c2 x) if a2 = b.
The substitution y = ex will transform this differential equation into an algebraic equation.
n + an1 n1 + + a1 + a0 = 0
Assume that the roots of this equation, 1 , . . . , n , are distinct. Then the n linearly independent
solutions of Equation 17.3 are
e1 x , . . . , en x .
If the roots of the algebraic equation are not distinct then we will not obtain all the solutions
of the differential equation. Suppose that 1 = is a double root. We substitute y = ex into the
differential equation.
L[ex ] = [( )2 ( 3 ) ( n )] ex = 0
567
Setting = will make the left side of the equation zero. Thus y = ex is a solution. Now we
differentiate both sides of the equation with respect to and interchange the order of differentiation.
d d x
L[ex ] = L = L x ex
e
d d
Let p() = ( 3 ) ( n ). We calculate L x ex by applying L and then differentiating with
respect to .
d
L x ex = L[ex ]
d
d
= [( )2 ( 3 ) ( n )] ex
d
d
= [( )2 p()] ex
d
= 2( )p() + ( )2 p0 () + ( )2 p()x ex
Since setting = will make this expression zero, L[x ex ] = 0, x ex is a solution of Equation 17.3.
You can verify that ex and x ex are linearly independent. Now we have generated all of the
solutions for the differential equation.
If = is a root of multiplicity m then by repeatedly differentiating with respect to you can
show that the corresponding solutions are
ex , x ex , x2 ex , . . . , xm1 ex .
y 000 3y 0 + 2y = 0.
3 3 + 2 = ( 1)2 ( + 2) = 0.
If the coefficients of the differential equation are real, then we can find a real-
valued set of solutions.
568
Example 17.1.8 Consider the equation
d4 y d2 y
+ 2 + y = 0.
dx4 dx2
The substitution y = ex yields
4 + 22 + 1 = ( i)2 ( + i)2 = 0.
ex , x ex , ex and x ex .
Noting that
ex = cos(x) + sin(x),
we can write the general solution in terms of sines and cosines.
(distance) (distance)
(time)2 = (time) = (distance)
(time)2 (time)
Thus this is a second order Euler, or equidimensional equation. We know that the first order Euler
equation, xy 0 + ay = 0, has the solution y = cxa . Thus for the second order equation we will try a
solution of the form y = x . The substitution y = x will transform the differential equation into
an algebraic equation.
d2 d
L[x ] = x2 2
[x ] + ax [x ] + bx = 0
dx dx
( 1)x + ax + bx = 0
( 1) + a + b = 0
Factoring yields
( 1 )( 2 ) = 0.
If the two roots, 1 and 2 , are distinct then the general solution is
y = c1 x1 + c2 x2 .
If the roots are not distinct, 1 = 2 = , then we only have the one solution, y = x . To generate
the other solution we use the same approach as for the constant coefficient equation. We substitute
y = x into the differential equation and differentiate with respect to .
d d
L[x ] = L[ x ]
d d
= L[ln x x ]
569
Note that
d d ln x
x = e = ln x e ln x = ln x x .
d d
Now we apply L and then differentiate with respect to .
d d
L[x ] = ( )2 x
d d
= 2( )x + ( )2 ln x x
L[ln x x ] = 2( )x + ( )2 ln x x .
Setting = will make the right hand side zero. Thus y = ln x x is a solution.
If you are in the mood for a little algebra you can show by repeatedly differentiating with respect
to that if = is a root of multiplicity m in an nth order Euler equation then the associated
solutions are
x , ln x x , (ln x)2 x , . . . , (ln x)m1 x .
( 1) + 1 = ( 1)2 = 0.
( 1) + a + b = 0
x+ and x .
x e ln x +x e ln x x e ln x x e ln x
= x cos( ln x), and = x sin( ln x),
2 2
are real-valued solutions when x is real and positive. Equivalently, we could take the real and
imaginary parts of either x+ or x .
570
Result 17.2.1 Consider the second order Euler equation
x2 y 00 + (2a + 1)xy 0 + by = 0.
The general solution of this differential equation is
a a2 b a2 b
x c 1 x + c 2 x if a2 > b,
y = xa c1 cos b a2 ln x + c2 sin b a2 ln x
if a2 < b,
a
x (c1 + c2 ln x) if a2 = b.
x2 y 00 3xy 0 + 13y = 0.
( 1) 3 + 13 = ( 2 3)( 2 + 3) = 0.
x2+3 , x23 .
y = c1 x2 cos(3 ln x) + c2 x2 sin(3 ln x)
571
Result 17.2.2 Consider the nth order Euler equation
n n1
nd y n1 d y dy
x + an1 x + + a1 x + a0 y = 0.
dxn dx n1 dx
Let the factorization of the algebraic equation obtained with the substitution
y = x be
( 1 )m1 ( 2 )m2 ( p )mp = 0.
A set of linearly independent solutions is given by
If the coefficients of the differential equation are real, then we can find a set
of solutions that are real valued when x is real and positive.
d x3 /3 3
e y = c1 ex /3
dx Z
x3 /3 3
e y = c1 ex /3 dx + c2
Z
x3 /3 3 3
y = c1 e ex /3
dx + c2 ex /3
572
17.4 Equations Without Explicit Dependence on y
Example 17.4.1 Consider the equation
y 00 + xy 0 = 0.
This is a second order equation for y, but note that it is a first order equation for y 0 . We can solve
directly for y 0 .
d 2 3/2 0
exp x y =0
dx 3
0 2 3/2
y = c1 exp x
3
Result 17.4.1 If an nth order equation does not explicitly depend on y then
you can consider it as an equation of order n 1 for y 0 .
Suppose that we know one homogeneous solution y1 . We make the substitution y = uy1 and use
that L[y1 ] = 0.
Thus we have reduced the problem to a first order equation for u0 . An analogous result holds for
higher order equations.
573
Example 17.5.1 Consider the equation
y 00 + xy 0 y = 0.
By inspection we see that y1 = x is a solution. We would like to find another linearly independent
solution. The substitution y = xu yields
xu00 + (2 + x2 )u0 = 0
2
u00 + + x u0 = 0
x
574
The equation L[y] = f (x) is equivalent to the equation
d
B[y, ] = f
dx Z
B[y, ] = (x)f (x) dx,
L[y] = y 00 x2 y 0 2xy = 0.
L [y] = y 00 + x2 y 0 = 0.
By inspection we see that = (constant) is a solution of the adjoint equation. To simplify the
algebra we will choose = 1. Thus the equation L[y] = 0 is equivalent to
B[y, 1] = c1
d d
y(x2 ) + [y](1) y [1] = c1
dx dx
y 0 x2 y = c1 .
By using the adjoint equation to reduce the order we obtain the same solution as with Method 1.
575
17.7 Additional Exercises
Constant Coefficient Equations
Exercise 17.3 (mathematica/ode/techniques linear/constant.nb)
Find the solution of each one of the following initial value problems. Sketch the graph of the solution
and describe its behavior as t increases.
1. 6y 00 5y 0 + y = 0, y(0) = 4, y 0 (0) = 0
2. y 00 2y 0 + 5y = 0, y(/2) = 0, y 0 (/2) = 2
3. y 00 + 4y 0 + 4y = 0, y(1) = 2, y 0 (1) = 1
y 00 4y 0 + 13y = 0.
Exercise 17.6
Substitute y = ex to find the fundamental set of solutions at x = 0 for each of the equations:
1. y 00 + y = 0,
2. y 00 y = 0,
3. y 00 = 0.
What are the fundamental set of solutions at x = 1 for each of these equations.
Exercise 17.7
Consider a ball of mass m hanging by an ideal spring of spring constant k. The ball is suspended in
a fluid which damps the motion. This resistance has a coefficient of friction, . Find the differential
equation for the displacement of the mass from its equilibrium position by balancing forces. Denote
this displacement by y(t). If the damping force is weak, the mass will have a decaying, oscillatory
motion. If the damping force is strong, the mass will not oscillate. The displacement will decay to
zero. The value of the damping which separates these two behaviors is called critical damping.
Find the solution which satisfies the initial conditions y(0) = 0, y 0 (0) = 1. Use the solutions
obtained in Exercise 17.2 or refer to Result 17.1.2.
Consider the case m = k = 1. Find the coefficient of friction for which the displacement of the
mass decays most rapidly. Plot the displacement for strong, weak and critical damping.
Exercise 17.8
Show that y = c cos(x ) is the general solution of y 00 + y = 0 where c and are constants of
integration. (It is not sufficient to show that y = c cos(x) satisfies the differential equation. y = 0
satisfies the differential equation, but is is certainly not the general solution.) Find constants c and
such that y = sin(x).
Is y = c cosh(x ) the general solution of y 00 y = 0? Are there constants c and such that
y = sinh(x)?
576
Exercise 17.9 (mathematica/ode/techniques linear/constant.nb)
Let y(t) be the solution of the initial-value problem
y 00 + 5y 0 + 6y = 0; y(0) = 1, y 0 (0) = V.
For what values of V does y(t) remain nonnegative for all t > 0?
where sign(x) = 1 according as x is positive or negative. (The solution should be continuous and
have a continuous first derivative.)
Euler Equations
Exercise 17.11
Find the general solution of
x2 y 00 + xy 0 + y = 0, x > 0.
Exercise 17.12
Substitute y = x to find the general solution of
x2 y 00 2xy + 2y = 0.
Exercise 17.14
Find the general solution of
x2 y 00 + (2a + 1)xy 0 + by = 0.
Exercise 17.15
Show that
ex ex
y1 = eax , y2 = lim
a
are linearly indepedent solutions of
y 00 a2 y = 0
for all values of a. It is common to abuse notation and write the second solution as
eax eax
y2 =
a
where the limit is taken if a = 0. Likewise show that
xa xa
y1 = xa , y2 =
a
are linearly indepedent solutions of
x2 y 00 + xy 0 a2 y = 0
577
Exercise 17.16 (mathematica/ode/techniques linear/constant.nb)
Find two linearly independent solutions (i.e., the general solution) of
Exact Equations
Exercise 17.17
Solve the differential equation
y 00 + y 0 sin x + y cos x = 0.
Exercise 17.19
Consider the differential equation
x+1 0 1
y 00 y + y = 0.
x x
x+1 1
Since the coefficients sum to zero, (1 x + x = 0), y = ex is a solution. Find another linearly
independent solution.
Exercise 17.20
One solution of
(1 2x)y 00 + 4xy 0 4y = 0
is y = x. Find the general solution.
Exercise 17.21
Find the general solution of
(x 1)y00 xy0 + y = 0,
given that one solution is y = ex . (you may assume x > 1)
578
17.8 Hints
Hint 17.1
Substitute y = ex into the differential equation.
Hint 17.2
The fundamental set of solutions is a linear combination of the homogeneous solutions.
Hint 17.4
Hint 17.5
It is a constant coefficient equation.
Hint 17.6
Use the fact that if u(x) is a solution of a constant coefficient equation, then u(x + c) is also a
solution.
Hint 17.7
The force on the mass due to the spring is ky(t). The frictional force is y 0 (t).
Note that the initial conditions describe the second fundamental solution at t = 0.
Note that for large t, t et is much small than et if < . (Prove this.)
Hint 17.8
By definition, the general solution of a second order differential equation is a two parameter family
of functions that satisfies the differential equation. The trigonometric identities in Appendix Q may
be useful.
Hint 17.9
Hint 17.10
Euler Equations
Hint 17.11
Hint 17.12
Hint 17.13
Hint 17.14
Substitute y = x into the differential equation. Consider the three cases: a2 > b, a2 < b and a2 = b.
Hint 17.15
579
Hint 17.16
Exact Equations
Hint 17.17
It is an exact equation.
Hint 17.19
Use reduction of order to find the other solution.
Hint 17.20
Use reduction of order to find the other solution.
Hint 17.21
580
17.9 Solutions
Solution 17.1
We substitute y = ex into the differential equation.
y 00 + 2ay 0 + by = 0
2 + 2a + b = 0
p
= a a2 b
If a2 > b then the two roots are distinct and real. The general solution is
y = c1 e(a+ a2 b)x
+c2 e(a a2 b)x
.
If a2 < b then the two roots are distinct and complex-valued. We can write them as
p
= a b a2 .
The general solution is
y = c1 e(a+ ba2 )x
+c2 e(a ba2 )x
.
By taking the sum and difference of the two linearly independent solutions above, we can write the
general solution as
p p
y = c1 eax cos b a2 x + c2 eax sin b a2 x .
If a2 = b then the only root is = a. The general solution in this case is then
y = c1 eax +c2 x eax .
In summary, the general solution is
ax a2 b x a2 b x
e c1 e +c2 e if a2 > b,
y = eax c1 cos b a2 x + c2 sin b a2 x
if a2 < b,
eax (c + c x)
if a2 = b.
1 2
Solution 17.2
First we note that the general solution can be written,
ax
e c1 cosh a2 b x + c2 sinh a2 b x if a2 > b,
y = eax c1 cos b a2 x + c2 sin b a2 x
if a2 < b,
eax
(c1 + c2 x) if a2 = b.
The conditions, y1 (0) = 1 and y10 (0) = 0, for the first solution become,
p
c1 = 1, ac1 + a2 b c2 = 0,
a
c1 = 1, c2 = .
a2 b
The conditions, y2 (0) = 0 and y20 (0) = 1, for the second solution become,
p
c1 = 0, ac1 + a2 b c2 = 1,
1
c1 = 0, c2 = .
2
a b
581
The fundamental set of solutions is
p a p 1 p
eax cosh a2 b x + sinh a2 b x , eax sinh a2 b x .
a2 b a2 b
The conditions, y1 (0) = 1 and y10 (0) = 0, for the first solution become,
c1 = 1, ac1 + c2 = 0,
c1 = 1, c2 = a.
The conditions, y2 (0) = 0 and y20 (0) = 1, for the second solution become,
c1 = 0, ac1 + c2 = 1,
c1 = 0, c2 = 1.
e cosh a a2 b
sinh a2 bx , e
a2 b
sinh a 2 bx if a2 > b,
n
ax
a
ax 1 o
e cos b a2 x +
ba 2
sin b a2 x , e
ba 2
sin b a2x if a2 < b,
6y 00 5y 0 + y = 0, y(0) = 4, y 0 (0) = 0.
62 5 + 1 = 0
(2 1)(3 1) = 0
1 1
= ,
3 2
582
1 2 3 4 5
-5
-10
-15
-20
-25
-30
y = 12 et/3 8 et/2 .
y 00 2y 0 + 5y = 0, y(/2) = 0, y 0 (/2) = 2.
2 2 + 5 = 0
=1 15
= {1 + 2, 1 2}
y = c1 et cos(2t) + c2 et sin(2t).
y(/2) = 0 c1 e/2 = 0 c1 = 0
y 0 (/2) = 2 2c2 e/2 = 2 c2 = e/2
y = et/2 sin(2t).
The solution is plotted in Figure 17.2. The solution oscillates with an amplitude that tends to
as t .
3. We consider the problem
y 00 + 4y 0 + 4y = 0, y(1) = 2, y 0 (1) = 1.
2 + 4 + 4 = 0
( + 2)2 = 0
= 2
583
50
40
30
20
10
3 4 5 6
-10
1.5
0.5
-1 1 2 3 4 5
c1 e2 c2 e2 = 2, 2c1 e2 +3c2 e2 = 1
c1 = 7 e2 , c2 = 5 e2
y = (7 + 5t) e2(t+1)
7 + 5t 5
lim (7 + 5t) e2(t+1) = lim = lim =0
t t e2(t+1) t 2 e2(t+1)
Solution 17.4
y 00 4y 0 + 13y = 0.
With the substitution y = ex we obtain
2 ex 4 ex +13 ex = 0
2 4 + 13 = 0
= 2 3i.
584
Noting that
Solution 17.5
We note that
y 000 y 00 + y 0 y = 0
is a constant coefficient equation. The substitution, y = ex , yields
3 2 + 1 = 0
( 1)( i)( + i) = 0.
The corresponding solutions are ex , ex , and ex . We can write the general solution as
Solution 17.6
We start with the equation y 00 + y = 0. We substitute y = ex into the differential equation to obtain
2 + 1 = 0, = i.
{ex , ex }.
y1 = c1 ex +c2 ex ,
y2 = c3 ex +c4 ex .
we obtain
ex + ex
y1 = = cos x,
2
ex + ex
y2 = = sin x.
2
Now consider the equation y 00 y = 0. By substituting y = ex we find that a set of solutions is
{ex , ex }.
585
Next consider y 00 = 0. We can find the solutions by substituting y = ex or by integrating the
equation twice. The fundamental set of solutions as x = 0 is
{1, x}.
Note that if u(x) is a solution of a constant coefficient differential equation, then u(x + c) is also
a solution. Also note that if u(x) satisfies y(0) = a, y 0 (0) = b, then u(x x0 ) satisfies y(x0 ) = a,
y 0 (x0 ) = b. Thus the fundamental sets of solutions at x = 1 are
3. {1, x 1}.
Solution 17.7
Let y(t) denote the displacement of the mass from equilibrium. The forces on the mass are ky(t)
due to the spring and y 0 (t) due to friction. We equate the external forces to my 00 (t) to find the
differential equation of the motion.
my 00 = ky y 0
0 k
y 00 + y + y=0
m m
We respectively call these cases: strongly damped, weakly damped and critically damped. In the
case that m = k = 1 the solution is
p
et/2 22 sinh 2 4 t/2 if > 2,
4 p
y(t) = et/2 2 sin 4 2 t/2 if < 2,
42
t et
if = 2.
Note that when t is large, t et is much smaller than et/2 for < 2. To prove this we examine
the ratio of these functions as t .
t et t
lim = lim (1/2)t
t et/2 t e
1
= lim
t (1 /2) e(1)t
=0
Using this result, we see that the critically damped solution decays faster than the weakly damped
solution.
We can write the strongly damped solution as
2 2 2
et/2 p e 4 t/2 e 4 t/2 .
2 4
586
0.5
Strong
0.4 Weak
0.3 Critical
0.2
0.1
2 4 6 8 10
-0.1
2 4 t/2
For large t, the dominant factor is e . Note that for > 2,
p p
2 4 = ( + 2)( 2) > 2.
Solution 17.8
Clearly y = c cos(x ) satisfies the differential equation y 00 + y = 0. Since it is a two-parameter
family of functions, it must be the general solution.
Using a trigonometric identity we can rewrite the solution as
c cos = 0,
c sin = 1,
which has the solutions c = 1, = (2n + 1/2), and c = 1, = (2n 1/2), for n Z.
Clearly y = c cosh(x ) satisfies the differential equation y 00 y = 0. Since it is a two-parameter
family of functions, it must be the general solution.
Using a trigonometric identity we can rewrite the solution as
c cosh = 0,
c sinh = 1,
which has the solutions c = i, = (2n + 1/2), and c = i, = (2n 1/2), for n Z.
Solution 17.9
We substitute y = et into the differential equation.
2 et +5 et +6 et = 0
2 + 5 + 6 = 0
( + 2)( + 3) = 0
587
The general solution of the differential equation is
c1 + c2 = 1,
2c1 3c2 = V.
y = (3 + V ) e2t (2 + V ) e3t .
Solution 17.10
For negative x, the differential equation is
y 00 y = 0.
2 1 = 0
= 1
y = ex , ex
We can take linear combinations to write the solutions in terms of the hyperbolic sine and cosine.
y = {cosh(x), sinh(x)}
y 00 + y = 0.
2 + 1 = 0
=
y = ex , ex
We can take linear combinations to write the solutions in terms of the sine and cosine.
y = {cos(x), sin(x)}
We will find the fundamental set of solutions at x = 0. That is, we will find a set of solutions,
{y1 , y2 } that satisfy the conditions:
588
Euler Equations
Solution 17.11
We consider an Euler equation,
x2 y 00 + xy 0 + y = 0, x > 0.
u00 + u = 0.
2 + 1 = 0
= i
{e , e }.
Since
e + e e e
cos = and sin = ,
2 2
another linearly independent set of solutions is
{cos , sin }.
Solution 17.12
Consider the differential equation
x2 y 00 2xy + 2y = 0.
With the substitution y = x this equation becomes
( 1) 2 + 2 = 0
2 3 + 2 = 0
= 1, 2.
Solution 17.13
We note that
1 0
xy 000 + y 00 + y =0
x
is an Euler equation. The substitution y = x yields
3 32 + 2 + 2 + = 0
3 22 + 2 = 0.
= 0, = 1 + i, =1
589
The corresponding solutions to the differential equation are
y = x0 y = x1+ y = x1
y=1 y = x e ln x y = x e ln x .
Solution 17.14
We substitute y = x into the differential equation.
x2 y 00 + (2a + 1)xy 0 + by = 0
( 1) + (2a + 1) + b = 0
2 + 2a + b = 0
p
= a a2 b
By taking the sum and difference of these solutions, we can write the general solution as
p p
y = c1 xa cos b a2 ln x + c2 xa sin b a2 ln x .
For a2 = b, the quadratic in lambda has a double root at = a. The general solution of the
differential equation is
y = c1 xa + c2 xa ln x.
In summary, the general solution is:
a 2 2
x
c1 x a b + c2 x a b if a2 > b,
y = xa c1 cos b a2 ln x + c2 sin b a2 ln x
if a2 < b,
xa (c + c ln x)
if a2 = b.
1 2
Solution 17.15
For a 6= 0, two linearly independent solutions of
y 00 a2 y = 0
are
y1 = eax , y2 = eax .
For a = 0, we have
y1 = e0x = 1, y2 = x e0x = x.
In this case the solution are defined by
d ax
y1 = [eax ]a=0 , y2 = e .
da a=0
590
By the definition of differentiation, f 0 (0) is
f (a) f (a)
f 0 (0) = lim .
a0 2a
Thus the second solution in the case a = 0 is
eax eax
y2 = lim
a0 a
Consider the solutions
ex ex
y1 = eax , y2 = lim .
a
Clearly y1 is a solution for all a. For a 6= 0, y2 is a linear combination of eax and eax and is
thus a solution. Since the coefficient of eax in this linear combination is non-zero, it is linearly
independent to y1 . For a = 0, y2 is one half the derivative of eax evaluated at a = 0. Thus it is a
solution.
For a 6= 0, two linearly independent solutions of
x2 y 00 + xy 0 a2 y = 0
are
y1 = xa , y2 = xa .
For a = 0, we have
a d a
y1 = [x ]a=0 = 1, y2 = x = ln x.
da a=0
Solution 17.16
1.
x2 y 00 2xy 0 + 2y = 0
We substitute y = x into the differential equation.
( 1) 2 + 2 = 0
2 3 + 2 = 0
( 1)( 2) = 0
y = c1 x + c2 x2
2.
x2 y 00 2y = 0
We substitute y = x into the differential equation.
( 1) 2 = 0
2 2 = 0
( + 1)( 2) = 0
c1
y= + c2 x2
x
591
3.
x2 y 00 xy 0 + y = 0
( 1) + 1 = 0
2 2 + 1 = 0
( 1)2 = 0
y = c1 x + c2 x ln x.
Exact Equations
Solution 17.17
We note that
y 00 + y 0 sin x + y cos x = 0
is an exact equation.
d 0
[y + y sin x] = 0
dx
y 0 + y sin x = c1
d cos x
ye = c1 e cos x
dx
Z
y = c1 ecos x e cos x dx + c2 ecos x
Solution 17.18
(1 x2 )(0) 2x(1) + 2x = 0
We look for a second solution of the form y = xu. We substitute this into the differential equation
592
and use the fact that x is a solution.
(1 x2 )(xu00 + 2u0 ) 2x(xu0 + u) + 2xu = 0
(1 x2 )(xu00 + 2u0 ) 2x(xu0 ) = 0
(1 x2 )xu00 + (2 4x2 )u0 = 0
u00 2 4x2
=
u0 x(x2 1)
00
u 2 1 1
= +
u0 x 1x 1+x
ln(u0 ) = 2 ln(x) ln(1 x) ln(1 + x) + const
0 c
ln(u ) = ln
x2 (1 x)(1 + x)
c
u0 = 2
x (1 x)(1 + x)
0 1 1 1
u =c + +
x2 2(1 x) 2(1 + x)
1 1 1
u = c ln(1 x) + ln(1 + x) + const
x 2 2
1 1 1+x
u = c + ln + const
x 2 1x
A second linearly independent solution is
x 1+x
y = 1 + ln .
2 1x
Solution 17.19
We are given that y = ex is a solution of
x+1 0 1
y 00 y + y = 0.
x x
To find another linearly independent solution, we will use reduction of order. Substituting
y = u ex
y 0 = (u0 + u) ex
y 00 = (u00 + 2u0 + u) ex
into the differential equation yields
x+1 0 1
u00 + 2u0 + u (u + u) + u = 0.
x x
x1 0
u00 + u =0
Z x
d 0 1
u exp 1 dx =0
dx x
u0 exln x = c1
u0 = c1 x ex
Z
u = c1 x ex dx + c2
u = c1 (x ex + ex ) + c2
y = c1 (x + 1) + c2 ex
593
Thus a second linearly independent solution is
y = x + 1.
Solution 17.20
We are given that y = x is a solution of
(1 2x)y 00 + 4xy 0 4y = 0.
To find another linearly independent solution, we will use reduction of order. Substituting
y = xu
y 0 = xu0 + u
y 00 = xu00 + 2u0
Solution 17.21
One solution of
(x 1)y00 xy0 + y = 0,
is y1 = ex . We find a second solution with reduction of order. We make the substitution y2 = u ex
in the differential equation. We determine u up to an additive constant.
594
Chapter 18
In mathematics you dont understand things. You just get used to them.
- Johann von Neumann
595
2
dx) = x2 .
R
The integrating factor is I(x) = exp( x
d 2
(x u) = x2
dx
1
x2 u = x3 + c
3
1 c
u= x+ 2
3 x
1
1 c
y = x+ 2
3 x
3x2
y= .
c x2
then we would be able to solve the differential equation. Factoring reduces the problem to a system
of first order equations. We start with the factored equation
d d
+ a(x) + b(x) y = f (x).
dx dx
d
We set u = dx + b(x) y and solve the problem
d
+ a(x) u = f (x).
dx
Lets say by some insight or just random luck we are able to see that this equation can be factored
into
d d 1
+x y = 0.
dx dx x
596
We first solve the equation
d
+ x u = 0.
dx
u0 + xu = 0
d x2 /2
e u =0
dx
2
u = c1 ex /2
Then we solve for y with the equation
d 1 2
y = u = c1 ex /2 .
dx x
1 2
y 0 y = c1 ex /2
x
d 2
x y = c1 x1 ex /2
1
dx
Z
2
y = c1 x x1 ex /2 dx + c2 x
If we were able to solve for a and b in Equation 18.1 in terms of p and q then we would be able
to solve any second order differential equation. Equating the two operators,
d2
d d d
+p +q = +a +b
dx2 dx dx dx
d2 d
= 2
+ (a + b) + (b0 + ab).
dx dx
Thus we have the two equations
a + b = p, and b0 + ab = q.
Eliminating a,
b0 + (p b)b = q
b0 = b2 pb + q
Now we have a nonlinear equation for b that is no easier to solve than the original second order
linear equation.
597
Now we have a second order linear equation for u.
u 0
Result 18.2.1 The substitution y = au transforms the Riccati equation
( 1) + 1 = ( 1)2 = 0.
c1 + c2 (1 + log x)
y=
c1 x + c2 x log x
1 + c(1 + log x)
y=
x + cx log x
dy 1
= 3
dx y xy 2
dx
= y 3 xy 2
dy
x0 + y 2 x = y 3
598
Now we have a first order equation for x.
d y3 /3 3
e x = y 3 ey /3
dy
Z
y 3 /3 3 3
x=e y 3 ey /3 dy + c ey /3
y 0 = u(y)
d
y 00 = u(y)
dx
dy d
= u(y)
dx dy
= y 0 u0
= u0 u
y 000 = (u00 u + (u0 )2 )u.
599
Thus we see that the equation for u(y) will have an order of one less than the original equation.
y 00 = y + (y 0 )2 .
u0 u = y + u2
u0 = u + yu1 .
1 0
v =v+y
2
v 0 2v = 2y
d 2y
e v = 2y e2y
dy
Z
2y
v(y) = c1 e + e 2y
2y e2y dy
Z
2y 2y 2y 2y
v(y) = c1 e + e y e + e dy
1
v(y) = c1 e2y + e2y y e2y e2y
2
1
v(y) = c1 e2y y .
2
Now we solve for u.
1/2
2y 1
u(y) = c1 e y .
2
1/2
dy 1
= c1 e2y y
dx 2
dy
dx =
1 1/2
c1 e2y y 2
Z
1
x + c2 = dy
1 1/2
c1 e2y y 2
y 00 + y 3 = 0.
600
With the change of variables, u(y) = y 0 , the equation becomes
u0 u + y 3 = 0.
This equation is separable.
u du = y 3 dy
1 2 1
u = y 4 + c1
2 4
1/2
1
u = 2c1 y 4
2
1/2
1
y 0 = 2c1 y 4
2
dy
= dx
(2c1 12 y 4 )1/2
601
You may recall that the change of variables x = et reduces an Euler equation to a constant
coefficient equation. To generalize this result to nonlinear equations we will see that the same
change of variables reduces an equidimensional-in-x equation to an autonomous equation.
Writing the derivatives with respect to x in terms of t,
d dt d d
x = et , = = et
dx dx dt dt
d d
x =
dx dt
2
d2
d d d d d
x2 2 = x x x = 2 .
dx dx dx dx dt dt
602
The change of variables y(x) = eu(x) reduces an nth order equidimensional-in-y equation to an
equation of order n 1 for u0 . Writing the derivatives of eu(x) ,
d u
e = u0 eu
dx
d2 u
e = (u00 + (u0 )2 ) eu
dx2
d3 u
e = (u000 + 3u00 u00 + (u0 )3 ) eu .
dx3
y 00 + p(x)y 0 + q(x)y = 0.
Thus we have a Riccati equation for u0 . This transformation might seem rather useless since lin-
ear equations are usually easier to work with than nonlinear equations, but it is often useful in
determining the asymptotic behavior of the equation.
y 00 y + (y 0 )2 y 2 = 0.
v 00 (v 0 )2 (v 0 )2
= 2 +1
2v 2v 2 4v 2
v 00 2v = 0
v = c1 e 2x
+c2 e 2x
0
c1 e 2x c2 e 2x
u =2 2
c1 e 2x +c2 e 2x
c1 2 e 2x c2 2 e 2x
Z
u=2 dx + c3
c1 e 2x +c2 e 2x
u = 2 log c1 e 2x +c2 e 2x + c3
2
y = c1 e 2x +c2 e 2x ec3
603
Result 18.6.1 A differential equation is equidimensional-in-y if it is invariant
under the change of variables y(x) = cv(x). An nth order equidimensional-in-y
equation can be reduced to an equation of order n 1 in u0 with the change
of variables y(x) = eu(x) .
y 00 + x2 y 2 = 0.
c 00
v () + c2 x2 c2 v 2 () = 0.
c2
Equating powers of c in the two terms yields = 4.
Introducing the change of variables y(x) = x4 u(x) yields
d2 4
x u(x) + x2 (x4 u(x))2 = 0
dx 2
604
18.8 Exercises
Exercise 18.1
1. Find the general solution and the singular solution of the Clairaut equation,
y = xp + p2 .
2. Show that the singular solution is the envelope of the general solution.
Bernoulli Equations
Exercise 18.2 (mathematica/ode/techniques nonlinear/bernoulli.nb)
Consider the Bernoulli equation
dy
+ p(t)y = q(t)y .
dt
1. Solve the Bernoulli equation for = 1.
2. Show that for 6= 1 the substitution u = y 1 reduces Bernoullis equation to a linear
equation.
3. Find the general solution to the following equations.
dy
t2 + 2ty y 3 = 0, t > 0
dt
(a)
dy
+ 2xy + y 2 = 0
dx
(b)
Exercise 18.3
Consider a population, y. Let the birth rate of the population be proportional to y with constant
of proportionality 1. Let the death rate of the population be proportional to y 2 with constant of
proportionality 1/1000. Assume that the population is large enough so that you can consider y to
be continuous. What is the population as a function of time if the initial population is y0 ?
Exercise 18.4
Show that the transformation u = y 1n reduces the equation to a linear first order equation. Solve
the equations
dy
1. t2 + 2ty y 3 = 0 t > 0
dt
dy
2. = ( cos t + T ) y y 3 , and T are real constants. (From a fluid flow stability problem.)
dt
Riccati Equations
Exercise 18.5
1. Consider the Ricatti equation,
dy
= a(x)y 2 + b(x)y + c(x).
dx
Substitute
1
y = yp (x) +
u(x)
into the Ricatti equation, where yp is some particular solution to obtain a first order linear
differential equation for u.
605
2. Consider a Ricatti equation,
y 0 = 1 + x2 2xy + y 2 .
Verify that yp (x) = x is a particular solution. Make the substitution y = yp + 1/u to find the
general solution.
What would happen if you continued this method, taking the general solution for yp ? Would
you be able to find a more general solution?
3. The substitution
u0
y=
au
gives us the second order, linear, homogeneous differential equation,
0
a
u00 + b u0 + acu = 0.
a
The general solution for u has two constants of integration. However, the solution for y should
only have one constant of integration as it satisfies a first order equation. Write y in terms of
the solution for u and verify tha y has only one constant of integration.
Autonomous Equations
*Equidimensional-in-x Equations
*Equidimensional-in-y Equations
*Scale-Invariant Equations
606
18.9 Hints
Hint 18.1
Bernoulli Equations
Hint 18.2
Hint 18.3
The differential equation governing the population is
dy y2
=y , y(0) = y0 .
dt 1000
This is a Bernoulli equation.
Hint 18.4
Riccati Equations
Hint 18.5
Autonomous Equations
*Equidimensional-in-x Equations
*Equidimensional-in-y Equations
*Scale-Invariant Equations
607
2
-4 -2 2 4
-2
-4
18.10 Solutions
Solution 18.1
We consider the Clairaut equation,
y = xp + p2 . (18.2)
1. We differentiate Equation 18.2 with respect to x to obtain a second order differential equation.
y 0 = y 0 + xy 00 + 2y 0 y 00
y 00 (2y 0 + x) = 0
Equating the first or second factor to zero will lead us to two distinct solutions.
x
y 00 = 0 or y0 =
2
If y 00 = 0 then y 0 p is a constant, (say y 0 = c). From Equation 18.2 we see that the general
solution is,
y(x) = cx + c2 . (18.3)
Recall that the general solution of a first order differential equation has one constant of inte-
gration.
If y 0 = x/2 then y = x2 /4+const. We determine the constant by substituting the expression
into Equation 18.2.
x2 x x 2
+c=x +
4 2 2
Thus we see that a singular solution of the Clairaut equation is
1
y(x) = x2 . (18.4)
4
Recall that a singular solution of a first order nonlinear differential equation has no constant
of integration.
2. Equating the general and singular solutions, y(x), and their derivatives, y 0 (x), gives us the
system of equations,
1 1
cx + c2 = x2 , c = x.
4 2
Since the first equation is satisfied for c = x/2, we see that the solution y = cx + c2 is tangent
to the solution y = x2 /4 at the point (2c, |c|). The solution y = cx + c2 is plotted for
c = . . . , 1/4, 0, 1/4, . . . in Figure 18.1.
608
The envelope of a one-parameter family F (x, y, c) = 0 is given by the system of equations,
F (x, y, c) = 0, Fc (x, y, c) = 0.
y = cx + c2 , 0 = x + 2c.
Substituting the solution of the second equation, c = x/2, into the first equation gives the
envelope,
2
1 1 1
y = x x + x = x2 .
2 2 4
Thus we see that the singular solution is the envelope of the general solution.
Bernoulli Equations
Solution 18.2
1.
dy
+ p(t)y = q(t)y
dt
dy
= (q p) dt
y
Z
ln y = (q p) dt + c
R
(qp) dt
y = ce
609
The integrating factor is
R
=e (4/t) dt
= e4 ln t = t4 .
(b)
dy
+ 2xy + y 2 = 0
dx
y0 2x
+ = 1
y2 y
u0 2xu = 1
Solution 18.3
The differential equation governing the population is
dy y2
=y , y(0) = y0 .
dt 1000
We recognize this as a Bernoulli equation. The substitution u(t) = 1/y(t) yields
du 1 1
=u , u(0) = .
dt 1000 y0
1
u0 + u =
1000
t Z t
1 t e
u= e + e d
y0 1000 0
1 1 1
u= + et
1000 y0 1000
610
Solving for y(t),
1
1 1 1
y(t) = + et .
1000 y0 1000
As a check, we see that as t , y(t) 1000, which is an equilibrium solution of the differential
equation.
dy y2
=0=y y = 1000.
dt 1000
Solution 18.4
1.
dy
t2 + 2ty y 3 = 0
dt
dy
+ 2t1 y = t2 y 3
dt
u0 4t1 u = 2t2
1
y(t) = q
2 1
5t + ct4
5t
y(t) =
2 + ct5
2.
dy
( cos t + T ) y = y 3
dt
u0 + 2 ( cos t + T ) u = 2
611
We multiply by the integrating factor and integrate.
d 2( sin t+T t)
e u = 2 e2( sin t+T t)
dt Z
2( sin t+T t) 2( sin t+T t)
u = 2e e dt + c
e sin t+T t
y = q R
2 e2( sin t+T t) dt + c
Riccati Equations
Solution 18.5
We consider the Ricatti equation,
dy
= a(x)y 2 + b(x)y + c(x). (18.5)
dx
1. We substitute
1
y = yp (x) +
u(x)
into the Ricatti equation, where yp is some particular solution.
u0
yp 1 1
yp0 2 = +a(x) yp2 + 2 + 2 + b(x) yp + + c(x)
u u u u
u0
1 yp 1
2 = b(x) + a(x) 2 + 2
u u u u
u0 = (b + 2ayp ) u a
We obtain a first order linear differential equation for u whose solution will contain one constant
of integration.
2. We consider a Ricatti equation,
y 0 = 1 + x2 2xy + y 2 . (18.6)
1 = 1 + x2 2xx + x2
u0 = (2x + 2x) u 1
u = x + c
1
y =x+
cx
1
What would happen if we continued this method? Since y = x + cx is a solution of the
Ricatti equation we can make the substitution,
1 1
y =x+ + , (18.7)
c x u(x)
612
which will lead to a solution for y which has two constants of integration. Then we could
repeat the process, substituting the sum of that solution and 1/u(x) into the Ricatti equation
to find a solution with three constants of integration. We know that the general solution of
a first order, ordinary differential equation has only one constant of integration. Does this
method for Ricatti equations violate this theorem? Theres only one way to find out. We
substitute Equation 18.7 into the Ricatti equation.
0 1
u = 2x + 2 x + u1
cx
2
u0 = u1
cx
2
u0 + u = 1
cx
The integrating factor is
1
I(x) = e2/(cx) = e2 log(cx) = .
(c x)2
It appears that we we have found a solution that has two constants of integration, but appear-
ances can be deceptive. We do a little algebraic simplification of the solution.
1 1
y =x+ +
c x (b(c x) 1)(c x)
(b(c x) 1) + 1
y =x+
(b(c x) 1)(c x)
b
y =x+
b(c x) 1
1
y =x+
(c 1/b) x
This is actually a solution, (namely the solution we had before), with one constant of inte-
gration, (namely c 1/b). Thus we see that repeated applications of the procedure will not
produce more general solutions.
3. The substitution
u0
y=
au
gives us the second order, linear, homogeneous differential equation,
0
a
u00 + b u0 + acu = 0.
a
613
The solution to this linear equation is a linear combination of two homogeneous solutions, u1
and u2 .
u = c1 u1 (x) + c2 u2 (x)
The solution of the Ricatti equation is then
Since we can divide the numerator and denominator by either c1 or c2 , this answer has only
one constant of integration, (namely c1 /c2 or c2 /c1 ).
x0 y 1/2 x = y 1/2
2y 3/2 2y 3/2
d
x exp = y 1/2 exp
dy 3 3
3/2 3/2
2y 2y
x exp = exp + c1
3 3
3/2
2y
x = 1 + c1 exp
3
3/2
x+1 2y
= exp
c1 3
x+1 2 3/2
log = y
c1 3
2/3
3 x+1
y= log
2 c1
2/3
3
y = c + log(x + 1)
2
Autonomous Equations
*Equidimensional-in-x Equations
*Equidimensional-in-y Equations
*Scale-Invariant Equations
614
Chapter 19
Prize intensity more than extent. Excellence resides in quality not in quantity. The best is always
few and rare - abundance lowers value. Even among men, the giants are usually really dwarfs.
Some reckon books by the thickness, as if they were written to exercise the brawn more than the
brain. Extent alone never rises above mediocrity; it is the misfortune of universal geniuses that in
attempting to be at home everywhere are so nowhere. Intensity gives eminence and rises to the
heroic in matters sublime.
-Balthasar Gracian
We can solve this differential equation by making the substitution y = ex . This yields the algebraic
equation
2 + a + b = 0.
1 p
= a a2 4b
2
There are two cases to consider. If a2 6= 4b then the solutions are
a2 4b)x/2 a2 4b)x/2
y1 = e(a+ , y2 = e(a
If a2 = 4b then we have
y1 = eax/2 , y2 = x eax/2
Note that regardless of the values of a and b the solutions are of the form
y = eax/2 u(x)
615
We would like to write the solutions to the general differential equation in terms of the solutions
to simpler differential equations. We make the substitution
y = ex u
The derivatives of y are
y 0 = ex (u0 + u)
y 00 = ex (u00 + 2u0 + 2 u)
Substituting these into the differential equation yields
u00 + (2 + a)u0 + (2 + a + b)u = 0
In order to get rid of the u0 term we choose
a
= .
2
The equation is then
a2
u00 + b u = 0.
4
There are now two cases to consider.
616
19.2 Normal Form
19.2.1 Second Order Equations
Consider the second order equation
u00 + I(x)y = 0.
This is known as the normal form of (19.1). The function I(x) is known as the invariant of the
equation.
Now to find the change of variables that will accomplish this transformation. We make the
substitution y(x) = a(x)u(x) in (19.1).
a0
00
pa0
00 0 a
u + 2 +p u + + +q u=0
a a a
To eliminate the u0 term, a(x) must satisfy
a0
2 +p=0
a
1
a0 + pa = 0
2
Z
1
a = c exp p(x) dx .
2
For this choice of a, our differential equation for u becomes
p2 p0
u00 + q u = 0.
4 2
Two differential equations having the same normal form are called equivalent.
617
19.2.2 Higher Order Differential Equations
Consider the third order differential equation
y 00 + p(x)y 0 + q(x)y = 0.
d d d d
= = f0
dx dx d d
d2 d d d2 d
2
= f0 f0 = (f 0 )2 2 + f 00
dx d d d d
618
The differential equation becomes
(f 0 )2 u00 + f 00 u0 + pf 0 u0 + qu = 0.
f 00 + pf 0 = 0
Z
0
f = exp p(x) dx
Z Z
f= exp p(x) dx dx.
into Z
00
u () + q(x) exp 2 p(x) dx u() = 0.
y 00 + p(x)y 0 + q(x)y = 0.
(f 0 )2 u00 + (f 00 + pf 0 )u0 + qu = 0.
(f 0 )2 = c1 q, and f 00 + pf 0 = c2 q,
619
Z p
f =c q(x) dx.
If the expression
q 0 + 2pq
q 3/2
is a constant then the change of variables
Z p
=c q(x) dx, u() = y(x),
Fredholms Equations. Fredholms integral equations of the first and second kinds are
Z b
N (x, )f () d = f (x),
a
Z b
y(x) = f (x) + N (x, )y() d.
a
620
Integrating this equation twice yields
Z xZ Z x Z
00 0
y () + p()y () + q()y() d d = f () d d
a a a a
Z x Z x
(x )[y 00 () + p()y 0 () + q()y()] d = (x )f () d.
a a
Z x
0
(x a)y (a) + y(x) y(a) (x a)p(a)y(a) [(x )p0 () p()]y() d
a
Z x Z x
+ (x )q()y() d = (x )f () d.
a a
Note that the initial conditions for the differential equation are built into the Volterra equation.
Setting x = a in the Volterra equation yields y(a) = . Differentiating the Volterra equation,
Z x Z x
y 0 (x) = f () d + (p(a) + ) p(x)y(x) + [p0 () q()] p()y() d
a a
where
Z x
F (x) = (x )f () d + (x a)(p(a) + ) +
a
N (x, ) = (x )[p0 () q()] p().
621
19.4.2 Boundary Value Problems
Consider the boundary value problem
To obtain a problem with homogeneous boundary conditions, we make the change of variable
y(x) = u(x) + + (x a)
ba
to obtain the problem
u00 = f (x), u(a) = u(b) = 0.
Now we will use Greens functions to write the solution as an integral. First we solve the problem
The homogeneous solutions of the differential equation that satisfy the left and right boundary
conditions are
c1 (x a) and c2 (x b).
Thus the Greens function has the form
(
c1 (x a), for x
G(x|) =
c2 (x b), for x
From the above result we can see that the solution satisfies
Z b
y(x) = + (x a) + G(x|)[f () p()y 0 () q()y()] d.
ba a
622
Result 19.4.2 The boundary value problem
where
Z b
F (x) = + (x a) + G(x|)f () d,
ba a
Z b
N (x, ) = H(x|)y() d,
(a
(xa)(b)
ba
, for x
G(x|) = (xb)(a)
ba
, for x ,
(
(xa)
ba
p() + (xa)(b)
ba
[p0 () q()] for x
H(x|) = (xb) (xb)(a) 0
ba
p() + ba
[p () q()] for x .
623
19.5 Exercises
The Constant Coefficient Equation
Normal Form
Exercise 19.1
Solve the differential equation
4 1
y + 2 + x y0 +
00
24 + 12x + 4x2 y = 0.
3 9
v 00 + ( 2 + A)v = 0
v 00 = v
v 00 + v = 0
v 00 = 0.
Exercise 19.3
Show that the solution of the differential equation
00 b 0 d e
y +2 a+ y + c+ + 2 y =0
x x x
Exercise 19.4
Show that the second order Euler equation
d2 y dy
x2 2
+ a1 x + a0 y = 0
d x dx
can be transformed to a constant coefficient equation.
Exercise 19.5
Solve Bessels equation of order 1/2,
001 0 1
y + y + 1 2 y = 0.
x 4x
624
19.6 Hints
The Constant Coefficient Equation
Normal Form
Hint 19.1
Transform the equation to normal form.
Hint 19.3
Transform the equation to normal form and then apply the scale transformation x = .
Hint 19.4
Make the change of variables x = et , y(x) = u(t). Write the derivatives with respect to x in terms
of t.
x = et
dx = et dt
d d
= et
dx dt
d d
x =
dx dt
Hint 19.5
Transform the equation to normal form.
625
19.7 Solutions
The Constant Coefficient Equation
Normal Form
Solution 19.1
4 1
y 00 + 2 + x y 0 + 24 + 12x + 4x2 y = 0
3 9
To transform the equation to normal form we make the substitution
Z
1 4
y = exp 2 + x dx u
2 3
2
= exx /3
u
u00 + u = 0
u00 + ( + x + x2 )u = 0
626
= 0.
= 0. We immediately have the equation
u00 = 0.
6= 0. With the change of variables
v() = u(x), x = 1/2 ,
we obtain
v 00 + v = 0.
6= 0. We have the equation
y 00 + ( + x)y = 0.
The scale transformation x = + yields
v 00 + 2 ( + ( + ))y = 0
v 00 = [3 + 2 ( + )]v.
Choosing
= ()1/3 , =
yields the differential equation
v 00 = v.
6= 0. The scale transformation x = + yields
v 00 + 2 [ + ( + ) + ( + )2 ]v = 0
v 00 + 2 [ + + 2 + ( + 2) + 2 2 ]v = 0.
Choosing
= 1/4 , =
2
yields the differential equation
v 00 + ( 2 + A)v = 0
where
1
A = 1/2 3/2 .
4
Solution 19.3
The substitution that will transform the equation to normal form is
Z
1 b
y = exp 2 a+ dx u
2 x
= xb eax u.
The invariant of the equation is
2
d e 1 b 1 d b
I(x) = c + + 2 2 a+ 2 a+
x x 4 x 2 dx x
2
d 2ab e + b b
= c ax + +
x x2
+ + 2.
x x
The invariant form of the differential equation is
00
u + + + 2 u = 0.
x x
We consider the following cases:
627
= 0.
Choosing = 1 , we obtain
1
v 00 + + 2 u = 0.
1/2
v 00 + 1 + + 2 v = 0.
Solution 19.4
We write the derivatives with respect to x in terms of t.
x = et
dx = et dt
d d
= et
dx dt
d d
x =
dx dt
2
d
Now we express x2 dx 2 in terms of t.
d2 d2
d d d d
x2 =x x x = 2
dx2 dx dx dx dt dt
Thus under the change of variables, x = et , y(x) = u(t), the Euler equation becomes
u00 u0 + a1 u0 + a0 u = 0
u00 + (a1 1)u0 + a0 u = 0.
Solution 19.5
The transformation
Z
1 1
y = exp dx = x1/2 u
2 x
628
will put the equation in normal form. The invariant is
1 1 1 1 1
I(x) = 1 2 2
= 1.
4x 4 x 2 x2
629
630
Chapter 20
I do not know what I appear to the world; but to myself I seem to have been only like a boy
playing on a seashore, and diverting myself now and then by finding a smoother pebble or a prettier
shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.
The derivative of the Heaviside function is zero for x 6= 0. At x = 0 the derivative is undefined. We
will represent the derivative of the Heaviside function by the Dirac delta function, (x). The delta
function is zero for x 6= 0 and infinite at the point x = 0. Since the derivative of H(x) is undefined,
(x) is not a function in the conventional sense of the word. One can derive the properties of the
delta function rigorously, but the treatment in this text will be almost entirely heuristic.
The second property comes from the fact that (x) represents the derivative of H(x). The Dirac
delta function is conceptually pictured in Figure 20.1.
631
Let f (x) be a continuous function that vanishes at infinity. Consider the integral
Z
f (x)(x) dx.
10
-1 1
The Dirac delta function (x) can be thought of as b(x, ) in the limit as 0. Note that the
delta function so defined satisfies the properties,
( Z
0 for x 6= 0
(x) = and (x) dx = 1
for x = 0
632
Delayed Limiting Process. When the Dirac delta function appears inside an integral, we can
think of the delta function as a delayed limiting process.
Z Z
f (x)(x) dx lim f (x)b(x, ) dx.
0
Let f (x) be a continuous function and let F 0 (x) = f (x). We compute the integral of f (x)(x).
1 /2
Z Z
f (x)(x) dx = lim f (x) dx
0 /2
1 /2
= lim [F (x)]/2
0
F (/2) F (/2)
= lim
0
= F 0 (0)
= f (0)
n (x) = 0 for x 6= 0
Z
n (x) dx = 1
Rn
It is easy to verify, that the n-dimensional Dirac delta function can be written as a product of
1-dimensional Dirac delta functions.
n
Y
n (x) = (xk )
k=1
Where the transformation is non-singular, one merely divides the Dirac delta function by the Jaco-
bian of the transformation to the coordinate system.
Example 20.4.1 Consider the Dirac delta function in cylindrical coordinates, (r, , z). The Jaco-
bian is J = r. Z Z 2 Z
3 (x x0 ) r dr d dz = 1
0 0
633
Z Z 2 Z
1
(r r0 ) ( 0 ) (z z0 ) r dr d dz
0 0 r
Z Z 2 Z
= (r r0 ) dr ( 0 ) d (z z0 ) dz = 1
0 0
For r0 = 0, we have
1
3 (x x0 ) =
(r) (z z0 )
2r
since this again satisfies the two defining properties.
1
(r) (z z0 ) = 0 for (r, z) 6= (0, z0 )
2r
Z Z 2 Z Z Z 2 Z
1 1
(r) (z z0 ) r dr d dz = (r) dr d (z z0 ) dz = 1
0 0 2r 2 0 0
634
20.5 Exercises
Exercise 20.1
Let f (x) be a function that is continuous except for a jump discontinuity at x = 0. Using a delayed
limiting process, show that Z
f (0 ) + f (0+ )
= f (x)(x) dx.
2
Exercise 20.2
Show that the Dirac delta function is symmetric.
(x) = (x)
Exercise 20.3
Show that
(x)
(cx) = .
|c|
Exercise 20.4
We will consider the Dirac delta function with a function as on argument, (y(x)). Assume that
y(x) has simple zeros at the points {xn }.
y(xn ) = 0, y 0 (xn ) 6= 0
Further assume that y(x) has no multiple zeros. (If y(x) has multiple zeros (y(x)) is not well-defined
in the same sense that 1/0 is not well-defined.) Prove that
X (x xn )
(y(x)) = .
n
|y 0 (xn )|
Exercise 20.5
Justify the identity
Z
f (x) (n) (x) dx = (1)n f (n) (0)
(n) (x) = (1)n (n) (x) and x (n) (x) = n (n1) (x).
Exercise 20.6
Consider x = (x1 , . . . , xn ) Rn and the curvilinear coordinate system = (1 , . . . , n ). Show that
( )
(x a) =
|J|
where a and are corresponding points in the two coordinate systems and J is the Jacobian of the
transformation from x to .
x
J
Exercise 20.7
Determine the Dirac delta function in spherical coordinates, (r, , ).
635
20.6 Hints
Hint 20.1
Hint 20.2
Verify that (x) satisfies the two properties of the Dirac delta function.
Hint 20.3
Evaluate the integral,
Z
f (x)(cx) dx,
by noting that the Dirac delta function is symmetric and making a change of variables.
Hint 20.4
Let the points {m } partition the interval ( . . . ) such that y 0 (x) is monotone on each interval
(m . . . m+1 ). Consider some such interval, (a . . . b) (m . . . m+1 ). Show that
(R
Z b (y)
|y 0 (xn )|
dy if y(xn ) = 0 for a < xn < b
(y(x)) dx =
a 0 otherwise
for = min(y(a), y(b)) and = max(y(a), y(b)). Now consider the integral on the interval
( . . . ) as the sum of integrals on the intervals {(m . . . m+1 )}.
Hint 20.5
Justify the identity,
Z
f (x) (n) (x) dx = (1)n f (n) (0),
Hint 20.6
The Dirac delta function is defined by the following two properties.
(x a) = 0 for x 6= a
Z
(x a) dx = 1
Rn
Hint 20.7
Consider the special cases 0 = 0, and r0 = 0.
636
20.7 Solutions
Solution 20.1
Let F 0 (x) = f (x).
Z Z
1
f (x)(x) dx = lim f (x)b(x, ) dx
0
!
Z 0 Z /2
1
= lim f (x)b(x, ) dx + f (x)b(x, ) dx
0 /2 0
1
= lim ((F (0) F (/2)) + (F (/2) F (0)))
0
1 F (0) F (/2) F (/2) F (0)
= lim +
0 2 /2 /2
0 0 +
F (0 ) + F (0 )
=
2
f (0 ) + f (0+ )
=
2
Solution 20.2
(x) satisfies the two properties of the Dirac delta function.
(x) = 0 for x 6= 0
Z Z Z
(x) dx = (x) (dx) = (x) dx = 1
Solution 20.3
We note the the Dirac delta function is symmetric and we make a change of variables to derive the
identity.
Z Z
(cx) dx = (|c|x) dx
Z
(x)
= dx
|c|
(x)
(cx) =
|c|
Solution 20.4
Let the points {m } partition the interval ( . . . ) such that y 0 (x) is monotone on each interval
(m . . . m+1 ). Consider some such interval, (a . . . b) (m . . . m+1 ). Note that y 0 (x) is either
entirely positive or entirely negative in the interval. First consider the case when it is positive. In
this case y(a) < y(b).
Z b Z y(b) 1
dy
(y(x)) dx = (y) dy
a y(a) dx
Z y(b)
(y)
= dy
y(a) y 0 (x)
(R y(b)
(y)
y(a) y 0 (xn )
dy for y(xn ) = 0 if y(a) < 0 < y(b)
=
0 otherwise
637
Now consider the case that y 0 (x) is negative on the interval so y(a) > y(b).
Z b Z y(b) 1
dy
(y(x)) dx = (y) dy
a y(a) dx
Z y(b)
(y)
= 0
dy
y(a) y (x)
Z y(a)
(y)
= 0 (x)
dy
y(b) y
(R y(a)
(y)
y(b) y 0 (xn )
dy for y(xn ) = 0 if y(b) < 0 < y(a)
=
0 otherwise
We conclude that
(R
Z b (y)
|y 0 (xn )|
dy if y(xn ) = 0 for a < xn < b
(y(x)) dx =
a 0 otherwise
X (x xn )
(y(x)) =
n
|y 0 (xn )|
Solution 20.5
To justify the identity,
Z
f (x) (n) (x) dx = (1)n f (n) (0),
we will use integration by parts.
Z h i Z
(n)
f (x) (x) dx = f (x) (n1)
(x) f 0 (x) (n1) (x) dx
Z
= f 0 (x) (n1) (x) dx
Z
n
= (1) f (n) (x)(x) dx
n (n)
= (1) f (0)
638
CONTINUE HERE
(n) (x) = (1)n (n) (x) and x (n) (x) = n (n1) (x).
Solution 20.6
The Dirac delta function is defined by the following two properties.
(x a) = 0 for x 6= a
Z
(x a) dx = 1
Rn
( ) (1 1 ) (n n )
=
|J| |J|
= 0 for 6=
( )
Z Z
|J| d = ( ) d
|J|
Z
= (1 1 ) (n n ) d
Z Z
= (1 1 ) d1 (n n ) dn
=1
We conclude that ( )/|J| is the Dirac delta function in the coordinate system.
( )
(x a) =
|J|
Solution 20.7
We consider the Dirac delta function in spherical coordinates, (r, , ). The Jacobian is J = r2 sin().
Z Z 2 Z
3 (x x0 ) r2 sin() dr d d = 1
0 0 0
Z Z 2 Z
1
(r r0 ) ( 0 ) ( 0 ) r2 sin() dr d d
0 0 0 r2 sin()
Z Z 2 Z
= (r r0 ) dr ( 0 ) d ( 0 ) d = 1
0 0 0
639
We check that the value of the integral is unity.
Z Z 2 Z
1
(r r0 ) ( 0 ) r2 sin() dr d d
0 0 0 2r2 sin()
Z Z 2 Z
1
= (r r0 ) dr d ( 0 ) d = 1
2 0 0 0
640
Chapter 21
Inhomogeneous Differential
Equations
-Homer Simpson
Any function yp which satisfies this equation is called a particular solution of the differential equation.
We want to know the general solution of the inhomogeneous equation. Later in this chapter we will
cover methods of constructing this solution; now we consider the form of the solution.
Let yp be a particular solution. Note that yp + h is a particular solution if h satisfies the
homogeneous equation.
L[yp + h] = L[yp ] + L[h] = f + 0 = f
Therefore yp + yh satisfies the homogeneous equation. We show that this is the general solution
of the inhomogeneous equation. Let yp and p both be solutions of the inhomogeneous equation
L[y] = f . The difference of yp and p is a homogeneous solution.
yp and p differ by a linear combination of the homogeneous solutions {yk }. Therefore the general
solution of L[y] = f is the sum of any particular solution yp and the general homogeneous solution
yh .
Xn
yp + yh = yp (x) + ck yk (x)
k=1
641
Result 21.1.1 The general solution of the nth order linear inhomogeneous
equation L[y] = f (x) is
y = yp + c1 y1 + c2 y2 + + cn yn ,
where yp is a particular solution, {y1 , . . . , yn } is a set of linearly independent
homogeneous solutions, and the ck s are arbitrary constants.
y 00 + y = sin(2x)
y1 = cos x, y2 = sin x,
is a particular solution.
If f (x) is one of a few simple forms, then we can guess the form of a particular solution. Below we
enumerate some cases.
yp = cm xm + c1 x + c0 .
642
f = p(x) eax . If f is a polynomial times an exponential then guess
yp = (cm xm + c1 x + c0 ) eax .
f = p(x) eax cos (bx). If f is a cosine or sine times a polynomial and perhaps an exponential, f (x) =
p(x) eax cos(bx) or f (x) = p(x) eax sin(bx) then guess
The homogeneous solutions are y1 = et and y2 = t et . We guess a particular solution of the form
yp = at2 + bt + c.
We substitute the expression into the differential equation and equate coefficients of powers of t to
determine the parameters.
yp00 2yp0 + yp = t2
(2a) 2(2at + b) + (at2 + bt + c) = t2
(a 1)t2 + (b 4a)t + (2a 2b + c) = 0
a 1 = 0, b 4a = 0, 2a 2b + c = 0
a = 1, b = 4, c = 6
A particular solution is
yp = t2 + 4t + 6.
If the inhomogeneity is a sum of terms, L[y] = f f1 + + fk , you can solve the problems
L[y] = f1 , . . . , L[y] = fk independently and then take the sum of the solutions as a particular
solution of L[y] = f .
yp = a e2t .
We substitute the expression into the differential equation to determine the parameter.
A particular solution of L[y] = e2t is yp = e2t . Thus a particular solution of Equation 21.1 is
yp = t2 + 4t + 6 + e2t .
643
The above guesses will not work if the inhomogeneity is a homogeneous solution. In this case,
multiply the guess by the lowest power of x such that the guess does not contain homogeneous
solutions.
yp = at2 et
We substitute the expression into the differential equation and equate coefficients of like terms to
determine the parameters.
yp00 2yp0 + yp = et
(at2 + 4at + 2a) et 2(at2 + 2at) et +at2 et = et
2a et = et
1
a=
2
A particular solution is
t2 t
yp = e .
2
Example 21.2.4 Consider
1 0 1
y 00 +
y + 2 y = x, x > 0.
x x
The homogeneous solutions are y1 = cos(ln x) and y2 = sin(ln x). We guess a particular solution of
the form
yp = ax3
We substitute the expression into the differential equation and equate coefficients of like terms to
determine the parameter.
1 0 1
yp00 + yp + 2 yp = x
x x
6ax + 3ax + ax = x
1
a=
10
A particular solution is
x3
yp = .
10
644
We assume that the coefficient functions in the differential equation are continuous on [a . . . b]. Let
y1 (x) and y2 (x) be two linearly independent solutions to the homogeneous equation. Since the
Wronskian, Z
W (x) = exp p(x) dx ,
is non-vanishing, we know that these solutions exist. We seek a particular solution of the form,
yp = u1 (x)y1 + u2 (x)y2 .
We substitute the expression for yp and its derivatives into the inhomogeneous equation and use the
fact that y1 and y2 are homogeneous solutions to simplify the equation.
u001 y1 + 2u01 y10 + u1 y100 + u002 y2 + 2u02 y20 + u2 y200 + p(u01 y1 + u1 y10 + u02 y2 + u2 y20 ) + q(u1 y1 + u2 y2 ) = f
u001 y1 + 2u01 y10 + u002 y2 + 2u02 y20 + p(u01 y1 + u02 y2 ) = f
This is an ugly equation for u1 and u2 , however, we have an ace up our sleeve. Since u1 and u2
are undetermined functions of x, we are free to impose a constraint. We choose this constraint to
simplify the algebra.
u01 y1 + u02 y2 = 0
This constraint simplifies the derivatives of yp ,
We substitute the new expressions for yp and its derivatives into the inhomogeneous differential
equation to obtain a much simpler equation than before.
u01 y10 + u1 y100 + u02 y20 + u2 y200 + p(u1 y10 + u2 y20 ) + q(u1 y1 + u2 y2 ) = f (x)
u01 y10 + u02 y20 + u1 L[y1 ] + u2 L[y2 ] = f (x)
u01 y10 + u02 y20 = f (x).
With the constraint, we have a system of linear equations for u01 and u02 .
u01 y1 + u02 y2 = 0
u01 y10 + u02 y20 = f (x).
0
y1 y2 u1 0
=
y10 y20 u02 f
We solve this system using Kramers rule. (See Appendix S.)
f (x)y2 f (x)y1
u01 = u02 =
W (x) W (x)
645
We integrate to get u1 and u2 . This gives us a particular solution.
Z Z
f (x)y2 (x) f (x)y1 (x)
yp = y1 dx + y2 dx.
W (x) W (x)
646
is non-vanishing, we know that these solutions exist. We seek a particular solution of the form
y p = u 1 y1 + u 2 y2 + + u n yn .
We substitute yp and its derivatives into the inhomogeneous differential equation and use the fact
that the yk are homogeneous solutions.
u01
y1 y2 yn 0
y10 y20 yn0 u02 ..
.
.. .. .. .. = .
..
. . . . . 0
(n1) (n1) (n1)
y1 y2 yn u0n f
647
Result 21.3.2 Let {y1 , . . . , yn } be linearly independent homogeneous solu-
tions of
L[y] = y(n) + pn1 (x)y (n1) + + p1 (x)y 0 + p0 (x)y = f (x), on a < x < b.
A particular solution is
y p = u1 y 1 + u2 y 2 + + un y n .
where
Z
n+k+1 W [y1 , . . . , yk1 , yk+1 , . . . , yn ](x)
uk = (1) f (x) dx, for k = 1, . . . , n,
W [y1 , y2 , . . . , yn ](x)
and W [y1 , y2 , . . . , yn ](x) is the Wronskian of {y1 (x), . . . , yn (x)}.
The homogeneous solutions of the differential equation are ex and ex . We use variation of param-
eters to find a particular solution for x > 0.
Z x Z x
x e e x e e
yp = e d + e d
2 2
Z x Z x
1 1
= ex e(+1) d ex e(1) d
2 2
1 1
= ex + ex
2( + 1) 2( 1)
ex
= 2 , for x > 0
1
A particular solution for x < 0 is
ex
yp = , for x < 0.
2 1
Thus a particular solution is
e|x|
yp = .
2 1
The general solution is
1
y= e|x| +c1 ex +c2 ex .
1 2
Applying the boundary conditions, we see that c1 = c2 = 0. Apparently the solution is
e|x|
y= .
2 1
This function is plotted in Figure 21.1. This function satisfies the differential equation for positive
and negative x. It also satisfies the boundary conditions. However, this is NOT a solution to the
differential equation. Since the differential equation has no singular points and the inhomogeneous
term is continuous, the solution must be twice continuously differentiable. Since the derivative of
648
-4 -2 2 4
0.3
-0.05
0.25
-0.1
0.2
-0.15
0.15
-0.2
0.1
-0.25
0.05
-0.3
-4 -2 2 4
Figure 21.1: The Incorrect and Correct Solution to the Differential Equation.
e|x| /(2 1) has a jump discontinuity at x = 0, the second derivative does not exist. Thus
this function could not possibly be a solution to the differential equation. In the next example we
examine the right way to solve this problem.
In order for the solution over the whole domain to be twice differentiable, the solution and its first
derivative must be continuous. Thus we impose the additional boundary conditions
0 0
y (0) = y+ (0), y (0) = y+ (0).
The solutions that satisfy the two differential equations and the boundary conditions at infinity are
ex ex
y = + c ex , y+ = + c+ ex .
2 1 2 1
The two additional boundary conditions give us the equations
y (0) = y+ (0) c = c+
0 0
y (0) = y+ (0) + c = 2 c+ .
2 1 1
We solve these two equations to determine c and c+ .
c = c+ =
2 1
Thus the solution over the whole domain is
( x
e ex
2 1 for x < 0,
y= ex ex
2 1 for x > 0
649
e|x| e|x|
y= .
2 1
This function is plotted in Figure 21.1. You can verify that this solution is twice continuously
differentiable.
Bj [y] = j , for j = 1, . . . , n,
Let g(x) be an n-times continuously differentiable function that satisfies the boundary conditions.
Substituting y = u + g into the differential equation and boundary conditions yields
Note that the problem for u has homogeneous boundary conditions. Thus a problem with inhomo-
geneous boundary conditions can be reduced to one with homogeneous boundary conditions. This
technique is of limited usefulness for ordinary differential equations but is important for solving some
partial differential equation problems.
g(x) = sin x cos x satisfies the inhomogeneous boundary conditions. Substituting y = u + sin x
cos x yields
u00 + u = cos 2x, u0 (0) = u() = 0.
Note that since g(x) satisfies the homogeneous equation, the inhomogeneous term in the equation
for u is the same as that in the equation for y.
650
1 1
g(x) = cos x 3 satisfies the boundary conditions. Substituting y = u + cos x 3 yields
1
u00 + u = cos 2x + , u(0) = u() = 0.
3
Result 21.5.1 The nth order differential equation with boundary conditions
L[y] = f (x), Bj [y] = bj , for j = 1, . . . , n
has the solution y = u + g where u satisfies
is y = u + v where
651
subject to the n inhomogeneous boundary conditions
Bj [y] = j , for j = 1, . . . , n
We assume that the coefficients in the differential equation are continuous on [a, b]. Since the
Wronskian of the solutions of the differential equation,
Z
W (x) = exp pn1 (x) dx ,
is non-vanishing on [a, b], there are n linearly independent solution on that range. Let {y1 , . . . , yn }
be a set of linearly independent solutions of the homogeneous equation. From Result 21.3.2 we know
that a particular solution yp exists. The general solution of the differential equation is
y = yp + c1 y1 + c2 y2 + + cn yn .
has only the trivial solution. (This is the case if and only if the determinant of the matrix is nonzero.)
Thus the problem
Bj [y] = j , for j = 1, . . . , n,
Bj [y] = 0, for j = 1, . . . , n,
652
Result 21.5.3 The problem
Bj [y] = j , for j = 1, . . . , n,
has a unique solution if and only if the problem
L[y] = y (n) + pn1 y (n1) + + p1 y 0 + p0 y = 0, for a < x < b,
subject to
Bj [y] = 0, for j = 1, . . . , n,
has only the trivial solution.
We can represent the solution to the inhomogeneous problem in Equation 21.2 as an integral involving
the Green function. To show that
Z
y(x) = G(x|)f () d
a
is the solution, we apply the linear operator L to the integral. (Assume that the integral is uniformly
convergent.)
Z Z
L G(x|)f () d = L[G(x|)]f () d
a a
Z
= (x )f () d
a
= f (x)
Now we consider the qualitiative behavior of the Green function. For x 6= , the Green function
is simply a homogeneous solution of the differential equation, however at x = we expect some
singular behavior. G0 (x|) will have a Dirac delta function type singularity. This means that G(x|)
653
will have a jump discontinuity at x = . We integrate the differential equation on the vanishing
interval ( . . . + ) to determine this jump.
G0 + p(x)G = (x )
Z +
+
G( |) G( |) + p(x)G(x|) dx = 1
G( + |) G( |) = 1 (21.3)
Since the Green function satisfies the homogeneous equation for x 6= , it will be a constant times
this homogeneous solution for x < and x > .
( R
c1 e p(x) dx a<x<
G(x|) = R
p(x) dx
c2 e <x
In order to satisfy the homogeneous initial condition G(a|) = 0, the Green function must vanish on
the interval (a . . . ). (
0 a<x<
G(x|) = R
p(x) dx
c e <x
The jump condition, (Equation 21.3), gives us the constraint G( + |) = 1. This determines the
constant in the homogeneous solution for x > .
(
0 R a<x<
G(x|) = x p(t) dt
e <x
We can use the Heaviside function to write the Green function without using a case statement.
Rx
G(x|) = e
p(t) dt
H(x )
Clearly the Green function is of little value in solving the inhomogeneous differential equation in
Equation 21.2, as we can solve that problem directly. However, we will encounter first order Green
function problems in solving some partial differential equations.
Result 21.6.1 The first order inhomogeneous differential equation with ho-
mogeneous initial condition
L[y] y 0 + p(x)y = f (x), for a < x, y(a) = 0,
654
21.7 Green Functions for Second Order Equations
Consider the second order inhomogeneous equation
B1 [y] = B2 [y] = 0.
The Green function is useful because you can represent the solution to the inhomogeneous problem
in Equation 21.4 as an integral involving the Green function. To show that
Z b
y(x) = G(x|)f () d
a
is the solution, we apply the linear operator L to the integral. (Assume that the integral is uniformly
convergent.)
"Z # Z
b b
L G(x|)f () d = L[G(x|)]f () d
a a
Z b
= (x )f () d
a
= f (x)
One of the advantages of using Green functions is that once you find the Green function for a
linear operator and certain homogeneous boundary conditions,
You do not need to do any extra work to obtain the solution for a different inhomogeneous term.
Qualitatively, what kind of behavior will the Green function for a second order differential equa-
tion have? Will it have a delta function singularity; will it be continuous? To answer these questions
we will first look at the behavior of integrals and derivatives of (x).
The integral of (x) is the Heaviside function, H(x).
Z x (
0 for x < 0
H(x) = (t) dt =
1 for x > 0
655
d
Figure 21.2: r(x), H(x), (x) and dx (x)
The derivative of the delta function is zero for x 6= 0. At x = 0 it goes from 0 up to +, down to
and then back up to 0.
In Figure 21.2 we see conceptually the behavior of the ramp function, the Heaviside function,
the delta function, and the derivative of the delta function.
We write the differential equation for the Green function.
we see that only the G00 (x|) term can have a delta function type singularity. If one of the other terms
had a delta function type singularity then G00 (x|) would be more singular than a delta function
and there would be nothing in the right hand side of the equation to match this kind of singularity.
Analogous to the progression from a delta function to a Heaviside function to a ramp function, we
see that G0 (x|) will have a jump discontinuity and G(x|) will be continuous.
Let y1 and y2 be two linearly independent solutions to the homogeneous equation, L[y] = 0. Since
the Green function satisfies the homogeneous equation for x 6= , it will be a linear combination of
the homogeneous solutions.
(
c1 y1 + c2 y2 for x <
G(x|) =
d 1 y1 + d 2 y2 for x >
c1 y1 () + c2 y2 () = d1 y1 () + d2 y2 ()
Z + Z +
00 0
[G (x|) + p(x)G (x|) + q(x)G(x|)] dx = (x ) dx.
656
Since G(x|) is continuous and G0 (x|) has only a jump discontinuity two of the terms vanish.
Z + Z +
p(x)G0 (x|) dx = 0 and q(x)G(x|) dx = 0
Z + Z +
G00 (x|) dx = (x ) dx
0 + +
G (x|) = H(x )
G0 ( + |) G0 ( |) = 1
Combined with the two boundary conditions, this gives us a total of four equations to determine
our four constants, c1 , c2 , d1 , and d2 .
Applying the two boundary conditions, we see that c1 = 0 and d1 = d2 . The Green function now
has the form (
cx for x <
G(x|) =
d(x 1) for x > .
657
0.1 0.1 0.1 0.1
0.5 1 0.5 1 0.5 1 0.5 1
-0.1 -0.1 -0.1 -0.1
-0.2 -0.2 -0.2 -0.2
-0.3 -0.3 -0.3 -0.3
The Green function is plotted in Figure 21.3 for various values of . The solution to y 00 = f (x) is
Z 1
y(x) = G(x|)f () d
0
Z x Z 1
y(x) = (x 1) f () d + x ( 1)f () d.
0 x
is Z x Z 1
u(x) = (x 1) f () d + x ( 1)f () d.
0 x
Now we have to find the solution to
v 00 = 0, v(0) = 1, u(1) = 2.
v = 1 + x.
658
Thus the solution for y is
Z x Z 1
y = 1 + x + (x 1) f () d + x ( 1)f ( xi) d.
0 x
1 3
y= x + c1 x + c2 .
6
Applying the boundary conditions, we find that the solution is
1 3
y= (x x).
6
1 3
y= (x x).
6
y 00 y = sin x,
The homogeneous solutions are y1 = ex , and y2 = ex . The Green function has the form
(
c1 ex +c2 ex for x <
G(x|) =
d1 ex +d2 ex for x > .
Since the solution must be bounded for all x, the Green function must also be bounded. Thus
c2 = d1 = 0. The Green function now has the form
(
c ex for x <
G(x|) = x
de for x > .
c e = d e d = c e2 .
659
0.6
0.4
0.2
-4 -2 2 4
-0.2
-0.4
-0.6
c e2 e c e = 1
1
c = e
2
The Green function is then (
12 ex for x <
G(x|) =
12 ex+ for x >
1
G(x|) = e|x| .
2
A plot of G(x|0) is given in Figure 21.4. The solution to y 00 y = sin x is
Z
1
y(x) = e|x| sin d
2
Z x Z
1 x x+
= sin e d + sin e d
2 x
1 sin x + cos x sin x + cos x
= ( + )
2 2 2
1
y= sin x.
2
660
This is known as a Sturm-Liouville problem. Equations of this type often occur when solving partial
differential equations. The Green function associated with this problem satisfies
Let y1 and y2 be two non-trivial homogeneous solutions that satisfy the left and right boundary
conditions, respectively.
The Green function satisfies the homogeneous equation for x 6= and satisfies the homogeneous
boundary conditions. Thus it must have the following form.
(
c1 ()y1 (x) for a x ,
G(x|) =
c2 ()y2 (x) for x b,
G( |) = G( + |)
c1 ()y1 () = c2 ()y2 ()
p0 0 q (x )
G00 (x|) + G (x|) + G(x|) =
p p p
1
G0 ( + |) G0 ( |) =
p()
1
c2 ()y20 () c1 ()y10 () =
p()
c1 ()y1 () c2 ()y2 () = 0
1
c1 ()y10 () c2 ()y20 () =
p()
y2 () y1 ()
c1 () = , c2 () =
p()(W ()) p()(W ())
Here W (x) is the Wronskian of y1 (x) and y2 (x). The Green function is
(y
1 (x)y2 ()
p()W () for a x ,
G(x|) = y2 (x)y1 ()
p()W () for x b.
661
Result 21.7.2 The problem
0
L[y] = (p(x)y 0 ) + q(x)y = f (x), subject to
B1 [y] = 1 y(a) + 2 y 0 (a) = 0, B2 [y] = 1 y(b) + 2 y 0 (b) = 0.
A set of solutions to the homogeneous equation is {ex , ex }. Equivalently, one could use the set
{cosh x, sinh x}. Note that sinh x satisfies the left boundary condition and sinh(x 1) satisfies the
right boundary condition. The Wronskian of these two homogeneous solutions is
sinh x sinh(x 1)
W (x) =
cosh x cosh(x 1)
= sinh x cosh(x 1) cosh x sinh(x 1)
1 1
= [sinh(2x 1) + sinh(1)] [sinh(2x 1) sinh(1)]
2 2
= sinh(1).
y(a) = 1 , y 0 (a) = 2 .
and
v 00 + p(x)v 0 + q(x)v = 0, v(a) = 1 , v 0 (a) = 2 .
662
Since the Wronskian Z
W (x) = c exp p(x) dx
is non-vanishing, the solutions of the differential equation for v are linearly independent. Thus there
is a unique solution for v that satisfies the initial conditions.
G( |) = G( + |), G0 ( |) + 1 = G0 ( + |).
Let u1 and u2 be two linearly independent solutions of the differential equation. For x < , G(x|)
is a linear combination of these solutions. Since the Wronskian is non-vanishing, only the trivial
solution satisfies the homogeneous initial conditions. The Green function must be
(
0 for x <
G(x|) =
u (x) for x > ,
u () = 0, u0 () = 1.
Note that the non-vanishing Wronskian ensures a unique solution for u . We can write the Green
function in the form
G(x|) = H(x )u (x).
This is known as the causal solution. The solution for u is
Z b
u= G(x|)f () d
a
Z b
= H(x )u (x)f () d
a
Z x
= u (x)f () d
a
is Z x
y = yh + y (x)f () d
a
where yh is the combination of the homogeneous solutions of the equation that
satisfy the initial conditions and y (x) is the linear combination of homoge-
neous solutions that satisfy y () = 0, y0 () = 1.
663
21.7.3 Problems with Unmixed Boundary Conditions
Consider
L[y] = y 00 + p(x)y 0 + q(x)y = f (x), for a < x < b,
subject the the unmixed boundary conditions
and
v 00 + p(x)v 0 + q(x)v = 0, 1 v(a) + 2 v 0 (a) = 1 , 1 v(b) + 2 v 0 (b) = 2 .
The problem for v may have no solution, a unique solution or an infinite number of solutions. We
consider only the case that there is a unique solution for v. In this case the homogeneous equation
subject to homogeneous boundary conditions has only the trivial solution.
G( |) = G( + |), G0 ( |) + 1 = G0 ( + |).
Let u1 and u2 be two solutions of the homogeneous equation that satisfy the left and right boundary
conditions, respectively. The non-vanishing of the Wronskian ensures that these solutions exist.
Let W (x) denote the Wronskian of u1 and u2 . Since the homogeneous equation with homogeneous
boundary conditions has only the trivial solution, W (x) is nonzero on [a, b]. The Green function has
the form (
c1 u1 for x < ,
G(x|) =
c2 u2 for x > .
The continuity and jump conditions for Green function gives us the equations
c1 u1 () c2 u2 () = 0
c1 u01 () c2 u02 () = 1.
u2 () u1 ()
c1 = , c2 = .
W () W ()
664
Result 21.7.4 Consider the problem
and
v 00 + p(x)v 0 + q(x)v = 0, B1 [v] = 1 , B2 [v] = 2 .
The problem for v may have no solution, a unique solution or an infinite number of solutions.
Again we consider only the case that there is a unique solution for v. In this case the homogeneous
equation subject to homogeneous boundary conditions has only the trivial solution.
Let y1 and y2 be two solutions of the homogeneous equation that satisfy the boundary conditions
B1 [y1 ] = 0 and B2 [y2 ] = 0. Since the completely homogeneous problem has no solutions, we know
that B1 [y2 ] and B2 [y1 ] are nonzero. The solution for v has the form
v = c1 y1 + c2 y2 .
665
The Green function for u satisfies
G00 (x|) + p(x)G0 (x|) + q(x)G(x|) = (x ), B1 [G] = 0, B2 [G] = 0.
The continuity and jump conditions are
G( |) = G( + |), G0 ( |) + 1 = G0 ( + |).
We write the Green function as the sum of the causal solution and the two homogeneous solutions
G(x|) = H(x )y (x) + c1 y1 (x) + c2 y2 (x)
With this form, the continuity and jump conditions are automatically satisfied. Applying the bound-
ary conditions yields
B1 [G] = B1 [H(x )y ] + c2 B1 [y2 ] = 0,
B2 [G] = B2 [H(x )y ] + c1 B2 [y1 ] = 0,
666
21.8 Green Functions for Higher Order Problems
Consider the nth order differential equation
Bj [y] = j
We assume that the coefficient functions in the differential equation are continuous on [a, b]. The
solution is y = u + v where u and v satisfy
and
L[v] = 0, with Bj [v] = j
has only the trivial solution, then the solution for y exists and is unique. We will construct this
solution using Green functions.
First we consider the problem for v. Let {y1 , . . . , yn } be a set of linearly independent solutions.
The solution for v has the form
v = c1 y1 + + cn yn
where the constants are determined by the matrix equation
B1 [y1 ] B1 [y2 ] B1 [yn ] c1 1
B2 [y1 ] B2 [y2 ] B2 [yn ] c2 2
.. .. = .. .
.. .. ..
. . . . . .
Bn [y1 ] Bn [y2 ] Bn [yn ] cn n
Let y (x) be the linear combination of the homogeneous solutions that satisfy the conditions
y () = 0
y0 () = 0
.. .
. = ..
(n2)
y () = 0
(n1)
y () = 1.
667
The Green function has the form
y 000 y 00 + y 0 y = f (x),
w = c1 cos x + c2 sin x + c2 ex .
668
We separate the inhomogeneous problem into the two problems
v = c1 cos x + c2 sin x + c2 ex .
1 cos(1 ) + sin(1 ) e1
H(x) ex cos(x)sin(x) + cos x+sin xex
G(x|) =
2 2(cos 1 + sin 1 e)
Z 1
1
y= G(x|)f () d + (e + sin 1 3) cos x
0 e cos 1 sin 1
+ (2e cos 1 3) sin x + (3 cos 1 2 sin 1) ex .
669
21.9 Fredholm Alternative Theorem
Orthogonality. Two real vectors, u and v are orthogonal if u v = 0. Consider two functions,
u(x) and v(x), defined in [a, b]. The dot product in vector space is analogous to the integral
Z b
u(x)v(x) dx
a
Bj [y] = 0, for j = 1, 2, . . . n.
The Fredholm alternative theorem tells us if the problem has a unique solution, an infinite
number of solutions, or no solution. Before presenting the theorem, we will consider a few motivating
examples.
Nontrivial Homogeneous Solutions Exist. If there are nonzero solutions to the homogeneous
problem L[y] = 0 that satisfy the homogeneous boundary conditions Bj [y] = 0 then the inhomoge-
neous problem L[y] = f (x) subject to the same boundary conditions either has no solution or an
infinite number of solutions.
Suppose there is a particular solution yp that satisfies the boundary conditions. If there is a
solution yh to the homogeneous equation that satisfies the boundary conditions then there will be
an infinite number of solutions since yp + cyh is also a particular solution.
The question now remains: Given that there are homogeneous solutions that satisfy the boundary
conditions, how do we know if a particular solution that satisfies the boundary conditions exists?
Before we address this question we will consider a few examples.
y2 = sin x satisfies the boundary conditions. Thus we know that there are either no solutions or an
670
infinite number of solutions. A particular solution is
cos2 x
Z Z
cos x sin x
yp = cos x dx + sin x dx
1 1
Z Z
1 1 1
= cos x sin(2x) dx + sin x + cos(2x) dx
2 2 2
1 1 1
= cos x cos(2x) + sin x x + sin(2x)
4 2 4
1 1
= x sin x + cos x cos(2x) + sin x sin(2x)
2 4
1 1
= x sin x + cos x
2 4
1
y= x sin x + c sin x.
2
y(0) = 0 c1 = 0
1
y() = 0 cos() + c2 sin() = 0
2
= 0.
2
Since this equation has no solution, there are no solutions to the inhomogeneous problem.
In both of the above examples there is a homogeneous solution y = sin x that satisfies the bound-
ary conditions. In Example 21.9.1, the inhomogeneous term is cos x and there are an infinite number
of solutions. In Example 21.9.2, the inhomogeneity is sin x and there are no solutions. In general,
if the inhomogeneous term is orthogonal to all the homogeneous solutions that satisfy the bound-
ary conditions then there are an infinite number of solutions. If not, there are no inhomogeneous
solutions.
671
Result 21.9.1 Fredholm Alternative Theorem. Consider the nth order
inhomogeneous problem
If the homogeneous problem has only the trivial solution then the inhomo-
geneous problem has a unique solution. If the homogeneous problem has m
independent solutions, {y1 , y2 , . . . , ym }, then there are two possibilities:
If f (x) is orthogonal to each of the homogeneous solutions then there are
an infinite number of solutions of the form
m
X
y = yp + cj yj .
j=1
cos x and sin x are two linearly independent solutions to the homogeneous equation. sin x satisfies
the homogeneous boundary conditions. Thus there are either an infinite number of solutions, or no
solution.
To transform this problem to one with homogeneous boundary conditions, we note that g(x) =
x
+ 1 and make the change of variables y = u + g to obtain
x
u00 + u = cos 2x 1, y(0) = 0, y() = 0.
Since cos 2x x 1 is not orthogonal to sin x, there is no solution to the inhomogeneous problem.
To check this, the general solution is
1
y = cos 2x + c1 cos x + c2 sin x.
3
Applying the boundary conditions,
4
y(0) = 1 c1 =
3
1 4
y() = 2 = 2.
3 3
Thus we see that the right boundary condition cannot be satisfied.
672
There are no solutions to the homogeneous equation that satisfy the homogeneous boundary con-
ditions. To check this, note that all solutions of the homogeneous equation have the form uh =
c1 cos x + c2 sin x.
u0h (0) = 0 c2 = 0
uh () = 0 c1 = 0.
From the Fredholm Alternative Theorem we see that the inhomogeneous problem has a unique
solution.
To find the solution, start with
1
y = cos 2x + c1 cos x + c2 sin x.
3
y 0 (0) = 1 c2 = 1
1
y() = 1 c1 = 1
3
Thus the solution is
1 4
y = cos 2x cos x + sin x.
3 3
673
21.10 Exercises
Undetermined Coefficients
Exercise 21.1 (mathematica/ode/inhomogeneous/undetermined.nb)
Find the general solution of the following equations.
1. y 00 + 2y 0 + 5y = 3 sin(2t)
2. 2y 00 + 3y 0 + y = t2 + 3 sin(t)
Variation of Parameters
Exercise 21.3 (mathematica/ode/inhomogeneous/variation.nb)
Use the method of variation of parameters to find a particular solution of the given differential
equation.
1. y 00 5y 0 + 6y = 2 et
2. y 00 + y = tan(t), 0 < t < /2
3. y 00 5y 0 + 6y = g(t), for a given function g.
where c1 and c2 are arbitrary constants and a and b are any conveniently chosen points.
2. Using the result of part (a) show that the solution satisfying the initial conditions y(0) = 0
and y 0 (0) = 0 is given by
Z t
y(t) = g( ) sin(t ) d.
0
674
Notice that this equation gives a formula for computing the solution of the original initial value
problem for any given inhomogeneous term g(t). The integral is referred to as the convolution
of g(t) with sin t.
3. Use the result of part (b) to solve the initial value problem,
where is a real constant. How does the solution for = 1 differ from that for 6= 1?
The = 1 case provides an example of resonant forcing. Plot the solution for resonant and
non-resonant forcing.
Exercise 21.8
Find the variation of parameters solution for the third order differential equation
Green Functions
Exercise 21.9
Use a Green function to solve
y 00 = f (x), y() = y 0 () = 0.
Exercise 21.10
Solve the initial value problem
1 0 1
y 00 + y 2 y = x2 , y(0) = 0, y 0 (0) = 1.
x x
First use variation of parameters, and then solve the problem with a Green function.
Exercise 21.11
What are the continuity conditions at x = for the Green function for the problem
Exercise 21.12
Use variation of parameters and Green functions to solve
Exercise 21.13
Find the Green function for
Exercise 21.14
Find the Green function for
Exercise 21.15
Find the Green function for each of the following:
675
b) u00 u = f (x), u(a) = u(a) = 0.
c) u00 u = f (x), u(x) bounded as |x| .
d) Show that the Green function for (b) approaches that for (c) as a .
Exercise 21.16
1. For what values of does the problem
have a unique solution? Find the Green functions for these cases.
2. For what values of does the problem
y 00 + 9y = 1 + x, y(0) = y() = 0,
Exercise 21.17
Show that the inhomogeneous boundary value problem:
Exercise 21.18
The Green function for
u00 k 2 u = f (x), < x <
subject to |u()| < is
1 k|x|
G(x; ) =
e .
2k
(We assume that k > 0.) Use the image method to find the Green function for the same equation
on the semi-infinite interval 0 < x < satisfying the boundary conditions,
Exercise 21.19
1. Determine the Green function for solving:
2. Take the limit as L to find the Green function on (0, ) for the boundary conditions:
y(0) = 0, y 0 () = 0. We assume here that a > 0. Use the limiting Green function to solve:
y 00 a2 y = ex , y(0) = 0, y 0 () = 0.
Check that your solution satisfies all the conditions of the problem.
676
21.11 Hints
Undetermined Coefficients
Hint 21.1
Hint 21.2
Variation of Parameters
Hint 21.3
Hint 21.4
Hint 21.5
Hint 21.6
Hint 21.7
Hint 21.8
Look for a particular solution of the form
yp = u 1 y 1 + u 2 y 2 + u 3 y 3 ,
To avoid some messy algebra when solving for u0j , use Kramers rule.
Green Functions
Hint 21.9
Hint 21.10
Hint 21.11
Hint 21.12
Hint 21.13
cosh(x) and sinh(x1) are homogeneous solutions that satisfy the left and right boundary conditions,
respectively.
677
Hint 21.14
sinh(x) and ex are homogeneous solutions that satisfy the left and right boundary conditions,
respectively.
Hint 21.15
The Green function for the differential equation
d
L[y] (p(x)y 0 ) + q(x)y = f (x),
dx
subject to unmixed, homogeneous boundary conditions is
Hint 21.16
The problem has a Green function if and only if the inhomogeneous problem has a unique solution.
The inhomogeneous problem has a unique solution if and only if the homogeneous problem has only
the trivial solution.
Hint 21.17
Show that g (x; a) and g (x; b) are solutions of the homogeneous differential equation. Determine
the value of these solutions at the boundary.
Hint 21.18
Hint 21.19
678
21.12 Solutions
Undetermined Coefficients
Solution 21.1
1. We consider
y 00 + 2y 0 + 5y = 3 sin(2t).
We first find the homogeneous solution with the substitition y = et .
2 + 2 + 5 = 0
= 1 2i
yh = c1 et cos(2t) + c2 et sin(2t).
yp = a cos(2t) + b sin(2t).
3
y = c1 et cos(2t) + c2 et sin(2t) + (sin(2t) 4 cos(2t)).
17
2. We consider
2y 00 + 3y 0 + y = t2 + 3 sin(t)
We first find the homogeneous solution with the substitition y = et .
22 + 3 + 1 = 0
= {1, 1/2}
679
2(2a d cos(t) e sin(t)) + 3(2at + b d sin(t) + e cos(t))
+ at2 + bt + c + d cos(t) + e sin(t) = t2 + 3 sin(t)
3
y = c1 et +c2 et/2 +t2 6t + 14 (3 cos(t) + sin(t)).
10
Solution 21.2
1. We consider the problem
680
The solution of the initial value problem is
t3
y= + 4t 3 et +4.
6
2 + 2 + 5 = 0
= 1 1 5
= 1 2
yh = c1 et cos(2t) + c2 et sin(2t).
yp = t et (a cos(2t) + b sin(2t))
We substitute this into the inhomogeneous differential equation to determine the coefficients.
A particular solution is
yp = t et sin(2t).
The general solution of the differential equation is
y(0) = 1, y 0 (0) = 0
c1 = 1, c1 + 2c2 = 0
1
c1 = 1, c2 =
2
The solution of the initial value problem is
1 t
y= e (2 cos(2t) + (2t + 1) sin(2t)) .
2
Variation of Parameters
681
Solution 21.3
1. We consider the equation
y 00 5y 0 + 6y = 2 et .
We find homogeneous solutions with the substitution y = et .
2 5 + 6 = 0
= {2, 3}
2 et e3t 2 et e2t
Z Z
yp = e2t dt + e 3t
dt
e5t e5t
Z Z
= 2 e2t et dt + 2 e3t e2t dt
= 2 et et
yp = e t
2 + 1 = 0
= i
y1 = cos(t), y2 = sin(t).
sin2 (t)
Z Z
= cos(t) dt + sin(t) sin(t) dt
cos(t)
cos(t/2) sin(t/2)
= cos(t) ln + sin(t) sin(t) cos(t)
cos(t/2) + sin(t/2)
cos(t/2) sin(t/2)
yp = cos(t) ln
cos(t/2) + sin(t/2)
682
3. We consider the equation
y 00 5y 0 + 6y = g(t).
The homogeneous solutions are
y1 = e2t , y2 = e3t .
The Wronskian of these solutions is W (t) = e5t . We find a particular solution with variation
of parameters.
g(t) e3t g(t) e2t
Z Z
yp = e2t 5t
dt + e 3t
dt
e e5t
Z Z
yp = e2t g(t) e2t dt + e3t g(t) e3t dt
Solution 21.4
Solve
y 00 (x) + y(x) = x, y(0) = 1, y 0 (0) = 0.
The solutions of the homogeneous equation are
y = c1 cos x + c2 sin x + x.
c1 = 1, c2 + 1 = 0.
y = cos x sin x + x.
Solution 21.5
Solve
x2 y 00 (x) xy 0 (x) + y(x) = x.
The homogeneous equation is
x2 y 00 (x) xy 0 (x) + y(x) = 0.
683
Substituting y = x into the homogeneous differential equation yields
x2 ( 1)x2 xx + x = 0
2 2 + 1 = 0
( 1)2 = 0
= 1.
The homogeneous solutions are
y1 = x, y2 = x log x.
The Wronskian of the homogeneous solutions is
x x log x
W [x, x log x] =
1 1 + log x
= x + x log x x log x
= x.
1 0 1 1
y 00 (x) y (x) + 2 y(x) = .
x x x
Using variation of parameters to find the particular solution,
Z Z
log x 1
yp = x dx + x log x dx
x x
1
= x log2 x + x log x log x
2
1
= x log2 x.
2
Thus the general solution of the inhomogeneous differential equation is
1
y = c1 x + c2 x log x + x log2 x.
2
Solution 21.6
1. First we find the homogeneous solutions. We substitute y = ex into the homogeneous differ-
ential equation.
y 00 + y = 0
2 + 1 = 0
=
y = ex , ex
y = {cos x, sin x}
684
We obtain a particular solution with the variation of parameters formula.
Z Z
yp = cos x ex sin x dx + sin x ex cos x dx
1 1
yp = cos x ex (sin x cos x) + sin x ex (sin x + cos x)
2 2
1 x
yp = e
2
The general solution is the particular solution plus a linear combination of the homogeneous
solutions.
1
y = ex + cos x + sin x
2
2.
y 00 + 2 y = sin x, y(0) = y 0 (0) = 0
Assume that is positive. First we find the homogeneous solutions by substituting y = ex
into the homogeneous differential equation.
y 00 + 2 y = 0
2 + 2 = 0
=
x x
y = e ,e
y = {cos(x), sin(x)}
c1 = 0,
1
+ c2 = 0,
2 1
For 6= 1, (non-resonant forcing), the solution subject to the initial conditions is
sin(x) sin(x)
y= .
(2 1)
685
Now consider the case = 1. We obtain a particular solution with the variation of parameters
formula.
Z Z
yp = cos(x) sin2 (x) dx + sin(x) cos(x) sin x dx
1 1 2
yp = cos(x) (x cos(x) sin(x)) + sin(x) cos (x)
2 2
1
yp = x cos(x)
2
The general solution for = 1 is
1
y = x cos(x) + c1 cos(x) + c2 sin(x).
2
The initial conditions give us the constraints:
c1 = 0
1
+ c2 = 0
2
For = 1, (resonant forcing), the solution subject to the initial conditions is
1
y= (sin(x) x cos x).
2
Solution 21.7
1. A set of linearly independent, homogeneous solutions is {cos t, sin t}. The Wronskian of these
solutions is
cos t sin t
W (t) =
= cos2 t + sin2 t = 1.
sin t cos t
We use variation of parameters to find a particular solution.
Z Z
yp = cos t g(t) sin t dt + sin t g(t) cos t dt
2. Since the initial conditions are given at t = 0 we choose the lower bounds of integration in the
general solution to be that point.
Z t Z t
y = c1 g( ) sin d cos t + c2 + g( ) cos d sin t
0 0
The initial condition y(0) = 0 gives the constraint, c1 = 0. The derivative of y(t) is then,
Z t Z t
0
y (t) = g(t) sin t cos t + g( ) sin d sin t + g(t) cos t sin t + c2 + g( ) cos d cos t,
0 0
Z t Z t
y 0 (t) = g( ) sin d sin t + c2 + g( ) cos d cos t.
0 0
0
The initial condition y (0) = 0 gives the constraint c2 = 0. The solution subject to the initial
conditions is
Z t
y= g( )(sin t cos cos t sin ) d
0
Z t
y= g( ) sin(t ) d
0
686
Figure 21.5: Non-resonant Forcing
is Z t
y= sin( ) sin(t ) d.
0
For 6= 1, this is
Z t
1
y= cos(t ) cos(t + ) d
2 0
t
1 sin(t ) sin(t + )
= +
2 1+ 1 0
1 sin(t) sin(t) sin(t) + sin(t)
= +
2 1+ 1
sin t sin(t)
y= + . (21.6)
1 2 1 2
The solution is the sum of two periodic functions of period 2 and 2/. This solution is
plotted in Figure 21.5 on the interval t [0, 16] for the values = 1/4, 7/8, 5/2.
For = 1, we have
Z t
1
y= cos(t 2 ) cos(tau) d
2 0
t
1 1
= sin(t 2 ) cos t
2 2 0
1
y= (sin t t cos t) . (21.7)
2
The solution has both a periodic and a transient term. This solution is plotted in Figure 21.5
on the interval t [0, 16].
Note that we can derive (21.7) from (21.6) by taking the limit as 0.
sin(t) sin t t cos(t) sin t
lim = lim
1 1 2 1 2
1
= (sin t t cos t)
2
Solution 21.8
Let y1 , y2 and y3 be linearly independent homogeneous solutions to the differential equation
687
Figure 21.6: Resonant Forcing
yp = u 1 y 1 + u 2 y 2 + u 3 y 3 .
Since the uj s are undetermined functions, we are free to impose two constraints. We choose the
constraints to simplify the algebra.
Substituting the expressions for yp and its derivatives into the differential equation,
u01 y100 + u1 y1000 + u02 y200 + u2 y2000 + u03 y300 + u3 y3000 + p2 (u1 y100 + u2 y200 + u3 y300 ) + p1 (u1 y10 + u2 y20 + u3 y30 )
+ p0 (u1 y1 + u2 y2 + u3 y3 ) = f (x)
u01 y100 + u02 y200 + u03 y300 + u1 L[y1 ] + u2 L[y2 ] + u3 L[y3 ] = f (x)
u01 y100 + u02 y200 + u03 y300 = f (x).
(y2 y30 y20 y3 )f (x) (y1 y30 y10 y3 )f (x) (y1 y20 y10 y2 )f (x)
u01 = , u02 = , u03 =
W (x) W (x) W (x)
Here W (x) is the Wronskian of {y1 , y2 , y3 }. Integrating the expressions for u0j , the particular solution
is
(y2 y30 y20 y3 )f (x) (y3 y10 y30 y1 )f (x) (y1 y20 y10 y2 )f (x)
Z Z Z
yp = y1 dx + y2 dx + y3 dx.
W (x) W (x) W (x)
688
Green Functions
Solution 21.9
We consider the Green function problem
The homogeneous solution is y = c1 + c2 x. The homogeneous solution that satisfies the boundary
conditions is y = 0. Thus the Green function has the form
(
0 x < ,
G(x|) =
c1 + c2 x x > .
G( + |) = 0, G0 ( + |) = 1.
y 00 = f (x), y() = y 0 () = 0.
is
Z
y= f ()G(x|) d
Z
y= f ()(x )H(x ) d
Z x
y= f ()(x ) d
Solution 21.10
Since we are dealing with an Euler equation, we substitute y = x to find the homogeneous solutions.
( 1) + 1 = 0
( 1)( + 1) = 0
1
y1 = x, y2 =
x
689
A particular solution is
x2 (1/x) x2 x
Z Z
1
yp = x dx + dx
2/x x 2/x
x2 x4
Z Z
1
= x dx + dx
2 x 2
4 4
x x
=
6 10
x4
= .
15
The general solution is
x4 1
y= + c1 x + c2 .
15 x
Applying the initial conditions,
y(0) = 0 c2 = 0
y 0 (0) = 0 c1 = 1.
Green Function. Since this problem has both an inhomogeneous term in the differential equation
and inhomogeneous boundary conditions, we separate it into the two problems
1 0 1
u00 + u 2 u = x2 , u(0) = u0 (0) = 0,
x x
00 1 0 1
v + v 2 v = 0, v(0) = 0, v 0 (0) = 1.
x x
First we solve the inhomogeneous differential equation with the homogeneous boundary conditions.
The Green function for this problem satisfies
Since the Green function must satisfy the homogeneous boundary conditions, it has the form
(
0 for x <
G(x|) =
cx + d/x for x > .
690
Thus the solution is
Z
u(x) = G(x|) 2 d
0
Z x
2
1
= x 2 d
0 2 2x
1 1
= x4 x4
6 10
x4
= .
15
Now to solve the homogeneous differential equation with inhomogeneous boundary conditions.
The general solution for v is
v = cx + d/x.
Applying the two boundary conditions gives
v = x.
Solution 21.11
The Green function satisfies
First note that only the G000 (x|) term can have a delta function singularity. If a lower derivative
had a delta function type singularity, then G000 (x|) would be more singular than a delta function
and there would be no other term in the equation to balance that behavior. Thus we see that
G000 (x|) will have a delta function singularity; G00 (x|) will have a jump discontinuity; G0 (x|) will
be continuous at x = . Integrating the differential equation from to + yields
Z + Z +
000
G (x|) dx = (x ) dx
G00 ( + |) G00 ( |) = 1.
Thus we have the three continuity conditions:
G00 ( + |) = G00 ( |) + 1
G0 ( + |) = G0 ( |)
G( + |) = G( |)
Solution 21.12
Variation of Parameters. Consider the problem
y1 = x, y2 = x2 .
691
In the variation of parameters formula, we will choose 1 as the lower bound of integration. (This
will simplify the algebra in applying the initial conditions.)
Z x 2 Z x
e 2 e
yp = x 4
d + x d
1 1 4
Z x Z x
e e
= x 2
d + x2 d
1 1 3
x
ex 1 x e
x
ex
e e
Z Z
1 2
= x e d + x 2 + d
x 1 2x 2x 2 1
Z x
x + x2
1 e
= x e1 + (1 + x) ex + d
2 2 1
If you wanted to, you could write the last integral in terms of exponential integral functions.
The general solution is
Z x
x2
2 1 1 x e
y = c1 x + c2 x x e + (1 + x) e + x + d
2 2 1
Applying the boundary conditions,
y(1) = 0 c1 + c2 = 0
y 0 (1) = 1 c1 + 2c2 = 1,
we find that c1 = 1, c2 = 1.
Thus the solution subject to the initial conditions is
Z x
x2
1 2 1 x e
y = (1 + e )x + x + (1 + x) e + x + d
2 2 1
692
Thus we find the solution for y is
Z x
x2
1 2 1 x e
y = (1 + e )x + x + (1 + x) e + x+ d
2 2 1
Solution 21.13
The differential equation for the Green function is
Note that cosh(x) and sinh(x1) are homogeneous solutions that satisfy the left and right boundary
conditions, respectively. The Wronskian of these two solutions is
cosh(x) sinh(x 1)
W (x) =
sinh(x) cosh(x 1)
= cosh(x) cosh(x 1) sinh(x) sinh(x 1)
1
ex + ex ex1 + ex+1 ex ex ex1 ex+1
=
4
1 1
e + e1
=
2
= cosh(1).
Solution 21.14
The differential equation for the Green function is
Note that sinh(x) and ex are homogeneous solutions that satisfy the left and right boundary
conditions, respectively. The Wronskian of these two solutions is
ex
sinh(x)
W (x) =
cosh(x) ex
= sinh(x) ex cosh(x) ex
1 1
= ex ex ex ex + ex ex
2 2
= 1
Solution 21.15
693
a) The Green function problem is
xy 00 + y 0 = 0
{ex , ex } and {cosh x, sinh x} are both linearly independent sets of homogeneous solutions.
sinh(x + a) and sinh(x a) are homogeneous solutions that satisfy the left and right boundary
conditions, respectively. The Wronskian of these two solutions is,
sinh(x + a) sinh(x a)
W (x) =
cosh(x + a) cosh(x a)
= sinh(x + a) cosh(x a) sinh(x a) cosh(x + a)
= sinh(2a)
694
d) The Green function from part (b) is,
sinh(x< + a) sinh(x> a)
G(x|) = .
sinh(2a)
Solution 21.16
1. The problem,
y 00 + y = f (x), y(0) = y() = 0,
has a Green function if and only if it has a unique solution. This inhomogeneous problem has
a unique solution if and only if the homogeneous problem has only the trivial solution.
First consider the case = 0. We find the general solution of the homogeneous differential
equation.
y = c1 + c2 x
Only the trivial solution satisfies the boundary conditions. The problem has a unique solution
for = 0.
Now consider non-zero . We find the general solution of the homogeneous differential equation.
y = c1 cos x + c2 sin x .
Thus the problem has a unique solution for all complex except = n2 , n Z+ .
Consider the case = 0. We find solutions of the homogeneous equation that satisfy the left
and right boundary conditions, respectively.
y1 = x, y2 = x .
695
We consider the case 6= n2 , 6= 0. We find the solutions of the homogeneous equation that
satisfy the left and right boundary conditions, respectively.
y1 = sin x , y2 = sin (x ) .
y 00 + 9y = 1 + x, y(0) = y() = 0.
The homogeneous solutions of the problem are constant multiples of sin(3x). Thus for each
value of , the problem either has no solution or an infinite number of solutions. There will be
an infinite number of solutions if the inhomogeneity 1 + x is orthogonal to the homogeneous
solution sin(3x) and no solution otherwise.
Z
+ 2
(1 + x) sin(3x) dx =
0 3
The problem has a solution only for = 2/. For this case the general solution of the
inhomogeneous differential equation is
1 2x
y= 1 + c1 cos(3x) + c2 sin(3x).
9
696
We substitute the expansion into the differential equation to determine the coefficients. This
will not determine gn . We choose gn = 0, which is one of the choices that will make the
modified Green function symmetric in x and .
X 2X
gk n2 k 2 sin(kx) =
sin(kx) sin(k)
k=1 k=1
k6=n
2 X sin(kx) sin(k)
G(x|) =
n2 k 2
k=1
k6=n
Solution 21.17
We separate the problem for u into the two problems:
Lv (pv 0 )0 + qv = f (x), a < x < b, v(a) = 0, v(b) = 0
Lw (pw0 )0 + qw = 0, a < x < b, w(a) = , w(b) =
and note that the solution for u is u = v + w.
The problem for v has the solution,
Z b
v= g(x; )f () d,
a
with the Green function,
(v
1 (x)v2 ()
v1 (x< )v2 (x> ) p()W () for a x ,
g(x; ) = v1 ()v2 (x)
p()W () p()W () for x b.
Here v1 and v2 are homogeneous solutions that respectively satisfy the left and right homogeneous
boundary conditions.
Since g(x; ) is a solution of the homogeneous equation for x 6= , g (x; ) is a solution of the
homogeneous equation for x 6= . This is because for x 6= ,
L g = L[g] = (x ) = 0.
If is outside of the domain, (a, b), then g(x; ) and g (x; ) are homogeneous solutions on that
domain. In particular g (x; a) and g (x; b) are homogeneous solutions,
L [g (x; a)] = L [g (x; b)] = 0.
Now we use the definition of the Green function and v1 (a) = v2 (b) = 0 to determine simple expres-
sions for these homogeneous solutions.
v10 (a)v2 (x) (p0 (a)W (a) + p(a)W 0 (a))v1 (a)v2 (x)
g (x; a) =
p(a)W (a) (p(a)W (a))2
0
v (a)v2 (x)
= 1
p(a)W (a)
v10 (a)v2 (x)
=
p(a)(v1 (a)v20 (a) v10 (a)v2 (a))
v 0 (a)v2 (x)
= 1 0
p(a)v1 (a)v2 (a)
v2 (x)
=
p(a)v2 (a)
697
-4 -2 2 4
-0.1
-0.2
-0.3
-0.4
-0.5
Solution 21.18
Figure 21.7 shows a plot of G(x; 1) and G(x; 1) for k = 1.
First we consider the boundary condition u(0) = 0. Note that the solution of
G00 k 2 G = (x ) (x + ), |G(; )| < ,
satisfies the condition G(0; ) = 0. Thus the Green function which satisfies G(0; ) = 0 is
1 k|x| 1 k|x+|
G(x; ) = e + e .
2k 2k
698
1 2 3 4 5 1 2 3 4 5
-0.1 -0.1
-0.2
-0.2 -0.3
-0.3 -0.4
-0.4 -0.5
1
G(x; ) = ekx> sinh(kx< )
k
Now consider the boundary condition u0 (0) = 0. Note that the solution of
1
G(x; ) = ekx> cosh(kx< )
k
The Green functions which satisfies G(0; ) = 0 and G0 (0; ) = 0 are shown in Figure 21.8.
Solution 21.19
1. The Green function satisfies
g 00 a2 g = (x ), g(0; ) = g 0 (L; ) = 0.
699
The solutions that respectively satisfy the left and right boundary conditions are
u1 = sinh(ax), u2 = cosh(a(x L)).
The Wronskian of these solutions is
sinh(ax) cosh(a(x L))
W (x) = = a cosh(aL).
a cosh(ax) a sinh(a(x L))
Thus the Green function is
( sinh(ax) cosh(a(L))
a cosh(aL) for x , sinh(ax< ) cosh(a(x> L))
g(x; ) = = .
sinh(a) cosh(a(xL))
a cosh(aL) for x. a cosh(aL)
We note that this solution satisfies the differential equation and the boundary conditions.
700
21.13 Quiz
Problem 21.1
Find the general solution of
y 00 y = f (x),
where f (x) is a known function.
701
21.14 Quiz Solutions
Solution 21.1
y 00 y = f (x)
We substitute y = ex into the homogeneous differential equation.
y 00 y = 0
2 ex ex = 0
= 1
ex
x
e
= 2.
ex
x
e
ex f (x) ex f (x)
Z Z
y = c1 ex +c2 ex ex dx + ex dx.
2 2
702
Chapter 22
Difference Equations
Televisions should have a dial to turn up the intelligence. There is a brightness knob, but it
doesnt work.
-?
22.1 Introduction
Example 22.1.1 Gamblers ruin problem. Consider a gambler that initially has n dollars. He
plays a game in which he has a probability p of winning a dollar and q of losing a dollar. (Note that
p + q = 1.) The gambler has decided that if he attains N dollars he will stop playing the game. In
this case we will say that he has succeeded. Of course if he runs out of money before that happens,
we will say that he is ruined. What is the probability of the gamblers ruin? Let us denote this
probability by an . We know that if he has no money left, then his ruin is certain, so a0 = 1. If he
reaches N dollars he will quit the game, so that aN = 0. If he is somewhere in between ruin and
success then the probability of his ruin is equal to p times the probability of his ruin if he had n + 1
dollars plus q times the probability of his ruin if he had n 1 dollars. Writing this in an equation,
an = pan+1 + qan1 subject to a0 = 1, aN = 0.
This is an example of a difference equation. You will learn how to solve this particular problem in
the section on constant coefficient equations.
Corresponding to
Z
df
dx = f () f (),
dx
in the discrete realm we have
1
X 1
X
D[an ] = (an+1 an ) = a a .
n= n=
703
Linear difference equations have the form
Besides being important in their own right, we will need to solve difference equations in order to
develop series solutions of differential equations. Also, some methods of solving differential equations
numerically are based on approximating them with difference equations.
There are many similarities between differential and difference equations. Like differential equa-
tions, an rth order homogeneous difference equation has r linearly independent solutions. The
general solution to the rth order inhomogeneous equation is the sum of the particular solution and
an arbitrary linear combination of the homogeneous solutions.
For an rth order difference equation, the initial condition is given by specifying the values of the
first r an s.
Example 22.1.2 Consider the difference equation an2 an1 an = 0 subject to the initial
condition a1 = a2 = 1. Note that although we may not know a closed-form formula for the an
we can calculate the an in order by substituting into the difference equation. The first few an are
1, 1, 2, 3, 5, 8, 13, 21, . . . We recognize this as the Fibonacci sequence.
We can reduce the order of, (or solve for first order), this equation by summing from 1 to n 1.
n1
X n1
X
D[F (aj , aj+1 , . . . , j)] = g(j)
j=1 j=1
n1
X
F (an , an+1 , . . . , n) F (a1 , a2 , . . . , 1) = g(j)
j=1
n1
X
F (an , an+1 , . . . , n) = g(j) + F (a1 , a2 , . . . , 1)
j=1
Result 22.2.1 We can reduce the order of the exact difference equation
D[F (an , an+1 , . . . , n)] = g(n), for n 1
704
Example 22.2.1 Consider the difference equation, D[nan ] = 1. Summing both sides of this equa-
tion
n1
X n1
X
D[jaj ] = 1
j=1 j=1
nan a1 = n 1
n + a1 1
an = .
n
an+1 p(n)an = 0
an+1 an
Qn Qn1 =0
j=1 p(j) j=1 p(j)
" #
an
D Qn1 =0
j=1 p(j)
Result 22.3.1 The solution of the homogeneous first order difference equation
an+1 = p(n)an , for n 1,
is
n1
Y
an = a1 p(j).
j=1
705
Example 22.3.1 Consider the equation an+1 = nan with the initial condition a1 = 1.
n1
Y
an = a1 j = (1)(n 1)! = (n)
j=1
Recall that (z) is the generalization of the factorial function. For positive integral values of the
argument, (n) = (n 1)!.
a a q(n)
Qn n+1 Qn1n = Qn .
j=1 p(j) j=1 p(j) j=1 p(j)
n1
" #
an X q(k)
Qn1 a1 = Qk
j=1 p(j) k=1 j=1 p(j)
Result 22.4.1 The solution of the inhomogeneous first order difference equa-
tion
an+1 = p(n)an + q(n) for n 1
is " n1 # " n1 " # #
Y X q(k)
an = p(m) Qk + a1 .
m=1 k=1 j=1 p(j)
Example 22.4.1 Consider the equation an+1 = nan + 1 for n 1. The summing factor is
1
n
Y 1
S(n) = j = .
j=1
n!
706
Multiplying the difference equation by the summing factor,
an+1 an 1
=
n! (n 1)! n!
an 1
D =
(n 1)! n!
n1
an X 1
a1 =
(n 1)! k!
k=1
"n1 #
X 1
an = (n 1)! + a1 .
k!
k=1
an+1 = an + , for n 0.
From the above result, (with the products and sums starting at zero instead of one), the solution is
rN + pN 1 rN 1 + + p1 r + p0 = 0
(r r1 )m1 (r rk )mk = 0.
If r1 is a distinct root then the associated linearly independent solution is r1n . If r1 is a root of
multiplicity m > 1 then the associated solutions are r1n , nr1n , n2 r1n , . . . , nm1 r1n .
707
Result 22.5.1 Consider the homogeneous constant coefficient difference
equation
an+N + pN 1 an+N 1 + + p1 an+1 + p0 an = 0.
The substitution an = rn yields the equation
(r r1 )m1 (r rk )mk = 0.
Example 22.5.1 Consider the equation an+2 3an+1 + 2an = 0 with the initial conditions a1 = 1
and a2 = 3. The substitution an = rn yields
r2 3r + 2 = (r 1)(r 2) = 0.
Thus the general solution is
an = c1 1n + c2 2n .
The initial conditions give the two equations,
a1 = 1 = c1 + 2c2
a2 = 3 = c1 + 4c2
Since c1 = 1 and c2 = 1, the solution to the difference equation subject to the initial conditions is
an = 2n 1.
Example 22.5.2 Consider the gamblers ruin problem that was introduced in Example 22.1.1. The
equation for the probability of the gamblers ruin at n dollars is
an = pan+1 + qan1 subject to a0 = 1, aN = 0.
We assume that 0 < p < 1. With the substitution an = rn we obtain
r = pr2 + q.
The roots of this equation are
1 1 4pq
r=
2p
p
1 1 4p(1 p)
=
2p
p
1 (1 2p)2
=
2p
1 |1 2p|
= .
2p
We will consider the two cases p 6= 1/2 and p = 1/2.
p 6= 1/2. If p < 1/2, the roots are
1 (1 2p)
r=
2p
1p q
r1 = = , r2 = 1.
p p
708
If p > 1/2 the roots are
1 (2p 1)
r=
2p
p + 1 q
r1 = 1, r2 = = .
p p
Thus the general solution for p 6= 1/2 is
n
q
an = c1 + c2 .
p
The boundary condition a0 = 1 requires that c1 +c2 = 1. From the boundary condition aN = 0
we have
N
q
(1 c2 ) + c2 =0
p
1
c2 =
1 + (q/p)N
pN
c2 = N .
p qN
Solving for c1 ,
pN
c1 = 1
pN q N
q N
c1 = N .
p qN
Thus we have
n
q N pN q
an = + .
pN q N pN q N p
p = 1/2. In this case, the two roots of the polynomial are both 1. The general solution is
an = c1 + c2 n.
The left boundary condition demands that c1 = 1. From the right boundary condition we
obtain
1 + c2 N = 0
1
c2 = .
N
Thus the solution for this case is
n
an = 1 .
N
As a check that this formula makes sense, we see that for n = N/2 the probability of ruin is
1 N/2 1
N = 2.
709
We see that one solution to this equation is an = 1/n!. Analogous to the reduction of order for
differential equations, the substitution an = bn /n! will reduce the order of the difference equation.
At first glance it appears that we have not reduced the order of the equation, but writing it in terms
of discrete derivatives
D2 bn Dbn = 0
shows that we now have a first order difference equation for Dbn . The substitution bn = rn in
equation 22.2 yields the algebraic equation
r2 3r + 2 = (r 1)(r 2) = 0.
Thus the solutions are bn = 1 and bn = 2n . Only the bn = 2n solution will give us another linearly
independent solution for an . Thus the second solution for an is an = bn /n! = 2n /n!. The general
solution to equation 22.1 is then
1 2n
an = c1 + c2 .
n! n!
710
22.7 Exercises
Exercise 22.1
Find a formula for the nth term in the Fibonacci sequence 1, 1, 2, 3, 5, 8, 13, . . ..
Exercise 22.2
Solve the difference equation
2
an+2 = an , a1 = a2 = 1.
n
711
22.8 Hints
Hint 22.1
The difference equation corresponding to the Fibonacci sequence is
an+2 an+1 an = 0, a1 = a2 = 1.
Hint 22.2
Consider this exercise as two first order difference equations; one for the even terms, one for the odd
terms.
712
22.9 Solutions
Solution 22.1
We can describe the Fibonacci sequence with the difference equation
an+2 an+1 an = 0, a1 = a2 = 1.
r2 r 1 = 0.
1 r2
c1 =
r12 r1 r2
1 5
1 2
=
1+ 5
2 5
1+ 5
= 2
1+ 5
2 5
1
=
5
Substitute this result into the equation for c2 .
1 1
c2 = 1 r1
r2 5
!
2 1 1+ 5
= 1
1 5 5 2
!
2 1 5
=
1 5 2 5
1
=
5
713
Thus the nth term in the Fibonacci sequence has the formula
!n !n
1 1+ 5 1 1 5
an = .
5 2 5 2
It is interesting to note that although the Fibonacci sequence is defined in terms of integers, one
cannot express the formula form the nth element in terms of rational numbers.
Solution 22.2
We can consider
2
an+2 = an , a1 = a2 = 1
n
to be a first order difference equation. First consider the odd terms.
a1 = 1
2
a3 =
1
22
a5 =
31
2(n1)/2
an =
(n 2)(n 4) (1)
a2 = 1
2
a4 =
2
22
a6 =
42
2(n2)/2
an = .
(n 2)(n 4) (2)
Thus
2(n1)/2
(
(n2)(n4)(1) for odd n
an = 2(n2)/2
(n2)(n4)(2) for even n.
714
Chapter 23
-?
z3
sin z = z + O(z 5 )
6
1
= 1 + O(z)
1z
The notation o(z n ) means terms smaller that z n . For example,
cos z = 1 + o(1)
ez = 1 + z + o(z)
w = c1 ez +c2 e2z .
The functions ez and e2z are analytic in the finite complex plane. Recall that a function is analytic
at a point z0 if and only if the function has a Taylor series about z0 with a nonzero radius of
convergence. If we substitute the Taylor series expansions about z = 0 of ez and e2z into the general
solution, we obtain
X zn X 2n z n
w = c1 + c2 .
n=0
n! n=0
n!
715
Alternatively, we could try substituting
P a Taylor series into the differential equation and solving
for the coefficients. Substituting w = n=0 an z n into the differential equation yields
d2 X n d X n
X
an z 3 an z + 2 an z n = 0
dz 2 n=0 dz n=0 n=0
X
X
X
n(n 1)an z n2 3 nan z n1 + 2 an z n = 0
n=2 n=1 n=0
X X
X
(n + 2)(n + 1)an+2 z n 3 (n + 1)an+1 z n + 2 an z n = 0
n=0 n=0 n=0
h
X i
(n + 2)(n + 1)an+2 3(n + 1)an+1 + 2an z n = 0.
n=0
We use reduction of order for difference equations to find the other solution. Substituting an = bn /n!
into the difference equation yields
bn+2 bn+1 bn
(n + 2)(n + 1) 3(n + 1) +2 =0
(n + 2)! (n + 1)! n!
bn+2 3bn+1 + 2bn = 0.
At first glance it appears that we have not reduced the order of the difference equation. However
writing this equation in terms of discrete derivatives,
D2 bn Dbn = 0
we see that this is a first order difference equation for Dbn . Since this is a constant coefficient
difference equation we substitute bn = rn into the equation to obtain an algebraic equation for r.
r2 3r + 2 = (r 1)(r 2) = 0
Thus the two solutions are bn = 1n b0 and bn = 2n b0 . Only bn = 2n b0 will give us a second
independent solution for an . Thus the two solutions for an are
a0 2n a0
an = and an = .
n! n!
Thus we can write the general solution to the differential equation as
X zn X 2n z n
w = c1 + c2 .
n=0
n! n=0
n!
We recognize these two sums as the Taylor expansions of ez and e2z . Thus we obtain the same result
as we did solving the differential equation directly.
Of course it would be pretty silly to go through all the grunge involved in developing a series
expansion of the solution in a problem like Example 23.1.1 since we can solve the problem exactly.
716
However if we could not solve a differential equation, then having a Taylor series expansion of the
solution about a point z0 would be useful in determining the behavior of the solutions near that
point.
For this method of substituting a Taylor series into the differential equation to be useful we have
to know at what points the solutions are analytic. Lets say we were considering a second order
differential equation whose solutions were
1
w1 = , and w2 = log z.
z
Trying to find a Taylor series expansion of the solutions about the point z = 0 would fail because
the solutions are not analytic at z = 0. This brings us to two important questions.
1. Can we tell if the solutions to a linear differential equation are analytic at a point without
knowing the solutions?
2. If there are Taylor series expansions of the solutions to a differential equation, what are the
radii of convergence of the series?
In order to answer these questions, we will introduce the concept of an ordinary point. Consider
the nth order linear homogeneous equation
dn w dn1 w dw
n
+ p n1 (z) n1
+ + p1 (z) + p0 (z)w = 0.
dz dz dz
If each of the coefficient functions pi (z) are analytic at z = z0 then z0 is an ordinary point of the
differential equation.
For reasons of typography we will restrict our attention to second order equations and the point
z0 = 0 for a while. The generalization to an nth order equation will be apparent. Considering the
point z0 6= 0 is only trivially more general as we could introduce the transformation z z0 z to
move the point to the origin.
In the chapter on first order differential equations we showed that the solution is analytic at
ordinary points. One would guess that this remains true for higher order equations. Consider the
second order equation
y 00 + p(z)y 0 + q(z)y = 0,
Assume that one of the solutions is not analytic at the origin and behaves like z at z = 0 where
6= 0, 1, 2, . . .. That is, we can approximate the solution with w(z) = z + o(z ). Lets substitute
w = z + o(z ) into the differential equation and look at the lowest power of z in each of the terms.
X X
( 1)z 2 + o(z 2 ) + z 1 + o(z 1 ) pn z n + z + o(z ) qn z n = 0.
n=0 n=0
We see that the solution could not possibly behave like z , 6= 0, 1, 2, because there is no term
on the left to cancel out the z 2 term. The terms on the left side could not add to zero.
You could also check that a solution could not possibly behave like log z at the origin. Though
we will not prove it, if z0 is an ordinary point of a homogeneous differential equation, then all the
solutions are analytic at the point z0 . Since the solution is analytic at z0 we can expand it in a
Taylor series.
717
Now we are prepared to answer our second question. From complex variables, we know that the
radius of convergence of the Taylor series expansion of a function is the distance to the nearest
singularity of that function. Since the solutions to a differential equation are analytic at ordinary
points of the equation, the series expansion about an ordinary point will have a radius of convergence
at least as large as the distance to the nearest singularity of the coefficient functions.
where p(z) and q(z) are analytic in some neighborhood of the origin.
X
X
p(z) = pn z n and q(z) = qn z n
n=0 n=0
n
! n
!
X X X X X
n n
(n + 2)(n + 1)an+2 z + (m + 1)am+1 pnm z + am qnm zn = 0
n=0 n=0 m=0 n=0 m=0
" n
#
X X
z n = 0.
(n + 2)(n + 1)an+2 + (m + 1)am+1 pnm + am qnm
n=0 m=0
718
1.2
1.1
0.8
0.7
Figure 23.1: Plot of the Numerical Solution and the First Three Terms in the Taylor Series.
We see that a0 and a1 are arbitrary and the rest of the coefficients are determined by the recurrence
relation
n
1 X
an+2 = ((m + 1)am+1 pnm + am qnm ) for n 0.
(n + 1)(n + 2) m=0
The general recurrence relation for the an s is useful if you only want to calculate the first few
terms in the Taylor expansion. However, for many problems substituting the Taylor series for the
coefficient functions into the differential equation will enable you to find a simpler form of the
solution. We consider the following example to illustrate this point.
719
Example 23.1.4 Develop a series expansion of the solution to the initial value problem
1
w00 + w = 0, w(0) = 1, w0 (0) = 0.
(z 2 + 1)
Solution using the General Recurrence Relation. The coefficient function has the Taylor
expansion
1 X
= (1)n z 2n .
1 + z2 n=0
From the initial condition we obtain a0 = 1 and a1 = 0. Thus we see that the solution is
X
w= an z n ,
n=0
where
n
1 X
an+2 = am qnm
(n + 1)(n + 2) m=0
and (
0 for odd n
qn =
(1)(n/2) for even n.
Although this formula is fine if you only want to calculate the first few an s, it is just a tad unwieldy
to work with. Lets see if we can get a better expression for the solution.
Substitute the Taylor Series into the Differential Equation. Substituting a Taylor series
for w yields
d2 X n 1 X
an z + 2 an z n = 0.
dz 2 n=0 (z + 1) n=0
Note that the algebra will be easier if we multiply by z 2 + 1. The polynomial z 2 + 1 has only two
terms, but the Taylor series for 1/(z 2 + 1) has an infinite number of terms.
d2 X X
(z 2 + 1) an z n
+ an z n = 0
dz 2 n=0 n=0
X
X
X
n(n 1)an z n + n(n 1)an z n2 + an z n = 0
n=2 n=2 n=0
X
X
X
n(n 1)an z n + (n + 2)(n + 1)an+2 z n + an z n = 0
n=0 n=0 n=0
h
X i
(n + 2)(n + 1)an+2 + n(n 1)an + an z n = 0
n=0
n2 n + 1
an+2 = an , for n 0.
(n + 2)(n + 1)
From the initial conditions we see that a0 = 1 and a1 = 0. All of the odd terms in the series will
be zero. For the even terms, it is easier to reformulate the problem with the change of variables
bn = a2n . In terms of bn the difference equation is
(2n)2 2n + 1
bn+1 = bn , b0 = 1.
(2n + 2)(2n + 1)
720
0.2 0.4 0.6 0.8 1 1.2
0.9
0.8
0.7
0.6
0.5
0.4
0.3
|z|2 < 1.
The radius of convergence is 1.
The first few terms in the Taylor expansion are
1 1 13 6
w = 1 z2 + z4 z + .
2 8 240
In Figure 23.2 the plot of the first two nonzero terms is shown in a short dashed line, the plot of
the first four nonzero terms is shown in a long dashed line, and the numerical solution is shown in
a solid line.
In general, if the coefficient functions are rational functions, that is they are fractions of poly-
nomials, multiplying the equations by the quotient will reduce the algebra involved in finding the
series solution.
Example 23.1.5 If we were going to find the Taylor series expansion about z = 0 of the solution
to
z 1
w00 + w0 + w = 0,
1+z 1 z2
721
we would first want to multiply the equation by 1 z 2 to obtain
Example 23.1.6 Find the series expansions about z = 0 of the fundamental set of solutions for
w00 + z 2 w = 0.
w1 (0) = 1 w2 (0) = 0
w10 (0) = 0 w20 (0) = 1.
Thus if
X
X
w1 = an z n and w2 = bn z n ,
n=0 n=0
a0 = 1,a1 = 0, and b0 = 0, b1 = 1.
P
Substituting the Taylor expansion w = n=0 cn z n into the differential equation,
X
X
n(n 1)cn z n2 + cn z n+2 = 0
n=2 n=0
X
X
(n + 2)(n + 1)cn+2 z n + cn2 z n = 0
n=0 n=2
h
X i
2c2 + 6c3 z + (n + 2)(n + 1)cn+2 + cn2 z n = 0
n=2
z0 : c2 = 0
1
z : c3 = 0
z n : (n + 2)(n + 1)cn+2 + cn2 = 0, for n 2
cn
cn+4 =
(n + 4)(n + 3)
For our first solution we have the difference equation
an
a0 = 1, a1 = 0, a2 = 0, a3 = 0, an+4 = .
(n + 4)(n + 3)
For our second solution,
bn
b0 = 0, b1 = 1, b2 = 0, b3 = 0, bn+4 = .
(n + 4)(n + 3)
The first few terms in the fundamental set of solutions are
1 4 1 8 1 5 1 9
w1 = 1 z + z , w2 = z z + z .
12 672 20 1440
In Figure 23.3 the five term approximation is graphed in a coarse dashed line, the ten term
approximation is graphed in a fine dashed line, and the numerical solution of w1 is graphed in a
solid line. The same is done for w2 .
722
1.5 1.5
1 1
0.5 0.5
1 2 3 4 5 6 1 2 3 4 5 6
-0.5 -0.5
-1 -1
p(z) 0 q(z)
w00 + w + w = 0.
z z0 (z z0 )2
If z = z0 is not an ordinary point but both p(z) and q(z) are analytic at z = z0 then z0 is a regular
singular point of the differential equation. The following equations have a regular singular point
at z = 0.
w00 + z1 w0 + z 2 w = 0
w00 + 1
sin z w
0
w =0
w00 zw0 + 1
z sin z w =0
Concerning regular singular points of second order linear equations there is good news and bad
news.
The Good News. We will find that with the use of the Frobenius method we can always find
series expansions of two linearly independent solutions at a regular singular point. We will illustrate
this theory with several examples.
723
The Bad News. Instead of a tidy little theory like we have for ordinary points, the solutions can
be of several different forms. Also, for some of the problems the algebra can get pretty ugly.
3(1 + z)
w00 + w = 0.
16z 2
P
We wish to find series solutions about the point z = 0. First we try a Taylor series w = n=0 an z n .
Substituting this into the differential equation,
X 3 X
z2 n(n 1)an z n2 + (1 + z) an z n = 0
n=2
16 n=0
X 3 X 3 X
n(n 1)an z n + an z n + an+1 z n = 0.
n=0
16 n=0 16 n=1
Equating powers of z,
z0 : a0 = 0
3 3
zn : n(n 1) + an + an+1 = 0
16 16
16
an+1 = n(n 1) + 1 an .
3
This difference equation has the solution an = 0 for all n. Thus we have obtained only the trivial
solution to the differential equation. We must try an expansion of a more general form. We recall
that for regular singular P
points of first order equations we can always find a solution in the form of a
Frobenius series w = z n=0 an z n , a0 6= 0. We substitute this series into the differential equation.
X 3 X
z2 ( 1) + 2n + n(n 1) an z n+2 + (1 + z)z an z n = 0
n=0
16 n=0
X 3 X 3 X
( 1) + 2n + n(n 1) an z n + an z n + an1 z n = 0
n=0
16 n=0
16 n=1
Since we have assumed that a0 6= 0, the polynomial in must be zero. The two roots of the
polynomial are p p
1 + 1 3/4 3 1 1 3/4 1
1 = = , 2 = = .
2 4 2 4
Thus our two series solutions will be of the form
X
X
w1 = z 3/4 an z n , w2 = z 1/4 bn z n .
n=0 n=0
724
Equating powers of z, we see that a0 is arbitrary and
3
an = an1 for n 1.
16n(n + 1)
This difference equation has the solution
n
Y 3
an = a0
j=1
16j(j + 1)
n Yn
3 1
= a0
16 j=1
j(j + 1)
n
3 1
= a0 for n 1.
16 n!(n + 1)!
We see that the difference equation for bn is the same as the equation for an . Thus we can write the
general solution to the differential equation as
n ! n !
3/4
X 3 1 n 1/4
X 3 1 n
w = c1 z 1+ z + c2 z 1+ z
n=1
16 n!(n + 1)! n=1
16 n!(n + 1)!
n !
3/4 1/4
X 3 1
c1 z + c2 z 1+ zn .
n=1
16 n!(n + 1)!
P
Substituting a Frobenius series w = z n=0 an z n , a0 6= 0 and the Taylor series for p(z) and q(z)
into the differential equation yields
h
! ! ! !
X i X X X X
( + n)( + n 1) an z n + pn z n ( + n)an z n + qn z n an z n =0
n=0 n=0 n=0 n=0 n=0
h
X i
( + n)2 ( + n) + p0 ( + n) + q0 an z n
n=0
! ! ! !
X X X X
n n n n
+ pn z ( + n)an z + qn z an z =0
n=1 n=0 n=1 n=0
725
h
X i
X n1
X
X n1
X
( + n)2 + (p0 1)(n ) + q0 an z n + ( + j)aj pnj z n + aj qnj z n = 0
n=0 n=1 j=0 n=1 j=0
Equating powers of z,
h i
z0 : 2 + (p0 1) + q0 a0 = 0
h i n1
Xh i
zn : ( + n)2 + (p0 1)( + n) + q0 an = ( + j)pnj + qnj aj .
j=0
Let
I() = 2 + (p0 1) + q0 = 0.
This is known as the indicial equation. The indicial equation gives us the form of the solutions.
The equation for a0 is I()a0 = 0. Since we assumed that a0 is nonzero, I() = 0. Let the two
roots of I() be 1 and 2 where <(1 ) <(2 ).
Rewriting the difference equation for an (),
n1
Xh i
I( + n)an () = ( + j)pnj + qnj aj () for n 1. (23.1)
j=0
If the roots are distinct and do not differ by an integer then we can use Equation 23.1 to solve
for an (1 ) and an (2 ), which will give us the two solutions
X
X
w1 = z 1 an (1 )z n , and w2 = z 2 an (2 )z n .
n=0 n=0
If the roots are not distinct, 1 = 2 , we will only have one solution and will have to generate
another. If the roots differ by an integer, 1 2 = N , there is one solution corresponding to 1 ,
but when we try to solve Equation 23.1 for an (2 ), we will encounter the equation
N
X 1 h i
I(2 + N )aN (2 ) = I(1 )aN (2 ) = 0 aN (2 ) = ( + n)pnj + qnj aj (2 ).
j=0
If the right side of the equation is nonzero, then aN (2 ) is undefined. On the other hand, if the
right side is zero then aN (2 ) is arbitrary. The rest of this section is devoted to considering the
cases 1 = 2 and 1 2 = N .
I() = ( 1 )2 = 0
In order to find the second solution, we will differentiate with respect to the parameter, . Let an ()
satisfy Equation 23.1 Substituting the Frobenius expansion into the differential equation,
" #
X
L z an ()z n = 0.
n=0
726
Setting = 1 will make the left hand side of the equation zero. Differentiating this equation with
respect to ,
" #
X
n
L z an ()z = 0.
n=0
Since setting = 1 will make the left hand side of this equation zero, the second linearly indepen-
dent solution is
X X dan ()
w2 = log z z 1 an (1 )z n + z 1 zn
d
n=0 n=0 =1
X
w2 = w1 log z + z 1 a0n (1 )z n .
n=0
1+z
w00 + w = 0.
4z 2
There is a regular singular point at z = 0. The indicial equation is
2
1 1
( 1) + = = 0.
4 2
1
z 2 w00 + (1 + z)w = 0
4
X 1X 1X
( 1) + 2n + n(n 1) an ()z n+ + an ()z n+ + an ()z n++1 = 0.
n=0
4 n=0
4 n=0
X 1X 1X
[( 1) + 2n + n(n 1)] an ()z n + an ()z n + an1 ()z n = 0
n=0
4 n=0
4 n=1
1 X 1 1
( 1)a0 + a0 + ( 1) + 2n + n(n 1) + an () + an1 () z n = 0
4 n=1
4 4
727
Equating the coefficient of z 0 to zero yields I()a0 = 0. Equating the coefficients of z n to zero yields
the difference equation
1 1
( 1) + 2n + n(n 1) + an () + an1 () = 0
4 4
n(n + 1) ( 1) 1
an () = + + an1 ().
4 4 16
The first few an s are
9 25 9
a0 , ( 1) + a0 , ( 1) + ( 1) + a0 , . . .
16 16 16
Setting = 1/2, the coefficients for the first solution are
5 105
a0 , a0 , a0 , ...
16 16
If the right hand side of this equation is zero, then aN is arbitrary. There will be two solutions of
the Frobenius form.
X
X
1 n 2
w1 = z an (1 )z and w2 = z an (2 )z n .
n=0 n=0
728
If the right hand side of the equation is nonzero then aN (2 ) will be undefined. We will have to
generate the second solution. Let
X
w(z, ) = z an ()z n ,
n=0
where an () satisfies the recurrence formula. Substituting this series into the differential equation
yields
L[w(z, )] = 0.
We will multiply by ( 2 ), differentiate this equation with respect to and then set = 2 .
This will generate a linearly independent solution.
L[( 2 )w(z, )] = L ( 2 )w(z, )
" #
X
n
=L ( 2 )z an ()z
n=0
" #
X X d
= L log z z ( 2 )an ()z n + z [( 2 )an ()]z n
n=0 n=0
d
The first N terms in the sum will be zero. That is because a0 , . . . , aN 1 are finite, so multiplying by
( 2 ) and taking the limit as 2 will make the coefficients vanish. The equation for aN ()
is
N
X 1 h i
I( + N )aN () = ( + j)pN j + qN j aj ().
j=0
2
Since 1 = 2 + N , lim2 +N 1 = 1.
N 1 h
1 X i
= (2 + j)pN j + qN j aj (2 ).
(1 2 ) j=0
Using this you can show that the first term in the solution can be written
d1 log z w1 ,
729
where d1 is a constant. Thus the second linearly independent solution is
X
w2 = d1 log z w1 + z 2 dn z n ,
n=0
where
N 1 h
1 1 X i
d1 = (2 + j)pN j + qN j aj (2 )
a0 (1 2 ) j=0
and i
d h
dn = lim ( 2 )an () for n 0.
2 d
The First Solution. The first solution will have the Frobenius form
X
w1 = z 2 an (1 )z n .
n=0
Equating powers of z,
h i
(n + )(n + 1) 2(n + ) + 2 an = (n + 1)an1
an1
an = .
n+2
730
The Second Solution. The equation for a1 (2 ) is
0 a1 (2 ) = 2a0 .
Since the right hand side of this equation is not zero, the second solution will have the form
2
X d
w2 = d1 log z w1 + z lim [( 2 )an ()] z n
n=0
2 d
1 1
d1 = a0 = 1.
a0 2 1
(1)n a0
an () = .
( + n 2)( + n 1) ( 1)
=0
d a0
d2 = lim ( 1)
1 d ( 1)
d a0
h i
= lim
1 d
= a0
d a0
d3 = lim ( 1)
1 d ( + 1)( 1)
d a0
= lim
1 d ( + 1)
3
= a0 .
4
731
It will take a little work to find the general expression for dn . We will need the following relations.
n1
X 1
(n) = (n 1)!, 0 (z) = (z)(z), (n) = + .
k
k=1
See the chapter on the Gamma function for explanations of these equations.
(1)n a0
d
dn = lim ( 1)
1 d ( + n 2)( + n 1) ( 1)
(1)n a0
d
= lim
1 d ( + n 2)( + n 1) ()
d (1)n a0 ()
= lim
1 d ( + n 1)
()() ()( + n 1)
= (1)n a0 lim
1 ( + n 1) ( + n 1)
()[() ( + n 1)]
= (1)n a0 lim
1 ( + n 1)
(1) (n)
= (1)n a0
(n 1)!
n1
(1)n+1 a0 X 1
=
(n 1)! k
k=0
We see that even in problems that are chosen for their simplicity, the algebra involved in the
Frobenius method can be pretty involved.
Example 23.2.4 Consider a series expansion about the origin of the equation
1z 0 1
w00 + w 2 w = 0.
z z
The indicial equation is
2 1 = 0
= 1.
Substituting a Frobenius series into the differential equation,
X
X
X
z2 (n + )(n + 1)an z n2 + (z z 2 ) (n + )an z n1 an z n = 0
n=0 n=0 n=0
X
X
X
X
(n + )(n + 1)an z n + (n + )an z n (n + 1)an1 z n an z n = 0
n=0 n=0 n=1 n=0
h i h
X i
( 1) + 1 a0 + n + )(n + 1)an + (n + 1)an (n + 1)an1 z n = 0.
n=1
732
Equating powers of z to zero,
an1 ()
an () = .
n++1
We know that the first solution has the form
X
w1 = z an z n .
n=0
733
Example 23.4.1 Consider the behavior of f (z) = sin z at infinity. This is the same as considering
the point = 0 of sin(1/), which has the series expansion
(1)n
X
1
sin = .
n=0
(2n + 1)! 2n+1
Thus we see that the point = 0 is an essential singularity of sin(1/). Hence sin z has an essential
singularity at z = .
Example 23.4.2 Consider the behavior at infinity of z e1/z . We make the transformation = 1/z.
1 1 X n
e =
n=0 n!
In order to classify the point at infinity of a differential equation in w(z), we apply the transfor-
mation = 1/z, u() = w(z). We write the derivatives with respect to z in terms of .
1
z=
1
dz = d
2
d d
= 2
dz d
d2
2 d 2 d
=
dz 2 d d
2
d d
= 4 2 + 2 3
d d
Now we apply the transformation to the differential equation.
734
23.5 Exercises
Exercise 23.1 (mathematica/ode/series/series.nb)
f (x) satisfies the Hermite equation
d2 f df
2
2x + 2f = 0.
dx dx
Construct two linearly independent solutions of the equation as Taylor series about x = 0. For what
values of x do the series converge?
Show that for certain values of , called eigenvalues, one of the solutions is a polynomial, called
an eigenfunction. Calculate the first four eigenfunctions H0 (x), H1 (x), H2 (x), H3 (x), ordered by
degree.
Exercise 23.2
Consider the Legendre equation
(1 x2 )y 00 2xy 0 + ( + 1)y = 0.
1. Find two linearly independent solutions in the form of power series about x = 0.
2. Compute the radius of convergence of the series. Explain why it is possible to predict the
radius of convergence without actually deriving the series.
3. Show that if = 2n, with n an integer and n 0, the series for one of the solutions reduces
to an even polynomial of degree 2n.
4. Show that if = 2n + 1, with n an integer and n 0, the series for one of the solutions reduces
to an odd polynomial of degree 2n + 1.
5. Show that the first 4 polynomial solutions Pn (x) (known as Legendre polynomials) ordered by
their degree and normalized so that Pn (1) = 1 are
P0 = 1 P1 = x
1 1
P2 = (3x2 1) P4 = (5x3 3x)
2 2
((1 x2 )y 0 )0 = ( + 1)y.
Note that two Legendre polynomials Pn (x) and Pm (x) must satisfy this relation for = n and
= m respectively. By multiplying the first relation by Pm (x) and the second by Pn (x) and
integrating by parts show that Legendre polynomials satisfy the orthogonality relation
Z 1
Pn (x)Pm (x) dx = 0 if n 6= m.
1
If n = m, it can be shown that the value of the integral is 2/(2n + 1). Verify this for the first
three polynomials (but you neednt prove it in general).
Exercise 23.3
Find the forms of two linearly independent series expansions about the point z = 0 for the differential
equation
1 1z
w00 + w0 + 2 w = 0,
sin z z
such that the series are real-valued on the positive real axis. Do not calculate the coefficients in the
expansions.
735
Exercise 23.4
Classify the singular points of the equation
w0
w00 + + 2w = 0.
z1
Exercise 23.5
Find the series expansions about z = 0 for
5 0 z1
w00 + w + w = 0.
4z 8z 2
Exercise 23.6
Find the series expansions about z = 0 of the fundamental solutions of
w00 + zw0 + w = 0.
Exercise 23.7
Find the series expansions about z = 0 of the two linearly independent solutions of
1 0 1
w00 + w + w = 0.
2z z
Exercise 23.8
Classify the singularity at infinity of the differential equation
2 3 1
w00 + + 2 w0 + 2 w = 0.
z z z
Find the forms of the series solutions of the differential equation about infinity that are real-valued
when z is real-valued and positive. Do not calculate the coefficients in the expansions.
Exercise 23.9
Consider the second order differential equation
d2 y dy
x + (b x) ay = 0,
dx2 dx
where a, b are real constants.
1. Show that x = 0 is a regular singular point. Determine the location of any additional singular
points and classify them. Include the point at infinity.
2. Compute the indicial equation for the point x = 0.
3. By solving an appropriate recursion relation, show that one solution has the form
ax (a)2 x2 (a)n xn
y1 (x) = 1 + + + + +
b (b)2 2! (b)n n!
736
5. Show that if b = n + 1 where n = 0, 1, 2, . . ., then the second solution of this equation has
logarithmic terms. Indicate the form of the second solution in this case. You need not compute
any coefficients.
Exercise 23.10
Consider the equation
xy 00 + 2xy 0 + 6 ex y = 0.
Find the first three non-zero terms in each of two linearly independent series solutions about x = 0.
737
23.6 Hints
Hint 23.1
Hint 23.2
Hint 23.3
Hint 23.4
Hint 23.5
Hint 23.6
Hint 23.7
Hint 23.8
Hint 23.9
Hint 23.10
738
23.7 Solutions
Solution 23.1
f (x) is a Taylor series about x = 0.
X
f (x) = an xn
n=0
X
f 0 (x) = nan xn1
n=1
X
= nan xn1
n=0
X
f 00 (x) = n(n 1)an xn2
n=2
X
= (n + 2)(n + 1)an+2 xn
n=0
739
Since the coefficient functions in the differential equation do not have any singularities in the finite
complex plane, the radius of convergence of the series is infinite.
If = n is a positive even integer, then the first solution, y1 , is a polynomial of order n. If = n
is a positive odd integer, then the second solution, y2 , is a polynomial of order n. For = 0, 1, 2, 3,
we have
H0 (x) = 1
H1 (x) = x
H2 (x) = 1 2x2
2
H3 (x) = x x3
3
Solution 23.2
1. First we write the differential equation in the standard form.
1 x2 y 00 2xy 0 + ( + 1)y = 0
(23.2)
2x 0 ( + 1)
y 00 y + y = 0. (23.3)
1 x2 1 x2
Since the coefficients of y 0 and y are analytic in a neighborhood of x = 0, We can find two
Taylor series solutions about that point. We find the Taylor series for y and its derivatives.
X
y= an xn
n=0
X
y0 = nan xn1
n=1
X
y 00 = (n 1)nan xn2
n=2
X
= (n + 1)(n + 2)an+2 xn
n=0
Here we used index shifting to explicitly write the two forms that we will need for y 00 . Note
that we can take the lower bound of summation to be n = 0 for all above sums. The terms
added by this operation are zero. We substitute the Taylor series into Equation 23.2.
X
X
X
X
(n + 1)(n + 2)an+2 xn (n 1)nan xn 2 nan xn + ( + 1) an xn = 0
n=0 n=0 n=0 n=0
X
(n + 1)(n + 2)an+2 (n 1)n + 2n ( + 1) an xn = 0
n=0
740
We will find the fundamental set of solutions at x = 0, that is the set {y1 , y2 } that satisfies
y1 (0) = 1 y10 (0) = 0
y2 (0) = 0 y20 (0) = 1.
For y1 we take a0 = 1 and a1 = 0; for y2 we take a0 = 0 and a1 = 1. The rest of the coefficients
are determined from the recurrence relation.
X 1 Y n2
k(k + 1) ( + 1) xn
y1 =
n!
n=0 k=0
even n even k
n2
X 1 Y
k(k + 1) ( + 1) xn
y2 =
n!
n=1 k=1
odd n odd k
2. We determine the radius of convergence of the series solutions with the ratio test.
an+2 xn+2
lim <1
n an xn
n(n+1)(+1)
(n+1)(n+2) an xn+2
lim <1
n an xn
n(n + 1) ( + 1) 2
lim x < 1
n (n + 1)(n + 2)
2
x < 1
Thus we see that the radius of convergence of the series is 1. We knew that the radius of
convergence would be at least one, because the nearest singularities of the coefficients of (23.3)
occur at x = 1, a distance of 1 from the origin. This implies that the solutions of the
equation are analytic in the unit circle about x = 0. The radius of convergence of the Taylor
series expansion of an analytic function is the distance to the nearest singularity.
3. If = 2n then a2n+2 = 0 in our first solution. From the recurrence relation, we see that all
subsequent coefficients are also zero. The solution becomes an even polynomial.
2n m2
X 1 Y
k(k + 1) ( + 1) xm
y1 =
m!
m=0 k=0
even m even k
4. If = 2n + 1 then a2n+3 = 0 in our second solution. From the recurrence relation, we see that
all subsequent coefficients are also zero. The solution becomes an odd polynomial.
X 1 m2
2n+1 Y m
y2 = k(k + 1) ( + 1) x
m!
m=1 k=1
odd m odd k
741
Figure 23.4: The First Four Legendre Polynomials
P0 = 1
P1 = x
1
3x2 1
P2 =
2
1
5x3 3x
P3 =
2
These four Legendre polynomials are plotted in Figure 23.4.
6. We note that the first two terms in the Legendre equation form an exact derivative. Thus the
Legendre equation can also be written as
0
(1 x2 )y 0 = ( + 1)y.
742
We verify that for the first four polynomials the value of the integral is 2/(2n + 1) for n = m.
Z 1 Z 1
P0 (x)P0 (x) dx = 1 dx = 2
1 1
1 1 1
x3
Z Z
2
P1 (x)P1 (x) dx = = x2 dx =
1 1 3 1 3
Z 1 Z 1 5 1
1 1 9x 2
9x4 6x2 + 1 dx = 2x3 + x
P2 (x)P2 (x) dx = =
1 1 4 4 5 1 5
Z 1 Z 1 7
1
1 1 25x 2
25x6 30x4 + 9x2 dx = 6x5 + 3x3
P3 (x)P3 (x) dx = =
1 1 4 4 7 1 7
Solution 23.3
The indicial equation for this problem is
2 + 1 = 0.
Since the two roots 1 = i and 2 = i are distinct and do not differ by an integer, there are two
solutions in the Frobenius form.
X
X
w1 = z i an z n , w1 = z i bn z n
n=0 n=0
However, these series are not real-valued on the positive real axis. Recalling that
we can write a new set of solutions that are real-valued on the positive real axis as linear combinations
of w1 and w2 .
1 1
u1 = (w1 + w2 ), u2 = (w1 w2 )
2 2i
X
X
n
u1 = cos(log z) cn z , u1 = sin(log z) dn z n
n=0 n=0
Solution 23.4
Consider the equation w00 + w0 /(z 1) + 2w = 0.
We see that there is a regular singular point at z = 1. All other finite values of z are ordinary
points of the equation. To examine the point at infinity we introduce the transformation z = 1/t,
w(z) = u(t). Writing the derivatives with respect to z in terms of t yields
d d d2 4 d
2
d
= t2 , 2
= t 2
+ 2t3 .
dz dt dz dt dt
Substituting into the differential equation gives us
t2 u 0
t4 u00 + 2t3 u0 + 2u = 0
1/t 1
2 1 2
u00 + u0 + 4 u = 0.
t t(1 t) t
Since t = 0 is an irregular singular point in the equation for u(t), z = is an irregular singular
point in the equation for w(z).
743
Solution 23.5
Find the series expansions about z = 0 for
5 0 z1
w00 + w + w = 0.
4z 8z 2
We see that z = 0 is a regular singular point of the equation. The indicial equation is
1 1
2 + = 0
4
8
1 1
+ = 0.
2 4
Since the roots are distinct and do not differ by an integer, there will be two solutions in the
Frobenius form.
X
X
w1 = z 1/4 an (1 )z n , w2 = z 1/2 an (2 )z n
n=0 n=0
2
We multiply the differential equation by 8z to put it in a better form. Substituting a Frobenius
series into the differential equation,
X
X
X
8z 2 (n + )(n + 1)an z n+2 + 10z (n + )an z n+1 + (z 1) an z n+
n=0 n=0 n=0
X
X
X
X
8 (n + )(n + 1)an z n + 10 (n + )an z n + an1 z n an z n .
n=0 n=0 n=1 n=0
744
Solution 23.6
We consider the series solutions of,
w00 + zw0 + w = 0.
We would like to find the expansions of the fundamental set of solutions about z = 0. Since z = 0
is a regular point, (the coefficient functions are analytic there), we expand the solutions in Taylor
series. Differentiating the series expansions for w(z),
X
w= an z n
n=0
X
w0 = nan z n1
n=1
X
w00 = n(n 1)an z n2
n=2
X
= (n + 2)(n + 1)an+2 z n
n=0
We may take the lower limit of summation to be zero without changing the sums. Substituting these
expressions into the differential equation,
X
X
X
(n + 2)(n + 1)an+2 z n + nan z n + an z n = 0
n=0 n=0 n=0
X
(n + 2)(n + 1)an+2 + (n + 1)an z n = 0.
n=0
a0 and a1 are arbitrary. We determine the rest of the coefficients from the recurrence relation. We
consider the cases for even and odd n separately.
a2n2
a2n =
2n
a2n4
=
(2n)(2n 2)
a0
= (1)n
(2n)(2n 2) 4 2
n a0
= (1) Qn , n0
m=1 2m
a2n1
a2n+1 =
2n + 1
a2n3
=
(2n + 1)(2n 1)
a1
= (1)n
(2n + 1)(2n 1) 5 3
n a1
= (1) Qn , n0
m=1 (2m + 1)
745
If {w1 , w2 } is the fundamental set of solutions, then the initial conditions demand that w1 = 1 + 0
z + and w2 = 0 + z + . We see that w1 will have only even powers of z and w2 will have only
odd powers of z.
X (1)n X (1)n
w1 = Qn z 2n , w2 = Qn z 2n+1
n=0 m=1 2m n=0 m=1 (2m + 1)
Since the coefficient functions in the differential equation are entire, (analytic in the finite complex
plane), the radius of convergence of these series solutions is infinite.
Solution 23.7
1 0 1
w00 + w + w = 0.
2z z
We can find the indicial equation by substituting w = z + O(z +1 ) into the differential equation.
1
( 1)z 2 + z 2 + z 1 = O(z 1 )
2
Equating the coefficient of the z 2 term,
1
( 1) + = 0
2
1
= 0, .
2
Since the roots are distinct and do not differ by an integer, the solutions are of the form
X
X
w1 = an z n , w2 = z 1/2 bn z n .
n=0 n=0
746
We can combine the above two equations for an .
an
an+1 = , for n 0
(n + 1/2)(n + 1)
Solving this difference equation for an ,
n1
Y 1
an = a0
j=0
(j + 1/2)(j + 1)
n1
(1)n Y 1
an = a0
n! j=0 j + 1/2
Solution 23.8
2 3 1
w00 + + 2 w0 + w = 0.
z z z2
747
In order to analyze the behavior at infinity we make the change of variables t = 1/z, u(t) = w(z)
and examine the point t = 0. Writing the derivatives with respect to z in terms if t yields
1
z=
t
1
dz = dt
t2
d d
= t2
dz dt
d2
2 d 2 d
= t t
dz 2 dt dt
2
d d
= t4 2 + 2t3 .
dt dt
The equation for u is then
( 1) + 1 = 0
1i 3
=
2
Since the roots of the indicial equation are distinct and do not differ by an integer, a set of solutions
has the form
( )
X X
(1+i 3)/2 n (1i 3)/2 n
t an t , t bn t .
n=0 n=0
Noting that
! !
i 3 i 3
t(1+i 3)/2
= t1/2 exp log t , and t(1i 3)/2
= t1/2 exp log t .
2 2
We can take the sum and difference of the above solutions to obtain the form
! !
1/2 3 X
n 1/2 3 X
u1 = t cos log t an t , u1 = t sin log t b n tn .
2 n=0
2 n=0
Putting the answer in terms of z, we have the form of the two Frobenius expansions about infinity.
! !
1/2 3 X an
1/2 3 X bn
w1 = z cos log z n
, w1 = z sin log z .
2 n=0
z 2 n=0
zn
Solution 23.9
1. We write the equation in the standard form.
bx 0 a
y 00 + y y=0
x x
748
Since bx a
x has no worse than a first order pole and x has no worse than a second order pole at
x = 0, that is a regular singular point. Since the coefficient functions have no other singularities
in the finite complex plane, all the other points in the finite complex plane are regular points.
Now to examine the point at infinity. We make the change of variables u() = y(x), = 1/x.
d d 1
y0 = u = 2 u0 = 2 u0
dx d x
d d
y 00 = 2 2 u = 4 u00 + 2 3 u0
d d
xy 00 + (b x)y 0 ay
1 4 00 1
u + 2 3 u0 + b 2 u0 au = 0
u + (2 b) + u0 au = 0
3 00 2
2b 1 a
u00 + + 2 3u = 0
Since this equation has an irregular singular point at = 0, the equation for y(x) has an
irregular singular point at infinity.
1X 1
p(x) pn xn = (b x),
x n=1 x
1 X 1
q(x) qn xn = 2 (0 ax).
x2 n=1 x
2 + (p0 1) + q0 = 0
2 + (b 1) + 0 = 0
( + b 1) = 0.
3. Since one of the roots of the indicial equation is zero, and the other root is not a negative
749
integer, one of the solutions of the differential equation is a Taylor series.
X
y1 = ck xk
k=0
X
y10 = kck xk1
k=1
X
= (k + 1)ck+1 xk
k=0
X
= kck xk1
k=0
X
y100 = k(k 1)ck xk2
k=2
X
= (k + 1)kck+1 xk1
k=1
X
= (k + 1)kck+1 xk1
k=0
750
For the case n > 0 the roots of the indicial equation differ by an integer. The solutions have
the form:
m
X (a)k k X
y1 (x) = x , y2 (x) = d1 y1 (x) log x + xn dk xk
(b)k k!
k=0 k=0
The form of the solution for y2 can be substituted into the equation to determine the coefficients
dk .
Solution 23.10
We write the equation in the standard form.
xy 00 + 2xy 0 + 6 ex y = 0
ex
y 00 + 2y 0 + 6 y = 0
x
We see that x = 0 is a regular singular point. The indicial equation is
2 = 0
= 0, 1.
y1 = x + a2 x2 + a3 x3 + O(x4 )
xy 00 + 2xy 0 + 6 ex y = 0
y2 = 1 + O(x2 )
xy 00 + 2xy 0 + 6 ex y = 0
O(x) + O(x) + 6(1 + O(x))(1 + O(x2 )) = 0
6 = O(x)
The substitution y2 = 1 + O(x) has yielded a contradiction. Since the second solution is not of the
Frobenius form, it has the following form:
y2 = y1 ln(x) + a0 + a2 x2 + O(x3 )
y2 = a0 + x ln x 4x2 ln x + O(x2 ).
751
We calculate the derivatives of y2 .
xy 00 + 2xy 0 + 6 ex y = 0
(1 + O(x ln x)) + 2 (O(x ln x)) + 6 (a0 + O(x ln x)) = 0
1 + 6a0 = 0
1
y2 = + x ln x 4x2 ln x + O(x2 )
6
752
23.8 Quiz
Problem 23.1 P
Write the definition of convergence of the series n=1 an .
Problem 23.2
What is the Cauchy convergence criterion for series?
Problem 23.3
Define absolute convergence and uniform convergence. What is the relationship between the two?
Problem 23.4
Write the geometric series and the function to which it converges. For what values of the variable
does the series converge?
Problem 23.5 P
For what real values of a does the series n=1 na converge?
Problem 23.6
State the ratio and root convergence tests.
Problem 23.7
State the integral convergence test.
753
23.9 Quiz Solutions
Solution 23.1
P PN
The series n=1 an converges if the sequence of partial sums, SN = n=1 an , converges. That is,
N
X
lim SN = lim an = constant.
N N
n=1
Solution 23.2
A series converges if and only if for any > 0 there exists an N such that |Sn Sm | < for all
n, m > N .
Solution 23.3
P P
The series n=1 an converges absolutely if n=1 |an | converges. If the rate of convergence of
P
n=1 an (z) is independent of z then the series is uniformly convergent. The series is uniformly
convergent in a domain if for any given > 0 there exists an N , independent of z, such that
XN
|f (z) SN (z)| = f (z) an (z) <
n=1
Solution 23.4
1 X
= zn for |z| < 1.
1 z n=0
Solution 23.5
The series converges for a < 1.
Solution 23.6
P
The series n=1 an converges absolutely if
an+1
lim < 1.
n an
If the limit is greater than unity, then the series diverges. If the limit is unity, the test fails.
Solution 23.7 P
If the coefficients an of a series n=1 an are monotonically decreasing and can be extended to a
monotonically decreasing function of the continuous variable x:
754
Chapter 24
Asymptotic Expansions
The more you sweat in practice, the less you bleed in battle.
f (x) g(x) as x x0 ,
f (x)
lim = 0.
xx0 g(x)
The notation
f (x) g(x) as x x0 ,
is read f (x) is asymptotic to g(x) as x tends to x0 ; which means
f (x)
lim = 1.
xx0 g(x)
ex x as x +
sin x x as x 0
1/x 1 as x +
Note that it does not make sense to say that a function f (x) is asymptotic to zero. Using the above
definition this would imply
f (x) 0 as x x0 .
If you encounter an expression like f (x) + g(x) 0, take this to mean f (x) g(x).
755
The Big O and Little o Notation. If |f (x)| m|g(x)| for some constant m in some neighbor-
hood of the point x = x0 , then we say that
f (x) = O(g(x)) as x x0 .
We read this as f is big O of g as x goes to x0 . If g(x) does not vanish, an equivalent definition
is that f (x)/g(x) is bounded as x x0 .
If for any given positive there exists a neighborhood of x = x0 in which |f (x)| |g(x)| then
f (x) = o(g(x)) as x x0 .
Operations on Asymptotic Relations. You can perform the ordinary arithmetic operations on
asymptotic relations. Addition, multiplication, and division are valid.
You can always integrate an asymptotic relation. Integration is a smoothing operation. However,
it is necessary to exercise some care.
1 1
Example 24.1.2 Consider f (x) = x + x2 sin(x3 ).
1
f (x) as x .
x
Differentiating this relation yields
1
f 0 (x) as x .
x2
However, this is not true since
1 2
f 0 (x) = 2
3 sin(x3 ) + 2 cos(x3 )
x x
1
6 2 as x .
x
756
The Controlling Factor. The controlling factor is the most rapidly varying factor in an asymp-
totic relation. Consider a function f (x) that is asymptotic to x2 ex as x goes to infinity. The
controlling factor is ex . For a few examples of this,
x log x has the controlling factor x as x .
x2 e1/x has the controlling factor e1/x as x 0.
x1 sin x has the controlling factor sin x as x .
The Controlling Factor. Since the solutions at irregular singular points often have exponential
behavior, we make the substitution y = es(x) into the differential equation for y.
d2 s
2
e = x es
dx
00
s + (s0 )2 es = x es
s00 + (s0 )2 = x
1 Using We may be a bit presumptuous on my part. Even if you dont particularly want to know how the solutions
behave, I urge you to just play along. This is an interesting section, I promise.
757
The Dominant Balance. Now we have a differential equation for s that appears harder to
solve than our equation for y. However, we did not introduce the substitution in order to obtain an
equation that we could solve exactly. We are looking for an equation that we can solve approximately
in the limit as x . If one of the terms in the equation for s is much smaller that the other two
as x , then dropping that term and solving the simpler equation may give us an approximate
solution. If one of the terms in the equation for s is much smaller than the others then we say that
the remaining terms form a dominant balance in the limit as x .
Assume that the s00 term is much smaller that the others, s00 (s0 )2 , x as x . This gives us
(s0 )2 x
s0 x
2
s x3/2 as x .
3
Now lets check our assumption
that the s00 term is small. Assuming that we can differentiate the
asymptotic relation s x, we obtain s00 12 x1/2 as x .
0
Thus we see that the behavior we found for s is consistent with our assumption. The controlling
factors for solutions to the Airy equation are exp( 23 x3/2 ) as x .
The Leading Order Behavior of the Decaying Solution. Lets find the leading order behavior
as x goes to infinity of the solution with the controlling factor exp( 32 x3/2 ). We substitute
2
s(x) = x3/2 + t(x), where t(x) x3/2 as x
3
into the differential equation for s.
s00 + (s0 )2 = x
1
x1/2 + t00 + (x1/2 + t0 )2 = x
2
1
t + (t0 )2 2x1/2 t0 x1/2 = 0
00
2
Assume that we can differentiate t x3/2 to obtain
Since t00 21 x1/2 we drop the t00 term. Also, t0 x1/2 implies that (t0 )2 2x1/2 t0 , so we drop
the (t0 )2 term. This gives us
1
2x1/2 t0 x1/2 0
2
1
t x1
0
4
1
t log x + c
4
1
t log x as x .
4
Checking our assumptions about t,
t0 x1/2 x1 x1/2
t00 x1/2 x2 x1/2
758
we see that the behavior of t is consistent with our assumptions.
So far we have
2 1
y(x) exp x3/2 log x + u(x) as x ,
3 4
where u(x) log x as x . To continue, we substitute t(x) = 14 log x+u(x) into the differential
equation for t(x).
1
t00 + (t0 )2 2x1/2 t0 x1/2 = 0
2
2
1 2 00 1 1 0 1 1 1
x +u + x +u 2x1/2
x + u x1/2 = 0
0
4 4 4 2
1 5
u00 + (u0 )2 + x1 2x1/2 u0 + x2 = 0
2 16
u0 x1 , u00 x2 as x .
5 2
u00 x2 u00 x
16
5 2
u0 x1 (u0 )2 x .
16
Thus we obtain
5 2
2x1/2 u0 + x 0
16
5 5/2
u0 x
32
5
u x3/2 + c
48
u c as x .
You can show that the leading behavior of the exponentially growing solution is
2 3/2
y (const)x1/4 exp x as x .
3
Example 24.2.2 The Modified Bessel Equation. Consider the modified Bessel equation
x2 y 00 + xy 0 (x2 + 2 )y = 0.
759
We would like to know how the solutions of this equation behave as x +. First we need to
classify the point at infinity. The change of variables x = 1t , y(x) = u(t) yields
1 4 00 3 0 1 2 0 1 2
(t u + 2t u ) + (t u ) 2 + u = 0
t2 t t
2
1 1
u00 + u0 4 + 2 u = 0
t t t
Since u(t) has an irregular singular point at t = 0, y(x) has an irregular singular point at infinity.
The Controlling Factor. Since the solutions at irregular singular points often have exponential
behavior, we make the substitution y = es(x) into the differential equation for y.
x2 (s00 + (s0 )2 ) es +xs0 es (x2 + 2 ) es = 0
1 0 2
s00 + (s0 )2 + s (1 + 2 ) = 0
x x
We make the assumption that s00 (s0 )2 as x and we know that 2 /x2 1 as x . Thus
we drop these two terms from the equation to obtain an approximate equation for s.
1 0
(s0 )2 + s 10
x
This is a quadratic equation for s0 , so we can solve it exactly. However, let us try to simplify the
equation even further. Assume that as x goes to infinity one of the three terms is much smaller that
the other two. If this is the case, there will be a balance between the two dominant terms and we
can neglect the third. Lets check the three possibilities.
1.
1 0 1
1 is small. (s0 )2 + s 0 s0 , 0
x x
1 6 x12 , 0 as x so this balance is inconsistent.
2.
1 0
s is small. (s0 )2 1 0 s0 1
x
This balance is consistent as x1 1 as x .
3.
1 0
(s0 )2 is small. s 10 s0 x
x
This balance is not consistent as x2
6 1 as x .
The only dominant balance that makes sense leads to s0 1 as x . Integrating this
relationship,
s x + c
x as x .
Now lets see if our assumption that we made to get the simplified equation for s is valid. Assuming
that we can differentiate s0 1, s00 (s0 )2 becomes
d 2
1 + o(1) 1 + o(1)
dx
0 + o(1/x) 1
Thus we see that the behavior we obtained for s is consistent with our initial assumption.
We have found two controlling factors, ex and ex . This is a good sign as we know that there
must be two linearly independent solutions to the equation.
760
Leading Order Behavior. Now lets find the full leading behavior of the solution with the
controlling factor ex . In order to find a better approximation for s, we substitute s(x) = x + t(x),
where t(x) x as x , into the differential equation for s.
2
00 0 2 1 0
s + (s ) + s 1 + 2 = 0
x x
2
1
t00 + (1 + t0 )2 + (1 + t0 ) 1 + 2 = 0
x x
2
1 1
t00 + (t0 )2 + 2 t0 + 2 =0
x x x
1 2 1
We know that x 2 and x2 x as x . Dropping these terms from the equation yields
1
t00 + (t0 )2 2t0 0.
x
Assuming that we can differentiate the asymptotic relation for t, we obtain t0 1 and t00 x1 as
x . We can drop t00 . Since t0 vanishes as x goes to infinity, (t0 )2 t0 . Thus we are left with
1
2t0 0, as x .
x
Integrating this relationship,
1
t log x + c
2
1
log x as x .
2
Checking our assumptions about the behavior of t,
1
t0 1 1
2x
1 1 1
t00
x 2x2 x
we see that the solution is consistent with our assumptions.
The leading order behavior to the solution with controlling factor ex is
1
y(x) exp x log x + u(x) = x1/2 ex+u(x) as x ,
2
where u(x) log x. We substitute t = 21 log x + u(x) into the differential equation for t in order
to find the asymptotic behavior of u.
2
1 1
t00 + (t0 )2 + 2 t0 + 2 =0
x x x
2
2
1 00 1 0 1 1 0 1
+u + +u + 2 +u + =0
2x2 2x x 2x x x2
1 2
u00 + (u0 )2 2u0 + 2 2 = 0
4x x
Assuming that we can differentiate the asymptotic relation for u, u0 1
x and u00 1
x2 as x .
Thus we see that we can neglect the u00 and (u0 )2 terms.
1 1
2u0 + 2 0
4 x2
761
1 1 1
u0 2
2 4 x2
1 1 1
u 2 +c
2 4 x
u c as x
Since u = c + o(1), we can expand eu as ec +o(1). Thus we can write the leading order behavior as
y x1/2 ex (ec +o(1)).
Thus the full leading order behavior is
y (const)x1/2 ex as x .
You can verify that the solution with the controlling factor ex has the leading order behavior
y (const)x1/2 ex as x .
Two linearly independent solutions to the modified Bessel equation are the modified Bessel
functions, I (x) and K (x). These functions have the asymptotic behavior
1
I (x) ex , K (x) ex as x .
2x 2x
x
In Figure 24.1 K0 (x) is plotted in a solid line and 2x e is plotted in a dashed line. We see that
the leading order behavior of the solution as x goes to infinity gives a good approximation to the
behavior even for fairly small values of x.
1.75
1.5
1.25
0.75
0.5
0.25
0 1 2 3 4 5
762
is used in statistics for its relation to the normal probability distribution. We would like to find an
approximation to erfc(x) for large x. Using integration by parts,
Z
2 1 2
erfc(x) = 2t et dt
x 2t
Z
2 1 t2 2 1 2 t2
= e t e dt
2t x x 2
Z
1 2 1 2
= x1 ex t2 et dt.
x
Therefore,
1 2
erfc(x) x1 ex as x ,
2
and we expect that 1 x1 ex would be a good approximation to erfc(x) for large x. In Figure 24.2
2
log(erfc(x)) is graphed in a solid line and log 1 x1 ex is graphed in a dashed line. We see that
this first approximation to the error function gives very good results even for moderate values of x.
Table 24.1 gives the error in this first approximation for various values of x.
-4
-6
-8
-10
763
x erfc(x) One Term Relative Error Three Term Relative Error
1 0.157 0.3203 0.6497
2 0.00468 0.1044 0.0182
3 2.21 105 0.0507 0.0020
4 1.54 108 0.0296 3.9 104
5 1.54 1012 0.0192 1.1 104
6 2.15 1017 0.0135 3.7 105
7 4.18 1023 0.0100 1.5 105
8 1.12 1029 0.0077 6.9 106
9 4.14 1037 0.0061 3.4 106
10 2.09 1045 0.0049 1.8 106
Table 24.1:
error function.
Z
1 1 x2 1 2
erfc(x) = x e t2 et dt
x
Z
1 1 x2 1 1 3 t2 1 3 4 t2
= x e t e + t e dt
2 x 2
Zx
1 2 1 1 3 4 t2
= ex x1 x3 + t e dt
2 x 2
Z
1 2 1 1 3 2 1 15 6 t2
= ex x1 x3 + t5 et t e dt
2 4 x x 4
Z
1 2 1 3 1 15 6 t2
= ex x1 x3 + x5 t e dt
2 4 x 4
The error in approximating erfc(x) with the first three terms is given in Table 24.1. We see that for
x 2 the three terms give a much better approximation to erfc(x) than just the first term.
At this point you might guess that you could continue this process indefinitely. By repeated
application of integration by parts, you can obtain the series expansion
n
2 2 X (1) (2n)!
erfc(x) = ex .
n=0
n!(2x)2n+1
This is a Taylor expansion about infinity. Lets find the radius of convergence.
n+1
(2(n + 1))! n!(2x)2n+1
an+1 (x)
< 1 lim (1)
lim <1
n an (x) n (n + 1)!(2x)2(n+1)+1 (1)n (2n)!
(2n + 2)(2n + 1)
lim <1
n (n + 1)(2x)2
2(2n + 1)
lim <1
n (2x)2
1
= 0
x
Thus we see that our series diverges for all x. Our conventional mathematical sense would tell us that
this series is useless, however we will see that this series is very useful as an asymptotic expansion
of erfc(x).
Say we are working with a convergent series expansion of some function f (x).
X
f (x) = an (x)
n=0
764
For fixed x = x0 ,
N
X
f (x0 ) an (x0 ) 0 as N .
n=0
P
For an asymptotic series we have a quite different behavior. If g(x) is asymptotic to n=0 bn (x) as
x x0 then for fixed N ,
N
X
g(x) bn (x) bN (x) as x x0 .
0
We say that the error function is asymptotic to the series as x goes to infinity.
n
2 2 X (1) (2n)!
erfc(x) ex as x
n=0
n!(2x)2n+1
In Figure 24.3 the logarithm of the difference between the one term, ten term and twenty term
approximations and the complementary error function are graphed in coarse, medium, and fine
dashed lines, respectively.
1 2 3 4 5 6
-20
-40
-60
*Optimal Asymptotic Series. Of the three approximations, the one term is best for x . 2, the
ten term is best for 2 . x . 4, and the twenty term is best for 4 . x. This leads us to the concept of
an optimal asymptotic approximation. An optimal asymptotic approximation contains the number
of terms in the series that best approximates the true behavior.
In Figure 24.4 we see a plot of the number of terms in the approximation versus the logarithm of
the error for x = 3. Thus we see that the optimal asymptotic approximation is the first nine terms.
After nine terms the error gets larger. It was inevitable that the error would start to grow after
some point as the series diverges for all x.
765
-12
-14
-16
-18
5 10 15 20 25
A good rule of thumb for finding the optimal series is to find the smallest term in the series and
take all of the terms up to but not including the smallest term as the optimal approximation. This
makes sense, because the nth term is an approximation of the error incurred by using the first n 1
terms. In Figure 24.5 there is a plot of n versus the logarithm of the nth term in the asymptotic
expansion of erfc(3). We see that the tenth term is the smallest. Thus, in this case, our rule of
thumb predicts the actual optimal series.
-12
-14
-16
5 10 15 20 25
Figure 24.5: The logarithm of the nth term in the expansion for x = 3.
766
24.4 Asymptotic Series
P
A function f (x) has an asymptotic series expansion about x = x0 , n=0 an (x), if
N
X
f (x) an (x) aN (x) as x x0 for all N.
n=0
An asymptotic series may be convergent or divergent. Most of the asymptotic series you encounter
will be divergent. If the series is convergent, then we have that
N
X
f (x) an (x) 0 as N for fixed x.
n=0
Let n (x) be some set of gauge functions. The example that we are most familiar with is
n (x) = xn . If we say that
X
X
an n (x) bn n (x),
n=0 n=0
s00 (s0 )2 as x +.
767
Leading Order Behavior. Now we attempt to get a better approximation to s. We make the
substitution s = 41 x2 + t(x) into the equation for s where t x2 as x +.
1 1 1 1
+ t00 + x2 xt0 + (t0 )2 + + x2 = 0
2 4 2 4
t00 xt0 + (t0 )2 + = 0
Since t x2 , we assume that t0 x and t00 1 as x +. Note that this in only an assumption
since it is not always valid to differentiate an asymptotic relation. Thus (t0 )2 xt0 and t00 xt0 as
x +; we drop these terms from the equation.
t0
x
t log x + c
t log x as x +
Asymptotic Expansion Since we have factored off the singular behavior of y, we might expect
that what is left over is well behaved enough to be expanded in a Taylor series about infinity. Let
us assume that we can expand the solution for y in the form
2 2X
x x
y(x) x exp (x) = x exp an xn as x +
4 4 n=0
2
where a0 = 1. Differentiating y = x exp x4 (x),
1 2 2
y 0 = x1 x+1 ex /4 (x) + x ex /4 0 (x)
2
768
1 1 1 2 1 2
y 00 = ( 1)x2 x ( + 1)x + x+2 ex /4 (x) + 2 x1 x+1 ex /4 0 (x)
2 2 4 2
2
+ x ex /4
00 (x).
Substituting this into the differential equation for y,
1 1 1 1 1
( 1)x2 ( + ) + x2 (x) + 2 x1 x 0 (x) + 00 (x) + + x2 (x) = 0
2 4 2 2 4
00 (x) + (2x1 x) 0 (x) + ( 1)x2 = 0
x2 00 (x) + (2x x3 ) 0 (x) + ( 1)(x) = 0.
Differentiating the expression for (x),
X
(x) = an xn
n=0
X
X
0 (x) = nan xn1 = (n + 2)an+2 xn3
n=1 n=1
X
00 (x) = n(n + 1)an xn2 .
n=1
769
we see that the radius of convergence is zero. Thus if 6= 0, 1, 2, . . . our asymptotic expansion for y
x2 /4 ( 1) 2 ( 1)( 2)( 3) 4
yx e 1 x + x
21 1! 22 2!
diverges for all x. However this solution is still very useful. If we only use a finite number of terms,
we will get a very good numerical approximation for large x.
In Figure 24.6 the one term, two term, and three term asymptotic approximations are shown in
rough, medium, and fine dashing, respectively. The numerical solution is plotted in a solid line.
1 2 3 4 5 6
-2
770
Chapter 25
Hilbert Spaces
An expert is a man who has made all the mistakes which can be made, in a narrow field.
- Niels Bohr
In this chapter we will introduce Hilbert spaces. We develop the two important examples: l2 , the
space of square summable infinite vectors and L2 , the space of square integrable functions.
x+y =y+x
(x + y) + z = x + (y + z)
(ab)x = a(bx)
(a + b)x = ax + bx
a(x + y) = ax + ay
All the linear spaces that we will work with have additional properties: The zero element 0 is
the additive identity.
x+0=x
Multiplication by the scalar 1 is the multiplicative identity.
1x = x
x + (x) = 0
y = c1 x1 + c2 x2 +
771
then y is a linear combination of the xi . A set of elements {x1 , x2 , . . .} is linearly independent if the
equation
c1 x1 + c2 x2 + = 0
has only the trivial solution c1 = c2 = = 0. Otherwise the set is linearly dependent.
Let {e1 , e2 , } be a linearly independent set of elements. If every element x can be written
as a linear combination of the ei then the set {ei } is a basis for the space. The ei are called base
elements. X
x= ci ei
i
The set {ei } is also called a coordinate system. The scalars ci are the coordinates or components of
x. If the set {ei } is a basis, then we say that the set is complete.
1. Conjugate-commutative.
hx|yi = hx|yi
3. Positive definite.
hx|xi 0
hx|xi = 0 if and only if x = 0
2. Schwarz Inequality.
2
|hx|yi| hx|xihy|yi
This is called the inner product with respect to the weighting function (x). It is also denoted
hu|vi .
772
25.3 Norms
A norm is a real-valued function on a space which satisfies the following properties.
1. Positive.
kxk 0
2. Definite.
kxk = 0 if and only if x = 0
3. Multiplication my a scalar, c C.
kcxk = |c|kxk
4. Triangle inequality.
kx + yk kxk + kyk
Example 25.3.1 Consider a vector space, (finite or infinite dimension), with elements x = (x1 , x2 , x3 , . . .).
Here are some common norms.
The lp norm.
!1/p
X
p
kxkp = |xk |
k=1
l1 norm.
X
kxk1 = |xk |
k=1
l norm.
kxk = max |xk |
k
Example 25.3.2 Consider a space of functions defined on the interval (a . . . b). Here are some
common norms.
The Lp norm.
!1/p
Z b
p
kukp = |u(x)| dx
a
773
Euclidian norm, or L2 norm.
s
Z b
kuk2 = |u(x)|2 dx
a
L1 norm.
Z b
kuk1 = |u(x)| dx
a
L norm.
kuk = lim sup |u(x)|
x(a...b)
Distance. Using the norm, we can define the distance between elements u and v.
d(u, v) ku vk
25.5 Orthogonality
Orthogonality.
hj |k i = 0 if j 6= k
Orthonormality.
hj |k i = jk
Example 25.5.1 Infinite vectors. ej has all zeros except for a 1 in the j th position.
ej = (0, 0, . . . 0, 1, 0, . . .)
1
j = ejx , jZ
2
1 (1) 1 (1) 1
0 = , j = cos(jx), j = sin(jx), j Z+
2
774
the same span as the set of n s with the formulas
1 = 1
h1 |2 i
2 = 2 1
k1 k2
h1 |3 i h2 |3 i
3 = 3 1 2
k1 k2 k2 k2
n1
X hj |n i
n = n j .
j=1
kj k2
You could verify that the m are orthogonal with a proof by induction.
Example 25.6.1 Suppose we would like a polynomial approximation to cos(x) in the domain
[1, 1]. One way to do this is to find the Taylor expansion of the function about x = 0. Up to terms
of order x4 , this is
(x)2 (x)4
cos(x) = 1 + + O(x6 ).
2 24
In the first graph of Figure 25.1 cos(x) and this fourth degree polynomial are plotted. We see
that the approximation is very good near x = 0, but deteriorates as we move away from that point.
This makes sense because the Taylor expansion only makes use of information about the functions
behavior at the point x = 0.
As a second approach, we could find the least squares fit of a fourth degree polynomial to cos(x).
The set of functions {1, x, x2 , x3 , x4 } is independent, but not orthogonal in the interval [1, 1]. Using
Gramm-Schmidt orthogonalization,
0 = 1
h1|xi
1 = x =x
h1|1i
h1|x2 i hx|x2 i 1
2 = x2 x = x2
h1|1i hx|xi 3
3
3 = x3 x
5
6 3
4 = x x2
4
7 35
A widely used set of functions in mathematics is the set of Legendre polynomials {P0 (x), P1 (x), . . .}.
They differ from the n s that we generated only by constant factors. The first few are
P0 (x) = 1
P1 (x) = x
3x2 1
P2 (x) =
2
5x3 3x
P3 (x) =
2
35x4 30x2 + 3
P4 (x) = .
8
Expanding cos(x) in Legendre polynomials
4
X
cos(x) cn Pn (x),
n=0
775
and calculating the generalized Fourier coefficients with the formula
hPn | cos(x)i
cn = ,
hPn |Pn i
yields
15 45(2 2 21)
cos(x) P 2 (x) + P4 (x)
2 4
105
= [(315 30 2 )x4 + (24 2 270)x2 + (27 2 2 )]
8 4
The cosine and this polynomial are plotted in the second graph in Figure 25.1. The least squares fit
method uses information about the function on the entire interval. We see that the least squares fit
does not give as good an approximation close to the point x = 0 as the Taylor expansion. However,
the least squares fit gives a good approximation on the entire interval.
In order to expand a function in a Taylor series, the function must be analytic in some domain.
One advantage of using the method of least squares is that the function being approximated does
not even have to be continuous.
1 1
0.5 0.5
-0.5 -0.5
-1 -1
776
2
X
X X X
= kf k2 + |cj |2
f cj j cj hj |f i cj hj |f i (25.1)
j
j j j
P
To complete the square, we add the constant j hj |f ihj |f i. We see the values of cj which minimize
X 2
kf k2 + |cj hj |f i| .
j
This is known as Bessels Inequality. If the set of {j } is complete then the norm of the error is zero
and we obtain Bessels Equality. X
kf k2 = |cj |2
j
777
If n 6= m then
*r r +
2
Z
2 2
sin(nx) sin(mx) = sin(nx) sin(mx) dx
0
1
Z
= (cos[(n m)x] cos[(n + m)x]) dx
0
= 0.
778
If we make the change of variables x = t in this integral, we obtain
Z 2 r
r
1 2 2
sin(n t) sin(m t) dt = nm .
0 2 t
1
is orthonormal with respect to (t) =
2 t
on the interval [0, 2 ].
Orthogonal Series. Suppose that a function f (x) defined on [a, b] can be written as a uniformly
convergent sum of functions that are orthogonal with respect to (x).
X
f (x) = cn n (x)
n=1
We can solve for the cn by taking the inner product of m (x) and each side of the equation with
respect to (x).
* +
X
hm ||f i = m cn n
n=1
X
hm ||f i = cn hm ||n i
n=1
hm ||f i = cm hm ||m i
hm ||f i
cm =
hm ||m i
The cm are known as Generalized Fourier coefficients. If the functions in the expansion are
orthonormal, the formula simplifies to
cm = hm ||f i.
Example 25.8.4 The function f (x) = x( x) has a uniformly convergent series expansion in the
domain [0, ] of the form
r
X 2
x( x) = cn sin(nx).
n=1
The Fourier coefficients are
*r +
2
cn = sin(nx)x( x)
r Z
2
= x( x) sin(nx) dx
0
r
2 2
= (1 (1)n )
n3
(q
2 4
n3 for odd n
=
0 for even n
779
Thus the expansion is
X 8
x( x) = sin(nx) for x [0, ].
n=1
n3
oddn
In the first graph of Figure 25.2 the first term in the expansion is plotted in a dashed line and
x( x) is plotted in a solid line. The second graph shows the two term approximation.
2 2
1 1
1 2 3 1 2 3
Example 25.8.5 The set {. . . , 1/ 2 ex , 1/ 2, 1/ 2 ex , 1/ 2 e2x , . . .} is orthonormal on
the interval [, ]. f (x) = sign(x) has the expansion
* +
X 1 n 1
sign(x) e sign() enx
n=
2 2
Z
1 X
= en sign() d enx
2 n=
Z 0 Z
1 X
= en d + en d enx
2 n= 0
1 X 1 (1)n nx
= e .
n= n
1 X 1 (1)n
= (cos(nx) + sin(nx))
n= n
2 X 1 (1)n
= sin(nx)
n=1 n
4 X 1
sign(x) sin(nx).
n=1 n
oddn
780
25.9 Least Squares Fit to a Function and Completeness
Let {1 , 2 , 3 , . . .} be a set of real, square integrable functions that are orthonormal with respect
to the weighting function (x) on the interval [a, b]. That is,
hn ||m i = nm .
Let f (x) be some square integrable function defined on the same interval. We would like to approx-
imate the function f (x) with a finite orthonormal series.
N
X
f (x) n n (x)
n=1
f (x) may or may not have a uniformly convergent expansion in the orthonormal functions.
We would like to choose the n so that we get the best possible approximation to f (x). The
most common measure of how well a series approximates a function is the least squares measure.
The error is defined as the integral of the weighting function times the square of the deviation.
N
!2
Z b X
E= (x) f (x) n n (x) dx
a n=1
The best fit is found by choosing the n that minimize E. Let cn be the Fourier coefficients of
f (x).
cn = hn ||f i
N
!2
Z b X
E() = (x) f (x) n n (x) dx
a n=1
N
X N
X
= f n n
f
n n
n=1 n=1
N
X N
X
N
X
= hf ||f i 2 n n f +
n n
n n
n=1 n=1 n=1
N
X N X
X N
= hf ||f i 2 n hn ||f i + n m hn ||m i
n=1 n=1 m=1
N
X N
X
= hf ||f i 2 n cn + n2
n=1 n=1
N
X N
X
= hf ||f i + (n cn )2 c2n
n=1 n=1
Each term involving n in non-negative and is minimized for n = cn . The Fourier coefficients give
the least squares approximation to a function. The least squares fit to f (x) is thus
N
X
f (x) hn ||f i n (x).
n=1
781
Result 25.9.1 If {1 , 2 , 3 , . . .} is a set of real, square integrable functions
that are orthogonal with respect to (x) then the least squares fit of the first
N orthogonal functions to the square integrable function f (x) is
N
X hn ||f i
f (x) n (x).
n=1
h n ||n i
Since the error in the approximation E is a nonnegative number we can obtain on inequality on
the sum of the squared coefficients.
N
X
E = hf ||f i c2n
n=1
N
X
c2n hf ||f i
n=1
This equation is known as PBessels Inequality. Since hf ||f i is just a nonnegative number,
independent of N , the sum n=1 c2n is convergent and cn 0 as n
N
!2
Z b X
lim (x) f (x) cn n (x) dx = 0,
N a n=1
then the sum converges in the mean to f (x) relative to the weighting function (x). This implies
that
N
!
X
lim hf ||f i c2n =0
N
n=1
X
c2n = hf ||f i.
n=1
782
25.10 Closure Relation
Let {1 , 2 , . . .} be an orthonormal, complete set on the domain [a, b]. For any square integrable
function f (x) we can write
X
f (x) cn n (x).
n=1
Here the cn are the generalized Fourier coefficients and the sum converges in the mean to f (x).
Substituting the expression for the Fourier coefficients into the sum yields
X
f (x) hn |f in (x)
n=1
!
X Z b
= n ()f () d n (x).
n=1 a
Since the sum is not necessarily uniformly convergent, we are not justified in exchanging the order
of summation and integration. . . but what the heck, lets do it anyway.
!
Z b X
= n ()f ()n (x) d
a n=1
!
Z b X
= n ()n (x) f () d
a n=1
The sum behaves like a Dirac delta function. Recall that (x ) satisfies the equation
Z b
f (x) = (x )f () d for x (a, b).
a
Thus we could say that the sum is a representation of (x ). Note that a series representation
of the delta function could not be convergent, hence the necessity of throwing caution to the wind
when we interchanged the summation and integration in deriving the series. The closure relation
for an orthonormal, complete set states
X
n (x)n () (x ).
n=1
Alternatively, you can derive the closure relation by computing the generalized Fourier coefficients
of the delta function.
X
(x ) cn n (x)
n=1
cn = hn |(x )i
Z b
= n (x)(x ) dx
a
= n ()
X
(x ) n (x)n ()
n=1
783
Result 25.10.1 If {1 , 2 , . . .} is an orthogonal, complete set on the domain
[a, b], then
X n (x)n ()
(x ).
n=1
kn k2
If the set is orthonormal, then
X
n (x)n () (x ).
n=1
Example 25.10.1 The integral of the Dirac delta function is the Heaviside function. On the interval
x (, ) (
Z x
1 for 0 < x <
(t) dt = H(x) =
0 for < x < 0.
Consider the orthonormal, complete set {. . . , 12 ex , 12 , 12 ex , . . .} on the domain [, ].
The delta function has the series
X 1 1 1 X nt
(t) ent en0 = e .
n=
2 2 2 n=
We will find the series expansion of the Heaviside function first by expanding directly and then
by integrating the expansion for the delta function.
Finding the series expansion of H(x) directly. The generalized Fourier coefficients of H(x)
are
Z
1
c0 = H(x) dx
2
Z
1
= dx
2 0
r
=
2
Z
1
cn = enx H(x) dx
2
Z
1
= enx dx
2 0
1 (1)n
= .
n 2
Thus the Heaviside function has the expansion
1 (1)n 1 nx
r
1 X
H(x) + e
2 2 n= n 2 2
n6=0
1 1 X 1 (1)n
= + sin(nx)
2 n=1
n
1 2 X 1
H(x) + sin(nx).
2 n=1 n
oddn
784
Integrating the series for (t).
Z x Z x
1 X
(t) dt ent dt
2 n=
x
1 X 1 nt
= (x + ) + e
2 in
n=
n6=0
1 1 nx X
e (1)n
= (x + ) +
2 n=
n
n6=0
x 1 1 X 1 nx
e enx (1)n + (1)n
= + +
2 2 2 n=1 n
x 1 1X1
= + + sin(nx)
2 2 n=1 n
x
Expanding 2 in the orthonormal set,
x X 1
cn enx .
2 n= 2
Z
1 x
c0 = dx = 0
2 2
(1)n
Z
1 x
cn = enx dx =
2 2 n 2
x X (1)n 1 nx 1X
e = (1)n sin(nx)
2 n= n 2 2 n=1
n6=0
x
Substituting the series for 2 into the expression for the integral of the delta function,
x
1 X 1 (1)n
Z
1
(t) dt + sin(nx)
2 n=1 n
Z x
1 2 X 1
(t) dt + sin(nx).
2 n=1 n
oddn
Thus we see that the series expansions of the Heaviside function and the integral of the delta
function are the same.
785
25.12 Exercises
Exercise 25.1
1. Suppose {k (x)} k=0 is an orthogonal system on [a, b]. Show that any finite set of the j (x)
is a linearly independent set on [a, b]. That is, if {j1 (x), j2 (x), . . . , jn (x)} is the set and all
the j are distinct, then
is true iff: a1 = a2 = = an = 0.
2. Show that the complex functions k (x) ekx/L , k = 0, 1, 2, . . . are orthogonal in the sense
RL
that L k (x)n (x) dx = 0, for n 6= k. Here n (x) is the complex conjugate of n (x).
786
25.13 Hints
Hint 25.1
787
25.14 Solutions
Solution 25.1
1.
Rb
We take the inner product with j for any = 1, . . . , n. (h, i a
(x) (x) dx.)
* n +
X
ak jk , j = 0
k=1
hjk j i = 0 for j 6= .
a hj j i = 0
hj j i =
6 0.
a = 0
788
Chapter 26
hv|Lui hL v|ui = 0
The adjoint of a matrix. For vectors, one can represent linear operators L with matrix multi-
plication.
Lx Ax
Let B = A be the adjoint of the matrix A. We determine the adjoint of A from Greens Identity.
hx|Ayi hBx|yi = 0
x Ay = Bx y
T
xT Ay = Bx y
T
xT Ay = xT B y
T T
yT A x = yT BxB = A
T
Thus we see that the adjoint of a matrix is the conjugate transpose of the matrix, A = A . The
conjugate transpose is also called the Hermitian transpose and is denoted AH .
The adjoint of a differential operator. Consider a second order linear differential operator
acting on C 2 functions defined on (a . . . b) which satisfy certain boundary conditions.
hy|Hxi = hHy|xi
y Hx = Hy x
789
The eigenvalues of a Hermitian matrix are real. Let x be an eigenvector with eigenvalue .
hx|Hxi = hHx|xi
hx|xi hx|xi = 0
( )hx|xi = 0
=
The eigenvectors corresponding to distinct eigenvalues are distinct. Let x and y be eigenvectors
with distinct eigenvalues and .
hy|Hxi = hHy|xi
hy|xi hy|xi = 0
( )hy|xi = 0
( )hy|xi = 0
hy|xi = 0
Furthermore, all Hermitian matrices are similar to a diagonal matrix and have a complete set of
orthogonal eigenvectors.
0 = 0, 0 = 1
n = n2 , (1)
n = cos(nx), (2)
n = sin(nx), n Z+
790
26.3 Exercises
791
26.4 Hints
792
26.5 Solutions
793
794
Chapter 27
-Calvin
dn y dn1 y
L[y] = pn + p n1 + + p0 y,
dxn dxn1
is defined
dn n1 d
n1
L [y] = (1)n (p n y) + (1) (pn1 y) + + p0 y
dxn dxn1
If each of the pk is k times continuously differentiable and u and v are n times continuously
differentiable on some interval, then on that interval Lagranges identity states
d
vL[u] uL [v] = B[u, v]
dx
n
X X
B[u, v] = (1)j u(k) (pm v)(j) .
m=1 j+k=m1
j0,k0
795
27.2 Formally Self-Adjoint Operators
Example 27.2.1 The linear operator
L[y] = x2 y 00 + 2xy 0 + 3y
d2 2 d
L [y] = 2
(x y) (2xy) + 3y
dx dx
= x2 y 00 + 4xy 0 + 2y 2xy 0 2y + 3y
= x2 y 00 + 2xy 0 + 3y.
In Example 27.2.1, the adjoint operator is the same as the operator. If L = L , the operator is
said to be formally self-adjoint.
Most of the differential equations that we study in this book are second order, formally self-
adjoint, with real-valued coefficient functions. Thus we wish to find the general form of this operator.
Consider the operator
L[y] = p2 y 00 + p1 y 0 + p0 y,
where the pj s are real-valued functions. The adjoint operator then is
d2 d
L [y] = (p2 y) (p1 y) + p0 y
dx2 dx
= p2 y 00 + 2p02 y 0 + p002 y p1 y 0 p01 y + p0 y
= p2 y 00 + (2p02 p1 )y 0 + (p002 p01 + p0 )y.
Thus second order, formally self-adjoint operators with real-valued coefficient functions have the
form
L[y] = p2 y 00 + p02 y 0 + p0 y,
which is equivalent to the form
d
L[y] = (py 0 ) + qy.
dx
L[y] = y 00 + p1 y 0 + p0 y = f (x),
where each pj is j times continuously differentiable and real-valued, can be written as a formally
self adjoint equation. We just multiply by the factor,
Z x
eP (x) = exp( p1 () d)
to obtain
796
Example 27.2.2 Consider the equation
1 0
y 00 + y + y = 0.
x
Multiplying by the factor Z x
1
exp d = elog x = x
will make the equation formally self-adjoint.
xy 00 + y 0 + xy = 0
d
(xy 0 ) + xy = 0
dx
L[y] = y 00 + p1 y 0 + p0 y = f (x),
where each pj is j times continuously differentiable and real-valued, can be
written as aR formally self adjoint equation by multiplying the equation by the
x
factor exp( p1 () d).
hv|L[u]i hL[v]|ui = 0
Example 27.3.1 Consider the formally self-adjoint equation y 00 = 0, subject to the boundary
conditions y(0) = y() = 0. Greens formula is
797
27.4 Self-Adjoint Eigenvalue Problems
Associated with the self-adjoint problem
This is called a self-adjoint eigenvalue problem. The values of for which there exist nontrivial
solutions to this problem are called eigenvalues. The functions that satisfy the equation when is
an eigenvalue are called eigenfunctions.
y = c1 + c2 x.
Only the trivial solution satisfies the boundary conditions. = 0 is not an eigenvalue. Now consider
6= 0. The general solution is
y = c1 cos x + c2 sin x .
= n2 , n N.
n = n2 , n = sin(nx), for n = 1, 2, 3, . . .
Self-adjoint eigenvalue problems have a number a interesting properties. We will devote the rest
of this section to developing some of these properties.
Real Eigenvalues. The eigenvalues of a self-adjoint problem are real. Let be an eigenvalue with
the eigenfunction . Greens formula states
h|L[]i hL[]|i = 0
h|i h|i = 0
( )h|i = 0
798
Orthogonal Eigenfunctions. The eigenfunctions corresponding to distinct eigenvalues are or-
thogonal. Let n and m be distinct eigenvalues with the eigenfunctions n and m . Using Greens
formula,
(m n )hn |m i = 0.
Since the two eigenvalues are distinct, hn |m i = 0 and thus n and m are orthogonal.
The key to showing that the eigenvalues are enumerable, is that the j are entire functions of .
That is, they are analytic functions of for all finite . We will not prove this.
The boundary conditions are
n h
X i
Bj [y] = jk y (k1) (a) + jk y (k1) (b) = 0.
k=1
Pn
The eigenvalue problem has a solution for a given value of if y = k=1 ck k satisfies the boundary
conditions. That is,
" n # n
X X
Bj ck k = ck Bj [k ] = 0 for j = 1, . . . , n.
k=1 k=1
Infinite Number of Eigenvalues. Though we will not show it, self-adjoint problems have an
infinite number of eigenvalues. Thus the eigenfunctions form an infinite orthogonal set.
Eigenvalues of Second Order Problems. Consider the second order, self-adjoint eigenvalue
problem
L[y] = (py 0 )0 + qy = y, on a x b, subject to Bj [y] = 0.
799
-1 1
Thus we can express each eigenvalue in terms of its eigenfunction. You might think that this
formula is just a shade less than worthless. When solving an eigenvalue problem you have to find
the eigenvalues before you determine the eigenfunctions. Thus this formula could not be used to
compute the eigenvalues. However, we can often use the formula to obtain information about the
eigenvalues before we solve a problem.
800
27.5 Inhomogeneous Equations
Let the problem,
L[y] = 0, Bk [y] = 0,
be self-adjoint. If the inhomogeneous problem,
L[y] = f, Bk [y] = 0,
has a solution, then we we can write this solution in terms of the eigenfunction of the associated
eigenvalue problem,
L[y] = y, Bk [y] = 0.
We denote the eigenvalues as n and the eigenfunctions as n for n Z+ . For the moment we
assume that = 0 is not an eigenvalue and that the eigenfunctions are real-valued. We expand the
function f (x) in a series of the eigenfunctions.
X hn |f i
f (x) = fn n (x), fn =
kn k
We expand the inhomogeneous solution in a series of eigenfunctions and substitute it into the
differential equation.
L[y] = f
hX i X
L yn n (x) = fn n (x)
X X
n yn n (x) = fn n (x)
fn
yn =
n
The inhomogeneous solution is
X hn |f i
y(x) = n (x). (27.1)
n kn k
As a special case we consider the Green function problem,
L[G] = (x ), Bk [G] = 0,
801
Example 27.5.1 Consider the Green function problem
00 + = , (0) = (1) = 0
00 + (1 ) = 0, (0) = (1) = 0
n = 1 (n)2 , n = sin(nx), n Z+
802
27.6 Exercises
Exercise 27.1
Show that the operator adjoint to
is given by
803
27.7 Hints
Hint 27.1
804
27.8 Solutions
Solution 27.1
Consider u(x), v(x) C n . (C n is the set of n times continuously differentiable functions). First we
prove the preliminary result
n1
d X
uv (n) (1)n u(n) v = (1)k u(k) v (nk1) (27.2)
dx
k=0
n nm1
X d X
= (1)k (upm (z))(k) y (nmk1)
m=0
dz
k=0
n nm1
d X X
uLy yM u = (1)k (upm (z))(k) y (nmk1)
dz m=0
k=0
805
806
Chapter 28
Fourier Series
-Failure
-Tom Shear (Assemblage 23)
L[y] y 00 = y
Since Greens Identity reduces to hv|L[u]ihL[v]|ui = 0, the problem is self adjoint. This means that
the eigenvalues are real and that eigenfunctions corresponding to distinct eigenvalues are orthogonal.
807
We compute the Rayleigh quotient for an eigenvalue with eigenfunction .
[0 ] + h0 |0 i
=
h|i
()0 () + ()0 () + h0 |0 i
=
h|i
()0 () + ()0 () + h0 |0 i
=
h|i
h0 |0 i
=
h|i
Computing the eigenvalues and eigenfunctions. Now we find the eigenvalues and eigenfunc-
tions. First we consider the case = 0. The general solution of the differential equation is
y = c1 + c2 x.
y() = y()
c1 cos + c2 sin = c1 cos + c2 sin
c1 cos c2 sin = c1 cos + c2 sin
c2 sin = 0
y 0 () = y 0 ()
c1 sin + c2 cos = c1 sin + c2 cos
c1 sin + c2 cos = c1 sin + c2 cos
c1 sin = 0
To satisify the two boundary conditions either c1 = c2 = 0 or sin = 0. The former yields the
trivial solution. The latter gives us the eigenvalues n = n2 , n Z+ . The corresponding solution is
yn = c1 cos(nx) + c2 sin(nx).
1
0 = 0, 0 =
2
n = n2 , 2n1 = cos(nx), 2n = sin(nx), for n = 1, 2, 3, . . .
808
Orthogonality of Eigenfunctions. We know that the eigenfunctions of distinct eigenvalues are
orthogonal. In addition, the two eigenfunctions of each positive eigenvalue are orthogonal.
Z
1 2
cos(nx) sin(nx) dx = sin (nx) =0
2n
Thus the eigenfunctions { 12 , cos(x), sin(x), cos(2x), sin(2x)} are an orthogonal set.
Here the means has the Fourier series. We have not said if the series converges yet. For now
lets assume that the series converges uniformly so we can replace the with an =.
We integrate Equation 28.1 from to to determine a0 .
Z Z Z X
1
f (x) dx = a0 dx + an cos(nx) + bn sin(nx) dx
2 n=1
Z
X Z Z
f (x) dx = a0 + an cos(nx) dx + bn sin(nx) dx
n=1
Z
f (x) dx = a0
1
Z
a0 = f (x) dx
Multiplying by cos(mx) and integrating will enable us to solve for am .
Z Z
1
f (x) cos(mx) dx = a0 cos(mx) dx
2
X Z Z
+ an cos(nx) cos(mx) dx + bn sin(nx) cos(mx) dx
n=1
All but one of the terms on the right side vanishes due to the orthogonality of the eigenfunctions.
Z Z
f (x) cos(mx) dx = am cos(mx) cos(mx) dx
Z Z
1
f (x) cos(mx) dx = am + cos(2mx) dx
2
Z
f (x) cos(mx) dx = am
1
Z
am = f (x) cos(mx) dx.
809
Note that this formula is valid for m = 0, 1, 2, . . ..
Similarly, we can multiply by sin(mx) and integrate to solve for bm . The result is
1
Z
bm = f (x) sin(mx) dx.
f (0 ) = 0, f (0+ ) = 1.
R
Result 28.2.1 Let f (x) be a 2-periodic function for which |f (x)| dx ex-
ists. Define the Fourier coefficients
1 1
Z Z
an = f (x) cos(nx) dx, bn = f (x) sin(nx) dx.
If x is an interior point of an interval on which f (x) has limited total fluctua-
tion, then the Fourier series of f (x)
a0 X
+ an cos(nx) + bn sin(nx) ,
2 n=1
Limited Fluctuation. A function that has limited total fluctuation can be written f (x) =
+ (x) (x), where + and are bounded, nondecreasing functions. An example of a function
that does not have limited total fluctuation is sin(1/x), whose fluctuation is unlimited at the point
x = 0.
Functions with Jump Discontinuities. Let f (x) be a discontinuous function that has a conver-
gent Fourier series. Note that the series does not necessarily converge to f (x). Instead it converges
to f(x) = 12 (f (x ) + f (x+ )).
810
10
-5 5 10
-2
Here the coefficients are computed with the familiar formulas. Is this the best approximation to the
function? That is, is it possible to choose coefficients n and n such that
N
0 X
f (x) + (n cos(nx) + n sin(nx))
2 n=1
811
3
-3 -2 -1 1 2 3
-1
-2
-3
Least squared error fit. The most common criterion for finding the best fit to a function is the
least squares fit. The best approximation to a function is defined as the one that minimizes the
integral of the square of the deviation. Thus if f (x) is to be approximated on the interval a x b
by a series
N
X
f (x) cn n (x), (28.2)
n=1
the best approximation is found by choosing values of cn that minimize the error E.
N
2
Z b X
E f (x) cn n (x) dx
a
n=1
Generalized Fourier coefficients. We consider the case that the n are orthogonal. For sim-
plicity, we also assume that the n are real-valued. Then most of the terms will vanish when we
interchange the order of integration and summation.
N N N
!
Z b X X X
2
E= f 2f cn n + cn n cm m dx
a n=1 n=1 m=1
Z b N
X Z b N X
X N Z b
E= f 2 dx 2 cn f n dx + cn cm n m dx
a n=1 a n=1 m=1 a
Z b N
X Z b N
X Z b
2
E= f dx 2 cn f n dx + c2n 2n dx
a n=1 a n=1 a
N
!
Z b X Z b Z b
2
E= f dx + c2n 2n dx 2cn f n dx
a n=1 a a
812
We complete the square for each term.
Rb !2 Rb !2
b N b
f n dx f n dx
Z X Z
E= f 2 dx + 2n dx cn Ra b a
Rb
a n=1 a
a
2n dx a
2n dx
This is known as Bessels Inequality. If the series in Equation 28.2 converges in the mean to f (x),
lim N E = 0, then we have equality as N .
Z b
X Z b
f 2 dx = c2n 2n dx.
a n=1 a
converges uniformly then the coefficients in the series are the Fourier coefficients,
Z Z
1 1
an = f (x) cos(nx) dx, bn = f (x) sin(nx) dx.
Now we show that by choosing the coefficients to minimize the squared error, we obtain the same
result. We apply Equation 28.3 to the Fourier eigenfunctions.
R
f 12 dx
Z
1
a0 = R
1
= f (x) dx
4
dx
R
f cos(nx) dx
Z
1
an = R
= f (x) cos(nx) dx
cos2 (nx) dx
R
f sin(nx) dx
Z
1
bn = R
= f (x) sin(nx) dx
sin2 (nx) dx
813
28.4 Fourier Series for Functions Defined on Arbitrary Ranges
If f (x) is defined on c d x < c + d and f (x + 2d) = f (x), then f (x) has a Fourier series of the
form
a0 X n(x + c) n(x + c)
f (x) + an cos + bn sin .
2 n=1
d d
Since
Z c+d Z c+d
2 n(x + c) 2 n(x + c)
cos dx = sin dx = d,
cd d cd d
814
1
0.5
-0.5
-1
0.5
-0.5
-1
Figure 28.3: A Function Defined on the range 1 x < 2 and the Function to which the Fourier
Series Converges.
Z 2
1 2n(x + 1/2)
bn = f (x) sin dx
3/2 1 3
2 5/2
Z
2nx
= f (x 1/2) sin dx
3 1/2 3
2 1/2 2 3/2
Z Z
2nx 2nx
= (x + 1/2) sin dx + (x 1/2) sin dx
3 1/2 3 3 1/2 3
2 5/2
Z
2nx
+ (4 2x) sin dx
3 3/2 3
2 2 n
h
n
n n i
= sin 2(1) n + 4n cos 3 sin
(n)2 3 3 3
815
28.5 Fourier Cosine Series
If f (x) is an even function, (f (x) = f (x)), then there will not be any sine terms in the Fourier
series for f (x). The Fourier sine coefficient is
1
Z
bn = f (x) sin(nx) dx.
Since f (x) is an even function and sin(nx) is odd, f (x) sin(nx) is odd. bn is the integral of an odd
function from to and is thus zero. We can rewrite the cosine coefficients,
1
Z
an = f (x) cos(nx) dx
2
Z
= f (x) cos(nx) dx.
0
In Figure 28.4 the even periodic extension of f (x) is plotted in a dashed line and the sum of the
first five nonzero terms in the Fourier cosine series are plotted in a solid line.
1.5
1.25
0.75
0.5
0.25
-3 -2 -1 1 2 3
816
28.6 Fourier Sine Series
If f (x) is an odd function, (f (x) = f (x)), then there will not be any cosine terms in the Fourier
series. Since f (x) cos(nx) is an odd function, the cosine coefficients will be zero. Since f (x) sin(nx)
is an even function,we can rewrite the sine coefficients
Z
2
bn = f (x) sin(nx) dx.
0
(
x for 0 x < /2
f (x) =
x for /2 x < .
2 /2 2
Z Z
bn = x sin(nx) dx + ( x) sin(nx) dx
0 /2
16 n n
= 2
cos sin3
n 4 4
In Figure 28.5 the odd periodic extension of f (x) is plotted in a dashed line and the sum of the first
five nonzero terms in the Fourier sine series are plotted in a solid line.
1.5
0.5
-3 -2 -1 1 2 3
-0.5
-1
-1.5
817
28.7 Complex Fourier Series and Parsevals Theorem
By writing sin(nx) and cos(nx) in terms of enx and enx we can obtain the complex form for a
Fourier series.
a0 X a0 X 1 nx nx 1 nx nx
+ an cos(nx) + bn sin(nx) = + an (e + e ) + bn (e e )
2 n=1
2 n=1
2 2
a0 X 1 nx 1 nx
= + (an bn ) e + (an + bn ) e
2 n=1
2 2
X
= cn enx
n=
where
1
2 (an bn )
for n 1
a0
cn = 2 for n = 0
1
2 (an + bn ) for n 1.
The functions {. . . , ex , 1, ex , e2x , . . .}, satisfy the relation
Z Z
enx emx dx = e(nm)x dx
(
2 for n = m
=
0 for n 6= m.
Starting with the complex form of the Fourier series of a function f (x),
X
f (x) cn enx ,
818
This yields a result known as Parsevals theorem which holds even when the Fourier series of f (x)
is not uniformly convergent.
then
Z
2 2 X
f (x) dx = a0 + (a2n + b2n ).
2 n=1
Let f (x) be a function that meets the conditions for having a Fourier series and in addition is
bounded. Let (, p1 ), (p1 , p2 ), (p2 , p3 ), . . . , (pm , ) be a partition into a finite number of intervals
of the domain, (, ) such that on each interval f (x) and all its derivatives are continuous. Let
f (p ) denote the left limit of f (p) and f (p+ ) denote the right limit.
Example 28.8.1 The function shown in Figure 28.6 would be partitioned into the intervals
a0 X
f (x) + an cos(nx) + bn sin(nx).
2 n=1
1
Z
an = f (x) cos(nx) dx
Z p1 Z p2 Z
1
= f (x) cos(nx) dx + f (x) cos(nx) dx + + f (x) cos(nx) dx
p1 pm
819
1
0.5
-2 -1 1 2
-0.5
-1
h ip1 ip2 i
1 h h
= f (x) sin(nx) + f (x) sin(nx) + + f (x) sin(nx)
n p1 pm
Z p1 Z p2 Z
1
f 0 (x) sin(nx) dx + f 0 (x) sin(nx) dx + f 0 (x) sin(nx) dx
n p1 pm
1 n +
+
o
= f (p1 ) f (p1 ) sin(np1 ) + + f (pm ) f (pm ) sin(npm )
n
11 0
Z
f (x) sin(nx) dx
n
1 1
= An b0n
n n
where
m
1X
sin(npj ) f (p +
An = j ) f (pj )
j=1
Z
1
b0n = f 0 (x) sin(nx) dx = O(1).
820
Now we repeat this analysis for the sine coefficients.
1
Z
bn = f (x) sin(nx) dx
Z p1 Z p2 Z
1
= f (x) sin(nx) dx + f (x) sin(nx) dx + + f (x) sin(nx) dx
p1 pm
1 n p1 p2 o
= f (x) cos(nx) + f (x) cos(nx) p1 + + f (x) cos(nx) pm
n Z p1 Z p2 Z
1
+ f 0 (x) cos(nx) dx + f 0 (x) cos(nx) dx + f 0 (x) cos(nx) dx
n p1 pm
1 1 0
= Bn + an
n n
where
m
(1)n 1X
cos(npj ) f (p +
Bn = f () f () j ) f (pj )
j=1
1 0 1
a0n = An b00n
n n
Pm
where A0n = 1
j=1 sin(npj )[f 0 (p 0 + 00 00
j ) f (pj )] and the bn are the sine coefficients of f (x), and
1 1
b0n = Bn0 + a00n
n n
n Pm
where Bn0 = (1) 0 0 1
[f () f ()]
0 0 + 00
j=1 cos(npj )[f (pj ) f (pj )] and the an are the cosine
00
coefficients of f (x).
Now we can rewrite an and bn as
1 1 1
an = An + 2 Bn0 2 a00n
n n n
1 1 1
bn = Bn + 2 A0n 2 b00n .
n n n
(j) (j)
Continuing this process we could define An and Bn so that
1 1 1 1
an = An + 2 Bn0 3 A00n 4 Bn000 +
n n n n
1 1 0 1 00 1
bn = Bn + 2 An + 3 Bn 4 A000 .
n n n n n
For any bounded function, the Fourier coefficients satisfy an , bn = O(1/n) as n . If An and
Bn are zero then the Fourier coefficients will be O(1/n2 ). A sufficient condition for this is that the
periodic extension of f (x) is continuous. We see that if the periodic extension of f 0 (x) is continuous
then A0n and Bn0 will be zero and the Fourier coefficients will be O(1/n3 ).
Result 28.8.1 Let f (x) be a bounded function for which there is a partition
of the range (, ) into a finite number of intervals such that f (x) and all
its derivatives are continuous on each of the intervals. If f (x) is not contin-
uous then the Fourier coefficients are O(1/n). If f (x), f 0 (x), . . . , f (k2) (x) are
continuous then the Fourier coefficients are O(1/nk ).
821
If the Fourier coefficients will be O(1/n2 ). The
Pperiodic extension of f (x) is continuous, then theP
series n=1 |an cos(nx)bn sin(nx)| can be bounded by M n=1 1/n2 where M = max(|an | + |bn |).
n
Thus the Fourier series converges to f (x) uniformly.
Result 28.8.2 If the periodic extension of f (x) is continuous then the Fourier
series of f (x) will converge uniformly for all x.
If the periodic extension of f (x) is not continuous, we have the following result.
Result 28.8.3 If f (x) is continuous in the interval c < x < d, then the Fourier
series is uniformly convergent in the interval c + x d for any > 0.
(
1 for 1 < x < 0
f1 (x) =
1, for 0 < x < 1.
This function has jump discontinuities, so we know that the Fourier coefficients are O(1/n).
Since this function is odd, there will only be sine terms in its Fourier expansion. Furthermore,
since the function is symmetric about x = 1/2, there will be only odd sine terms. Computing these
terms,
Z 1
bn = 2 sin(nx) dx
0
1
1
=2 cos(nx)
n 0
(1)n
1
=2
n n
(
4
for odd n
= n
0 for even n.
The function and the sum of the first three terms in the expansion are plotted, in dashed and
solid lines respectively, in Figure 28.7. Although the three term sum follows the general shape of
the function, it is clearly not a good approximation.
x 1
for 1 < x < 1/2
f2 (x) = x for 1/2 < x < 1/2
x + 1 for 1/2 < x < 1.
822
1
0.5
-1 -0.5 0.5 1
-0.5
-1
0.4
0.2
-1 -0.5 0.5 1
-0.2
-0.4
Figure 28.7: Three Term Approximation for a Function with Jump Discontinuities and a Continuous
Function.
Since this function is continuous, the Fourier coefficients will be O(1/n2 ). Also we see that there
will only be odd sine terms in the expansion.
Z 1/2 Z 1/2 Z 1
bn = (x 1) sin(nx) dx + x sin(nx) dx + (x + 1) sin(nx) dx
1 1/2 1/2
Z 1/2 Z 1
=2 x sin(nx) dx + 2 (1 x) sin(nx) dx
0 1/2
4
= sin(n/2)
(n)2
(
4 (n1)/2
(n)2 (1) for odd n
=
0 for even n.
823
0.2
0.1
-1 -0.5 0.5 1
-0.1
-0.2
1 0.5
1 1
0.25 0.1
0
0.1
Figure 28.8: Three Term Approximation for a Function with Continuous First Derivative and Com-
parison of the Rates of Convergence.
The function and the sum of the first three terms in the expansion are plotted, in dashed and
solid lines respectively, in Figure 28.7. We see that the convergence is much better than for the
function with jump discontinuities.
Since the periodic extension of this function is continuous and has a continuous first derivative, the
Fourier coefficients will be O(1/n3 ). We see that the Fourier expansion will contain only odd sine
824
terms.
Z 0 Z 1
bn = x(1 + x) sin(nx) dx + x(1 x) sin(nx) dx
1 0
Z 1
=2 x(1 x) sin(nx) dx
0
4(1 (1)n )
=
(n)3
(
4
(n)3 for odd n
=
0 for even n.
The function and the sum of the first three terms in the expansion are plotted in Figure 28.8.
We see that the first three terms give a very good approximation to the function. The plots of
the function, (in a dashed line), and the three term approximation, (in a solid line), are almost
indistinguishable.
In Figure 28.8 the convergence of the of the first three terms to f1 (x), f2 (x), and f3 (x) are
compared. In the last graph we see a closeup of f3 (x) and its Fourier expansion to show the error.
is
4X1
f (x) sin(nx).
n=1 n
For any fixed x, the series converges to 12 (f (x ) + f (x+ )). For any > 0, the convergence is uniform
in the intervals 1 + x and x 1 . How will the nonuniform convergence at
integral values of x affect the Fourier series? Finite Fourier series are plotted in Figure 28.9 for 5,
10, 50 and 100 terms. (The plot for 100 terms is closeup of the behavior near x = 0.) Note that
at each discontinuous point there is a series of overshoots and undershoots that are pushed closer
to the discontinuity by increasing the number of terms, but do not seem to decrease in height. In
fact, as the number of terms goes to infinity, the height of the overshoots and undershoots does not
vanish. This is known as Gibbs phenomenon.
825
1 1
1 1
1 1.2
1 0.1
0.8
Figure 28.9:
Since this is an odd function, there are no cosine terms in the Fourier series.
2
Z
bn = sin(nx) dx
0
1
= 2 cos(nx)
n 0
2 n
= (1 (1) )
n
(
4
for odd n
= n
0 for even n.
X 4
f (x) sin nx
n=1
n
oddn
Integrating this relation,
Z x Z x
X 4
f (t) dt sin(nt) dt
n=1
n
oddn
4 x
X Z
F (x) sin(nt) dt
n=1
n
oddn
x
X 4 1
= cos(nt)
n=1
n n
oddn
X 4
= ( cos(nx) + (1)n )
n=1
n2
oddn
X 1 X cos(nx)
=4 2
4
n=1
n n=1
n2
oddn oddn
826
Since this series converges uniformly,
(
X 1 X cos(nx) x for x < 0
4 2
4 = F (x) =
n=1
n n=1
n2 x for 0 x < .
oddn oddn
Thus
(
1 X cos(nx) x for x < 0
4 =
n=1
n2 x for 0 x < .
oddn
Differentiating Fourier Series. Recall that in general, a series can only be differentiated if it
is uniformly convergent. The necessary and sufficient condition that a Fourier series be uniformly
convergent is that the periodic extension of the function is continuous.
The function has a derivative except at the points x = n. Differentiating the Fourier series yields
X
f 0 (x) 4 cos(nx).
n=1
oddn
which is false. The series does not converge. This is as we expected since the Fourier series for f (x)
is not uniformly convergent.
827
28.11 Exercises
Exercise 28.1
1. Consider a 2 periodic function f (x) expressed as a Fourier series with partial sums
N
a0 X
SN (x) = + an cos(nx) + bn sin(nt).
2 n=1
show
a20 X 2
Z
1
+ an + b2n = f (x)2 dx.
2 n=1
2. Find the Fourier series for f (x) = x on x < (and repeating periodically). Use this to
show
X 1 2
2
= .
n=1
n 6
Exercise 28.2
Consider the Fourier series of f (x) = x on x < as found above. Investigate the convergence
at the points of discontinuity.
1. Let SN be the sum of the first N terms in the Fourier series. Show that
N + 12 x
dSN N cos
= 1 (1) .
cos x2
dx
3. Finally investigate the maxima of this difference around x = and provide an estimate (good
to two decimal places) of the overshoot in the limit N .
Exercise 28.3
Consider the boundary value problem on the interval 0 < x < 1
y 00 + 2y = 1 y(0) = y(1) = 0.
2. Solve directly and find the Fourier series of the solution (using the same extension). Compare
the result to the previous step and verify the series agree.
828
Exercise 28.4
Consider the boundary value problem on 0 < x <
y 00 + 2y = sin x y 0 (0) = y 0 () = 0.
2. Suppose the ODE is slightly modified: y 00 + 4y = sin x with the same boundary conditions.
Attempt to find a Fourier series solution and discuss in as much detail as possible what goes
wrong.
Exercise 28.5
Find the Fourier cosine and sine series for f (x) = x2 on 0 x < . Are the series differentiable?
Exercise 28.6
Find the Fourier series of cosn (x).
Exercise 28.7
For what values of x does the Fourier series
2 X (1)n
+4 2
cos nx = x2
3 n=1
n
converge? What is the value of the above Fourier series for all x? From this relation show that
X 1 2
2
=
n=1
n 6
X (1)n+1 2
=
n=1
n2 12
Exercise 28.8
1. Compute the Fourier sine series for the function
2x
f (x) = cos x 1 + , 0 x .
Exercise 28.9
Determine the cosine and sine series of
Estimate before doing the calculation the rate of decrease of Fourier coefficients, an , bn , for large n.
Exercise 28.10
Determine the Fourier cosine series of the function
f (x) = cos(x), 0 x ,
829
where is an arbitrary real number. From this series deduce the following identities for non-integer
.
1 X 1 1
= + (1)n +
sin() n=1 n +n
1 X 1 1
cot() = + +
n=1 n + n
Integrate the last formula from = 0 to = , (0 < < 1), to show that
2
sin() Y
= 1 2 .
n=1
n
Exercise 28.11
1. Show that
x X (1)n
ln cos = ln 2 cos(nx), < x < .
2 n=1
n
Use properties of Fourier series to conclude that
x X (1)n
ln cos = ln 2 cos(nx), x 6= (2k + 1), k Z.
2 n=1
n
3. Show that
1 sin((x + )/2) X sin(nx) sin(n)
ln = , x 6= + 2k.
2 sin((x )/2) n=1 n
Exercise 28.12
Solve the problem
y 00 + y = f (x), y(a) = y(b) = 0,
with an eigenfunction expansion. Assume that 6= n/(b a), n N.
Exercise 28.13
Solve the problem
y 00 + y = f (x), y(a) = A, y(b) = B,
with an eigenfunction expansion. Assume that 6= n/(b a), n N.
Exercise 28.14
Find the trigonometric series and the simple closed form expressions for A(r, x) and B(r, x) where
z = r ex and |r| < 1.
1
a) A + B = 1 + z2 + z4 +
1 z2
1 1
b) A + B log(1 + z) = z z 2 + z 3
2 3
830
Find An and Bn , and the trigonometric sum for them where:
c) An + Bn = 1 + z + z 2 + + z n .
Exercise 28.15
1. Is the trigonometric system
orthogonal on the interval [0, ]? Is the system orthogonal on any interval of length ? Why,
in each case?
Exercise 28.16
Let SN (x) be the N th partial sum of the Fourier series for f (x) |x| on < x < . Find N such
that |f (x) SN (x)| < 101 on |x| < .
Exercise 28.17
The set {sin(nx)}
n=1 is orthogonal and complete on [0, ].
2. Find a convergent series for g(x) = x on 0 x by integrating the series for part (a).
Exercise 28.18
1. Show that the Fourier cosine series expansion on [0, ] of:
1, 0 x < 2 ,
1
f (x) 2 , x = 2 ,
0, 2 < x ,
is
1 2 X (1)n
S(x) = + cos((2n + 1)x).
2 n=0 2n + 1
831
4. Show that at x = xN , the maximum of SN (x) nearest to /2 in (0, /2) is
Z N
1 1 2(N +1) sin(2(N + 1)t)
SN (xN ) = + dt.
2 0 sin t
Clearly xN /2 as N .
5. Show that also in this limit,
Z
1 1 sin t
SN (xN ) + dt 1.0895.
2 0 t
How does this compare with f (/2 0)? This overshoot is the Gibbs phenomenon that occurs
at each discontinuity. It is a manifestation of the non-uniform convergence of the Fourier series
for f (x) on [0, ].
Exercise 28.19
Prove the Isoperimetric Inequality: L2 4A where L is the length of the perimeter and A the area
of any piecewise smooth plane figure. Show that equality is attained only for the circle. (Hints: The
closed curve is represented parametrically as
Express x(t) and y(t) as Fourier series and use the completeness and orthogonality relations to show
that L2 4A 0.)
Exercise 28.20
1. Find the Fourier sine series expansion and the Fourier cosine series expansion of
832
28.12 Hints
Hint 28.1
Hint 28.2
Hint 28.3
Hint 28.4
Hint 28.5
Hint 28.6
Expand
n
n 1 x x
cos (x) = (e + e )
2
Using Newtons binomial formula.
Hint 28.7
Hint 28.8
Hint 28.9
Hint 28.10
Hint 28.11
Hint 28.12
Hint 28.13
Hint 28.14
Hint 28.15
Hint 28.16
Hint 28.17
833
Hint 28.18
Hint 28.19
Hint 28.20
834
28.13 Solutions
Solution 28.1
1. We start by assuming that the Fourier series converges in the mean.
!2
Z
a0 X
f (x) (an cos(nx) + bn sin(nx)) =0
2 n=1
Z Z
X Z Z
2
(f (x)) dx a0 f (x) dx 2 an f (x) cos(nx) dx + bn f (x) sin(nx)
n=1
Z
a20 X
+ a0
+ (an cos(nx) + bn sin(nx)) dx
2 n=1
X
X Z
+ (an cos(nx) + bn sin(nx))(am cos(mx) + bm sin(mx)) dx = 0
n=1 m=1
Z Z
X Z Z
2
(f (x)) dx a0 f (x) dx 2 an f (x) cos(nx) dx + bn f (x) sin(nx)
n=1
Z
a20 X
+ + (a2n cos2 (nx) + b2n sin2 (nx)) dx = 0
2 n=1
We use the definition of the Fourier coefficients to evaluate the integrals in the last sum.
a2
Z X X
(f (x))2 dx a20 2 a2n + b2n + 0 + a2n + b2n = 0
n=1
2 n=1
a20
Z
X 1
a2n b2n f (x)2 dx
+ + =
2 n=1
2. We determine the Fourier coefficients for f (x) = x. Since f (x) is odd, all of the an are zero.
Z
1
b0 = x sin(nx) dx
Z
1 1 1
= x cos(nx) + cos(nx) dx
n n
2(1)n+1
=
n
X 2(1)n+1
x= sin(nx) for x ( . . . ).
n=1
n
835
P 1
We apply Parsevals theorem for this series to find the value of n=1 n2 .
1 2
Z
X 4
= x dx
n=1
n2
X 4 2 2
=
n=1
n2 3
X 1 2
=
n=1
n2 6
3. Consider f (x) = x2 . Since the function is even, there are no sine terms in the Fourier series.
The coefficients in the cosine series are
2 2
Z
a0 = x dx
0
2 2
=
3Z
2 2
an = x cos(nx) dx
0
4(1)n
= .
n2
Thus the Fourier series is
2 X (1)n
x2 = +4 cos(nx) for x ( . . . ).
3 n=1
n2
P
We apply Parsevals theorem for this series to find the value of n=1 n14 .
2 4 1 4
Z
X 1
+ 16 = x dx
9 n=1
n4
2 4 X 1 2 4
+ 16 4
=
9 n=1
n 5
X 1 4
4
=
n=1
n 90
836
Solution 28.2
1. We differentiate the partial sum of the Fourier series and evaluate the sum.
N
X 2(1)n+1
SN = sin(nx)
n=1
n
N
X
0
SN =2 (1)n+1 cos(nx)
n=1
N
!
X
0
SN = 2< (1)n+1 enx
n=1
1 (1)N +2 e(N +1)x
0
SN = 2<
1 + ex
1 + ex (1)N e(N +1)x (1)N eN x
0
SN =<
1 + cos(x)
0 cos((N + 1)x) + cos(N x)
SN = 1 (1)N
1 + cos(x)
N + 12 x cos x2
0 N cos
SN = 1 (1)
cos2 x2
N + 12 x
dSN N cos
= 1 (1)
cos x2
dx
0
2. We integrate SN .
x 1
(1)N cos N+
Z
2
SN (x) SN (0) = x d
0 cos 2
x
N + 12 ( )
sin
Z
x SN = d
0 sin
2
3. We find the extrema of the overshoot E = x SN with the first derivative test.
sin N + 12 (x )
E0 = =0
sin x
2
sin N + 12 ( )
Z (11/(N +1/2))
E0 = d
0 sin
2
837
We shift the limits of integration.
N + 12
sin
Z
E0 = d
/(N +1/2) sin 2
N + 12 sin N + 12
Z /(N +1/2)
sin
Z
E0 = d d
0 sin 2 0 sin 2
We can evaluate the first integral with contour integration on the unit circle C.
sin N + 12
Z Z
sin ((2N + 1) )
d = d
0 sin 2 0 sin ()
1 sin ((2N + 1) )
Z
= d
2 sin ()
= z 2N +1
Z
1 dz
=
2 C (z 1/z)/(2) z
z 2N +1
Z
== 2
dz
C (z 1)
z 2N +1 z 2N +1
= = Res , 1 + Res , 1
(z + 1)(z 1) (z + 1)(z 1)
2N +1 2N +1
1 (1)
= < +
2 2
=
sin N + 12
Z /(N +1/2) Z
2 sin(x)
d = dx
0 sin 2 2N + 1 0 sin x
2N +1
Z
sin(x)
2 dx
0 x
Z X
1 (1)n x2n+1
=2 dx
0 x n=0 (2n + 1)!
Z
X (1)n x2n
=2 dx
n=0 0
(2n + 1)!
X (1)n 2n+1
=2 dx
n=0
(2n + 1)(2n + 1)!
3.70387
| 3.70387| 0.56.
Solution 28.3
1. The eigenfunctions of the self-adjoint problem
y 00 = y, y(0) = y(1) = 0,
838
are
n = sin(nx), n Z+
We find the series expansion of the inhomogeneity f (x) = 1.
X
1= fn sin(nx)
n=1
Z 1
fn = 2 sin(nx) dx
0
1
cos(nx)
fn = 2
n 0
2 n
fn = (1 (1) )
(n
4
for odd n
fn = n
0 for even n
y 00 + 2y = 1
X X X 4
an 2 n2 sin(nx) + 2 an sin(nx) = sin(nx)
n=1 n=1 n=1
n
odd n
(
4
n(2 2 n2 ) for odd n
an =
0 for even n
X 4
y= sin(nx)
n=1
n(2 2 n2 )
odd n
y 00 + 2y = 1 y(0) = y(1) = 0
839
We find the Fourier sine series of the solution.
X
y= an sin(nx)
n=1
Z 1
an = 2 y(x) sin(nx) dx
0
cos 2 1
!
Z 1
an = 1 cos 2x + sin 2x sin(nx) dx
0 sin 2
2(1 (1)2
an =
n(2 2 n2 )
(
4
n(2 2 n2 ) for odd n
an =
0 for even n
We obtain the same series as in the first part.
Solution 28.4
1. The eigenfunctions of the self-adjoint problem
y 00 = y, y 0 (0) = y 0 () = 0,
are
1
0 = , n = cos(nx), n Z+
2
We find the series expansion of the inhomogeneity f (x) = sin(x).
f0 X
f (x) = + fn cos(nx)
2 n=1
2
Z
f0 = sin(x) dx
0
4
f0 =
2
Z
fn = sin(x) cos(nx) dx
0
2(1 + (1)n )
fn =
(1 n2 )
(
4
2 for even n
fn = (1n )
0 for odd n
We expand the solution in a series of the eigenfunctions.
a0 X
y= + an cos(nx)
2 n=1
840
2. We expand the solution in a series of the eigenfunctions.
a0 X
y= + an cos(nx)
2 n=1
Solution 28.5
Cosine Series. The coefficients in the cosine series are
2 2
Z
a0 = x dx
0
2 2
=
3Z
2 2
an = x cos(nx) dx
0
4(1)n
= .
n2
Thus the Fourier cosine series is
2 X 4(1)n
f (x) = + cos(nx).
3 n=1
n2
In Figure 28.10 the even periodic extension of f (x) is plotted in a dashed line and the sum of the
first five terms in the Fourier series is plotted in a solid line. Since the even periodic extension is
continuous, the cosine series is differentiable.
Sine Series. The coefficients in the sine series are
2 2
Z
bn = x sin(nx) dx
0
2(1)n 4(1 (1)n )
=
n n3
2(1)n
(
n for even n
= n
2(1)
n
8
n3 for odd n.
841
10
-3 -2 -1 1 2 3
10
-3 -2 -1 1 2 3
-5
-10
2(1)n 4(1 (1)n )
X
f (x) + sin(nx).
n=1
n n3
In Figure 28.10 the odd periodic extension of f (x) and the sum of the first five terms in the sine
series are plotted. Since the odd periodic extension of f (x) is not continuous, the series is not
differentiable.
Solution 28.6
We could find the expansion by integrating to find the Fourier coefficients, but it is easier to expand
842
cosn (x) directly.
n
n 1 x x
cos (x) = (e + e )
2
1 n nx n (n2)x n (n2)x n nx
= n e + e + + e + e
2 0 1 n1 n
If n is odd,
"
1 n nx n
cosn (x) = n nx
(e + e )+ (e(n2)x + e(n2)x ) +
2 0 1
#
n x x
+ (e + e )
(n 1)/2
1 n n n
= n 2 cos(nx) + 2 cos((n 2)x) + + 2 cos(x)
2 0 1 (n 1)/2
(n1)/2
1 X n
= n1 cos((n 2m)x)
2 m=0
m
n
1 X n
= n1 cos(kx).
2 (n k)/2
k=1
odd k
If n is even,
"
1 n nx n
cosn (x) = n nx
(e + e )+ (e(n2)x + e(n2)x ) +
2 0 1
#
n 2x i2x n
+ (e + e )+
n/2 1 n/2
1 n n n n
= n 2 cos(nx) + 2 cos((n 2)x) + + 2 cos(2x) +
2 0 1 n/2 1 n/2
(n2)/2
1 n 1 X n
= n + n1 cos((n 2m)x)
2 n/2 2 m=0
m
n
1 n 1 X n
= n + n1 cos(kx).
2 n/2 2 (n k)/2
k=2
even k
We may denote,
n
n a0 X
cos (x) = ak cos(kx),
2
k=1
where
1 + (1)nk 1
n
ak = .
2 2n1 (n k)/2
843
Solution 28.7
We expand f (x) in a cosine series. The coefficients in the cosine series are
2 2
Z
a0 = x dx
0
2 2
=
3Z
2 2
an = x cos(nx) dx
0
4(1)n
= .
n2
Thus the Fourier cosine series is
2 X (1)n
f (x) = +4 cos(nx).
3 n=1
n2
Solution 28.8
1. We compute the Fourier sine coefficients.
2
Z
an = f (x) sin(nx) dx
0
2
Z
2x
= cos x 1 + sin(nx) dx
0
2(1 + (1)n )
=
(n3 n)
844
(
4
(n3 n) for even n
an =
0 for odd n
2. From our work in the previous part, we see that the Fourier coefficients decay as 1/n3 . The
Fourier sine series converges to the odd periodic extension of the function, f(x). We can
determine the rate of decay of the Fourier coefficients from the smoothness of f(x). For
< x < , the odd periodic extension of f (x) is defined
(
f (x) = cos(x) 1 + 2x 0 x < ,
f(x) =
f (x) = cos(x) + 1 + 2x x < 0.
Since
f(0+ ) = f(0 ) = 0 and f() = f() = 0
f(x) is continuous, C 0 . Since
2 2
f0 (0+ ) = f0 (0 ) = and f0 () = f0 () =
f(x) is continuously differentiable, C 1 . However, since
f(x) is not C 2 . Since f(x) is C 1 we know that the Fourier coefficients decay as 1/n3 .
Solution 28.9
Cosine Series. The even periodic extension of f (x) is a C 0 , continuous, function (See Figure 28.11.
Thus the coefficients in the cosine series will decay as 1/n2 . The Fourier cosine coefficients are
2
Z
a0 = x sin x dx
0
=2
Z
2
a1 = x sin x cos x dx
0
1
=
2
2
Z
an = x sin x cos(nx) dx
0
2(1)n+1
= , for n 2
n2 1
The Fourier cosine series is
1 X 2(1)n
f(x) = 1 cos x 2 cos(nx).
2 n=2
n2 1
Sine Series. The odd periodic extension of f (x) is a C 1 , continuously differentiable, function
(See Figure 28.12. Thus the coefficients in the cosine series will decay as 1/n3 . The Fourier sine
coefficients are
1
Z
a1 = x sin x sin x dx
0
=
2
845
1
-5 5
Z
2
an = x sin x sin(nx) dx
0
4(1 + (1)n )n
= , for n 2
(n2 1)2
The Fourier sine series is
4 X (1 + (1)n )n
f(x) = sin x cos(nx).
2 n=2 (n2 1)2
-5 5
Solution 28.10
If = n is an integer, then the Fourier cosine series is the single term cos(|n|x). We assume that
6= n.
We note that the even periodic extension of cos(x) is C 0 so that the series converges to cos(x)
for x and the coefficients decay as 1/n2 . We compute the Fourier cosine coefficients.
2
Z
a0 = cos(x) dx
0
2 sin()
=
846
Z
2
an = cos(x) cos(nx) dx
0
n 1 1
= (1) + sin()
n +n
sin() X 1 1
cos(x) = + (1)n + sin() cos(nx).
n=1
n +n
Note that neither cot() nor 1/ is integrable at = 0. We write the last formula so each side
is integrable.
1 X 1 1
cot = +
n=1
n +n
Solution 28.11
1. We will consider the principal branch of the logarithm, < =(Log z) . For < x < ,
cos(x/2) is positive so that ln(cos(x/2)) is well-defined. At x = , ln(cos(x/2)) is singular.
However, the function is integrable so it has a Fourier series which converges except at x =
847
(2k + 1), k Z.
ex/2 + ex/2
x
ln cos = ln
2 2
= ln 2 + ln ex/2 (1 + ex )
x
= ln 2 + Log (1 + ex )
2
Since | ex | 1 and ex 6= 1 for =(x) 0, x 6= (2k + 1), we can expand the last term in a
Taylor series in that domain.
x X (1)n x n
= ln 2 (e )
2 n=1 n
!
X (1)n x X (1)n
= ln 2 cos(nx) + sin(nx)
n=1
n 2 n=1 n
For < x < , ln(cos(x/2)) is real-valued. We equate the real parts of the equation on this
domain to obtain the desired Fourier series.
x X (1)n
ln cos = ln 2 cos(nx), < x < .
2 n=1
n
The domain of convergence for this series is =(x) = 0, x 6= (2k + 1). The Fourier series
converges to the periodic extension of the function.
x X (1)n
ln cos = ln 2 cos(nx), x 6= (2k + 1), k Z
2 n=1
n
!
(1)n
Z Z
x X
ln cos dx = ln 2 cos(nx) dx
0 2 0 n=1
n
(1)n
X Z
= ln 2 cos(nx) dx
n=1
n 0
(1)n sin(nx)
X
= ln 2
n=1
n n 0
Z x
ln cos dx = ln 2
0 2
Consider the function ln | sin(y/2)|. Since sin(x) = cos(x /2), we can use the result of part
848
(a) to obtain,
y y
ln sin = ln cos
2 2
X (1)n
= ln 2 cos(n(y ))
n=1
n
X 1
= ln 2 cos(ny), for y 6= 2k, k Z.
n=1
n
for x 6= 2k, k Z.
1 sin((x + )/2) X sin(nx) sin(n)
ln = , x 6= + 2k
2 sin((x )/2) n=1 n
Solution 28.12
The eigenfunction problem associated with this problem is
00 + 2 = 0, (a) = (b) = 0,
b
n(x a) n(x a)
Z
X 2
f (x) = fn sin , fn = f (x) sin dx
n=1
ba ba a ba
Since the solution y(x) satisfies the same homogeneous boundary conditions as the eigenfunctions,
we can differentiate the series. We substitute the series expansions into the differential equation.
y 00 + y = f (x)
X X
2
yn n + sin (n x) = fn sin (n x)
n=1 n=1
fn
yn =
2n
X n(x a)
2n sin
y(x) = .
n=1
ba
849
Solution 28.13
The eigenfunction problem associated with this problem is
00 + 2 = 0, (a) = (b) = 0,
b
n(x a) n(x a)
Z
X 2
f (x) = fn sin , fn = f (x) sin dx
n=1
ba ba a ba
Since the solution y(x) does not satisfy the same homogeneous boundary conditions as the eigenfunc-
tions, we can differentiate the series. We multiply the differential equation by an eigenfunction and
integrate from a to b. We use integration by parts to move derivatives from y to the eigenfunction.
y 00 + y = f (x)
Z b Z b Z b
00
y (x) sin(m x) dx + y(x) sin(m x) dx = f (x) sin(m x) dx
a a a
b
ba ba
Z
b
[y 0 sin(m x)]a y 0 m cos(m x) dx + ym = fm
a 2 2
b
ba ba
Z
b
[ym cos(m x)]a y2m sin(m x) dx +
ym = fm
a 2 2
ba ba
Bm (1)m + Am (1)m+1 2m ym + ym = fm
2 2
fm + (1)m m (A + B)
ym =
2m
fm + (1)m m (A + B)
X n(x a)
y(x) = sin .
n=1
2m ba
Solution 28.14
1.
1
A + B =
1 z2
X
= z 2n
n=0
X
= r2n e2nx
n=0
X
X
= r2n cos(2nx) + r2n sin(2nx)
n=0 n=1
850
X
X
A= r2n cos(2nx), B= r2n sin(2nx)
n=0 n=1
1
A + B =
1 z2
1
=
1 r2 e2x
1
=
1 r2 cos(2x) r2 sin(2x)
1 r2 cos(2x) + r2 sin(2x)
=
(1 r2 cos(2x))2 + (r2 sin(2x))2
1 r2 cos(2x) r2 sin(2x)
A= , B=
1 2r2 cos(2x) + r4 1 2r2 cos(2x) + r4
A + B = log(1 + z)
X (1)n+1 n
= z
n=1
n
X (1)n+1 n nx
= r e
n=1
n
X (1)n+1 n
= r cos(nx) + sin(nx)
n=1
n
X (1)n+1 n X (1)n+1 n
A= r cos(nx), B= r sin(nx)
n=1
n n=1
n
A + B = log(1 + z)
= log (1 + r ex )
= log (1 + r cos x + r sin x)
= log |1 + r cos x + r sin x| + arg (1 + r cos x + r sin x)
p
= log (1 + r cos x)2 + (r sin x)2 + arctan (1 + r cos x, r sin x)
1
log 1 + 2r cos x + r2 ,
A= B = arctan (1 + r cos x, r sin x)
2
3.
n
X
An + Bn = zk
k=1
1 z n+1
=
1z
1 rn+1 e(n+1)x
=
1 r ex
1 r ex rn+1 e(n+1)x +rn+2 enx
=
1 2r cos x + r2
851
1 r cos x rn+1 cos((n + 1)x) + rn+2 cos(nx)
An =
1 2r cos x + r2
n
X
An + Bn = zk
k=1
Xn
= rk ekx
k=1
n
X n
X
An = rk cos(kx), Bn = rk sin(kx)
k=1 k=1
Solution 28.15
1.
Z
1 sin x dx = [ cos x]0 = 2
0
Thus the system is not orthogonal on the interval [0, ]. Consider the interval [a, a + ].
Z a+
a+
1 sin x dx = [ cos x]a = 2 cos a
a
Z a+
a+
1 cos x dx = [sin x]a = 2 sin a
a
Since there is no value of a for which both cos a and sin a vanish, the system is not orthogonal
for any interval of length .
2. First note that Z
cos nx dx = 0 for n N.
0
If n 6= m, n 1 and m 0 then
Z
1
Z
cos nx cos mx dx = cos((n m)x) + cos((n + m)x) dx = 0
0 2 0
Thus the set {1, cos x, cos 2x, . . .} is orthogonal on [0, ]. Since
Z
dx =
Z 0
cos2 (nx) dx = ,
0 2
the set (r )
r r
1 2 2
, cos x, cos 2x, . . .
is orthonormal on [0, ].
If n 6= m, n 1 and m 1 then
Z
1
Z
sin nx sin mx dx = cos((n m)x) cos((n + m)x) dx = 0
0 2 0
852
Thus the set {sin x, sin 2x, . . .} is orthogonal on [0, ]. Since
Z
sin2 (nx) dx = ,
0 2
the set (r )
r
2 2
sin x, sin 2x, . . .
is orthonormal on [0, ].
Solution 28.16
Since the periodic extension of |x| in [, ] is an even function its Fourier series is a cosine series.
Because of the anti-symmetry about x = /2 we see that except for the constant term, there will only
be odd cosine terms. Since the periodic extension is a continuous function, but has a discontinuous
first derivative, the Fourier coefficients will decay as 1/n2 .
X
|x| = an cos(nx), for x [, ]
n=0
1 x2
Z
1
a0 = x dx = =
0 2 0 2
2
Z
an = x cos(nx) dx
0
2 sin(nx)
Z
2 sin(nx)
= x dx
n 0 0 n
2 cos(nx)
=
n2 0
2
= 2 (cos(n) 1)
n
2(1 (1)n )
=
n2
4 X 1
|x| = + cos(nx) for x [, ]
2 n=1 n2
odd n
Since
X 1 2
2
=
n=1
n 8
odd n
853
We can bound the error with,
N
4 X 1
|RN (x)| .
2 n=1 n2
odd n
N = 7 is the smallest number for which our error bound is less than 101 . N 7 is sufficient to
make the error less that 0.1.
4 1 1 1
|R7 (x)| 1+ + + 0.079
2 9 25 49
N 7 is also necessary because.
4 X 1
|RN (0)| = .
n2
n=N +1
odd n
Solution 28.17
1.
X
1 an sin(nx), 0x
n=1
Since the odd periodic extension of the function is discontinuous, the Fourier coefficients will
decay as 1/n. Because of the symmetry about x = /2, there will be only odd sine terms.
2
Z
an = 1 sin(nx) dx
0
2
= ( cos(n) + cos(0))
n
2
= (1 (1)n )
n
4 X sin(nx)
1
n=1 n
odd n
2. Its always OK to integrate a Fourier series term by term. We integrate the series in part (a).
Z x Z x
4 X sin(n)
1 dx dx
a n=1 a
n
odd n
4 X cos(na) cos(nx)
xa
n=1 n2
odd n
Now we have a Fourier cosine series. The first sum on the right is the constant term. If we
choose a = /2 this sum vanishes since cos(n/2) = 0 for odd integer n.
4 X cos(nx)
x=
2 n=1 n2
odd n
854
3. If f (x) has the Fourier series
a0 X
f (x) + (an cos(nx) + bn sin(nx)),
2 n=1
Solution 28.18
1.
X
f (x) a0 + an cos(nx)
n=1
Since the periodic extension of the function is discontinuous, the Fourier coefficients will decay
like 1/n. Because of the anti-symmetry about x = /2, there will be only odd cosine terms.
1
Z
1
a0 = f (x) dx =
0 2
Z
2
an = f (x) cos(nx) dx
0
2 /2
Z
= cos(nx) dx
0
2
= sin(n/2)
n
(
2
(1)(n1)/2 , for odd n
= n
0 for even n
855
The Fourier cosine series of f (x) is
1 2 X (1)n
f (x) + cos((2n + 1)x).
2 n=0 2n + 1
N
1 2 X (1)n
SN (x) = + cos((2n + 1)x).
2 n=0 2n + 1
We wish to evaluate the sum from part (a). First we make the change of variables y = x /2
to get rid of the (1)n factor.
X (1)n
cos((2n + 1)x)
n=0
2n + 1
N
X (1)n
= cos((2n + 1)(y + /2))
n=0
2n + 1
N
X (1)n
= (1)n+1 sin((2n + 1)y)
n=0
2n + 1
N
X 1
= sin((2n + 1)y)
n=0
2n +1
856
We write the summand as an integral and interchange the order of summation and integration
to get rid of the 1/(2n + 1) factor.
N Z
X y
= cos((2n + 1)t) dt
n=0 0
Z N
yX
= cos((2n + 1)t) dt
0 n=0
2N +1 N
!
Z y X X
= cos(nt) cos(2nt) dt
0 n=1 n=1
2N +1 N
!
Z y X X
nt 2nt
= < e e dt
0 n=1 n=1
y t (2N +2)t e2t e2(N +1)t
e e
Z
= < dt
0 1 et 1 e2t
y
(et e2(N +1)t )(1 e2t ) (e2t e2(N +1)t )(1 et )
Z
= < dt
0 (1 et )(1 e2t )
Z y t
e e2t + e(2N +4)t e(2N +3)t
= < dt
0 (1 et )(1 e2t )
Z y t
e e(2N +3)t
= < dt
0 1 e2t
Z y (2N +2)t
e 1
= < dt
0 et et
Z y
e2(N +1)t +
= < dt
0 2 sin t
Z y
1 sin(2(N + 1)t)
= dt
2 0 sin t
1 x/2 sin(2(N + 1)t)
Z
= dt
2 0 sin t
Now we have a tidy representation of the partial sum.
Z x/2
1 1 sin(2(N + 1)t)
SN (x) = dt
2 0 sin t
dSN (x)
3. We solve dx = 0 to find the relative extrema of SN (x).
0
SN (x) = 0
1 sin(2(N + 1)(x /2))
=0
sin(x /2)
(1)N +1 sin(2(N + 1)x)
=0
cos(x)
sin(2(N + 1)x)
=0
cos(x)
n
x = xn = , n = 0, 1, . . . , N, N + 2, . . . , 2N + 2
2(N + 1)
Note that xN +1 = /2 is not a solution as the denominator vanishes there. The function has
a removable singularity at x = /2 with limiting value (1)N .
857
4.
N
Z /2
1 1 2(N +1) sin(2(N + 1)t)
SN (xN ) = dt
2 0 sin t
We note that the integrand is even.
N
Z 2(N +1)
/2 Z 2(N+1) Z
2(N +1)
= =
0 0 0
Z
1 1 2(N +1) sin(2(N + 1)t)
SN (xN ) = + dt
2 0 sin t
1
Z
1 sin(t)
SN (xN ) = + dt
2 0 2(N + 1) sin(t/(2(N + 1)))
Note that
sin(t) t cos(t)
lim = lim =t
0 0 1
1 sin(t)
Z
1
SN (xN ) + dt 1.0895 as N
2 0 t
Solution 28.19
With the parametrization in t, x(t) and y(t) are continuous functions on the range [0, 2]. Since the
curve is closed, we have x(0) = x(2) and y(0) = y(2). This means that the periodic extensions
of x(t) and y(t) are continuous functions. Thus we can differentiate their Fourier series. First we
define formal Fourier series for x(t) and y(t).
a0 X
x(t) = + an cos(nt) + bn sin(nt)
2 n=1
c0 X
y(t) = + cn cos(nt) + dn sin(nt)
2 n=1
X
x0 (t) =
nbn cos(nt) nan sin(nt)
n=1
X
y 0 (t) =
ndn cos(nt) ncn sin(nt)
n=1
In this problem we will be dealing with integrals on [0, 2] of products of Fourier series. We derive
a general formula for later use.
Z 2 Z 2 ! !
a0 X c0 X
xy dt = + an cos(nt) + bn sin(nt) + cn cos(nt) + dn sin(nt) dt
0 0 2 n=1
2 n=1
Z 2 !
a0 c0 X 2 2
= + an cn cos (nt) + bn dn sin (nt) dt
0 4 n=1
!
1 X
= a0 c0 + (an cn + bn dn )
2 n=1
858
In the arclength parametrization we have
2 2
dx dy
+ = 1.
ds ds
2 2 2 !
L2
Z
dx dy
= + dt
2 0 dt dt
!
X X
2 2 2 2
= (nbn ) + (nan ) + (ndn ) + (ncn )
n=1 n=1
X
= 2
n (a2n + b2n + c2n + d2n )
n=1
X
L2 = 2 2 n2 (a2n + b2n + c2n + d2n )
n=1
We assume that the curve is parametrized so that the area is positive. (Reversing the orientation
changes the sign of the area as defined above.) The area is
Z 2
dy
A= x dt
0 dt
! !
Z 2
a0 X X
= + an cos(nt) + bn sin(nt) ndn cos(nt) ncn sin(nt) dt
0 2 n=1 n=1
X
= n(an dn bn cn )
n=1
Now we find an upper bound on the area. We will use the inequality |ab| 12 |a2 + b2 |, which follows
from expanding (a b)2 0.
X
n a2n + b2n + c2n + d2n
A
2 n=1
X 2 2
n an + b2n + c2n + d2n
2 n=1
L2
=
4
L2 4A
859
Now we determine the curves for which L2 = 4A. To do this we find conditions for which A is
equal to the upper bound we obtained for it above. First note that
X
X
n a2n + b2n + c2n + d2n = n2 a2n + b2n + c2n + d2n
n=1 n=1
implies that all the coefficients except a0 , c0 , a1 , b1 , c1 and d1 are zero. The constraint,
X X
n a2n + b2n + c2n + d2n
n(an dn bn cn ) =
n=1
2 n=1
then becomes
a1 d1 b1 c1 = a21 + b21 + c21 + d21 .
This implies that d1 = a1 and c1 = b1 . a0 and c0 are arbitrary. Thus curves for which L2 = 4A
have the parametrization
a0 c0
x(t) = + a1 cos t + b1 sin t, y(t) = b1 cos t + a1 sin t.
2 2
Note that a0 2 c0 2
x(t) + y(t) = a21 + b21 .
2 2
p
The curve is a circle of radius a21 + b21 and center (a0 /2, c0 /2).
Solution 28.20
1. The Fourier sine series has the form
X
x(1 x) = an sin(nx).
n=1
860
0.2 0.2
0.1 0.1
Figure 28.13: The odd and even periodic extension of x(1 x), 0 x 1.
Z 1
an = 2 x(1 x) cos(nx) dx
0
2 4 sin(n) n cos(n)
= +
2 n2 3 n3
2
= 2 2 (1 + (1)n )
n
1 4 X cos(nx) 1 1 X cos(2nx)
x(1 x) = 2 = .
6 n=1 n2 6 2 n=1 n2
even n
The Fourier sine series converges to the odd periodic extension of the function. Since this
function is C 1 , continuously differentiable, we know that the Fourier coefficients must decay
as 1/n3 . The Fourier cosine series converges to the even periodic extension of the function.
Since this function is only C 0 , continuous, the Fourier coefficients must decay as 1/n2 . The
odd and even periodic extensions are shown in Figure 28.13. The sine series is better because
of the faster convergence of the series.
1 1 X 1
0= 2
6 n=1 n2
X 1 2
2
=
n=1
n 6
861
(b) We substitute x = 1/2 into the cosine series.
1 1 1 X cos(n)
= 2
4 6 n=1 n2
X (1)n 2
=
n=1
n2 12
862
Chapter 29
Here the coefficient functions pj are real and continuous and p2 > 0 on the interval [a . . . b]. (Note that
if p2 were negative we could multiply the equation by (1) and replace by .) The parameters
j and j are real.
We would like to write this problem in a form that can be used to obtain qualitative information
about the problem. First we will write the operator in self-adjoint form. We divide by p2 since it is
non-vanishing.
p1 p0
y 00 + y 0 + y = y.
p2 p2 p2
We multiply by an integrating factor.
Z
p1
I = exp dx eP (x)
p2
P (x) 00 p1 0 p0
e y + y + y = eP (x) y
p2 p2 p2
0 p
eP (x) y 0 + eP (x) 0 y = eP (x) y
p2 p2
863
For notational convenience, we define new coefficient functions and parameters.
p0 1
p = eP (x) , q = eP (x) , = eP (x) , = .
p2 p2
Since the pj are continuous and p2 is positive, p, q, and are continuous. p and are positive
functions. The problem now has the form,
(py 0 )0 + qy + y = 0,
This is known as a Regular Sturm-Liouville problem. We will devote much of this chapter to studying
the properties of this problem. We will encounter many results that are analogous to the properties
of self-adjoint eigenvalue problems.
Example 29.1.1
d dy
ln x + xy = 0, y(1) = y(2) = 0
dx dx
is not a regular Sturm-Liouville problem since ln x vanishes at x = 1.
p2 y 00 + p1 y 0 + p0 y = y, for a x b,
1 y(a) + 2 y 0 (a) = 0, 1 y(b) + 2 y 0 (b) = 0,
where the pj are real and continuous and p2 > 0 on [a, b], and the j and j
are real can be written in the form of a regular Sturm-Liouville problem,
(py 0 )0 + qy + y = 0, on a x b,
1 y(a) + 2 y 0 (a) = 0, 1 y(b) + 2 y 0 (b) = 0.
L[y] (py 0 )0 + qy = y.
We see that the operator is formally self-adjoint. Now we determine if the problem is self-adjoint.
864
Above we used the fact that the i and i are real.
1 1 1 1
= , =
2 2 2 2
Real Eigenvalues. Let be an eigenvalue with the eigenfunction . We start with Greens
formula.
h|L[]i hL[]|i = 0
h| i h|i = 0
h||i + h||i = 0
( )h||i = 0
Infinite Number of Eigenvalues. There are an infinite of eigenvalues which have no finite cluster
point. This result is analogous to the result that we derived for self-adjoint eigenvalue problems.
When we cover the Rayleigh quotient, we will find that there is a least eigenvalue. Since the
eigenvalues are distinct and have no finite cluster point, n as n . Thus the eigenvalues
form an ordered sequence,
1 < 2 < 3 < .
Orthogonal Eigenfunctions. Let and be two distinct eigenvalues with the eigenfunctions
and . Greens formula states
h|L[]i hL[]|i = 0.
h| i h|i = 0
h||i + h||i = 0
( )h||i = 0
Since the eigenvalues are distinct, h||i = 0. Thus eigenfunctions corresponding to distinct
eigenvalues are orthogonal with respect to .
Unique Eigenfunctions. Let be an eigenvalue. Suppose and are two independent eigen-
functions corresponding to .
L[] + = 0, L[] + = 0
We take the difference of times the first equation and times the second equation.
L[] L[] = 0
(p0 )0 (p 0 )0 = 0
(p(0 0 ))0 = 0
p(0 0 ) = const
p(0 0 ) = 0
865
Since p > 0 the second factor vanishes.
0 0 = 0
0 0
2 =0
d
=0
dx
= const
and are not independent. Thus each eigenvalue has a unique, (to within a multiplicative
constant), eigenfunction.
(p0 )0 + q + = 0.
= (const).
Since and only differ by a multiplicative constant, the eigenfunctions can be chosen so that they
are real-valued functions.
h|L[]i = h| i
h|(p0 )0 + qi = h||i
b
p0 a h0 |p|0 i + h|q|i = h||i
b
p0 a + h0 |p|0 i h|q|i
=
h||i
This is known as Rayleighs quotient. It is useful for obtaining qualitative information about the
eigenvalues.
Minimum Property of Rayleighs Quotient. Note that since p, q, and are bounded
functions, the Rayleigh quotient is bounded below. Thus there is a least eigenvalue. If we restrict u
to be a real continuous function that satisfies the boundary conditions, then
where 1 is the least eigenvalue. This form allows us to get upper and lower bounds on 1 .
To derive this formula, we first write it in terms of the operator L.
hu|L[u]i
1 = min
u hu||ui
866
Since u is continuous and satisfies the boundary conditions, we can expand u in a series of the
eigenfunctions.
P P
hu|L[u]i n=1 cn n L [ m=1 cm m ]
=
P P cm m
hu||ui n=1 cn n m=1
P P
n=1 cn n m=1 cm m m
=
P P cm m
n=1 cn n m=1
We see that the minimum value of Rayleighs quotient is 1 . The minimum is attained when cn = 0
for all n 2, that is, when u = c1 1 .
hn ||f i
cn = .
hn ||n i
Here the sum is convergent in the mean. For any fixed x, the sum converges to 12 (f (x ) + f (x+ )).
If f (x) is continuous and satisfies the boundary conditions, then the convergence is uniform.
867
Result 29.2.1 Properties of regular Sturm-Liouville problems.
The eigenvalues are real.
There are an infinite number of eigenvalues
1 < 2 < 3 < .
where Rb
a
f (x)n (x)(x) dx
cn = Rb .
2 (x)(x) dx
a n
Bounding The Least Eigenvalue. The Rayleigh quotient for the first eigenvalue is
R 0 2
( ) dx
1 = 0R 12 .
0
1 dx
R
Immediately we see that the eigenvalues are non-negative. If 0 (01 )2 dx = 0 then = (const). The
only constant that satisfies the boundary conditions is = 0. Since the trivial solution is not an
868
eigenfunction, = 0 is not an eigenvalue. Thus all the eigenvalues are positive.
Now we get an upper bound for the first eigenvalue.
R 0 2
(u ) dx
1 = min R0 2
u
0
u dx
where u is continuous and satisfies the boundary conditions. We choose u = x(x ) as a trial
function.
R 0 2
(u ) dx
1 R0 2
u dx
R 0
(2x )2 dx
= R 0 2
0
(x x)2 dx
3 /3
=
5 /30
10
= 2
1.013
Finding the Eigenvalues and Eigenfunctions. We consider the cases of negative, zero, and
positive eigenvalues to check our results above.
y(0) = 0 c=0
y() = 0 d sin( ) = 0
n = n2 , n = sin(nx), for n = 1, 2, 3, . . .
We can verify that this example satisfies all the properties listed in Result 29.2.1. Note that there
are an infinite number of eigenvalues. There is a least eigenvalue 1 = 1 but there is no greatest
eigenvalue. For each eigenvalue, there is one eigenfunction. The nth eigenfunction sin(nx) has n 1
zeroes in the interval 0 < x < .
869
Since a series of the eigenfunctions is the familiar Fourier sine series, we know that the eigen-
functions are orthogonal and complete. We check Rayleighs quotient.
Z 2
dn dn 2
pn dx + p dx qn dx
0
n = R0
0
2n dx
Z 2
d(sin(nx)) d(sin(nx))
sin(nx) dx + dx dx
0
= R 2 0
0
sin (nx)dx
R 2 2
n cos (nx) dx
= 0
/2
= n2
x2 y 00 + xy 0 + y = y, y(1) = y(2) = 0.
Since x2 > 0 on [1 . . . 2], we can write this problem in terms of a regular Sturm-Liouville eigenvalue
problem. We divide by x2 .
1 1
y 00 + y 0 + 2 (1 )y = 0
x x
R 1
We multiply by the integrating factor exp( x dx) = exp(ln x) = x and make the substitution,
= 1 to obtain the Sturm-Liouville form.
1
xy 00 + y 0 + y = 0
x
1
(xy 0 )0 + y = 0
x
We see that the eigenfunctions will be orthogonal with respect to the weighting function = 1/x.
The Rayleigh quotient is
b
p0 a + h0 |x|0 i
=
h| x1 |i
h0 |x|0 i
= .
h| x1 |i
If 0 = 0, then only the trivial solution, = 0, satisfies the boundary conditions. Thus the eigenvalues
are positive.
Returning to the original problem, we see that the eigenvalues, , satisfy < 1. Since this is an
Euler equation, we can find solutions with the substitution y = x .
( 1) + + 1 = 0
2 + 1 = 0
870
We know that the eigenfunctions can be written as real functions. We rewrite the solution.
y = c1 e 1 ln x
+c2 e 1 ln x
An equivalent form is p p
y = c1 cos( 1 ln x) + c2 sin( 1 ln x).
We apply the boundary conditions.
y(1) = 0 c1 = 0
p
y(2) = 0 sin( 1 ln 2) = 0
p
1 ln 2 = n, for n = 1, 2, . . .
Ax = x.
If the matrix A has a complete, orthonormal set of eigenvectors { k } with eigenvalues {k } then we
can represent any vector as a linear combination of the eigenvectors.
n
X
y= ak k , ak = k y
k=1
n
X
y= ( k y) k
k=1
Ax x = b. (29.1)
Before we try to solve this equation, we should consider the existence/uniqueness of the solution. If
is not an eigenvalue, then the range of L A is Rn . The problem has a unique solution. If
is an eigenvalue, then the null space of L is the span of the eigenvectors of . That is, if = i ,
then nullspace(L) = span( i1 , i2 , . . . , im ). ({ i1 , i2 , . . . , im } are the eigenvalues of i .) If b is
orthogonal to nullspace(L) then Equation 29.1 has a solution, but it is not unique. If y is a solution
then we can add any linear combination of { ij } to obtain another solution. Thus the solutions have
the form
X m
x=y+ cj ij .
j=1
871
We substitute the expansions into Equation 29.1.
n
X n
X n
X
A ak k ak k = bk k
k=1 k=1 k=1
n
X Xn Xn
ak k k ak k = bk k
k=1 k=1 k=1
bk
ak =
k
The solution is
n
X bk
x= .
k k
k=1
It would be handy if we could substitute the expansions into Equation 29.2. However, the expansion
of a function is not necessarily differentiable. Thus we demonstrate that since y is C 2 (a . . . b) and
satisfies the boundary conditions B1 [y] = B2 [y] = 0, we are justified in substituting it into the
differential equation. In particular, we will show that
" #
X X X
L[y] = L yk k = yk L [k ] = yk k k .
k k k
872
To do this we will use Greens identity. If u and v are C 2 (a . . . b) and satisfy the boundary conditions
B1 [y] = B2 [y] = 0 then
hu|L[v]i = hL[u]|vi.
First we assume that we can differentiate y term-by-term.
X
L[y] = yk k k
k
Now we directly expand L[y] and show that we get the same result.
X
L[y] = ck k
k
ck = hk |L[y]i
= hL[k ]|yi
= hk k |yi
= k hk |yi
= k yk
X
L[y] = yk k
k
The series representation of y may not be differentiable, but we are justified in applying L term-by-
term.
Now we substitute the expansions into Equation 29.2.
" #
X X X
L yk k yk k = fk k
k k k
X X X
k yk k yk k = fk k
k k k
fk
yk =
k
The solution is
X fk
y= k
k
k
We would like to substitute the series into the differential equation, but in general we are not allowed
to differentiate such series. To get around this, we use integration by parts to move derivatives from
the solution y, to the n .
873
where 6= n2 , n Z+ . We expand the solution in a cosine series.
r
y0 X 2
y(x) = + yn cos(nx)
n=1
We multiply the differential equation by the orthonormal functions and integrate over the interval.
We neglect the special case 0 = 1/ for now.
Z r Z r Z r
2 00 2 2
cos(nx)y dx + cos(nx)y dx = f (x) dx
0 0 0
"r # Z r
2 0 2
cos(nx)y (x) + n sin(nx)y 0 (x) dx + yn = fn
0
r "r 0 # Z r
2 n 0 0 2 2 2
((1) y () y (0)) + n sin(nx)y(x) n cos(nx)y(x) dx + yn = fn
0
0
r
2
((1)n y 0 () y 0 (0)) n2 yn + yn = fn
874
29.4 Exercises
Exercise 29.1
Find the eigenvalues and eigenfunctions of
y 00 + 2y 0 + y = 0, y(a) = y(b) = 0,
where a < b.
Write the problem in Sturm Liouville form. Verify that the eigenvalues and eigenfunctions
satisfy the properties of regular Sturm-Liouville problems. Find the coefficients in the expansion of
an arbitrary function f (x) in a series of the eigenfunctions.
Exercise 29.2
Find the eigenvalues and eigenfunctions of the boundary value problem
y 00 + y=0
(x + 1)2
on the interval 1 x 2 with boundary conditions y(1) = y(2) = 0. Discuss how the results satisfy
the properties of Sturm-Liouville problems.
Exercise 29.3
Find the eigenvalues and eigenfunctions of
2 + 1 0
y 00 + y + 2 y = 0, y(a) = y(b) = 0,
x x
where 0 < a < b. Write the problem in Sturm Liouville form. Verify that the eigenvalues and
eigenfunctions satisfy the properties of regular Sturm-Liouville problems. Find the coefficients in
the expansion of an arbitrary function f (x) in a series of the eigenfunctions.
Exercise 29.4
Find the eigenvalues and eigenfunctions of
y 00 y 0 + y = 0, y(0) = y(1) = 0.
Find the coefficients in the expansion of an arbitrary, f (x), in a series of the eigenfunctions.
Exercise 29.5
Consider
y 00 + y = f (x), y(0) = 0, y(1) + y 0 (1) = 0. (29.3)
The associated eigenvalue problem is
Find the eigenfunctions for this problem and the equation which the eigenvalues must satisfy.
To do this, consider the eigenvalues and eigenfunctions for,
Show that the transcendental equation for has infinitely many roots 1 < 2 < 3 < . Find
the limit of n as n . How is this limit approached?
Give the general solution of Equation 29.3 in terms of the eigenfunctions.
Exercise 29.6
Consider
y 00 + y = f (x) y(0) = 0 y(1) + y 0 (1) = 0.
Find the eigenfunctions for this problem and the equation which the eigenvalues satisfy. Give the
general solution in terms of these eigenfunctions.
875
Exercise 29.7
Show that the eigenvalue problem,
(note the mixed boundary condition), has only one real eigenvalue. Find it and the corresponding
eigenfunction. Show that this problem is not self-adjoint. Thus the proof, valid for unmixed,
homogeneous boundary conditions, that all eigenvalues are real fails in this case.
Exercise 29.8
Determine the Rayleigh quotient, R[] for,
1 0
y 00 + y + y = 0, |y(0)| < , y(1) = 0.
x
Use the trial function = 1 x in R[] to deduce
that the smallest zero of J0 (x), the Bessel function
of the first kind and order zero, is less than 6.
Exercise 29.9
Discuss the eigenvalues of the equation
where (
a > 0, 0 z l
q(z) =
b > 0, l < z .
This is an example that indicates that the results we obtained in class for eigenfunctions and eigen-
values with q(z) continuous and bounded also hold if q(z) is simply integrable; that is
Z
|q(z)| dz
0
is finite.
Exercise 29.10
1. Find conditions on the smooth real functions p(x), q(x), r(x) and s(x) so that the eigenvalues,
, of:
2. Show that for any smooth p(x), q(x), r(x) and s(x) the eigenfunctions belonging to distinct
eigenvalues are orthogonal relative to the weight s(x). That is:
Z b
vm (x)vk (x)s(x) dx = 0 if k 6= m .
a
d4 n
(0) = 00 (0) = 0,
= ,
dx4 (1) = 00 (1) = 0.
876
29.5 Hints
Hint 29.1
Hint 29.2
Hint 29.3
Hint 29.4
Write the problem in Sturm-Liouville form to show that the eigenfunctions are orthogonal with
respect to the weighting function = ex .
Hint 29.5
Note that the solution is a regular Sturm-Liouville problem and thus the eigenvalues are real. Use
the Rayleigh quotient to show that there are only positive eigenvalues. Informally show that there
are an infinite number of eigenvalues with a graph.
Hint 29.6
Hint 29.7
Find the solution for = 0, < 0 and > 0. A problem is self-adjoint if it satisfies Greens identity.
Hint 29.8
Write the equation in self-adjoint form. The Bessel equation of the first kind and order zero satisfies
the problem,
1
y 00 + y 0 + y = 0, |y(0)| < , y(r) = 0,
x
where r is a positive root of J0 (x). Make the change of variables = x/r, u() = y(x).
Hint 29.9
Hint 29.10
877
29.6 Solutions
Solution 29.1
Recall that constant coefficient equations are shift invariant. If u(x) is a solution, then so is u(x c).
We substitute y = ex into the constant coefficient equation.
y 00 + 2y 0 + y = 0
2 + 2 + = 0
p
= 2
The homogeneous solution that satisfies the left boundary condition y(a) = 0 is
y = c(x a) ex .
Since only the trivial solution with c = 0 satisfies the right boundary condition, = 2 is not an
eigenvalue.
Next we consider the case 6= 2 . We write
p
= 2 .
Note that <( 2 ) 0. A set of solutions of the differential equation is
n o
2
e( )x
By taking the sum and difference of these solutions we obtain a new set of linearly independent
solutions. n p p o
ex cos 2 x , ex sin 2 x
The solution which satisfies the left boundary condition is
p
y = c ex sin 2 (x a) .
For nontrivial solutions, the right boundary condition y(b) = 0 imposes the constraint
p
eb sin 2 (b a) = 0
p
2 (b a) = n, n Z
878
The eigenvalues
2
2 n
n = + , nZ
ba
are real.
For each eigenvalue, we found one unique, (to within a multiplicative constant), eigenfunction
n . We were able to choose the eigenfunctions to be real-valued. The eigenfunction
xa
n = ex sin n .
ba
The eigenfunctions are orthogonal with respect to the weighting function (x) = e2ax .
b b
x a x x a 2ax
Z Z
x
n (x)m (x)(x) dx = e sin n e sin m e dx
a a ba ba
Z b
xa xa
= sin n sin m dx
a ba ba
Z
ba
= sin(nx) sin(mx) dx
Z0
ba
= (cos((n m)x) cos((n + m)x)) dx
2 0
= 0 if n 6= m
The eigenfunctions are complete. Any piecewise continuous function f (x) defined on a x b
can be expanded in a series of eigenfunctions
X
f (x) cn n (x),
n=1
where
Rb
a
f (x)n (x)(x) dx
cn = Rb .
2 (x)(x) dx
a n
The sum converges to 12 (f (x ) + f (x+ )). (We do not prove this property.)
879
The eigenvalues can be related to the eigenfunctions with the Rayleigh quotient.
h ib R 2
dn b dn 2
pn dx + a p dx qn dx
a
n = Rb
2 dx
a n
2
R b 2x x n xa xa
ba cos n ba sin n ba
a
e e dx
= Rb 2
a
ex sin n xaba
e2x dx
R b n 2 2
xa
n
xa
xa
2 2
xa
a ba cos n ba 2 ba cos n ba sin n ba + sin n ba dx
= R b 2 xa
a
sin n ba dx
2
R n 2 n 2 2
0 ba cos (x) 2 ba cos(x) sin(x) + sin (x) dx
= R 2
0
sin (x) dx
2
n
= 2 +
ba
where
Rb
f (x)n (x)(x) dx
a
cn = Rb
2 (x)(x) dx
a n
Z b
2n xa
= f (x) ex sin n dx
ba a ba
Solution 29.2
This is an Euler equation. We substitute y = (x + 1) into the equation.
y 00 + y=0
(x + 1)2
( 1) + = 0
1 1 4
=
2
First consider the case = 1/4. A set of solutions is
x + 1, x + 1 ln(x + 1) .
Since only the trivial solution satisfies the y(2) = 0, = 1/4 is not an eigenvalue.
880
Now consider the case 6= 1/4. A set of solutions is
n o
(x + 1)(1+ 14)/2 , (x + 1)(1 14)/2 .
y(2) = 0
4 1 3
sin ln =0
2 2
4 1 3
ln = n, n Z
2 2
2 !
1 2n
= 1+ , nZ
4 ln(3/2)
n = 0 gives us a trivial solution, so we discard it. Discarding duplicate solutions, The eigenvalues
and eigenfunctions are
2
1 n ln((x + 1)/2)
n = + , yn = x + 1 sin n , n Z+ .
4 ln(3/2) ln(3/2)
Now we verify that the eigenvalues and eigenfunctions satisfy the properties of regular Sturm-
Liouville problems.
The eigenvalues are real.
There are an infinite number of eigenvalues
881
The eigenfunctions are orthogonal with respect to the weighting function (x) = 1/(x + 1)2 .
Let n 6= m.
Z 2
yn (x)ym (x)(x) dx
1
Z 2
ln((x + 1)/2)
ln((x + 1)/2) 1
= x + 1 sin n x + 1 sin m dx
1 ln(3/2) ln(3/2) (x + 1)2
Z
ln(3/2)
= sin(nx) sin(mx) dx
0
ln(3/2)
Z
= (cos((n m)x) cos((n + m)x)) dx
2 0
=0
The eigenfunctions are complete. A function f (x) defined on (1 . . . 2) has the series represen-
tation
X X ln((x + 1)/2)
f (x) cn yn (x) = cn x + 1 sin n ,
n=1 n=1
ln(3/2)
where
2
hyn |1/(x + 1)2 |f i
Z
2 ln((x + 1)/2) 1
cn = = sin n f (x) dx
hyn |1/(x + 1)2 |yn i ln(3/2) 1 ln(3/2) (x + 1)3/2
Solution 29.3
Recall that Euler equations are scale invariant. If u(x) is a solution, then so is u(cx) for any nonzero
constant c.
We substitute y = x into the Euler equation.
2 + 1 0
y 00 + y + 2y = 0
x x
( 1) + (2 + 1) + = 0
2 + 2 + = 0
p
= 2
The homogeneous solution that satisfies the left boundary condition y(a) = 0 is
x
y = cx (ln x ln a) = cx ln .
a
Since only the trivial solution with c = 0 satisfies the right boundary condition, = 2 is not an
eigenvalue.
Next we consider the case 6= 2 . We write
p
= 2 .
Note that <( 2 ) 0. A set of solutions of the differential equation is
n o
2
x
n o
2
x e ln x .
882
By taking the sum and difference of these solutions we obtain a new set of linearly independent
solutions. n p p o
x cos 2 ln x , x sin 2 ln x ,
The eigenvalues
2
2 n
n = + , nZ
ln(b/a)
are real.
For each eigenvalue, we found one unique, (to within a multiplicative constant), eigenfunction
n . We were able to choose the eigenfunctions to be real-valued. The eigenfunction
ln(x/a)
n = x sin n .
ln(b/a)
883
The eigenfunctions are orthogonal with respect to the weighting function (x) = x21 .
Z b Z b
ln(x/a) ln(x/a)
n (x)m (x)(x) dx = x sin n x sin m x21 dx
a a ln(b/a) ln(b/a)
Z b
ln(x/a) ln(x/a) 1
= sin n sin m dx
a ln(b/a) ln(b/a) x
Z
ln(b/a)
= sin(nx) sin(mx) dx
Z0
ln(b/a)
= (cos((n m)x) cos((n + m)x)) dx
2 0
= 0 if n 6= m
The eigenfunctions are complete. Any piecewise continuous function f (x) defined on a x b
can be expanded in a series of eigenfunctions
X
f (x) cn n (x),
n=1
where Rb
a
f (x)n (x)(x) dx
cn = Rb .
2 (x)(x) dx
a n
The sum converges to 12 (f (x ) + f (x+ )). (We do not prove this property.)
The eigenvalues can be related to the eigenfunctions with the Rayleigh quotient.
h ib R 2
dn b dn 2
pn dx + a p dx qn dx
a
n = Rb
a n
2 dx
2
Rb 2+1 1 n ln(x/a) ln(x/a)
a
x x ln(b/a) cos n ln(b/a) sin n ln(b/a) dx
= Rb 2
a
x sin n ln(x/a)
ln(b/a) x21 dx
2
Rb 2
a
n
ln(b/a) cos 2
() 2 n
ln(b/a) cos () sin () + 2
sin () x1 dx
= R b 2 ln(x/a)
a
sin n ln(b/a) x1 dx
2
R n 2 n 2 2
0 ln(b/a) cos (x) 2 ln(b/a) cos(x) sin(x) + sin (x) dx
= R 2
0
sin (x) dx
2
n
= 2 +
ln(b/a)
884
Solution 29.4
y 00 y 0 + y = 0, y(0) = y(1) = 0.
The factor that will put this equation in Sturm-Liouville form is
Z x
F (x) = exp 1 dx = ex .
885
Solution 29.5
Consider the eigenvalue problem
Since this is a Sturm-Liouville problem, there are only real eigenvalues. By the Rayleigh quotient,
the eigenvalues are
1 R 2
d 1 d
dx + 0 dx dx
0
= R1 ,
0
2 dx
R 1 d 2
2 (1) + 0 dx dx
= R1 .
0
2 dx
This demonstrates that there are only positive eigenvalues. The general solution of the differential
equation for positive, real is
y = c1 cos x + c2 sin x .
= tan .
The positive solutions of this equation are eigenvalues with corresponding eigenfunctions sin x .
In Figure 29.1 we plot the functions x and tan(x) and draw vertical lines at x = (n 1/2), n N.
From this we see that there are an infinite number of eigenvalues, 1 < 2 < 3 < . In the
limit as n , n (n 1/2). The limit is approached from above.
Now consider the eigenvalue problem
886
From above we see that the eigenvalues satisfy
p p
1 = tan 1
and that there are an infinite number of eigenvalues. For large n, n 1 (n 1/2). The
eigenfunctions are p
n = sin 1 n x .
To solve the inhomogeneous problem, we expand the solution and the inhomogeneity in a series
of the eigenfunctions.
R1
X f (x)n (x) dx
f= fn n , fn = 0 R 1
n=1 0 n
2 (x) dx
X
y= yn n
n=1
We substitite the expansions into the differential equation to determine the coefficients.
y 00 + y = f
X
X
n yn n = fn n
n=1 n=1
Xfn p
y= sin 1 n x
n=1 n
Solution 29.6
Consider the eigenvalue problem
y 00 + y = y y(0) = 0 y(1) + y 0 (1) = 0.
From Exercise 29.5 we see that the eigenvalues satisfy
p p
1 = tan 1
and that there are an infinite number of eigenvalues. For large n, n 1 (n 1/2). The
eigenfunctions are p
n = sin 1 n x .
To solve the inhomogeneous problem, we expand the solution and the inhomogeneity in a series
of the eigenfunctions.
R1
X f (x)n (x) dx
f= fn n , fn = 0 R 1
n=1 0 n
2 (x) dx
X
y= yn n
n=1
We substitite the expansions into the differential equation to determine the coefficients.
y 00 + y = f
X
X
n yn n = fn n
n=1 n=1
X fn p
y= sin 1 n x
n=1 n
887
Solution 29.7
First consider = 0. The general solution is
y = c1 + c2 x.
For nontrivial solutions of the boundary value problem, there must be negative real solutions of
sinh = 0.
Since x = sinh x has no nonzero real solutions, this equation has no solutions for negative real .
There are no negative real eigenvalues.
Finally consider positive real . The general solution is
y = c1 cos x + c2 sin x .
For nontrivial solutions of the boundary value problem, there must be positive real solutions of
sin = 0.
Since x = sin x has no nonzero real solutions, this equation has no solutions for positive real .
There are no positive real eigenvalues.
There is only one real eigenvalue, = 0, with corresponding eigenfunction = x.
The difficulty with the boundary conditions, y(0) = 0, y 0 (0) y(1) = 0 is that the problem is not
self-adjoint. We demonstrate this by showing that the problem does not satisfy Greens identity. Let
u and v be two functions that satisfy the boundary conditions, but not necessarily the differential
equation.
Solution 29.8
First we write the equation in formally self-adjoint form,
888
Let be an eigenvalue with corresponding eigenfunction . We derive the Rayleigh quotient for .
h, L[]i = h, xi
h, (x0 )0 i = h, xi
1
[x0 ]0 h0 , x0 i = h, xi
h0 , x0 i
=
h, xi
The Bessel equation of the first kind and order zero satisfies the problem,
1 0
y 00 + y + y = 0, |y(0)| < , y(r) = 0,
x
where r is a positive root of J0 (x). We make the change of variables = x/r, u() = y(x) to obtain
the problem
1 00 1 1 0
u + u + u = 0, |u(0)| < , u(1) = 0,
r2 r r
1
u00 + u0 + r2 u = 0, |u(0)| < , u(1) = 0.
Now r2 is the eigenvalue of the problem for u(). From the Rayleigh quotient, the minimum eigen-
value obeys the inequality
h0 , x0 i
r2 ,
h, xi
where is any test function that satisfies the boundary conditions. Taking = 1 x we obtain,
R1
2 (1)x(1) dx
r R1 0 = 6,
0
(1 x)x(1 x) dx
r 6
Thus the smallest zero of J0 (x) is less than or equal to 6 2.4494. (The smallest zero of J0 (x) is
approximately 2.40483.)
Solution 29.9
We assume that 0 < l < .
Recall that the solution of a second order differential equation with piecewise continuous coef-
ficient functions is piecewise C 2 . This means that the solution is C 2 except for a finite number of
points where it is C 1 .
First consider the case = 0. A set of linearly independent solutions of the differential equation
is {1, z}. The solution which satisfies y(0) = 0 is y1 = c1 z. The solution which satisfies y() = 0 is
y2 = c2 ( z). There is a solution for the problem if there are there are values of c1 and c2 such
that y1 and y2 have the same position and slope at z = l.
889
The solution which satisfies y(0) = 0 is
y1 = c1 sin( az).
This system of equations has nontrivial solutions if and only if the determinant of the matrix is zero.
b sin( al) sin( b( l)) + a cos( al) sin( b( l)) = 0
( b a) sin (l a ( l) b) + ( b + a) sin (l a + ( l) b) = 0
Clearly this equation has an infinite number of solutions for real, positive . However, it is not clear
that this equation does not have non-real solutions. In order to prove that, we will show that the
problem is self-adjoint. Before going on to that we note that the eigenfunctions have the form
(
sin an z 0zl
n (z) =
sin bn ( z) l < z .
Now we prove that the problem is self-adjoint. We consider the class of functions which are C 2
in (0 . . . ) except at the interior point x = l where they are C 1 and which satisfy the boundary
conditions y(0) = y() = 0. Note that the differential operator is not defined at the point x = l.
Thus Greens identity,
hu|q|Lvi = hLu|q|vi
is not well-defined. To remedy this we must define a new inner product. We choose
Z l Z
hu|vi uv dx + uv dx.
0 l
This new inner product does not require differentiability at the point x = l.
The problem is self-adjoint if Greens indentity is satisfied. Let u and v be elements of our
class of functions. In addition to the boundary conditions, we will use the fact that u and v satisfy
890
y(l ) = y(l+ ) and y 0 (l ) = y 0 (l+ ).
Z l Z
hv|Lui = vu00 dx + vu00 dx
0 l
Z l Z
l
= [vu0 ]0 0 0
v u dx + [vu0 ]l v 0 u0 dx
0 l
Z l Z
= v(l)u0 (l) v 0 u0 dx v(l)u0 (l) v 0 u0 dx
0 l
Z l Z
= v 0 u0 dx v 0 u0 dx
0 l
Z l Z
l
= [v 0 u]0 + v 00 u dx [v 0 u]l + v 00 u dx
0 l
Z l Z
0 00 0
= v (l)u(l) + v u dx + v (l)u(l) + v 00 u dx
0 l
Z l Z
= v 00 u dx + v 00 u dx
0 l
= hLv|Lui
The problem is self-adjoint. Hence the eigenvalues are real. There are an infinite number of positive,
real eigenvalues n .
Solution 29.10
1. Let v be an eigenfunction with the eigenvalue . We start with the differential equation and
then take the inner product with v.
b b
[v(pv 00 )0 ]a hv 0 , (pv 00 )0 i [vqv 0 ]a + hv 0 , qv 0 i + hv, rvi = hv, svi
b
[v 0 pv 00 ]a + hv 00 , pv 00 i + hv 0 , qv 0 i + hv, rvi = hv, svi
hv 00 , pv 00 i + hv 0 , qv 0 i + hv, rvi
=
hv, svi
We see that if p, q, r, s 0 then the eigenvalues will be positive. (Of course we assume that p
and s are not identically zero.)
2. First we prove that this problem is self-adjoint. Let u and v be functions that satisfy the
boundary conditions, but do not necessarily satsify the differential equation.
hv, L[u]i hL[v], ui = hv, (pu00 )00 (qu0 )0 + rui h(pv 00 )00 (qv 0 )0 + rv, ui
Following our work in part (a) we use integration by parts to move the derivatives.
= (hv 00 , pu00 i + hv 0 , qu0 i + hv, rui) (hpv 00 , u00 i + hqv 0 , u0 i + hrv, ui)
=0
891
and is thus self-adjoint.
Let vk and vm be eigenfunctions corresponding to the distinct eigenvalues k and m . We
start with Greens identity.
c1 sin(1/4 ) + c2 sinh(1/4 ) = 0
c1 1/2 sin(1/4 ) + c2 1/2 sinh(1/4 ) = 0
n = (n)4 , n = sin(nx), n N.
892
Chapter 30
Never try to teach a pig to sing. It wastes your time and annoys the pig.
-?
The integral is convergent to S(x) if, given any > 0, there exists T (x, ) such that
Z
f (x, t) dt S(x) < for all > T (x, ).
c
If f (x, t) is continuous for x [a, b] and t [c, ) then for a < x0 < b,
Z Z
lim f (x, t) dt = lim f (x, t) dt.
xx0 c c xx0
f
If x is continuous, then
Z Z
d
f (x, t) dt = f (x, t) dt.
dx c c x
893
30.2 The Riemann-Lebesgue Lemma
Rb
Result 30.2.1 If a
|f (x)| dx exists, then
Z b
f (x) sin(x) dx 0 as .
a
Before we try to justify the Riemann-Lebesgue lemma, we will need a preliminary result. Let
be a positive constant.
Z b
b 1
sin(x) dx = cos(x)
a a
2
.
We will prove the Riemann-Lebesgue lemma for the case when f (x) has limited total fluctuation
on the interval (a, b). We can express f (x) as the difference of two functions
when these limits exist. The Cauchy principal value of the integral is defined
Z Z a
PV f (x) dx = lim f (x) dx.
a a
894
R
Example 30.3.1
x dx diverges, but
Z Z a
PV x dx = lim x dx = lim (0) = 0.
a a a
If the improper integral converges, then the Cauchy principal value exists and is equal to the
value of the integral. The principal value of the integral of an odd function is zero. If the principal
value of the integral of an even function exists, then the integral converges.
when the limits exist. The Cauchy principal value of the integral is defined
!
Z b Z Z b
PV f (x) dx = lim f (x) dx + f (x) dx ,
a 0+ a
895
896
Chapter 31
for all values of s for which the integral exists. The Laplace transform of f (t) is a function of s
which we will denote f(s). 1
A function f (t) is of exponential order if there exist constants t0 and M such that
Example 31.1.1 Consider the Laplace transform of f (t) = 1. Since f (t) = 1 is of exponential
order for any > 0, the Laplace transform integral converges for <(s) > 0.
Z
f(s) = est dt
0
1 st
= e
s 0
1
=
s
897
Example 31.1.2 The function f (t) = t et is of exponential order for any > 1. We compute the
Laplace transform of this function.
Z
f(s) = est t et dt
Z0
= t e(1s)t dt
0
Z
1 1
= t e(1s)t e(1s)t dt
1s 0 0 1 s
1 (1s)t
= e
(1 s)2 0
1
= for <(s) > 1.
(1 s)2
where c > 0.
Z
L[H(t c)] = est H(t c) dt
0
Z
= est dt
c
st
e
=
s c
ecs
= for <(s) > 0
s
f (t) = L1 [f(s)].
We compute the inverse Laplace transform with the Mellin inversion formula.
Z +
1
f (t) = est f(s) ds
2
898
Here is a real constant that is to the right of the singularities of f(s).
To see why the Mellin inversion formula is correct, we take the Laplace transform of it. Assume
that f (t) is of exponential order . Then will be to the right of the singularities of f(s).
Z +
1
L[L 1
[f(s)]] = L e f(z) dz
zt
2
Z Z +
1
= est ezt f(z) dz dt
0 2
+
f(z)
Z
1
= dz
2 sz
We would like to evaluate this integral by closing the path of integration with a semi-circle of radius
R in the right half plane and applying the residue theorem. However, in order for the integral along
the semi-circle to vanish as R , f(z) must vanish as |z| . If f(z) vanishes we can use the
maximum modulus bound to show that the integral along the semi-circle vanishes. This we assume
that f(z) vanishes at infinity.
Consider the integral,
I
1 f (z)
dz,
2 C s z
where C is the contour that starts at R, goes straight up to +R, and then follows a semi-circle
back down to R. This contour is shown in Figure 31.1.
Im(z)
+iR
s
Re(z)
-iR
Figure 31.1: The Laplace Transform Pair Contour.
899
Note that the contour is traversed in the negative direction. Since f(z) decays as |z| , the
semicircular contribution to the integral will vanish as R . Thus
+
f(z)
Z
1
dz = f(s).
2 sz
Im(s)
+iR
CR
BR
Re(s)
-iR
Figure 31.2: The Path of Integration for the Inverse Laplace Transform.
3
s = 1 + R e ,
2 2
st t(1+R e ) t tR cos
e = e =e e et
900
Z Z
1 st 1
est
ds e
s2 ds
CR s2 CR
1
R et
(R 1)2
0 as R
Let f(s) be analytic except for isolated poles at s1 , s2 , . . . , sN and let be to the right of these
poles. Also, let f(s) 0 as |s| . Define BR to be the straight line from R to + R and
CR to be the semicircular path from + R to R. If R is large enough to enclose all the poles,
then
I N
1 X
est f(s) ds = Res(est f(s), sn )
2 BR +CR n=1
Z N Z
1 X 1
est f(s) ds = Res(e f(s), sn )
st
est f(s) ds.
2 BR n=1
2 CR
Now lets examine the integral along CR . Let the maximum of |f(s)| on CR be MR . We can
parameterize the contour with s = + R e , /2 < < 3/2.
Z Z 3/2
i
est f(s) ds = et(+R e ) f( + R e )R e d
CR /2
Z 3/2
et etR cos RMR d
/2
Z
= RMR et etR sin d
0
< RMR et .
tR
= MR et
t
We use that MR 0 as R .
0 as R
Z + N
1 X
est f(s) ds = Res(est f(s), sn )
2 n=1
N
X
L1 [f(s)] = Res(est f(s), sn )
n=1
901
Result 31.2.1 If f(s) is analytic except for poles at s1 , s2 , . . . , sN and f(s)
0 as |s| then the inverse Laplace transform of f(s) is
N
X
f (t) = L [f(s)] =
1
Res(est f(s), sn ), for t > 0.
n=1
1
Example 31.2.2 Consider the inverse Laplace transform of s3 s2 .
First we factor the denominator.
1 1 1
3 2
= 2 .
s s s s1
Taking the inverse Laplace transform,
1 st 1 1 st 1 1
L1 3 = Res e , 0 + Res e , 1
s s3 s2 s 1 s2 s 1
d est
= + et
ds s 1 s=0
1 t
= 2
+ + et
(1) 1
Thus we have that
1 1
L = et t 1, for t > 0.
s3 s2
902
Let be any positive number. The inverse Laplace transform of 1 is
s
Z +
1 1
f (t) = est ds.
2 s
We will evaluate the integral by deforming it to wrap around the branch cut. Consider the integral
+
on the contour shown in Figure 31.3. CR and CR are circular arcs of radius R. B is the vertical
line at <(s) = joining the two arcs. C is a semi-circle in the right half plane joining and .
L+ and L are lines joining the circular arcs at =(s) = .
CR+
B
/2
L+ C
L- /2+
CR-
Figure 31.3: Path of Integration for 1/ s
Z Z Z Z Z Z !
1 1
+ + + + + est ds = 0.
2 B +
CR L+ C L
CR s
Z Z /2 Z
ds = d + d.
+
CR /2 /2
The first integral vanishes by the maximum modulus bound. Note that the length of the path of
integration is less than 2.
Z !
/2
st 1
d max+ e (2)
s
/2 sCR
1
= et (2)
R
0 as R
903
+
The second integral vanishes by Jordans Lemma. A parameterization of CR is s = R e .
Z Z
1
eR e t 1 d
R e t
e d
R e R e
/2
/2
Z
1
eR cos()t d
R /2
Z /2
1
eRt sin() d
R 0
1
<
R 2Rt
0 as R
We could show that the integral along CR vanishes by the same method. Now we have
Z Z Z Z
1 1
+ + + est ds = 0.
2 B L+ C L s
We can show that the integral along C vanishes as 0 with the maximum modulus bound.
Z
st 1
st 1
e ds max e ()
C s sC s
1
< et
0 as 0
Now we can express the inverse Laplace transform in terms of the integrals along L+ and L .
Z + Z Z
1 1 1 1 1 1
f (t) est ds = est ds est ds
2 s 2 L+ s 2 L s
On L+ , s = r e , ds = e dr = dr; on L , s = r e , ds = e dr = dr. We can combine the
integrals along the top and bottom of the branch cut.
Z 0 Z
1 1
f (t) = ert (1) dr ert (1) dr
2 r 2 0 r
Z
1 2
= ert dr
2 0 r
1
= (1/2)
t
1
=
t
Thus the inverse Laplace transform of 1 is
s
1
f (t) = , for t > 0.
t
904
31.2.3 Asymptotic Behavior of f(s)
Consider the behavior of Z
f(s) = est f (t) dt
0
as s +. Assume that f (t) is analytic in a neighborhood of t = 0. Only the behavior of the
integrand near t = 0 will make a significant contribution to the value of the integral. As you move
away from t = 0, the est term dominates. Thus we could approximate the value of f(s) by replacing
f (t) with the first few terms in its Taylor series expansion about the origin.
Z
t2
f(s) est f (0) + tf 0 (0) + f 00 (0) + dt as s +
0 2
Using
n!
L [tn ] =
sn+1
we obtain
f (0) f 0 (0) f 00 (0)
f(s) + 2 + + as s +.
s s s3
Example 31.2.5 The Taylor series expansion of sin t about the origin is
t3
sin t = t + O(t5 ).
6
Thus the Laplace transform of sin t has the behavior
1 1
L[sin t] 4 + O(s6 ) as s +.
s2 s
We corroborate this by expanding L[sin t].
1
L[sin t] =
s2
+1
s2
=
1 + s2
X
2
=s (1)n s2n
n=0
1 1
= 2 4 + O(s6 )
s s
905
f(s)
hR i
t
L 0
f ( ) d = s
= sf(s) f (0)
d
L dt f (t)
h i
d2 2 0
L dt2 f (t) = s f (s) sf (0) f (0)
d2
L f (t) = sL[f 0 (t)] f 0 (0)
dt2
= s2 f(s) sf (0) f 0 (0)
Let f (t) and g(t) be continuous. The convolution of f (t) and g(t) is defined
Z t Z t
h(t) = (f g) = f ( )g(t ) d = f (t )g( ) d
0 0
h(s) = f(s)g(s).
To show this,
Z Z t
h(s) = est f ( )g(t ) d dt
Z0 Z 0
= est f ( )g(t ) dt d
0
Z Z
= es f ( ) es(t ) g(t ) dt d
0
Z Z
s
= e f ( ) d es g() d
0 0
= f(s)g(s)
1
Example 31.3.1 Consider the inverse Laplace transform of s3 s2 . First we factor the denominator.
1 1 1
= 2
s3 s2 s s1
We know the inverse Laplace transforms of each term.
1 1
L1 2 = t, L1 = et
s s1
906
We apply the convolution theorem.
Z t
1 1 1
L = et d
s2 s 1 0
Z t
t
= et e 0 et e d
0
= t 1 + et
1 1
L1 = et t 1.
s2 s 1
One can see from this example that taking the Laplace transform of a constant coefficient differ-
ential equation reduces the differential equation for y(t) to an algebraic equation for y(s).
907
Example 31.4.2 Consider the differential equation
We use the convolution theorem to find the inverse Laplace transform of y(s).
Z t
1
y(t) = sin(2 ) cos(t ) d + cos t
0 2
Z t
1
= sin(t + ) + sin(3 t) d + cos t
4 0
t
1 1
= cos(t + ) cos(3 t) + cos t
4 3 0
1 1 1
= cos(2t) + cos t cos(2t) + cos(t) + cos t
4 3 3
1 4
= cos(2t) + cos(t)
3 3
Alternatively, we can find the inverse Laplace transform of y(s) by first finding its partial fraction
expansion.
s/3 s/3 s
y(s) = +
s2 + 1 s2 + 4 s2 + 1
s/3 4s/3
= 2 + 2
s +4 s +1
1 4
y(t) = cos(2t) + cos(t)
3 3
y 00 + 5y 0 + 2y = 0, y(0) = 1, y 0 (0) = 2.
y(t) = 1 + 2t + O(t2 )
1 2
y(s) + 2 + O(s3 ), as s +.
s s
908
31.5 Systems of Constant Coefficient Differential Equations
The Laplace transform can be used to transform a system of constant coefficient differential equations
into a system of algebraic equations. This should not be surprising, as a system of differential
equations can be written as a single differential equation, and vice versa.
y10 = y2
y20 = y3
y30 = y3 y2 y1 + t3
sy1 y1 (0) = y2
sy2 y2 (0) = y3
6
sy3 y3 (0) = y3 y2 y1 +
s4
The first two equations can be written as
y3
y1 =
s2
y3
y2 = .
s
We substitute this into the third equation.
y3 y3 6
sy3 = y3 2 + 4
s s s
6
(s3 + s2 + s + 1)y3 = 2
s
6
y3 = 2 3 .
s (s + s2 + s + 1)
We solve for y1 .
6
y1 =
s4 (s3
+ s2 + s + 1)
1 1 1 1s
y1 = 4 3 + +
s s 2(s + 1) 2(s2 + 1)
t3 t2 1 1 1
y1 = + et + sin t cos t.
6 2 2 2 2
We can find y2 and y3 by differentiating the expression for y1 .
t2 1 1 1
y2 = t et + cos t + sin t
2 2 2 2
1 t 1 1
y3 = t 1 + e sin t + cos t
2 2 2
909
31.6 Exercises
Exercise 31.1
Find the Laplace transform of the following functions:
1. f (t) = eat
2. f (t) = sin(at)
3. f (t) = cos(at)
4. f (t) = sinh(at)
5. f (t) = cosh(at)
sin(at)
6. f (t) =
t
Z t
sin(au)
7. f (t) = du
0 u
(
1, 0 t <
8. f (t) =
0, t < 2
and f (t + 2) = f (t) for t > 0. That is, f (t) is periodic for t > 0.
Exercise 31.2
Show that L[af (t) + bg(t)] = aL[f (t)] + bL[g(t)].
Exercise 31.3
Show that if f (t) is of exponential order ,
Exercise 31.4
Show that
dn
L[tn f (t)] = (1)n [f (s)] for n = 1, 2, . . .
dsn
Exercise 31.5
R f (t)
Show that if 0 t dt exists for positive then
Z
f (t)
L = f() d.
t s
Exercise 31.6
Show that
t
f(s)
Z
L f ( ) d = .
0 s
Exercise 31.7
Show that if f (t) is periodic with period T then
RT
0
est f (t) dt
L[f (t)] = .
1 esT
910
Exercise 31.8
The function f (t) t 0, is periodic with period 2T ; i.e. f (t + 2T ) f (t), and is also odd with
period T ; i.e. f (t + T ) = f (t). Further,
Z T
f (t) est dt = g(s).
0
Show that the Laplace transform of f (t) is f(s) = g(s)/(1 + esT ). Find f (t) such that f(s) =
s1 tanh(sT /2).
Exercise 31.9
Find the Laplace transform of t , > 1 by two methods.
1. Assume that s is complex-valued. Make the change of variables z = st and use integration in
the complex plane.
2. Show that the Laplace transform of t is an analytic function for <(s) > 0. Assume that s is
real-valued. Make the change of variables x = st and evaluate the integral. Then use analytic
continuation to extend the result to complex-valued s.
Exercise 31.11
Find the Laplace transform of t ln t. Write the answer in terms of the digamma function, () =
0 ()/(). What is the answer for = 0?
Exercise 31.12
Find the inverse Laplace transform of
1
f(s) =
s3 2s2 + s 2
with the following methods.
1. Expand f(s) using partial fractions and then use the table of Laplace transforms.
2. Factor the denominator into (s 2)(s2 + 1) and then use the convolution theorem.
Exercise 31.13
Solve the differential equation
using the Laplace transform. This equation represents a weakly damped, driven, linear oscillator.
Exercise 31.14
Solve the problem,
y 00 ty 0 + y = 0, y(0) = 0, y 0 (0) = 1,
with the Laplace transform.
911
Exercise 31.15
Prove the following relation between the inverse Laplace transform and the inverse Fourier transform,
1 ct 1
L1 [f(s)] = e F [f (c + )],
2
d4 y
y = t, y(0) = y 0 (0) = y 00 (0) = y 000 (0) = 0.
dt4
du
+ u(t) u(t 1) = 0, t 0,
dt
and the initial condition u(t) = u0 (t), 1 t 0, where u0 (t) is given. Show that the Laplace
transform u(s) of u(t) satisfies
0
es
Z
u0 (0)
u(s) = + est u0 (t) dt.
1 + s es 1 + s es 1
Exercise 31.20
Let the function f (t) be defined by
(
1 0t<
f (t) =
0 t < 2,
and for all positive values of t so that f (t + 2) = f (t). That is, f (t) is periodic with period 2.
Find the solution of the intial value problem
d2 y
y = f (t); y(0) = 1, y 0 (0) = 0.
dt2
Examine the continuity of the solution at t = n, where n is a positive integer, and verify that the
solution is continuous and has a continuous derivative at these points.
912
Exercise 31.21
Use Laplace transforms to solve
Z t
dy
+ y( ) d = et , y(0) = 1.
dt 0
Exercise 31.22
An electric circuit gives rise to the system
di1
L + Ri1 + q/C = E0
dt
di2
L + Ri2 q/C = 0
dt
dq
= i1 i2
dt
with initial conditions
E0
i1 (0) = i2 (0) =
, q(0) = 0.
2R
Solve the system by Laplace transform methods and show that
E0 E0 t
i1 = + e sin(t)
2R 2L
where
R 2
= and 2 = 2 .
2L LC
Exercise 31.23
Solve the initial value problem,
y 00 + 4y 0 + 4y = 4 et , y(0) = 2, y 0 (0) = 3.
913
31.7 Hints
Hint 31.1
Use the differentiation and integration properties of the Laplace transform where appropriate.
Hint 31.2
Hint 31.3
Hint 31.4
g
If the integral is uniformly convergent and s is continuous then
Z b Z b
d
g(s, t) dt = g(s, t) dt
ds a a s
Hint 31.5
Z
1 sx
etx dt = e
s x
Hint 31.6
Use integration by parts.
Hint 31.7
Z Z (n+1)T
X
est f (t) dt = est f (t) dt
0 n=0 nT
Hint 31.8
Hint 31.9
Write the answer in terms of the Gamma function.
Hint 31.10
Hint 31.11
Hint 31.12
Hint 31.13
Hint 31.14
914
Hint 31.15
Hint 31.16
Hint 31.17
Hint 31.18
Hint 31.19
Hint 31.20
Hint 31.21
Hint 31.22
Hint 31.23
915
31.8 Solutions
Solution 31.1
1.
Z
L eat = est eat dt
0
Z
= e(sa)t dt
0
(sa)t
e
= for <(s) > <(a)
sa 0
1
L eat =
for <(s) > <(a)
sa
2.
Z
L[sin(at)] = est sin(at) dt
0
Z
1
= e(s+a)t e(sa)t dt
2 0
1 e(s+a)t e(sa)t
= + , for <(s) > 0
2 s a s + a 0
1 1 1
=
2 s a s + a
a
L[sin(at)] = for <(s) > 0
s2 + a2
3.
d sin(at)
L[cos(at)] = L
dt a
sin(at)
= sL sin(0)
a
s
L[cos(at)] = for <(s) > 0
s2 + a2
4.
Z
L[sinh(at)] = est sinh(at) dt
0
Z
1
= e(s+a)t e(sa)t dt
2 0
1 e(s+a)t e(sa)t
= + for <(s) > |<(a)|
2 sa s+a 0
1 1 1
=
2 sa s+a
a
L[sinh(at)] = for <(s) > |<(a)|
s2 a2
916
5.
d sinh(at)
L[cosh(at)] = L
dt a
sinh(at)
= sL sinh(0)
a
s
L[cosh(at)] = for <(s) > |<(a)|
s2 a2
Now we use the Laplace transform of sin(at) to compute the Laplace transform of sin(at)/t.
Z
sin(at) a
L = 2 + a2
d
t
Zs
1 d
= 2+1 a
s (/a)
h i
= arctan
a s
s
= arctan
2 a
sin(at) a
L = arctan
t s
7. Z t
sin(a ) 1 sin(at)
L d = L
0 s t
Z t
sin(a ) 1 a
L d = arctan
0 s s
8.
R 2
est f (t) dt
0
L[f (t)] =
1 e2s
R st
e dt
= 0 2s
1e
1 es
=
s(1 e2s )
1
L[f (t)] =
s(1 + es )
Solution 31.2
Z
est af (t) + bg(t) dt
L[af (t) + bg(t)] =
0
Z Z
=a est f (t) dt + b est g(t) dt
0 0
= aL[f (t)] + bL[g(t)]
917
Solution 31.3
If f (t) is of exponential order , then ect f (t) is of exponential order c + .
Z
ct
L[e f (t)] = est ect f (t) dt
0
Z
= e(sc)t f (t) dt
0
= f(s c) for s > c +
Solution 31.4
First consider the Laplace transform of t0 f (t).
Solution
R 31.5
If 0 f (t)
t dt exists for positive and f (t) is of exponential order then the Laplace transform of
f (t)/t is defined for s > .
Z
f (t) 1
L = est f (t) dt
t t
Z0 Z
= et d f (t) dt
Z0 Zs
= et f (t) dt d
s 0
Z
= f() d
s
Solution 31.6
Z t Z Z t
st
L f ( ) d = e f ( ) d dx
0 0 0
st Z t Z Z t
est d
e
= f ( ) d f ( ) d dt
s 0 0 0 s dt 0
Z
1
= est f (t) dt
s 0
1
= f(s)
s
918
Solution 31.7
f (t) is periodic with period T .
Z
L[f (t)] = est f (t) dt
0
Z T Z 2T
= est f (t) dt + est f (t) dt +
0 T
Z (n+1)T
X
= est f (t) dt
n=0 nT
X Z T
= es(t+nT ) f (t + nT ) dt
n=0 0
X Z T
= esnT est f (t) dt
n=0 0
Z T
X
= est f (t) dt esnT
0 n=0
RT
0
est f (t) dt
=
1 esT
Solution 31.8
Z
f(s) = est f (t) dt
0
n Z (n+1)T
X
= est f (t) dt
0 nT
n Z
X T
= es(t+nT ) f (t + nT ) dt
0 0
n
X Z T
snT
= e est (1)n f (t) dt
0 0
Z T n
X n
= est f (t) dt (1)n esT
0 0
g(s)
f(s) = , for <(s) > 0
1 + esT
esT /2 esT /2
s1 tanh(sT /2) = s1
esT /2 + esT /2
1 esT
= s1
1 + esT
We have
T
1 est
Z
g(s) f (t) est dt = .
0 s
919
By inspection we see that this is satisfied for f (t) = 1 for 0 < t < T . We conclude:
(
1 for t [2nT . . . (2n + 1)T ),
f (t) =
1 for t [(2n + 1)T . . . (2n + 2)T ),
where n Z.
Solution 31.9
The Laplace transform of t , > 1 is
Z
f(s) = est t dt.
0
Assume s is complex-valued. The integral converges for <(s) > 0 and > 1.
Im(z)
arg(s)
Re(z)
Since the integrand is analytic in the domain < r < R, 0 < < arg(s), the integral along the
boundary of this domain vanishes.
arg(s) Z arg(s) Z !
Z R
Z Re e
+ + + ez z dz = 0
R R e arg(s) e arg(s)
We show that the integral along CR , the circular arc of radius R, vanishes as R with the
maximum modulus integral bound.
Z
e z dz R| arg(s)| max ez z
z
CR zCR
= R| arg(s)| eR cos(arg(s)) R
0 as R .
920
The integral along C , the circular arc of radius , vanishes as 0. We demonstrate this with the
maximum modulus integral bound.
Z
z
e z dz | arg(s)| max ez z
C zC
= | arg(s)| e cos(arg(s))
0 as 0.
Taking the limit as 0 and R , we see that the integral along C is equal to the integral
along the real axis. Z Z
ez z dz = ez z dz
C 0
( + 1)
L [t ] =
s+1
In the case that is a non-negative integer = n > 1 we can write this in terms of the factorial.
n!
L [tn ] =
sn+1
exists for <(s) > 0. It converges uniformly for <(s) c > 0. On this domain of uniform convergence
we can interchange differentiation and integration.
Z
df d
= est t dt
ds ds 0
Z
st
= e t dt
0 s
Z
= t est t dt
0
Z
= est t+1 dt
0
Since f0 (s) is defined for <(s) > 0, f(s) is analytic for <(s) > 0.
Let be real and positive. We make the change of variables x = t.
Z x 1
f() = ex dx
0
Z
= (+1) ex x dx
0
( + 1)
=
+1
Note that the function
( + 1)
f(s) =
s+1
921
is the analytic continuation of f(). Thus we can define the Laplace transform for all complex s in
the right half plane.
( + 1)
f(s) =
s+1
Solution 31.10
Note that f(s) is an analytic function for <(s) > 0. Consider real-valued s > 0. By definition, f(s)
is
Z
f(s) = est ln t dt.
0
Z x dx
f(s) = ex ln
0 s s
Z
1
= ex (ln x ln s) dx
s 0
ln |s| x 1 x
Z Z
= e dx + e ln x dx
s 0 s 0
ln s
= , for real s > 0
s s
Log s
f(s) = .
s s
Solution 31.11
Define
Z
f(s) = L[t ln t] = est t ln t dt.
0
This integral defines f(s) for <(s) > 0. Note that the integral converges uniformly for <(s) c > 0.
On this domain we can interchange differentiation and integration.
Z Z
st
f0 (s) = t est t Log t dt
e t ln t dt =
0 s 0
Since f0 (s) also exists for <(s) > 0, f(s) is analytic in that domain.
922
Let be real and positive. We make the change of variables x = t.
f() = L [t ln t]
Z
= et t ln t dt
0
Z x x 1
= ex ln dx
0
Z
1
= +1 ex x (ln x ln ) dx
0
Z Z
1
= +1 ex x ln x dx ln ex x dx
Z0 0
1
ex x dx ln ( + 1)
= +1
0 Z
1 d
= +1 ex x dx ln ( + 1)
d 0
1 d
= +1 ( + 1) ln ( + 1)
d
0
1 ( + 1)
= +1 ( + 1) ln
( + 1)
1
= +1 ( + 1) (( + 1) ln )
Note that the function
1
f(s) = ( + 1) (( + 1) ln s)
s+1
is an analytic continuation of f(). Thus we can define the Laplace transform for all s in the right
half plane.
1
L[t ln t] = +1 ( + 1) (( + 1) ln s) for <(s) > 0.
s
For the case = 0, we have
1
L[ln t] = (1) ((1) ln s)
s1
ln s
L[ln t] = ,
s
Solution 31.12
Method 1. We factor the denominator.
1 1
f(s) = 2
=
(s 2)(s + 1) (s 2)(s )(s + )
We expand the function in partial fractions and simplify the result.
1 1/5 (1 2)/10 (1 + 2)/10
=
(s 2)(s )(s + ) s2 s s+
1 1 1 s + 2
f(s) =
5 s 2 5 s2 + 1
923
We use a table of Laplace transforms to do the inversion.
1 s 1
L[e2t ] = , L[cos t] = , L[sin t] =
s2 s2 + 1 s2 + 1
1 2t
f (t) = e cos t 2 sin t
5
est
X
f (t) = Res , sn
s =2,,
(s 2)(s )(s + )
n
e2t et et
= + +
(2 )(2 + ) ( 2)(2) ( 2)(2)
e2t (1 + 2) et (1 2) et
= + +
5 10 10
e2t et + et et et
= + +
5 10 5
1 2t
f (t) = e cos t 2 sin t
5
Solution 31.13
924
We use a table of Laplace transforms to find the inverse Laplace transform of the first term.
" # r !
1 1 1 t/2 2
L 2 =q e sin 1 t
(s + 2 )2 + 1 4 1
2 4
4
We define r
2
= 1
4
2
to get rid of some clutter. Now we apply the convolution theorem to invert ys.
Z t
1 /2
y(t) = e sin ( ) sin(t ) d
0
t/2 1 1 1
y(t) = e cos (t) + sin (t) cos t
2
15
10
20 40 60 80 100
-5
-10
-15
Solution 31.14
We consider the solutions of
y 00 ty 0 + y = 0, y(0) = 0, y 0 (0) = 1
which are of exponential order for any > 0. We take the Laplace transform of the differential
equation.
d
s2 y 1 + (sy) + y = 0
ds
0 2 1
y + s + y =
s s
2
1 es /2
y(s) = 2
+c 2
s s
2 Evaluate the convolution integral by inspection.
925
We use that
y(0) y 0 (0)
y(s) + 2 +
s s
to conclude that c = 0.
1
y(s) =
s2
y(t) = t
Solution 31.15
Z c+
1
L1 [f(s)] = est f(s) ds
2 c
First we make the change of variable s = c + .
Z
1 ct
L1 [f(s)] = e et f(c + ) d
2
The first integral vanishes by the maximum modulus bound. Note that the length of the path of
integration is less than 2.
Z
/2
st 1/2 2(as)1/2
d max e e (2)
s
/2 [/2.../2]
= et (2)
R
0 as R
926
CR+
B
/2
L+ C
L- /2+
CR-
We show that the integral along C vanishes as 0 with the maximum modulus bound.
Z 1/2
1/2
st 2(as)1/2 max est 2(as)1/2
e e ds e ()
C s sC s
t
e
0 as 0.
Now we can express the inverse Laplace transform in terms of the integrals along L+ and L
Z + 1/2
1 1/2
f (t) est e2(as) ds
2 s
Z Z
1 1/2 2(as)1/2
1 1/2 1/2
= est e ds est e2(as) ds.
2 L+ s 2 L s
927
On L+ , s = r e , ds = e dr = dr; on L , s = r e , ds = e dr = dr. We can combine the
integrals along the top and bottom of the branch cut.
Z 0
Z
1 1
f (t) = ert e2 a r ( dr) ert e2 a r ( dr)
2 r 2 0 r
Z
1 1
= ert e2 a r + e2 a r dr
2 0 r
Z
1 1 rt
= e 2 cos 2 a r dr
2 0 r
We make the change of variables x = r.
Z
1 1 tx2
= e cos 2 ax 2x dx
0 x
Z
2 2
= etx cos 2 ax dx
0
r
2 4a/(4t)
= e
4t
ea/t
=
t
Thus the inverse Laplace transform is
ea/t
f (t) =
t
Solution 31.17
We consider the problem
d4 y
y = t, y(0) = y 0 (0) = y 00 (0) = y 000 (0) = 0.
dt4
We take the Laplace transform of the differential equation.
1
s4 y(s) s3 y(0) s2 y 0 (0) sy 00 (0) y 000 (0) y(s) =
s2
1
s4 y(s) y(s) =
s2
1
y(s) =
s2 (s4 1)
There are several ways in which we could carry out the inverse Laplace transform to find y(t). We
could expand the right side in partial fractions and then use a table of Laplace transforms. Since
the function is analytic except for isolated singularities and vanishes as s we could use the
result,
XN
L1 [f(s)] = Res est f(s), sn ,
n=1
where {sk }nk=1 are the singularities of f(s). Since we can write the function as a product of simpler
terms we could also apply the convolution theorem.
We will first do the inverse Laplace transform by expanding the function in partial fractions to
obtain simpler rational functions.
1 1
=
s2 (s4 1) s2 (s
1)(s + 1)(s )(s + )
a b c d e f
= 2+ + + + +
s s s1 s+1 s s+
928
1
a= = 1
s4 1 s=0
d 1
b= =0
ds s4 1 s=0
1 1
c= 2 =
s (s + 1)(s )(s + ) s=1 4
1 1
d= 2 =
s (s 1)(s )(s + ) s=1 4
1 1
e= 2 =
s (s 1)(s + 1)(s + ) s= 4
1 1
f= 2 =
s (s 1)(s + 1)(s ) s= 4
1 1 1 1
= 2 2
s2 (s4 1) s s + 1 s2 1
Now we use the convolution theorem to find the solution for t > 0.
Z t
1 1
L = sinh( ) sin(t ) d
s4 1 0
1
= (sinh t sin t)
2
Z t
1 1
L1 = (sinh sin ) (t ) d
s2 (s4 1) 0 2
1
= t + (sinh t + sin t)
2
929
Solution 31.18
Z t
dy
= sin t + y( ) cos(t ) d
dt 0
1 s
sy(s) y(0) = + y(s) 2
s2 + 1 s +1
(s3 + s)y(s) sy(s) = 1
1
y(s) = 3
s
t2
y(t) =
2
Solution 31.19
The Laplace transform of u(t 1) is
Z
L[u(t 1)] = est u(t 1) dt
0
Z
= es(t+1) u(t) dt
1
Z 0 Z
= es est u(t) dt + es est u(t) dt
1 0
Z 0
= es est u0 (t) dt + es u(s).
1
Solution 31.20
We consider the problem,
d2 y
y = f (t), y(0) = 1, y 0 (0) = 0,
dt2
930
where f (t) is periodic with period 2 and is defined by,
(
1 0 t < ,
f (t) =
0 t < 2.
Clearly the solution is continuous because the integral of a bounded function is continuous. The
first derivative of the solution is
Z t
y 0 (t) = sinh t + f (t) sinh(0) + f ( ) cosh(t ) d
0
Z t
y 0 (t) = sinh t + f ( ) cosh(t ) d
0
931
We use a table of Laplace transforms to do the inversion.
1 1
y = et + (sin(t) + 3 cos(t))
2 2
Solution 31.22
We consider the problem
di1
L + Ri1 + q/C = E0
dt
di2
L + Ri2 q/C = 0
dt
dq
= i1 i2
dt
E0
i1 (0) = i2 (0) = , q(0) = 0.
2R
We take the Laplace transform of the system of differential equations.
E0 q E0
L si1 + Ri1 + =
2R C s
E0 q
L si2 + Ri2 =0
2R C
sq = i1 i2
932
Now we can do the inversion with a table of Laplace transforms.
E0 1 ()t (+)t
i1 = + e e
2 R 2L
E0 1 ()t (+)t
i2 = e e
2 R 2L
CE0
q= 1+ ( + ) e(+)t ( ) e()t
2 2
We simplify the expressions to obtain the solutions.
E0 1 1 t
i1 = + e sin(t)
2 R L
E0 1 1 t
i2 = e sin(t)
2 R L
CE0
q= 1 et cos(t) + sin(t)
2
Solution 31.23
We consider the problem
y 00 + 4y 0 + 4y = 4 et , y(0) = 2, y 0 (0) = 3
We take the Laplace transform of the differential equation and solve for y(s).
4
s2 y sy(0) y 0 (0) + 4sy 4y(0) + 4y =
s+1
4
s2 y 2s + 3 + 4sy 8 + 4y =
s+1
4 2s + 5
y = +
(s + 1)(s + 2)2 (s + 2)2
4 2 3
y =
s + 1 s + 2 (s + 2)2
y = 4 et (2 + 3t) e2t
933
934
Chapter 32
935
Thus the expansion of f (x) for finite L
X nx/L
f (x) cn e
n=
L
Z L
1
cn = enx/L f (x) dx
2 L
Of course this derivation is only heuristic. In the next section we will explore these formulas
more carefully.
Since the integral in parentheses is uniformly convergent, we can interchange the order of integration.
Z Z L !
1 (x)
= f () e d d
2 L
Z L
1 e(x)
= f () d
2 ( x) L
Z
1 1
= f () eL(x) eL(x) d
2 ( x)
1 sin(L( x))
Z
= f () d
x
1
Z
sin(L)
= f ( + x) d.
In Example 32.3.3 we will show that
Z
sin(L)
d = .
0 2
936
Now we have an identity for f (x).
Z Z
1
f (x) = f () e d ex d.
2
Piecewise Continuous Functions. Now consider the case that f (x) is only piecewise continuous.
f (x+ ) 1
Z
sin(L)
= f (x+ ) d
2 0
f (x ) 1 0
Z
sin(L)
= f (x ) d
2
0
f (x+ ) + f (x ) f (x + ) f (x )
Z
I(x, L) = sin(L) d
2
Z
f (x + ) f (x+ )
sin(L) d
0
f (x + ) f (x )
is bounded for 0, and
f (x + ) f (x+ )
is bounded for 0.
R
Result 32.2.1 Let f (x) be piecewise continuous with |f (x)| dx < .
The Fourier transform of f (x) is defined
Z
1
f () = F[f (x)] = f (x) ex dx.
2
We see that the integral is uniformly convergent. The inverse Fourier transform
is defined Z
f (x+ ) + f (x ) 1
= F [f ()] = f() ex d.
2
937
1
Multiplying the right side of this equation by 1 = yields
Z Z
1
f (x) = f () e d ex d.
2
Setting = 2 and choosing sign in the exponentials gives us the Fourier transform pair
Z
1
f() = f (x) ex dx
2
Z
1
f (x) = f() ex d.
2
and
Z
f() = f (x) ex dx
Z
1
f (x) = f() ex d.
2
Be aware of the different definitions when reading other texts or consulting tables of Fourier trans-
forms.
converges for real , then finding the transform of a function is just a matter of direct integration.
We will consider several examples of such garden variety functions in this subsection. Later on we
will consider the more interesting cases when the integral does not converge for real .
Example 32.3.1 Consider the Fourier transform of ea|x| , where a > 0. Since the integral of ea|x|
is absolutely convergent, we know that the Fourier transform integral converges for real . We write
out the integral.
Z
h i 1
F ea|x| = ea|x| ex dx
2
Z 0 Z
1 1
= e axx
dx + eaxx dx
2 2 0
Z 0 Z
1 1
= e(a<()+=())x dx + e(a<()+=())x dx
2 2 0
The integral converges for |=()| < a. This domain is shown in Figure 32.1.
938
Im(z)
Re(z)
We can extend the domain of the Fourier transform with analytic continuation.
h i a
F ea|x| = , for 6= a
( 2 + a2 )
1
Example 32.3.2 Consider the Fourier transform of f (x) = x , > 0.
Z
1 1 1
F = ex dx
x 2 x
The integral converges for =() = 0. We will evaluate the integral for positive and negative real
values of .
For > 0, we will close the path of integration in the lower half-plane. Let CR be the contour
from x = R to x = R following a semicircular path in the lower half-plane. The integral along CR
vanishes as R by Jordans Lemma.
Z
1
ex dx 0 as R .
CR x
Since the integrand is analytic in the lower half-plane the integral vanishes.
1
F =0
x
For < 0, we will close the path of integration in the upper half-plane. Let CR denote the
semicircular contour from x = R to x = R in the upper half-plane. The integral along CR vanishes
939
as R goes to infinity by Jordans Lemma. We evaluate the Fourier transform integral with the
Residue Theorem.
x
1 1 e
F = 2i Res , i
x 2 x i
= e
32.3.2 Cauchy Principal Value and Integrals that are Not Absolutely
Convergent.
That the integral of f (x) is absolutely convergent
R is a sufficient but not a necessary condition
R that the
Fourier transform of f (x) exists. The integral f (x) ex dx may converge even if |f (x)| dx
does not. Furthermore, if the Fourier transform integral diverges, its principal value may exist. We
will say that the Fourier transform of f (x) exists if the principal value of the integral exists.
Z
F[f (x)] = f (x) ex dx
940
Im(z)
Re(z)
We write the integrand for > 0 as the sum of an odd and and even function.
Z
1 1 x
e dx =
2 x 2
Z Z
1
cos(x) dx + sin(x) dx =
x x
Z
1
sin(x) dx =
x
If the principal value of the integral of an even function exists, then the integral converges.
Z
1
sin(x) dx =
x
Z
1
sin(x) dx =
0 x 2
Thus we have evaluated an integral that we used in deriving the Fourier transform.
941
The Fourier transform of f+ (x) converges for =() < 0.
Z
1
F[f+ (x)] = ex dx
2 0
Z
1
= e(<()+=())x dx.
2 0
1 ex
=
2 0
= for =() < 0
2
Using analytic continuation, we can define the Fourier transform of f+ (x) for all except the point
= 0.
F[f+ (x)] =
2
We follow the same procedure for f (x). The integral converges for =() > 0.
Z 0
1
F[f (x)] = ex dx
2
Z 0
1
= e(<()+=())x dx
2
0
1 ex
=
2
= .
2
Using analytic continuation we can define the transform for all nonzero .
F[f (x)] =
2
When = 0 the integral diverges. When we consider the closure relation for the Fourier transform
we will see that
F[1] = ().
942
There is a similar closure relation for Fourier integrals. We compute the Fourier transform of (x).
Z
1
F[(x )] = (x ) ex dx
2
1
= e
2
Next we take the inverse Fourier transform.
Z
1 x
(x ) e e d
2
Z
1
(x ) e(x) d.
2
Note that the integral is divergent, but it would be impossible to represent (x) with a convergent
integral.
Example 32.4.1 The Dirac delta function can be expressed as the derivative of the Heaviside
function. (
0 for x < c,
H(x c) =
1 for x > c
Thus we can express the Fourier transform of H(x c) in terms of the Fourier transform of the delta
function.
F[(x c)] = F[H(x c)]
Z
1
(x c) ex dx = F[H(x c)]
2
1 c
e = F[H(x c)]
2
943
1
F[H(x c)] = ec
2
Thus Z
F[f (x)g(x)] = f g() = f()g( ) d.
Example 32.4.2 Using the convolution theorem and the table of Fourier transform pairs in the
appendix, we can find the Fourier transform of
1
f (x) = .
x4 + 5x2 + 4
944
We factor the fraction.
1
f (x) =
(x2 + 1)(x2 + 4)
From the table, we know that
2c
F = ec|| for c > 0.
x2 + c2
1 || 1 2||
F[f (x)] = e e
6 12
1/3 1/3
f (x) =
x2 + 1 x2 + 4
1 2 1 4
F[f (x)] = F 2 F 2
6 x +1 12 x +4
1 || 1 2||
= e e
6 12
945
32.4.4 Parsevals Theorem.
RecallPParsevals theorem for Fourier series. If f (x) is a complex valued function with the Fourier
series n= cn enx then
X Z
2
2 |cn | = |f (x)|2 dx.
n=
Let f (x) be a complex valued function that is both absolutely integrable and square integrable.
Z Z
|f (x)| dx < and |f (x)|2 dx <
We set x = 0.
Z Z
2 f()f() d = f ()f () d
Z Z
2 |f()|2 d = |f (x)|2 dx
946
The inverse Fourier transform of f( + c) is
Z
F 1 [f( + c)] = f( + c) ex d
Z
= f() e(c)x d
f
F[xf (x)] = .
Similarly, you can show that
n f
F[xn f (x)] = (i)n .
n
947
e|x| e|x|
y(x) =
2 1
G00 G = (x ), y() = 0.
2 G G = F[(x )]
1
G = 2 F[(x )]
+1
y 00 y = f (x), y() = 0,
When solving the differential equation L[y] = f with the Fourier transform, it is quite common to
use the convolution theorem. With this approach we have no need to compute the Fourier transform
of the right side. We merely denote it as F[f ] until we use f in the convolution integral.
948
The Fourier cosine transform is defined:
Z
1
Fc [f (x)] = fc () = f (x) cos(x) dx.
0
Note that f() = F[f (x)] is an odd function of . The inverse Fourier transform of f() is
Z
F 1
[f()] = f() ex d
Z
= 2 f() sin(x) d.
0
949
Result 32.6.1 The Fourier cosine transform pair is defined:
Z
1
f (x) = Fc [fc ()] = 2 fc () cos(x) d
Z 0
1
fc () = Fc [f (x)] = f (x) cos(x) dx
0
The Fourier sine transform pair is defined:
Z
1
f (x) = Fs [fs ()] = 2 fs () sin(x) d
0
Z
1
fs () = Fs [f (x)] = f (x) sin(x) dx
0
1 0
Z
0
Fc [y ] = y cos(x) dx
0
Z
1
= y cos(x) 0 + y sin(x) dx
0
1
= yc () y(0)
1 00
Z
00
Fc [y ] = y cos(x) dx
0
0
Z
1 0
= y cos(x) 0 + y sin(x) dx
0
2
Z
1
= y 0 (0) + y sin(x) 0 y cos(x) dx
0
1
= 2 fc () y 0 (0)
Sine Transform. You can show, (see Exercise 32.3), that the Fourier sine transform of the first
and second derivatives are
Fs [y 0 ] = fc ()
Fs [y 00 ] = 2 yc () + y(0).
950
gc ().
1
Z
Fc [f (x)g(x)] = f (x)g(x) cos(x) dx
0
1
Z Z
= 2 fc () cos(x) d g(x) cos(x) dx
0
Z Z 0
2
= fc ()g(x) cos(x) cos(x) dx d
0 0
gc () is an even function. If we have only defined gc () for positive argument, then gc () = gc (||).
Z
fc () gc (| |) + gc ( + ) d
=
0
Inverse Cosine Transform of a Product. Now consider the inverse Fourier cosine transform
of a product of functions. Let Fc [f (x)] = fc (), and Fc [g(x)] = gc ().
Z
Fc1 [fc ()gc ()] = 2 fc ()gc () cos(x) d
0
Z Z
1
=2 f () cos() d gc () cos(x) d
Z0 Z 0
2
= f ()gc () cos() cos(x) d d
0
Z Z0
1
= f ()gc () cos((x )) + cos((x + )) d d
0
Z 0 Z Z
1
= f () 2 gc () cos((x )) d + 2 gc () cos((x + )) d d
2 0 0 0
Z
1
= f () g(|x |) + g(x + ) d
2 0
Sine Transform of a Product. You can show, (see Exercise 32.5), that the Fourier sine transform
of a product of functions is
Z
fs () gc (| |) gc ( + ) d.
Fs [f (x)g(x)] =
0
Inverse Sine Transform of a Product. You can also show, (see Exercise 32.6), that the inverse
Fourier sine transform of a product of functions is
Z
1
Fs1 [fs ()gc ()] =
f () g(|x |) g(x + ) d.
2 0
951
Result 32.7.1 The Fourier cosine and sine transform convolution theorems
are
Z
fc () gc (| |) + gc ( + ) d
Fc [f (x)g(x)] =
0
Z
1 1
Fc [fc ()gc ()] = f () g(|x |) + g(x + ) d
2
Z 0
fs () gc (| |) gc ( + ) d
Fs [f (x)g(x)] =
0
Z
1 1
Fs [fs ()gc ()] = f () g(|x |) g(x + ) d
2 0
952
32.8 Solving Differential Equations with the Fourier Cosine
and Sine Transforms
Example 32.8.1 Consider the problem
y 00 y = 0, y(0) = 1, y() = 0.
Since the initial condition is y(0) = 1 and the sine transform of y 00 is 2 yc () + y(0) we take the
Fourier sine transform of both sides of the differential equation.
2 yc () + y(0) yc () = 0
( 2 + 1)yc () =
yc () =
( 2 + 1)
y = ex
Since the initial condition is y 0 (0) = 0, we take the Fourier cosine transform of the differential
equation. From the table of cosine transforms, Fc [e2x ] = 2/(( 2 + 4)).
1 0 2
2 yc () y (0) yc () =
( 2 + 4)
2
yc () =
( 2 + 4)( 2 + 1)
2 1/3 1/3
=
2 + 1 2 + 4
1 2/ 2 1/
=
3 + 4 3 2 + 1
2
1 2x 2 x
y= e e
3 3
953
32.9 Exercises
Exercise 32.1
Show that
sin(c)
H(x + c) H(x c) = .
Exercise 32.2
Using contour integration, find the Fourier transform of
1
f (x) = ,
x2 + c2
where <(c) 6= 0
Exercise 32.3
Find the Fourier sine transforms of y 0 (x) and y 00 (x).
Exercise 32.4
Prove the following identities.
Exercise 32.5
Show that
Z
fs () gc (| |) gc ( + ) d.
Fs [f (x)g(x)] =
0
Exercise 32.6
Show that
Z
1
Fs1 [fs ()gc ()] =
f () g(|x |) g(x + ) d.
2 0
Exercise 32.7
Let fc () = Fc [f (x)], fc () = Fs [f (x)], and assume the cosine and sine transforms of xf (x) exist.
Express Fc [xf (x)] and Fs [xf (x)] in terms of fc () and fc ().
Exercise 32.8
Solve the problem
y 00 y = e2x , y(0) = 1, y() = 0,
using the Fourier sine transform.
Exercise 32.9
Prove the following relations between the Fourier sine transform and the Fourier transform.
Exercise 32.10
Let fc () = Fc [f (x)] and fc () = Fs [f (x)]. Show that
1. Fc [xf (x)] = fc ()
954
2. Fs [xf (x)] = fc ()
3. Fc [f (cx)] = 1c fc
c for c > 0
4. Fs [f (cx)] = 1c fc
c for c > 0.
Exercise 32.11
Solve the integral equation,
Z
2 2
u() ea(x) d = ebx ,
Exercise 32.12
Evaluate
Z
1 1 cx
e sin(x) dx,
0 x
where is a positive, real number and <(c) > 0.
Exercise 32.13
Use the Fourier transform to solve the equation
y 00 a2 y = ea|x|
Exercise 32.14
1. Use the cosine transform to solve
2. Use the cosine transform to show that the Green function for the above with b = 0 is
1 a|x| 1 a(x)
G(x, ) = e e .
2a 2a
Exercise 32.15
1. Use the sine transform to solve
2. Try using the Laplace transform on this problem. Why isnt it as convenient as the Fourier
transform?
3. Use the sine transform to show that the Green function for the above with b = 0 is
1 a(x)
g(x; ) = e ea|x+|
2a
Exercise 32.16
1. Find the Green function which solves the equation
y 00 + 2y 0 + ( 2 + 2 )y = (x ), > 0, > 0,
955
2. Use this Greens function to show that the solution of
Exercise 32.17
Using Fourier transforms, find the solution u(x) to the integral equation
Z
u() 1
2 2
d = 2 0 < a < b.
[(x ) + a ] x + b2
Exercise 32.18
The Fourer cosine transform is defined by
Z
1
fc () = f (x) cos(x) dx.
0
1. From the Fourier theorem show that the inverse cosine transform is given by
Z
f (x) = 2 fc () cos(x) d.
0
f 0 (0)
2 fc () .
3. Use the cosine transform to solve the following boundary value problem.
Exercise 32.19
The Fourier sine transform is defined by
Z
1
fs () = f (x) sin(x) dx.
0
4. Try using the Laplace transform on this problem. Why isnt it as convenient as the Fourier
transform?
956
Exercise 32.20
Show that
1
F[f (x)] = (Fc [f (x) + f (x)] Fs [f (x) f (x)])
2
where F, Fc and Fs are respectively the Fourier transform, Fourier cosine transform and Fourier
sine transform.
Exercise 32.21
Find u(x) as the solution to the integral equation:
Z
u() 1
2 2
d = 2 , 0 < a < b.
(x ) + a x + b2
Use Fourier transforms and the inverse transform. Justify the choice of any contours used in the
complex plane.
957
32.10 Hints
Hint 32.1
(
1 for |x| < c,
H(x + c) H(x c) =
0 for |x| > c
Hint 32.2
Consider the two cases <() < 0 and <() > 0, closing the path of integration with a semi-circle in
the lower or upper half plane.
Hint 32.3
Hint 32.4
Hint 32.5
Hint 32.6
Hint 32.7
Hint 32.8
Hint 32.9
Hint 32.10
Hint 32.11
2
The left side is the convolution of u(x) and eax .
Hint 32.12
Hint 32.13
Hint 32.14
Hint 32.15
Hint 32.16
Hint 32.17
958
Hint 32.18
Hint 32.19
Hint 32.20
Hint 32.21
959
32.11 Solutions
Solution 32.1
Z
1
F[H(x + c) H(x c)] = (H(x + c) H(x c)) ex dx
2
Z c
1
= ex dx
2 c
c
1 ex
=
2 c
1 ec ec
=
2
sin(c)
F[H(x + c) H(x c)] =
Solution 32.2
Z
1 1 1
F 2 2
= ex dx
x +c 2 x + c2
2
Z
1 ex
= dx
2 (x c)(x + c)
If <() < 0 then we close the path of integration with a semi-circle in the upper half plane.
ex
1 1 1 c
F 2 = 2i Res , x = c = e
x + c2 2 (x c)(x + c) 2c
If > 0 then we close the path of integration in the lower half plane.
ex
1 1 1 c
F 2 = 2i Res , c = e
x + c2 2 (x c)(x + c) 2c
Thus we have that
1 1 c||
F 2 2
= e , for <(c) 6= 0.
x +c 2c
Solution 32.3
1 0
Z
Fs [y 0 ] = y sin(x) dx
0
1h i Z
= y sin(x) y cos(x) dx
0 0
= yc ()
1 00
Z
Fs [y 00 ] = y sin(x) dx
0
1h 0 i Z
= y sin(x) y 0 cos(x) dx
0 0
h i 2 Z
= y cos(x) y sin(x) dx
0 0
= 2 ys () + y(0).
960
Solution 32.4
1.
Z
1
F[f (x a)] = f (x a) ex dx
2
Z
1
= f (x) e(x+a) dx
2
Z
1
= ea f (x) ex dx
2
2. If a > 0, then
Z
1
F[f (ax)] = f (ax) ex dx
2
Z
1 1
= f () e/a d
2 a
1
= f .
a a
If a < 0, then
Z
1
F[f (ax)] = f (ax) ex dx
2
Z
1 1
= e/a d
2 a
1
= f .
a a
Thus
1
F[f (ax)] = f .
|a| a
Solution 32.5
1
Z
Fs [f (x)g(x)] = f (x)g(x) sin(x) dx
0
1
Z Z
= 2 fs () sin(x) d g(x) sin(x) dx
0
Z Z 0
2
= fs ()g(x) sin(x) sin(x) dx d
0 0
961
Solution 32.6
Z
Fs1 [fs ()Gc ()] = 2 fs ()Gc () sin(x) d
0
Z Z
1
=2 f () sin() d Gc () sin(x) d
Z0 Z 0
2
= f ()Gc () sin() sin(x) d d
0 0
1
Z Z h i
= f ()Gc () cos((x )) cos((x + )) d d
0
Z 0 Z Z
1
= f () 2 Gc () cos((x )) d 2 Gc () cos((x + )) d) d
2 0 0 0
Z
1
= f ()[g(x ) g(x + )] d
2 0
Z
1
Fs1 [fs ()Gc ()]
= f () g(|x |) g(x + ) d
2 0
Solution 32.7
1
Z
Fc [xf (x)] = xf (x) cos(x) dx
0
Z
1
= f (x) (sin(x)) dx
0
1
Z
= f (x) sin(x) dx
0
= fs ()
1
Z
Fs [xf (x)] = xf (x) sin(x) dx
0
Z
1
= f (x) ( cos(x)) dx
0
1
Z
= f (x) cos(x) dx
0
= fc ()
Solution 32.8
2/
2 ys () + y(0) ys () = 2
+4
962
/ /
ys () = + 2
( 2 2
+ 4)( + 1) ( + 1)
/(3) /(3) /
= 2 2 + 2
+4 +1 +1
2 / 1 /
= +
3 2 + 1 3 2 + 4
2 x 1 2x
y= e + e
3 3
Solution 32.9
Consider the Fourier sine transform. Let f (x) be an odd function.
1
Z
Fs [f (x)] = f (x) sin(x) dx
0
R
Note that
f (x) cos(x) dx = 0 as the integrand is odd.
Z
1
= f (x) ex dx
2
= F[f (x)]
Now consider the inverse Fourier sine transform. Let f() be an odd function.
h i Z
1
Fs f () = 2 f() sin(x) d
0
R
Note that
f() cos(x) d = 0 as the integrand is odd.
Z
= f()(i) ex d
h i
= F 1 f()
h i h i
Fs1 f() = F 1 f() , for odd f().
For general f(), use the odd extension, sign()f(||) to write the result.
h i h i
Fs1 f() = F 1 sign()f(||)
963
Solution 32.10
1
Z
Fc [xf (x)] = xf (x) cos(x) dx
0
Z
1
= f (x) sin(x) dx
0
Z
1
= f (x) sin(x) dx
0
= fs ()
1
Z
Fs [xf (x)] = xf (x) sin(x) dx
0
1
Z
= f (x) ( cos(x)) dx
0
1
Z
= f (x) cos(x) dx
0
= fc ()
1
Z
Fc [f (cx)] = f (cx) cos(x) dx
0
1
Z d
= f () cos
0 c c
1
= fc
c c
1
Z
Fs [f (cx)] = f (cx) sin(x) dx
0
1
Z d
= f () sin
0 c c
1
= fs
c c
Solution 32.11
Z
2 2
u() ea(x) d = ebx
964
a 2
u(x) = p eabx /(ab)
(a b)
Solution 32.12
Z
1 1 cx
I= e sin(x) dx
x
Z0 Z
1 zx
= e dz sin(x) dx
Z0 Z c
1
= ezx sin(x) dx dz
c 0
Z
1
= 2 + 2
dz
c z
1 h z i
= arctan
c
1 c
= arctan
2
1
= arctan
c
Solution 32.13
We consider the differential equation
y 00 a2 y = ea|x|
on the domain < x < with boundary conditions y() = 0. We take the Fourier transform
of the differential equation and solve for y().
a
2 y a2 y =
( 2 + a2 )
a
y() =
( 2 + a2 )2
We take the inverse Fourier transform to find the solution of the differential equation.
Z
a
y(x) = ex d
( 2 + a2 )2
Note that since y() is a real-valued, even function, y(x) is a real-valued, even function. Thus we
only need to evaluate the integral for positive x. If we replace x by |x| in this expression we will
have the solution that is valid for all x.
For x 0, we evaluate the integral by closing the path of integration in the upper half plane and
965
using the Residue Theorem and Jordans Lemma.
a
Z
1
y(x) = ex d
( a) ( + a)2
2
a 1 x
= 2 Res e , = a
( a)2 ( + a)2
ex
d
= 2a lim
a d ( + a)2
x ex 2 ex
= 2a lim
a ( + a)2 ( + a)3
ax ax
x e 2e
= 2a 2
4a 8a3
ax
(1 + ax) e
=
2a2
The solution of the differential equation is
1
y(x) = (1 + a|x|) ea|x| .
2a2
Solution 32.14
1. We take the Fourier cosine transform of the differential equation.
b
2 y() a2 y() = 0
b
y() =
( + a2 )
2
Now we take the inverse Fourier cosine transform. We use the fact that y() is an even
function.
1 b
y(x) = Fc
( 2 + a2 )
1 b
=F
( 2 + a2 )
b 1 x
= 2 Res e , = a
2 + a2
x
e
= 2b lim , for x 0
a + a
b
y(x) = eax
a
2 G a2 G = Fc [(x )]
1
G(; ) = 2 Fc [(x )]
+ a2
966
We express the right side as a product of Fourier cosine transforms.
G(; ) = Fc [eax ]Fc [(x )]
a
Now we can apply the Fourier cosine convolution theorem.
Z
1 1
Fc [Fc [f (x)]Fc [g(x)]] = f (t) g(|x t|) + g(x + t) dt
2 0
Z
1
(t ) ea|xt| + ea(x+t) dt
G(x; ) =
a 2 0
1 a|x|
G(x; ) = e + ea(x+)
2a
Solution 32.15
1. We take the Fourier sine transform of the differential equation.
b
2 y() + a2 y() = 0
b
y() =
( + a2 )
2
Now we take the inverse Fourier sine transform. We use the fact that y() is an odd function.
1 b
y(x) = Fs
( 2 + a2 )
1 b
= F
( 2 + a2 )
b x
= 2 Res e , = a
2 + a2
x
e
= 2b lim
a + a
= b eax for x 0
y(x) = b eax
y 00 a2 y = 0
s2 y(s) sy(0) y 0 (0) a2 y(s) = 0
bs + y 0 (0)
y(s) =
s2 a2
y 0 (0)
y(x) = b cosh(ax) + sinh(ax)
a
In order to satisfy the boundary condition at infinity we must choose y 0 (0) = ab.
y(x) = b eax
We see that solving the differential equation with the Laplace transform is not as convenient,
because the boundary condition at infinity is not automatically satisfied. We had to find a
value of y 0 (0) so that y() = 0.
967
3. The Green function problem is
2 G a2 G = Fs [(x )]
1
G(; ) = 2 Fs [(x )]
+ a2
We write the right side as a product of Fourier cosine transforms and sine transforms.
G(; ) = Fc [eax ]Fs [(x )]
a
Solution 32.16
1. We take the Fourier transform of the differential equation, solve for G and then invert.
G00 + 2G0 + 2 + 2 G = (x )
e
2 G + 2 G + 2 + 2 G =
2
e
G =
2 ( 2 2 2 2 )
Z
e ex
G= d
2( 2 2 2 2 )
Z
1 e(x)
G= d
2 ( + )( )
For x > we close the path of integration in the upper half plane and use the Residue theorem.
There are two simple poles in the upper half plane. For x < we close the path of integration
in the lower half plane. Since the integrand is analytic there, the integral is zero. G(x; ) = 0
for x < . For x > we have
e(x)
1
G(x; ) = 2 Res , = +
2 ( + )( )
!
e(x)
+ Res , =
( + )( )
e(+)(x) e(+)(x)
G(x; ) = +
2 2
1
G(x; ) = e(x) sin((x )).
968
Thus the Green function is
1 (x)
G(x; ) = e sin((x ))H(x ).
2. We use the Green function to find the solution of the inhomogeneous equation.
y 00 + 2y 0 + 2 + 2 y = g(x), y() = y() = 0
Z
y(x) = g()G(x; ) d
Z
1
y(x) = g() e(x) sin((x ))H(x ) d
1 x
Z
y(x) = g() e(x) sin((x )) d
We take the limit 0.
Z x
1
y= g() sin((x )) d
Solution 32.17
First we consider the Fourier transform of f (x) = 1/(x2 + c2 ) where <(c) > 0.
1
f () = F 2
x + c2
Z
1 1
= ex dx
2 x2 + c2
Z
1 ex
= dx
2 (x c)(x + c)
If < 0 then we close the path of integration with a semi-circle in the upper half plane.
ex
1
f() = 2i Res , x = c
2 (x c)(x + c)
ec
= , for < 0
2c
Note that f (x) = 1/(x2 + c2 ) is an even function of x so that f() is an even function of . If
f() = g() for < 0 then f () = g(||) for all . Thus
1 1 c||
F = e .
x2 + c2 2c
Now we consider the integral equation
Z
u() 1
2 + a2 ]
d = 2 0 < a < b.
[(x ) x + b2
We take the Fourier transform, utilizing the convolution theorem.
ea|| eb||
2u() =
2a 2b
a e(ba)||
u() =
2b
a 1
u(x) = 2(b a) 2
2b x + (b a)2
a(b a)
u(x) =
b(x2 + (b a)2 )
969
Solution 32.18
1. Note that fc () is an even function. We compute the inverse Fourier cosine transform.
h i
f (x) = Fc1 fc ()
Z
= fc () ex d
Z
= fc ()(cos(x) + sin(x)) d
Z
= fc () cos(x) d
Z
=2 fc () cos(x) d
0
2.
1 00
Z
00
Fc [y ] = y cos(x) dx
0
0
Z
1 0
= [y cos(x)]0 + y sin(x) dx
0
2
Z
1 0
= y (0) + [y sin(x)]0 y cos(x) dx
0
y 0 (0)
Fc [y 00 ] = 2 yc ()
b
2 y() a2 y() = 0
b
y() =
( + a2 )
2
Now we take the inverse Fourier cosine transform. We use the fact that y() is an even
function.
b
y(x) = Fc1
( 2 + a2 )
b
= F 1
( 2 + a2 )
b 1 x
= 2 Res e , = a
2 + a2
x
e
= 2b lim , for x 0
a + a
b
y(x) = eax
a
970
Solution 32.19
1. Suppose f (x) is an odd function. The Fourier transform of f (x) is
Z
1
F[f (x)] = f (x) ex dx
2
Z
1
= f (x)(cos(x) sin(x)) dx
2
Z
= f (x) sin(x) dx.
0
Note that f() = F[f (x)] is an odd function of . The inverse Fourier transform of f() is
Z
F 1
[f()] = f() ex d
Z
= 2 f() sin(x) d.
0
Z Z
1
f (x) = 2 fs () sin(x) d, fs () = f (x) sin(x) dx.
0 0
2.
1 00
Z
Fs [y 00 ] = y sin(x) dx
0
1h 0 i Z
= y sin(x) y 0 cos(x) dx
0 0
h i 2 Z
= y cos(x) y sin(x) dx
0 0
Fs [y 00 ] = 2 ys () + y(0)
b
2 y() + a2 y() = 0
b
y() =
( + a2 )
2
971
Now we take the inverse Fourier sine transform. We use the fact that y() is an odd function.
b
y(x) = Fs1
( 2 + a2 )
1 b
= F
( 2 + a2 )
b x
= 2 Res e , = a
2 + a2
x
e
= 2b lim
a + a
= b eax for x 0
y(x) = b eax
bs + y 0 (0)
y(s) =
s2 a2
y 0 (0)
y(x) = b cosh(ax) + sinh(ax)
a
In order to satisfy the boundary condition at infinity we must choose y 0 (0) = ab.
y(x) = b eax
We see that solving the differential equation with the Laplace transform is not as convenient,
because the boundary condition at infinity is not automatically satisfied. We had to find a
value of y 0 (0) so that y() = 0.
Solution 32.20
The Fourier, Fourier cosine and Fourier sine transforms are defined:
Z
1
F[f (x)] = f (x) ex dx,
2
1
Z
F[f (x)]c = f (x) cos(x) dx,
0
Z
1
F[f (x)]s = f (x) sin(x) dx.
0
We start with the right side of the identity and apply the usual tricks of integral calculus to reduce
the expression to the left side.
1
(Fc [f (x) + f (x)] Fs [f (x) f (x)])
2
Z Z Z Z
1
f (x) cos(x) dx + f (x) cos(x) dx f (x) sin(x) dx + f (x) sin(x) dx
2 0 0 0 0
Z Z Z Z
1
f (x) cos(x) dx f (x) cos(x) dx f (x) sin(x) dx f (x) sin(x) dx
2 0 0 0 0
Z Z 0 Z Z 0
1
f (x) cos(x) dx + f (x) cos(x) dx f (x) sin(x) dx f (x) sin(x) dx
2 0 0
972
Z Z
1
f (x) cos(x) dx f (x) sin(x) dx
2
Z
1 x
f (x) e dx
2
F[f (x)]
Solution 32.21
We take the Fourier transform of the integral equation, noting that the left side is the convolution
1
of u(x) and x2 +a2.
1 1
2u()F 2 =F 2
x + a2 x + b2
1
We find the Fourier transform of f (x) = x2 +c 2 . Note that since f (x) is an even, real-valued
function, f () is an even, real-valued function.
Z
1 1 1
F 2 2
= ex dx
x +c 2 x + c2
2
For x > 0 we close the path of integration in the upper half plane and apply Jordans Lemma to
evaluate the integral in terms of the residues.
ex
1
= 2 Res , x = c
2 (x c)(x + c)
ec
=
2c
1 c
= e
2c
a 2(b a)
u(x) =
2b x + (b a)2
2
a(b a)
u(x) =
b(x2 + (b a)2 )
973
974
Chapter 33
We would like to extend the factorial function so it is defined for all complex numbers.
Consider the function (z) defined by Eulers formula
Z
(z) = et tz1 dt.
0
(Here we take the principal value of tz1 .) The integral converges for <(z) > 0. If <(z) 0 then
the integrand will be at least as singular as 1/t at t = 0 and thus the integral will diverge.
(z + 1) = z(z).
For general z it is not possible to express the integral in terms of elementary functions. However,
we can evaluate the integral for some z. The value z = 1 looks particularly simple to do.
Z h i
(1) = et dt = et = 1.
0 0
975
Using the difference equation we can find the value of (n) for any positive, integral n.
(1) = 1
(2) = 1
(3) = (2)(1) = 2
(4) = (3)(2)(1) = 6
=
(n + 1) = n!.
Thus the Gamma function, (z), extends the factorial function to all complex z in the right
half-plane. For non-negative, integral n we have
(n + 1) = n!.
Since this integral converges for <(z) > 0, (z) is analytic in that domain.
The integral in Hankels formula converges for all complex z. For non-positive, integral z the
integral does not vanish. Thus because of the sine term the Gamma function has simple poles at
z = 0, 1, 2, . . .. For positive, integral z, the integrand is entire and thus the integral vanishes.
Using LHospitals rule you can show that the points, z = 1, 2, 3, . . . are removable singularities and
the Gamma function is analytic at these points. Since the only zeroes of sin(z) occur for integral
z, (z) is analytic in the entire plane except for the points, z = 0, 1, 2, . . ..
976
Difference Equation. Using integration by parts we can derive the difference equation from
Hankels formula.
Z
1
(z + 1) = et tz dt
2 sin((z + 1)) C
h i+0 Z
1
= e t tz et ztz1 dt
2 sin(z) 0 C
Z
1 t z1
= z e t dt
2 sin(z) C
= z(z).
Evaluating (1),
et tz1 dt
R
C
(1) = lim .
z1 2 sin(z)
977
With the substitution = t/n,
Z 1
= lim (1 )n nz1 z1 n d
n 0
Z 1
= lim nz (1 )n z1 d.
n 0
(n+1)z
Since limn nz = 1 we can multiply by that factor.
1 1 2z 3z (n + 1)z
= lim
z n (1 + z)(1 + z/2) (1 + z/n) 1z 2z nz
(n + 1)z
1 Y 1
=
z n=1
1 + z/n nz
We derived this formula from Eulers formula which is valid only in the left half-plane. However,
the product formula is valid for all z except z = 0, 1, 2, . . ..
978
The Euler-Mascheroni Constant. Before deriving Weierstrass product formula for the Gamma
function we will need to define the Euler-Mascheroni constant
1 1 1
= lim 1 + + + + log n = 0.5772 .
n 2 3 n
Since the product is uniformly convergent, 1/(z) is an entire function. Since 1/(z) has no
singularities, we see that (z) has no zeros.
Result 33.4.1 Eulers formula for the Gamma function is valid for <(z) > 0.
Z
(z) = et tz1 dt
0
Hankels formula defines the (z) for the entire complex plane except for the
points z = 0, 1, 2, . . ..
Z
1
(z) = et tz1 dt
2 sin(z) C
Gauss and Weierstrass product formulas are, respectively
z
1Y 1 z 1
(z) = 1+ 1+ and
z n=1 n n
h
1 z
Y z z/n i
= ze 1+ e .
(z) n=1
n
979
40000
30000
20000
10000
5 10 15 20 25 30
We could first try to approximate the integral by only looking at the domain where the integrand is
large. In Figure 33.2 the integrand in the formula for (10), et t9 , is plotted.
We see that the important part of the integrand is the hump centered around x = 9. If we
find where the integrand of (x) has its maximum
d t x1
e t =0
dx
et tx1 + (x 1) et tx2 = 0
(x 1) t = 0
t = x 1,
we see that the maximum varies with x. This could complicate our analysis. To take care of this
problem we introduce the change of variables t = xs.
Z
(x) = exs (xs)x1 x ds
0
Z
=x x
exs sx s1 ds
0
Z
=x x
ex(slog s) s1 ds
0
The integrands, (ex(slog s) s1 ), for (5) and (20) are plotted in Figure 33.3.
We see that the important part of the integrand is the hump that seems to be centered about
s = 1. Also note that the the hump becomes narrower with increasing x. This makes sense as the
ex(slog s) term is the most rapidly varying term. Instead of integrating from zero to infinity, we
could get a good approximation to the integral by just integrating over some small neighborhood
centered at s = 1. Since slog s has a minimum at s = 1, ex(slog s) has a maximum there. Because
the important part of the integrand is the small area around s = 1, it makes sense to approximate
s log s with its Taylor series about that point.
1
s log s = 1 + (s 1)2 + O (s 1)3
2
Since the hump becomes increasingly narrow with increasing x, we will approximate the 1/s term
in the integrand with its value at s = 1. Substituting these approximations into the integral, we
980
0.007
0.006
0.005
0.004
0.003
0.002
0.001
1 2 3 4
-9
210
-9
1.510
-9
110
-10
510
1 2 3 4
obtain
Z 1+
2
(x) xx ex(1+(s1) /2)
ds
1
Z 1+
2
x x
=x e ex(s1) /2
ds
1
are exponentially small. Thus instead of integrating from 1 to 1 + we can integrate from
to .
Z
2
(x) xx ex ex(s1) /2 ds
Z
2
x x
=x e exs /2 ds
r
2
= xx ex
x
(x) 2xx1/2 ex as x .
This is known as Stirlings approximation to the Gamma function. In the table below, we see
that the approximation is pretty good even for relatively small argument.
981
n (n) 2xx1/2 ex relative error
5 24 23.6038 0.0165
15 8.71783 1010 8.66954 1010 0.0055
25 6.20448 1023 6.18384 1023 0.0033
35 2.95233 1038 2.94531 1038 0.0024
45 2.65827 1054 2.65335 1054 0.0019
In deriving Stirlings approximation to the Gamma function we did a lot of hand waving. How-
ever, all of the steps can be justified and better approximations can be obtained by using Laplaces
method for finding the asymptotic behavior of integrals.
982
33.6 Exercises
Exercise 33.1
Given that
Z
2
ex dx = ,
ExerciseR33.2
3
Evaluate 0 ex dx in terms of the gamma function.
Exercise 33.3
Show that
Z
() + ()
ex sin(log x) dx = .
0 2
983
33.7 Hints
Hint 33.1
Use the change of variables, = x2 in the integral. To find the value of (n + 1/2) use the difference
relation.
Hint 33.2
Make the change of variable = x3 .
Hint 33.3
984
33.8 Solutions
Solution 33.1
Z
2
ex dx =
Z
2
ex dx =
0 2
(1)(3)(5) (2n 1)
(n + 1/2) =
2n
Solution 33.2
We make the change of variable = x3 , x = 1/3 , dx = 13 2/3 d.
Z Z
3 1
ex dx = e 2/3 d
0 0 3
1 1
=
3 3
Solution 33.3
Z Z
1 log x
ex sin(log x) dx = ex e log x dx
e
0 0 2
1 x
Z
x x dx
= e
2 0
1
= ((1 + ) (1 ))
2
1
= (() ()())
2
() + ()
=
2
985
986
Chapter 34
Bessel Functions
987
The indicial equation is
( 1) + 2 = 0
= .
If do not differ by an integer, (that is if is not a half-integer), then there will be two series
solutions of the Frobenius form.
X
X
k
y1 (z) = z ak z , y2 (z) = z bk z k
k=0 k=0
If is a half-integer, the second solution may or may not be in the Frobenius form. In any case, then
will always be at least one solution in the Frobenius form. We will determine that series solution.
y(z) and it derivatives are
X
X
X
y= ak z k+ , y0 = (k + )ak z k+1 , y 00 = (k + )(k + 1)ak z k+2 .
k=0 k=0 k=0
z 2 y 00 + zy 0 + z 2 2 y = 0
X X
X
X
(k + )(k + 1)ak z k+ + (k + )ak z k+ + ak z k++2 2 ak z k+ = 0
k=0 k=0 k=0 k=0
X
X
k 2 + 2k ak z k + ak2 z k = 0
k=0 k=2
We equate powers of z to obtain equations that determine the coefficients. The coefficient of z 0 is
the equation 0 a0 = 0. This corroborates that a0 is arbitrary, (but non-zero). The coefficient of z 1
is the equation
(1 + 2)a1 = 0
a1 = 0
k 2 + 2k ak + ak2 = 0.
ak2 ak2
ak = 2 =
k + 2k k(k + 2)
From the recurrence relation we see that all the odd coefficients are zero, a2k+1 = 0. The even
coefficients are
a2k2 (1)k a0
a2k = = 2k
4k(k + ) 2 k!(k + + 1)
Thus we have the series solution
X (1)k
y(z) = a0 z 2k .
22k k!(k + + 1)
k=0
a0 is arbitrary. We choose a0 = 2 . We call this solution the Bessel function of the first kind and
order and denote it with J (z).
X (1)k z 2k+
J (z) =
k!(k + + 1) 2
k=0
988
Recall that the Gamma function is non-zero and finite for all real arguments except non-positive
integers. (x) has singularities at x = 0, 1, 2, . . .. Therefore, J (z) is well-defined when is not
a positive integer. Since J (z) z at z = 0, J (z) is clear linearly independent to J (z) for
non-integer . In particular we note that there are two solutions of the Frobenius form when is a
half odd integer.
X (1)k z 2k
J (z) = , for 6 Z+
k!(k + 1) 2
k=0
Of course for = 0, J (z) and J (z) are identical. Consider the case that = n is a positive
integer. Since (x) + as x 0, 1, 2, . . . we see the the coefficients in the series for Jnu (z)
vanish for k = 0, . . . , n 1.
X (1)k z 2kn
Jn (z) =
k!(k n + 1) 2
k=n
X (1)k+n z 2k+n
Jn (z) =
(k + n)!(k + 1) 2
k=0
X (1)k z 2k+n
Jn (z) = (1)n
k!(k + n)! 2
k=0
Jn (z) = (1)n Jn (z)
Thus we see that Jn (z) and Jn (z) are not linearly independent for integer n.
2
00 1 0 1
u + u + 2 u = 0.
4
The point = 0 and hence the point z = is an irregular singular point. We will find the leading
order asymptotic behavior of the solutions as z +.
989
Leading Order Behavior. In order to find the leading order behavior, we substitute s = x +
t(x) where t(x) x as x into the differential equation for s. We first consider the case
s = x + t(x). We assume that t0 1 and t00 1/x.
1 2
t00 + ( + t0 )2 + ( + t0 ) + 1 2 = 0
x x
2
1
t00 + 2t0 + (t0 )2 + + t0 2 = 0
x x x
2t0 + 0
x
1
t0
2x
1
t ln x as x .
2
where u(x) ln x as x .
By substituting t = 21 ln x+u(x) into the differential equation for t, you could show that u(x)
const as x . Thus the full leading order behavior of the solutions is
y cx1/2 ex+u(x) as x
where u1 , u2 0 as x .
as x +, where u1 , u2 0 as x .
990
34.3 Bessel Functions of the First Kind
Consider the function exp( 12 z(t 1/t)). We can expand this function in a Laurent series in powers
of t,
1
X
e 2 z(t1/t) = Jn (z)tn ,
n=
= 0.
Thus for integer n, Jn (z) satisfies Bessels equation.
991
Jn (z) is called the Bessel function of the first kind. The subscript is the order. Thus J1 (z) is a
Bessel function of order 1. J0 (x) and J1 (x) are plotted in the first graph in Figure 34.1. J5 (x) is
plotted in the second graph in Figure 34.1. Note that for non-negative, integer n, Jn (z) behaves as
z n at z = 0.
0.8
0.6
0.4
0.2
2 4 6 8 10 12 14
-0.2
-0.4
0.3
0.2
0.1
5 10 15 20
-0.1
-0.2
992
For the path of integration, we are free to choose any contour that encloses the origin. Consider the
circular path on |t| = 1. Since the integral is uniformly convergent, we can interchange the order of
integration and summation.
1 z n X (1)m z 2m
I
Jn (z) = tnm1 et dt
2 2 m=0 22m m!
dn+m z
I
1 nm1 t 1
t e dt = lim (e )
2 z0 (n + m)! dz n+m
1
=
(n + m)!
X (1)m z n+2m
Jn (z) = for n 0.
m=0
m!(n + m)! 2
993
The Integrand for Non-Integer . Recall the definition of the Bessel function
I
1 z 2
J (z) = t1 etz /4t dt.
2 2
When is an integer, the integrand is single valued. Thus if you start at any point and follow any
path around the origin, the integrand will return to its original value. This property was the key to
Jn satisfying Bessels equation. If is not an integer, then this property does not hold for arbitrary
paths around the origin.
A New Contour. First, since the integrand is multiple-valued, we need to define what branch of
the function we are talking about. We will take the principal value of the integrand and introduce
a branch cut on the negative real axis. Let C be a contour that starts at z = below the branch
cut, circles the origin, and returns to the point z = above the branch cut. This contour is
shown in Figure 34.2.
Thus we define
I
1 z 2
J (z) = t1 etz /4t dt.
2 2 C
2
I
00 1 0 1 z d 1 tz2 /4t
J + J + 1 2 J = t e dt.
z z 2 2 C dt
2
Since t1 etz /4t is analytic in 0 < |z| < and | arg(z)| < , and it vanishes at z = , the
integral is zero. Thus the Bessel function of the first kind satisfies Bessels equation for all complex
orders.
Series Expansion. Because of the et factor in the integrand, the integral defining J converges
2
uniformly. Expanding ez /4t in a Taylor series yields
1 z X (1)m z 2m
I
J (z) = tm1 et dt
2 2 m=0 22m m! C
Since I
1 1
= t1 et dt,
() 2 C
994
we have the series expansion of the Bessel function
X (1)m z +2m
J (z) = .
m=0
m!( + m + 1) 2
Linear Independence. We use Abels formula to compute the Wronskian of Bessels equation.
Z z
1 1
W (z) = exp d = e log z =
z
Thus to within a function of , the Wronskian of any two solutions is 1/z. For any given ,
there are two linearly independent solutions. Note that Bessels equation is unchanged under the
transformation . Thus both J and J satisfy Bessels equation. Now we must determine
if they are linearly independent. We have already shown that for integer values of they are not
independent. (Jn = (1)n Jn .) Assume that is not an integer. We compute the Wronskian of J
and J .
J J
W [J , J ] = 0 0
J J
0
= J J J J0
Since the Wronskian is a function of times 1/z the coefficients of all of the powers of z except 1/z
must vanish.
=
z( + 1)( + 1) z( + 1)( + 1)
2
=
z()(1 )
Using an identity for the Gamma function simplifies this expression.
2
= sin()
z
Since the Wronskian is nonzero for non-integer , J and J are independent functions when is
not an integer. In this case, the general solution of Bessels equation is aJ + bJ .
z2
I
2
t + t2 t1 etz /4t dt = 0.
C 4
z2
I
1 z 2
t + t2 t1 etz /4t dt = 0.
2 2 C 4
995
2
1
t1 etz /4t
H
Since J (z) = 2 (z/2) C
dt,
" #
1 2
2 2 z
J1 + J+1 J = 0.
z z 4
2
J1 + J+1 = J
z
1 z 1
I I
1 tz 2 /4t 1 z z 2
J0 (z) =
t e dt + t 1
etz /4t dt
2 2 C 2 2 C 2t
1 z I 2 1 z +1 I 2
J0 (z) = t1 etz /4t dt t2 etz /4t dt
z 2 2 C 2 2 C
0
J = J J+1
z
From the two relations we have derived you can show that
1
J0 = (J1 + J+1 ) and J0 = J1 J .
2 z
Result 34.3.1 The Bessel function of the first kind, J (z), is defined,
I
1 z 2
J (z) = t1 etz /4t dt.
2 2 C
996
34.3.5 Bessel Functions of Half-Integer Order
Consider J1/2 (z). Start with the series expansion
X (1)m z 1/2+2m
J1/2 (z) = .
m=0
m!(1/2 + m + 1) 2
(1)(3)(2n1)
Use the identity (n + 1/2) = 2n .
X (1)m 2m+1 z 1/2+2m
=
m=0
m!(1)(3) (2m + 1) 2
1/2+m
X (1)m 2m+1 1
= z 1/2+2m
m=0
(2)(4) (2m) (1)(3) (2m + 1) 2
1/2 X
2 (1)m 2m+1
= z
z m=0
(2m + 1)!
1/2 0
J3/2 (z) = J1/2 (z) J1/2 (z)
z
1/2 1/2 1/2
1/2 2 1 2 2
= z 1/2 sin z z 3/2 sin z z 1/2 cos z
z 2
= 21/2 1/2 z 3/2 sin z + 21/2 1/2 z 3/2 sin z 21/2 1/2 cos z
1/2 1/2
2 2
= z 3/2 sin z z 1/2 cos z
1/2
2
= z 3/2 sin z z 1/2 cos z .
You can show that 1/2
2
J1/2 (z) = cos z.
z
Note that at a first glance it appears that J3/2 z 1/2 as z 0. However, if you expand the
sine and cosine you will see that the z 1/2 and z 1/2 terms vanish and thus J3/2 (z) z 3/2 as z 0
as we showed previously.
Recall that we showed the asymptotic behavior as x + of Bessel functions to be linear
combinations of
x1/2 sin(x + U1 (x)) and x1/2 cos(x + U2 (x))
where U1 , U2 0 as x +.
997
34.4 Neumann Expansions
Consider expanding an analytic function in a series of Bessel functions of the form
X
f (z) = an Jn (z).
n=0
1
The Expansion of 1/( z). Assume that z has the uniformly convergent expansion
1 X
= c0 ()J0 (z) + 2 cn ()Jn (z),
z n=1
Thus we have
"
#
X
+ c0 ()J0 (z) + 2 cn ()Jn (z) = 0
z n=1
" # " #
X X
c00 J0 +2 c0n Jn + c0 J00 +2 cn Jn0 = 0.
n=1 n=1
" # " #
X X
c00 J0 +2 c0n Jn + c0 (J1 ) + cn (Jn1 Jn+1 ) = 0.
n=1 n=1
Collecting coefficients of Jn ,
X
(c00 + c1 )J0 + (2c0n + cn+1 cn1 )Jn = 0.
n=1
Equating the coefficients of Jn , we see that the cn are given by the relations,
998
Using the recurrence relations we can calculate the cn s. The first few are:
1 1
c1 = 2
= 2
1 2 1 4
c2 = 2 3 = + 3
1 1 12 3 24
c3 = 2 2 = 2 + 4.
2 4
n1
2 n+1n! 1 + 2 4 n
2(2n2) + 24(2n2)(2n4) + + 24n(2n2)(2nn) for even n
cn () =
2 4 n1
2n1 n!
1+ + + + for odd n
n+1 2(2n2) 24(2n2)(2n4) 24(n1)(2n2)(2n(n1))
1
Uniform Convergence of the Series. We assumed before that the series expansion of z is
uniformly convergent. The behavior of cn and Jn are
2n1 n! zn
cn () = + O( n ), Jn (z) = + O(z n+1 ).
n+1 2n n!
This gives us
n n+1 !
1 z 1 z
cn ()Jn (z) = +O .
2
If z = < 1 we can bound the series with the geometric series
P n
. Thus the series is uniformly
convergent.
Neumann Expansion of an Analytic Function. Let f (z) be a function that is analytic in the
disk |z| r. Consider |z| < r and the path of integration along || = r. Cauchys integral formula
tells us that
I
1 f ()
f (z) = d.
2 z
1
Substituting the expansion for z ,
I !
1 X
= f () co ()J0 (z) + 2 cn ()Jn (z) d
2 n=1
I I
1f () X Jn (z)
= J0 (z) d + cn ()f () d
2 n=1
I
X Jn (z)
= J0 (z)f (0) + cn ()f () d.
n=1
999
Result 34.4.1 let f (z) be analytic in the disk, |z| r. Consider |z| < r
and the path of integration along || = r. f (z) has the Bessel function series
expansion
I
X Jn (z)
f (z) = J0 (z)f (0) + cn ()f () d,
n=1
where the cn satisfy
1 X
= c0 ()J0 (z) + 2 cn ()Jn (z).
z n=1
When is an integer, J and J are not linearly independent. In order to find an second lin-
early independent solution, we define the Bessel function of the second kind, (also called Webers
function),
(J
(z) cos()J (z)
sin() when is not an integer
Y = J (z) cos()J (z)
lim sin() when is an integer.
0.5
0.25
5 10 15 20
-0.25
-0.5
-0.75
-1
1000
Result 34.5.1 The Bessel function of the second kind, Y (z), is defined,
( J (z) cos()J (z)
sin()
when is not an integer
Y = J (z) cos()J (z)
lim sin()
when is an integer.
2
1
w00 + w0 1 + 2 w = 0.
z z
1001
This equation is identical to the Bessel equation except for a sign change in the last term. If we
make the change of variables = z, u() = w(z) we obtain the equation
2
001 0
u u 1 2 u = 0
2
00 1 0
u + u + 1 2 u = 0.
This is the Bessel equation. Thus J (z) is a solution to the modified Bessel equation. This motivates
us to define the modified Bessel function of the first kind
I (z) = J (z).
Since J and J are linearly independent solutions when is not an integer, I and I are linearly
independent solutions to the modified Bessel equation when is not an integer.
I (z) = J (z)
X (1)m z +2m
=
m=0
m!( + m + 1) 2
X (1)m 2m z +2m
=
m=0
m!( + m + 1) 2
z +2m
X 1
=
m=0
m!( + m + 1) 2
Modified Bessel Functions of the Second Kind. In order to have a second linearly indepen-
dent solution when is an integer, we define the modified Bessel function of the second kind
I I
(
2 sin() when is not an integer,
K (z) = I I
lim 2 sin() when is an integer.
I and K are linearly independent for all . In Figure 34.4 I0 and K0 are plotted in solid and
dashed lines, respectively.
1002
10
1 2 3 4
Result 34.7.1 The modified Bessel functions of the first and second kind,
I (z) and K (z), are defined,
I (z) = J (z).
(
I I
2 sin()
when is not an integer,
K (z) = I
lim 2 Isin() when is an integer.
The modified Bessel function of the first kind has the expansion,
X 1 z +2m
I (z) =
m=0
m!( + m + 1) 2
This is called the Bessel function of the first kind and order . Clearly J (z) is defined and is
linearly independent to J (z) if is not an integer. What happens when is an integer?
Exercise 34.2
Consider Bessels equation for integer n,
z 2 y 00 + zy 0 + z 2 n2 y = 0.
find two solutions of Bessels equation. (For n = 0 you will find only one solution.) Are the two
solutions linearly independent? Define the Bessel function of the first kind and order n,
I
1 1
Jn (z) = tn1 e 2 z(t1/t) dt,
2 C
where C is a simple, closed contour about the origin. Verify that
1
X
e 2 z(t1/t) = Jn (z)tn .
n=
Exercise 34.3
Use the generating function
1
X
e 2 z(t1/t) = Jn (z)tn
n=
z 2 y 00 + zy 0 + z 2 n2 y = 0.
Exercise 34.4
Using
2n n
Jn1 + Jn+1 = Jn and Jn0 = Jn Jn+1 ,
z z
show that
1 n
Jn0 = (Jn1 Jn+1 ) and Jn0 = Jn1 Jn .
2 z
Exercise 34.5
Find the general solution of
1 1
w00 + w0 + 1 2 w = z.
z 4z
1004
Exercise 34.6
Show that J (z) and Y (z) are linearly independent for all .
Exercise 34.7
Compute W [I , I ] and W [I , K ].
Exercise 34.8
Using the generating function,
+
z 1 X
exp t = Jn (z)tn ,
2 t n=
1.
2n
Jn (z) = Jn1 (z) + Jn+1 (z).
z
This relation is useful for recursively computing the values of the higher order Bessel functions.
2.
1
Jn0 (z) =(Jn1 Jn+1 ) .
2
This relation is useful for computing the derivatives of the Bessel functions once you have the
values of the Bessel functions of adjacent order.
3.
d
z n Jn (z) = z n Jn+1 (z).
dz
Exercise 34.9
Use the Wronskian of J (z) and J (z),
2 sin
W [J (z), J (z)] = ,
z
to derive the identity
2
J+1 (z)J (z) + J (z)J1 (z) = sin .
z
Exercise 34.10
Show that, using the generating function or otherwise,
Exercise 34.11
It is often possible to solve certain ordinary differential equations by converting them into the
Bessel equation by means of various transformations. For example, show that the solution of
y 00 + xp2 y = 0,
1005
Here c1 and c2 are arbitrary constants. Thus show that the Airy equation,
y 00 + xy = 0,
Show that
sin z cos z
j1 (z) = ,
z2 z
sinh z
i0 (z) = ,
z
k0 (z) = exp(z).
2z
Exercise 34.13
Show that as x ,
1006
34.9 Hints
Hint 34.2
Hint 34.3
Hint 34.4
Use the generating function
1
X
e 2 z(t1/t) = Jn (z)tn
n=
z 2 y 00 + zy 0 + z 2 n2 y = 0.
Hint 34.6
Use variation of parameters and the Wronskian that was derived in the text.
Hint 34.7
Compute the Wronskian of J (z) and Y (z). Use the relation
2
W [J , J ] = sin()
z
Hint 34.8
Derive W [I , I ] from the value of W [J , J ]. Derive W [I , K ] from the value of W [I , I ].
Hint 34.9
Hint 34.10
Hint 34.11
Hint 34.12
Hint 34.13
Hint 34.14
1007
34.10 Solutions
Solution 34.1
Bessels equation is
L[y] z 2 y 00 + zy 0 + z 2 n2 y = 0.
By considering
d 1 z(t1/t) 1 1 1
t e2 = x t+ + 1 e 2 z(t1/t)
dt 2 t
2 !
d2 2 1 z(t1/t)
1 2 1 1 1
t e2 = x t+ + x 2t + + 2 e 2 z(t1/t)
dt2 4 t t
we see that i d2
h 1 d 1 z(t1/t)
L e 2 z(t1/t) = t 2
3 t + 1 n 2
e2 .
dt2 dt
Thus Equation 34.1 becomes
Z 2
d 2 1 z(t1/t) d 1 z(t1/t) 2 1
t e 2 3 t e 2 +(1 n ) e 2 z(t1/t) v(t) dt = 0
C dt2 dt
We apply integration by parts to move derivatives from the kernel to v(t).
h i h 1 i h i Z
2 12 z(t1/t) z(t1/t) 0 1 1
z(t1/t)
e 2 z(t1/t) t2 v 00 (t) + 3tv(t) + 1 n2 v(t) dt =
t e v(t) t e 2 v (t) + 3t e 2 v(t) +
C C C C
h 1 Z
z(t1/t) 2 0
i 1
z(t1/t) 00
t v (t) + 3tv(t) + (1 n2 )v(t) dt = 0
2
e2 (t 3t)v(t) tv (t) + e2
C C
In order that the integral vanish, v(t) must be a solution of the differential equation
t2 v 00 + 3tv + 1 n2 v = 0.
This is an Euler equation with the solutions {tn1 , tn1 } for non-zero n and {t1 , t1 log t} for
n = 0.
Consider the case of non-zero n. Since
1
e 2 z(t1/t) t2 3t v(t) tv 0 (t)
is single-valued and analytic for t 6= 0 for the functions v(t) = tn1 and v(t) = tn1 , the boundary
term will vanish if C is any closed contour that that does not pass through the origin. Note that
the integrand in our solution,
1
e 2 z(t1/t) v(t),
is analytic and single-valued except at the origin and infinity where it has essential singularities.
Consider a simple closed contour that does not enclose the origin. The integral along such a path
would vanish and give us y(z) = 0. This is not an interesting solution. Since
1
e 2 z(t1/t) v(t),
1008
has non-zero residues for v(t) = tn1 and v(t) = tn1 , choosing any simple, positive, closed contour
about the origin will give us a non-trivial solution of Bessels equation. These solutions are
Z Z
n1 21 z(t1/t) 1
y1 (t) = t e dt, y2 (t) = tn1 e 2 z(t1/t) dt.
C C
Now consider the case n = 0. The two solutions above concide and we have the solution
Z
1
y(t) = t1 e 2 z(t1/t) dt.
C
1
Choosing v(t) = t log t would make both the boundary terms and the integrand multi-valued. We
do not pursue the possibility of a solution of this form.
The solution y1 (t) and y2 (t) are not linearly independent. To demonstrate this we make the
change of variables t 1/t in the integral representation of y1 (t).
Z
1
y1 (t) = tn1 e 2 z(t1/t) dt
ZC
1 1
= (1/t)n1 e 2 z(1/t+t) 2 dt
t
ZC
1
= (1)n tn1 e 2 z(t1/t) dt
C
= (1)n y2 (t)
Thus we see that a solution of Bessels equation for integer n is
Z
1
y(t) = tn1 e 2 z(t1/t) dt
C
Solution 34.2
The generating function is
z
X
e 2 (t1/t) = Jn (z)tn .
n=
In order to show that Jn satisfies Bessels equation we seek to show that
X
z 2 Jn00 (z) + zJn (z) + (z 2 n2 )Jn (z) tn = 0.
n=
To get the appropriate terms in the sum we will differentiate the generating function with respect
to z and t. First we differentiate it with respect to z.
1 1 z
X
t e 2 (t1/t) = Jn0 (z)tn
2 t n=
2
1 1 z
X
t e 2 (t1/t) = Jn00 (z)tn
4 t n=
1009
Now we differentiate with respect to t and multiply by t get the n2 Jn term.
z 1 z
X
1 + 2 e 2 (t1/t) = nJn (z)tn1
2 t n=
z 1 z
X
t+ e 2 (t1/t) = nJn (z)tn
2 t n=
2
2
z 1 z z 1 z
X
1 2 e 2 (t1/t) + t+ e 2 (t1/t) = n2 Jn (z)tn1
2 t 4 t n=
2
2
z 1 z z 1 z
X
t e 2 (t1/t) + t+ e 2 (t1/t) = n2 Jn (z)tn
2 t 4 t n=
X
z 2 Jn00 (z) + zJn (z) + z 2 n2 Jn (z) tn = 0
n=
Solution 34.3
n
Jn0 = Jn Jn+1
z
1
= (Jn1 + Jn+1 ) Jn+1
2
1
= (Jn1 Jn+1 )
2
n
Jn0 = Jn Jn+1
z
n 2n
= Jn Jn Jn1
z z
n
= Jn1 Jn
z
Solution 34.4
The linearly independent homogeneous solutions are J1/2 and J1/2 . The Wronskian is
2 2
W [J1/2 , J1/2 ] = sin(/2) = .
z z
Using variation of parameters, a particular solution is
Z z Z z
J1/2 () J1/2 ()
yp = J1/2 (z) d + J1/2 (z) d
2/ 2/
Z z Z z
= J1/2 (z) 2 J1/2 () d J1/2 (z) 2 J1/2 () d.
2 2
1010
Thus the general solution is
Z z Z z
y = c1 J1/2 (z) + c2 J1/2 (z) + J1/2 (z) 2 J1/2 () d J1/2 (z) 2 J1/2 () d.
2 2
We could substitute
1/2 1/2
2 2
J1/2 (z) = sin z and J1/2 = cos z
z z
into the solution, but we cannot evaluate the integrals in terms of elementary functions. (You can
write the solution in terms of Fresnel integrals.)
Solution 34.5
J J cot() J csc()
W [J , Y ] = 0
J J0 cot() J 0
csc()
J J J J
= cot() 0 csc() 0
J J0 0
J J
2
= csc() sin()
z
2
=
z
Since the Wronskian does not vanish identically, the functions are independent for all values of .
Solution 34.6
I (z) = J (z)
I I
W [I , I ] = 0
0
I I
J (z) J (z)
= 0
0
J (z) J (z)
J (z) J (z)
= 0
0
J (z) J (z)
2
= sin()
z
2
= sin()
z
I csc()(I I )
W [I , K ] = 0 2
0
I
csc()(I
2 I0 )
I I I I
= csc() 0 0 0
I0
2 I I I
2
= csc() sin()
2 z
1
=
z
1011
Solution 34.7
1. We diferentiate the generating function with respect to t.
z
X
e 2 (t1/t) = Jn (z)tn
n=
z 1 z
X
1+ e 2 (t1/t) = Jn (z)ntn1
2 t2 n=
X
1 2 X
1+ Jn (z)tn = Jn (z)ntn1
t2 n=
z n=
X X 2 X
Jn (z)tn + Jn (z)tn2 = Jn (z)ntn1
n= n=
z n=
X X 2 X
Jn1 (z)tn1 + Jn+1 (z)tn1 = Jn (z)ntn1
n= n=
z n=
2
Jn1 (z) + Jn+1 (z) = Jn (z)n
z
2n
Jn (z) = Jn1 (z) + Jn+1 (z)
z
3.
d
z n Jn (z) = nz n1 Jn (z) + z n Jn0 (z)
dz
1 2n 1
= z n Jn (z) + z n (Jn1 (z) Jn+1 (z))
2 z 2
1 n 1
= z (Jn+1 (z) + Jn1 (z)) + z n (Jn1 (z) Jn+1 (z))
2 2
d
z n Jn (z) = z n Jn+1 (z)
dz
1012
Solution 34.8
For this part we will use the identities
J0 (z) = J (z) J+1 (z), J0 (z) = J1 (z) J (z).
z z
= 2 sin()
J (z) J (z)
0 0
J (z) J (z) z
J (z) J (z) 2 sin()
J1 (z) J J (z) J+1 (z) = z
z z
J (z) J (z) J (z) J (z) 2 sin()
J1 (z) J+1 (z) z J (z) J (z) = z
2 sin()
J+1 (z)J (z) J (z)J1 (z) =
z
2
J+1 (z)J (z) + J (z)J1 (z) = sin
z
Solution 34.9
The generating function for the Bessel functions is
1
X
e 2 z(t1/t) = Jn (z)tn . (34.2)
n=
X
J0 (z) + 2 J2n (z) = 1
n=1
1013
X
J0 (z) + 2 Jn (z)n = ez (34.3)
n=1
Next we substitute t = into the generating function.
X
J0 (z) + 2 (1)n Jn (z)n = ez (34.4)
n=1
Dividing the sum of Equation 34.3 and Equation 34.4 by 2 gives us the desired identity.
X
J0 (z) + (1 + (1)n ) Jn (z)n = cos z
n=1
X
J0 (z) + 2 Jn (z)n = cos z
n=2
even n
X
J0 (z) + 2 (1)n/2 Jn (z) = cos z
n=2
even n
X
J0 (z) + 2 (1)n J2n (z) = cos z
n=1
3. Dividing the difference of Equation 34.3 and Equation 34.4 by 2 gives us the other identity.
X
(1 (1)n ) Jn (z)n = sin z
n=1
X
2 Jn (z)n1 = sin z
n=1
odd n
X
2 (1)(n1)/2 Jn (z) = sin z
n=1
odd n
X
2 (1)n J2n+1 (z) = sin z
n=0
We take the product of Equation 34.2 and Equation 34.5 to obtain the final identity.
! !
1 1
X X
Jn (z)tn Jm (z)(t)m = e 2 z(t1/t) e 2 z(t1/t) = 1
n= m=
Note that the coefficients of all powers of t except t0 in the product of sums must vanish.
X
Jn (z)tn Jn (z)(t)n = 1
n=
X
Jn2 (z) = 1
n=
X
J02 (z) + 2 Jn2 (z) = 1
n=1
1014
Solution 34.10
First we make the change of variables y(x) = x1/2 v(x). We compute the derivatives of y(x).
1
y 0 = x1/2 v 0 + x1/2 v,
2
00 1/2 00 1/2 0 1
y =x v +x v x3/2 v.
4
We substitute these into the differential equation for y.
y 00 + xp2 y = 0
1
x1/2 v 00 + x1/2 v 0 x3/2 v + xp3/2 v = 0
4
2 00 0 p 1
x v + xv + x v=0
4
Then we make the change of variables v(x) = u(), = p2 xp/2 . We write the derivatives in terms of
.
d d d d p d
x
=x = xxp/21 =
dx dx d d 2 d
2
d d d d p d p d p2 d2 p2 d
x2 2 + x =x x = = 2 2 +
dx dx dx dx 2 d 2 d 4 d 4 d
We write the differential equation for u().
p2 2 00 p2 0
2
p 2 1
u + u + u=0
4 4 4 4
1 1
u00 + u0 + 1 2 2 u = 0
p
This is the Bessel equation of order 1/p. We can write the general solution for u in terms of Bessel
functions of the first kind if p 6= 1. Otherwise, we use a Bessel function of the second kind.
2 p/2 2 p/2
y(x) = c1 xJ1/p x + c2 xJ1/p x for p 6= 0, 1
p p
2 p/2 2 p/2
y(x) = c1 xJ1/p x + c2 xY1/p x for p 6= 0
p p
The Airy equation y 00 + xy = 0 is the case p = 3. The general solution of the Airy equation is
2 3/2 2 3/2
y(x) = c1 xJ1/3 x + c2 xJ1/3 x .
3 3
Solution 34.11
Consider J1/2 (z). We start with the series expansion.
X (1)m z 1/2+2m
J1/2 (z) = .
m=0
m!(1/2 + m + 1) 2
1015
(1)(3)(2n1)
Use the identity (n + 1/2) = 2n .
X (1)m 2m+1 z 1/2+2m
=
m=0
m!(1)(3) (2m + 1) 2
1/2+m
X (1)m 2m+1 1
= z 1/2+2m
m=0
(2)(4) (2m) (1)(3) (2m + 1) 2
1/2 X
2 (1)m 2m+1
= z
z m=0
(2m + 1)!
sin z cos z
j1 (z) = .
z2 z
The modified Bessel function of the first kind is
I (z) = J (z).
We can determine I1/2 (z) from J1/2 (z).
r
1/2 2
I1/2 (z) = sin(z)
z
r
2
= sinh(z)
z
r
2
= sinh(z)
z
The spherical Bessel function i0 (z) is
sinh z
i0 (z) = .
z
1016
The modified Bessel function of the second kind is
I I
K (z) = lim
2 sin()
Thus K1/2 (z) can be determined in terms of I1/2 (z) and I1/2 (z).
K1/2 (z) = I1/2 I1/2
2
We determine I1/2 with the recursion relation
I1 (z) = I0 (z) + I (z).
z
0 1
I1/2 (z) = I1/2 (z) + I1/2 (z)
r 2z r r
2 1/2 1 2 3/2 1 2 1/2
= z cosh(z) z sinh(z) + z sinh(z)
2 2z
r
2
= cosh(z)
z
Now we can determine K1/2 (z).
r r !
2 2
K1/2 (z) = cosh(z) sinh(z)
2 z z
r
z
= e
2z
The spherical Bessel function k0 (z) is
z
k0 (z) = e .
2z
Solution 34.12
The Point at Infinity. With the change of variables z = 1/, w(z) = u() the modified Bessel
equation becomes
n2
1
w00 + w0 1 + 2 w = 0
z z
4 00 3 0
0
u + 2 u + u 1 + n2 2 u = 0
2
n2
1 1
u00 + u0 u = 0.
4 2
The point = 0 and hence the point z = is an irregular singular point. We will find the leading
order asymptotic behavior of the solutions as z +.
Controlling Factor. Starting with the modified Bessel equation for real argument
n2
1
y 00 + y 0 1 + 2 y = 0,
x x
we make the substitution y = es(x) to obtain
1 0 n2
s00 + (s0 )2 + s 1 2 = 0.
x x
n2
We know that x2 1 as x ; we will assume that s00 (s0 )2 as x . This gives us
1 0
(s0 )2 + s 10 as x .
x
To simplify the equation further, we will try the possible two-term balances.
1017
1. (s0 )2 + x1 s0 0 s0 x1 This balance is not consistent as it violates the assumption
that 1 is smaller than the other terms.
2. (s0 )2 1 0 s0 1 This balance is consistent.
1 0 0
3. xs 1 0 s x This balance is inconsistent as (s0 )2 isnt smaller than the other
terms.
Thus the only dominant balance is s0 1. This balance is consistent with our initial assumption
that s00 (s0 )2 . Thus s x and the controlling factor is ex . We are interested in the decaying
solution, so we will work with the controlling factor ex .
Leading Order Behavior. In order to find the leading order behavior, we substitute s =
x + t(x) where t(x) x as x into the differential equation for s. We assume that t0 1 and
t00 1/x.
1 n2
t00 + (1 + t0 )2 + (1 + t0 ) 1 2 = 0
x x
2
1 1 n
t00 2t0 + (t0 )2 + t0 2 = 0
x x x
Using our assumptions about the behavior of t0 and t00 ,
1
2t0 0
x
1
t0
2x
1
t ln x as x .
2
This asymptotic behavior is consistent with our assumptions.
Thus the leading order behavior of the decaying solution is
1
y c ex 2 ln x+u(x) = cx1/2 ex+u(x) as x ,
where u(x) ln x as x .
By substituting t = 21 ln x+u(x) into the differential equation for t, you could show that u(x)
const as x . Thus the full leading order behavior of the decaying solution is
y cx1/2 ex as x
where u(x) 0 as x . It turns out that the asymptotic behavior of the modified Bessel function
of the second kind is
r
x
Kn (x) e as x
2x
Asymptotic Series. Now we find the full asymptotic series for Kn (x) as x . We substitute
ex
r
x
Kn (x) e w(x)Kn (x)
2x x
into the modified Bessel equation, where w(x) is a Taylor series about x = , i.e.,
r
x X
Kn (x) e ak xk , a0 = 1.
2x
k=0
1018
We substitute these expressions into the modified Bessel equation.
x2 y 00 + xy 0 x2 + n2 y = 0
3 1
x2 w00 2x2 + x w0 + x2 + x + w + xw0 x + w x2 + n2 w = 0
4 2
1
x2 w00 2x2 w0 + n2 w = 0
4
We set a0 = 1. We use the recurrence relation to determine the rest of the coefficients.
Qk
j=1 4n2 (2j 1)2
ak =
8k k!
Now we have the asymptotic expansion of the modified Bessel function of the second kind.
Qk
4n2 (2j 1)2
r
x X j=1
Kn (x) e xk , as x
2x 8k k!
k=0
Convergence. We determine the domain of convergence of the series with the ratio test. The
1019
Taylor series about infinity will converge outside of some circle.
ak+1 (x)
lim <1
k ak (x)
ak+1 xk1
lim <1
k ak xk
2
4n (2k + 1)2 1
lim |x| < 1
k 8(k + 1)
< |x|
The series does not converge for any x in the finite complex plane. However, if we take only a finite
number of terms in the series, it gives a good approximation of Kn (x) for large, positive x. At
x = 10, the one, two and three term approximations give relative errors of 0.01, 0.0006 and 0.00006,
respectively.
1020
Part V
1021
Chapter 35
Transforming Equations
1023
35.1 Exercises
Exercise 35.1
Find the Laplacian in cylindrical coordinates (r, , z).
x = r cos , y = r sin , z
Exercise 35.2
Find the Laplacian in spherical coordinates (r, , ).
1024
35.2 Hints
Hint 35.1
Hint 35.2
1025
35.3 Solutions
Solution 35.1
p
h1 = (cos )2 + (sin )2 + 0 = 1
p
h2 = (r sin )2 + (r cos )2 + 0 = r
p
h3 = 0 + 0 + 1 2 = 1
1 u 1 u u
2 u = +r + r
r rr r z z
2 2
1 u 1 u u
2 u = r + 2 2 + 2
r r r r z
Solution 35.2
p
h1 = (sin cos )2 + (sin sin )2 + (cos )2 = 1
p
h2 = (r cos cos )2 + (r cos sin )2 + (r sin )2 = r
p
h3 = (r sin sin )2 + (r sin cos )2 + 0 = r sin
1 u u 1 u
2 u = r 2
sin + sin +
r2 sin r r sin
2u
1 u 1 u 1
2 u = 2 r2 + 2 sin + 2
r r r r sin r sin 2
1026
Chapter 36
We classify the equation by the sign of the discriminant. At a given point x0 , y0 , the equation is
classified as one of the following types:
b2 ac > 0 : hyperbolic
b2 ac = 0 : parabolic
b2 ac < 0 : elliptic
If an equation has a particular type for all points x, y in a domain then the equation is said to be
of that type in the domain. Each of these types has a canonical form that can be obtained through
a change of independent variables. The type of an equation indicates much about the nature of its
solution.
We seek a change of independent variables, (a different coordinate system), such that Equa-
tion 36.1 has a simpler form. We will find that a second order quasi-linear partial differential
equation in two variables can be transformed to one of the canonical forms:
u = G(, , u, u , u ), hyperbolic
u = G(, , u, u , u ), parabolic
u + u = G(, , u, u , u ), elliptic
u x = x u + x u
u y = y u + y u
uxx = x2 u + 2x x u + x2 u + xx u + xx u
uxy = x y u + (x y + y x )u + x y u + xy u + xy u
uyy = y2 u + 2y y u + y2 u + yy u + yy u
1027
Substituting these into Equation 36.1 yields an equation in and .
(, )u + (, )u + (, )u = H(, , u, u , u ) (36.2)
u = G(, , u, u , u ) (36.3)
We require that the u and u terms vanish. That is = = 0 in Equation 36.2. This gives us
two constraints on and .
u u = K(, , u, u , u ). (36.5)
We can transform Equation 36.3 to this form with the change of variables
= + , = .
1028
Example 36.1.1 Consider the wave equation with a source.
Since 0 (1)(c2 ) > 0, the equation is hyperbolic. We find the new variables.
dx
= c, x = ct + const, = x + ct
dt
dx
= c, x = ct + const, = x ct
dt
Then we determine t and x in terms of and .
+
t= , x=
2c 2
We calculate the derivatives of and .
t = c x = 1
t = c x = 1
utt = c2 u 2c2 u + c2 u
uxx = u + u
If s(x, t) = 0, then the equation is u = 0 we can integrate with respect to and to obtain
the solution, u = f () + g(). Here f and g are arbitrary C 2 functions. In terms of t and x, we have
To put the wave equation in the form of Equation 36.5 we make a change of variables
= + = 2x, = = 2ct
2
utt c uxx = s(x, t)
4c u 4c2 u = s
2
,
2 2c
1
u u = 2 s ,
4c 2 2c
1029
We calculate the derivatives of and .
x = 2x y = 2y
x = 2x y = 2y
Then we calculate the derivatives of u.
ux = 2x(u u )
uy = 2y(u + u )
uxx = 4x2 (u 2u + u ) + 2(u u )
uyy = 4y 2 (u + 2u + u ) + 2(u + u )
Finally we transform the equation to canonical form.
y 2 uxx x2 uyy = 0
8x2 y 2 u 8x2 y 2 u + 2y 2 (u u ) + 2x2 (u + u ) = 0
1 1
16 ( ) ( + )u = 2u 2u
2 2
u u
u =
2( 2 2 )
We integrate with respect to and to obtain the solution, u = f () + g(). Here f and g are
arbitrary C 2 functions. In terms of x and y, we have
This solution makes a lot of sense, because the real and imaginary parts of an analytic function are
harmonic.
1030
36.1.2 Parabolic equations
Now we consider a parabolic equation, (b2 ac = 0). We seek a change of independent variables
that will put Equation 36.1 in the form
u = G(, , u, u , u ). (36.6)
We require that the u and u terms vanish. That is = = 0 in Equation 36.2. This gives us
two constraints on and .
ax x + b(x y + y x ) + cy y = 0, ax2 + 2bx y + cy2 = 0
We consider the case a 6= 0. The latter constraint allows us to solve for x /y .
x b b2 ac b
= =
y a a
With this information, the former constraint is trivial.
ax x + b(x y + y x ) + cy y = 0
ax (b/a) + b(x + y (b/a)) + cy = 0
(ac b2 )y = 0
0=0
Thus we have a first order partial differential equation for the coordinate which we can solve with
the method of characteristics.
b
x + y = 0
a
The coordinate is chosen to be anything linearly independent of . The characteristic equations
for are
dy b d
= , (x, y(x)) = 0
dx a dx
Solving the differential equation for y(x) determines (x, y). We just write the solution for y(x) in the
form F (x, y(x)) = const. Since the solution of the differential equation for is (x, y(x)) = const,
we then have = F (x, y). Upon solving for and choosing a linearly independent , we divide
Equation 36.2 by (, ) to obtain the canonical form.
In the case that a = 0, we would instead have the constraint,
b
x + y = 0.
c
1031
Example 36.1.4 Consider
y 2 uxx + x2 uyy = 0. (36.8)
For x 6= 0 and y 6= 0 this equation is elliptic. We find new variables that will put this equation in
the form u = G(). From Example 36.1.2 we see that they are
p
dy y 2 x2 x y2 x2
= = , y dy = x dx, = + const, = y 2 + x2
dx y2 y 2 2
p
dy 2
y x 2 x y2 x2
= 2
= , y dy = x dx, = + const, = y 2 x2
dx y y 2 2
The variables that will put Equation 36.8 in canonical form are
+
= = y2 , = = x2
2 2
We calculate the derivatives of and .
x = 0 y = 2y
x = 2x y = 0
ux = 2xu
uy = 2yu
uxx = 4x2 u + 2u
uyy = 4y 2 u + 2u
y 2 uxx + x2 uyy = 0
(4 u + 2u ) + (4u + 2u ) = 0
1 1
u + u = u u
2 2
d2 u
=0
dx2
This equation has the solution,
u = ax + b.
Applying the boundary conditions we see that
u = b.
1032
To determine the constant, we note that the heat energy in the rod is constant in time.
Z 1 Z 1
u(x, t) dx = u(x, 0) dx
0 0
Z 1 Z 1
b dx = x dx
0 0
1033
36.3 Exercises
Exercise 36.1
Classify and transform the following equation into canonical form.
Exercise 36.2
Classify as hyperbolic, parabolic, or elliptic in a region R each of the equations:
1. ut = (pux )x
2. utt = c2 uxx u
3. (qux )x + (qut )t = 0
where p(x), c(x, t), q(x, t), and (x) are given functions that take on only positive values in a region
R of the (x, t) plane.
Exercise 36.3
Transform each of the following equations for (x, y) into canonical form in appropriate regions
1. xx y 2 yy + x + x2 = 0
2. xx + xyy = 0
The equation in part (b) is known as Tricomis equation and is a model for transonic fluid flow in
which the flow speed changes from supersonic to subsonic.
1034
36.4 Hints
Hint 36.1
Hint 36.2
Hint 36.3
1035
36.5 Solutions
Solution 36.1
For y = 1, the equation is parabolic. For this case it is already in the canonical form, uxx = 0.
For y 6= 1, the equation is elliptic. We find new variables that will put the equation in the form
u = G(, , u, u , u ).
dy p
= (1 + y)2 = (1 + y)
dx
dy
= dx
1+y
log(1 + y) = x + c
1 + y = c ex
(1 + y) ex = c
= (1 + y) ex
= = (1 + y) ex
The variables that will put the equation in canonical form are
+
= = (1 + y) cos x, = = (1 + y) sin x.
2 2
We calculate the derivatives of and .
x = (1 + y) sin x y = cos x
x = (1 + y) cos x y = sin x
Then we calculate the derivatives of u.
ux = (1 + y) sin(x)u + (1 + y) cos(x)u
uy = cos(x)u + sin(x)u
uxx = (1 + y)2 sin2 (x)u + (1 + y)2 cos2 (x)u (1 + y) cos(x)u (1 + y) sin(x)u
uyy = cos2 (x)u + sin2 (x)u
We substitute these results into the differential equation to obtain the canonical form.
uxx + (1 + y)2 uyy = 0
(1 + y)2 (u + u ) (1 + y) cos(x)u (1 + y) sin(x)u = 0
2 + 2 (u + u ) u u = 0
u + u
u + u =
2 + 2
Solution 36.2
1.
ut = (pux )x
puxx + 0uxt + 0utt + px ux ut = 0
Since 02 p0 = 0, the equation is parabolic.
2.
utt = c2 uxx u
utt + 0utx c2 uxx + u = 0
Since 02 (1)(c2 ) > 0, the equation is hyperbolic.
1036
3.
(qux )x + (qut )t = 0
quxx + 0uxt + qutt + qx ux + qt ut = 0
Since 02 qq < 0, the equation is elliptic.
Solution 36.3
1. For y 6= 0, the equation is hyperbolic. We find the new independent variables.
p
dy y2
= = y, y = c ex , ex y = c, = ex y
dx p1
dy y2
= = y, y = c ex , ex y = c, = ex y
dx 1
Next we determine x and y in terms of and .
p
= y 2 , y =
p p 1
= ex , ex = /, x = log
2
We calculate the derivatives of and .
x = ex y =
p
y = ex = /
x = ex y =
p
y = ex = /
Then we calculate the derivatives of .
s s
= + , = +
x y
s s
x = + , y = +
xx = 2 2 + 2 + + , yy = + 2 +
Finally we transform the equation to canonical form.
xx y 2 yy + x + x2 = 0
4 + + + + log =0
1
= + log
2
2. For x < 0, the equation is hyperbolic. We find the new independent variables.
dy 2 2
= x, y= x x + c, = x x y
dx 3 3
dy 2 2
= x, y = x x + c, = x x + y
dx 3 3
1037
Next we determine x and y in terms of and .
1/3
3
x= ( + ) , y=
4 2
xx + xyy = 0
(6( + )) 1/3
+ (6( + ))1/3 + (6( + ))2/3 ( + ) = 0
+
=
12( + )
For x > 0, the equation is elliptic. The variables we defined before are complex-valued.
2 2
= x3/2 y, = x3/2 + y
3 3
We choose the new real-valued variables.
= , = ( + )
=
=
=
1038
Chapter 37
Separation of Variables
1039
We have a regular Sturm-Liouville problem for X(x).
X 00 + X = 0, X(0) = X 0 (h) = 0
The eigenvalues and orthonormal eigenfunctions are
2 r
(2n 1) 2 (2n 1)
n = , Xn = sin x , n Z+ .
2h h 2h
Now we solve the equation for T (t).
T 0 = n T
T = c en t
The eigen-solutions of the partial differential equation that satisfy the homogeneous boundary con-
ditions are r
2 p
un (x, t) = sin n x en t .
h
We seek a solution of the problem that is a linear combination of these eigen-solutions.
r
X 2 p
u(x, t) = an sin n x en t
n=1
h
1040
Now we substitute u(x, t) = v(x, t) + (x) into Equation 37.1.
2
(v + (x)) = 2 (v + (x)) + s(x)
t x
vt = vxx + 00 (x) + s(x)
vt = vxx (37.2)
Since the equilibrium solution satisfies the inhomogeneous boundary conditions, v(x, t) satisfies
homogeneous boundary conditions.
v(0, t) = v(h, t) = 0.
We seek a solution for v(x, t) that is a linear combination of eigen-solutions of the heat equation.
We substitute the separation of variables, v(x, t) = X(x)T (t) into Equation 37.2
T0 X 00
= =
T X
This gives us two ordinary differential equations.
X 00 + X = 0, X(0) = X(h) = 0
0
T = T.
The Sturm-Liouville problem for X(x) has the eigenvalues and orthonormal eigenfunctions,
n 2 r
2 nx
n = , Xn = sin , n Z+ .
h h h
We solve for T (t).
2
Tn = c e(n/h) t .
The eigen-solutions of the partial differential equation are
r
2 nx 2
vn (x, t) = sin e(n/h) t .
h h
The solution for v(x, t) is a linear combination of these.
r
X 2 nx 2
v(x, t) = an sin e(n/h) t
n=1
h h
1041
37.4 Inhomogeneous Equations with Homogeneous Bound-
ary Conditions
Now consider the heat equation with a time dependent source, s(x, t).
In general we cannot transform the problem to one with a homogeneous differential equation. Thus
we cannot represent the solution in a series of the eigen-solutions of the partial differential equation.
Instead, we will do the next best thing and expand the solution in a series of eigenfunctions in Xn (x)
where the coefficients depend on time.
X
u(x, t) = un (t)Xn (x)
n=1
We will find these eigenfunctions with the separation of variables, u(x, t) = X(x)T (t) applied to the
homogeneous equation, ut = uxx , which yields,
r
2 nx
Xn (x) = sin , n Z+ .
h h
Now we have a first order, ordinary differential equation for each of the un (t). We obtain initial
conditions from the initial condition for u(x, t).
r
X 2 nx
u(x, 0) = un (0) sin = f (x)
n=1
h h
r Z h
2 nx
un (0) = sin f (x) dx fn
h 0 h
r
X 2 nx
u(x, t) = un (t) sin ,
n=1
h h
Z t
2 2
un (t) = fn e(n/h) t + e(n/h) (t )
sn ( ) d.
0
1042
37.5 Inhomogeneous Boundary Conditions
Consider the temperature of a one-dimensional rod of length h. The left end is held at the
temperature (t), the heat flow at right end is specified, there is a time-dependent source and the
initial temperature distribution is known at time t = 0. To find the temperature we solve the
problem:
in Equation 37.4.
vt + t = (vxx + xx ) + s(x, t)
vt = vxx + s(x, t) t
Thus we have a heat equation with the source s(x, t) t (x, t). We could apply separation of
variables to find a solution of the form
r
X 2 (2n 1)x
u(x, t) = (x, t) + un (t) sin .
n=1
h 2h
Note that the eigenfunctions satisfy the homogeneous boundary conditions while u(x, t) does not. If
we choose any fixed time t = t0 and form the periodic extension of the function u(x, t0 ) to define it
for x outside the range (0, h), then this function will have jump discontinuities. This means that our
eigenfunction expansion will not converge uniformly. We are not allowed to differentiate the series
with respect to x. We cant just plug the series into the partial differential equation to determine
the coefficients. Instead, we will multiply Equation 37.4, by an eigenfunction and integrate from
x = 0 to x = h. To avoid differentiating the series with respect to x, we will use integration by parts
1043
2
(2n1)
to move derivatives from u(x, t) to the eigenfunction. (We will denote n = 2h .)
r Z h r Z h
2 p 2 p
sin( n x)(ut uxx ) dx = sin( n x)s(x, t) dx
h 0 h 0
r r Z h
2 h p i h 2 p p
0
un (t) ux sin( n x) + n ux cos( n x) dx = sn (t)
h 0 h 0
r r ih r 2 Z h
2 2 p h p p
u0n (t) (1)n ux (h, t) + n u cos( n x) + n u sin( n x) dx = sn (t)
h h 0 h 0
r r
2 2 p
u0n (t) (1)n (t) n u(0, t) + n un (t) = sn (t)
h h
r
2 p
u0n (t) + n un (t) = n (t) + (1)n (t) + sn (t)
h
Now we have an ordinary differential equation for each of the un (t). We obtain initial conditions for
them using the initial condition for u(x, t).
r
X 2 p
u(x, 0) = un (0) sin( n x) = f (x)
n=1
h
r Z h
2 p
un (0) = sin( n x)f (x) dx fn
h 0
utt = uxx
ux (0, t) = 0, ux (1, t) = u(1, t), u(x, 0) = f (x), ut (x, 0) = g(x).
00 00
= = .
1044
0 (0) = 0 b = 0.
0
(1) + (1) = 0 a cosh( ) + a sinh( ) = 0
a = 0.
Since there is only the trivial solution, there are no negative eigenvalues.
0 (0) = 0 a = 0.
(1) + 0 (1) = 0 b + 0 = 0.
0 (0) b = 0.
0
(1) + (1) = 0 a cos( ) a sin( ) = 0
cos( ) = sin( )
= cot( )
By looking at Figure 37.1, (the plot shows the functions f (x) = x, f (x) = cot x and has lines
at x = n), we see that there are an infinite number of positive eigenvalues and that
n (n)2 as n .
10
2 4 6 8 10
-2
1045
The solution for is p p
n = an cos( n t) + bn sin( n t).
Thus the solution to the differential equation is
X p p p
u(x, t) = cos( n x)[an cos( n t) + bn sin( n t)].
n=1
Let
X p
f (x) = fn cos( n x)
n=1
X p
g(x) = gn cos( n x).
n=1
1046
37.8 Exercises
Exercise 37.1
Solve the following problem with separation of variables.
Exercise 37.2
Consider a thin half pipe of unit radius laying on the ground. It is heated by radiation from above.
We take the initial temperature of the pipe and the temperature of the ground to be zero. We model
this problem with a heat equation with a source term.
ut = uxx + A sin(x)
u(0, t) = u(, t) = 0, u(x, 0) = 0
Exercise 37.3
Consider Laplaces Equation 2 u = 0 inside the quarter circle of radius 1 (0 2 , 0 r 1).
Write the problem in polar coordinates u = u(r, ) and use separation of variables to find the solution
subject to the following boundary conditions.
1.
u
(r, 0) = 0, u r, = 0, u(1, ) = f ()
2
2.
u u u
(r, 0) = 0, r, = 0, (1, ) = g()
2 r
Under what conditions does this solution exist?
Exercise 37.4
Consider the 2-D heat equation
ut = (uxx + uyy ),
on a square plate 0 < x < 1, 0 < y < 1 with two sides insulated
ux (0, y, t) = 0 ux (1, y, t) = 0,
u(x, 0, t) = 0 u(x, 1, t) = 0,
ux (0, t) = 0 ux (, t) = 0,
1047
Exercise 37.6
Obtain Poissons formula to solve the Dirichlet problem for the circular region 0 r < R, 0 < 2.
That is, determine a solution (r, ) to Laplaces equation
2 = 0
Exercise 37.7
Consider the temperature of a ring of unit radius. Solve the problem
ut = u , u(, 0) = f ()
Exercise 37.8
Solve the Laplaces equation by separation of variables.
Exercise 37.9
Solve Laplaces equation in the unit disk with separation of variables.
u = 0, 0 < r < 1
u(1, ) = f ()
2 u 1 u 1 2u
u + + .
r2 r r r2 2
Exercise 37.10
Find the normal modes of oscillation of a drum head of unit radius. The drum head obeys the wave
equation with zero displacement on the boundary.
1 2v 1 2v
1 v
v r + 2 2 = 2 2, v(1, , t) = 0
r r r r c t
Exercise 37.11
Solve the equation
t = a2 xx , 0 < x < l, t>0
with boundary conditions (0, t) = (l, t) = 0, and initial conditions
(
x, 0 x l/2,
(x, 0) =
l x, l/2 < x l.
Comment on the differentiability ( that is the number of finite derivatives with respect to x ) at
time t = 0 and at time t = , where > 0 and 1.
1048
Exercise 37.12
Consider a one-dimensional rod of length L with initial temperature distribution f (x). The tem-
peratures at the left and right ends of the rod are held at T0 and T1 , respectively. To find the
temperature of the rod for t > 0, solve
Exercise 37.13
For 0 < x < l solve the problem
t = a2 xx + w(x, t) (37.5)
(0, t) = 0, x (l, t) = 0, (x, 0) = f (x)
d2 (x)
+ (x) = 0,
dx2
(0) = 0 (l) = 0.
Exercise 37.14
Solve the heat equation of Exercise 37.13 with the same initial conditions but with the boundary
conditions
(0, t) = 0, c(l, t) + x (l, t) = 0.
Here c > 0 is a constant. Although it is not possible to solve for the eigenvalues in closed form,
show that the eigenvalues assume a simple form for large values of .
Exercise 37.15
Use a series expansion technique to solve the problem
t = a2 xx + 1, t > 0, 0<x<l
Exercise 37.16
Let (x, t) satisfy the equation
t = a2 xx
for 0 < x < l, t > 0 with initial conditions (x, 0) = 0 for 0 < x < l, with boundary conditions
(0, t) = 0 for t > 0, and (l, t) + x (l, t) = 1 for t > 0. Obtain two series solutions for this problem,
one which is useful for large t and the other useful for small t.
Exercise 37.17
A rod occupies the portion 1 < x < 2 of the x-axis. The thermal conductivity depends on x in such
a manner that the temperature (x, t) satisfies the equation
t = A2 (x2 x )x (37.6)
1049
where A is a constant. For (1, t) = (2, t) = 0 for t > 0, with (x, 0) = f (x) for 1 < x < 2, show
that the appropriate series expansion involves the eigenfunctions
1 n ln x
n (x) = sin .
x ln 2
Work out the series expansion for the given boundary and initial conditions.
Exercise 37.18
Consider a string of length L with a fixed left end a free right end. Initially the string is at rest with
displacement f (x). Find the motion of the string by solving,
utt = c2 uxx , 0 < x < L, t > 0,
u(0, t) = 0, ux (L, t) = 0,
u(x, 0) = f (x), ut (x, 0) = 0,
with separation of variables.
Exercise 37.19
Consider the equilibrium temperature distribution in a two-dimensional block of width a and height
b. There is a heat source given by the function f (x, y). The vertical sides of the block are held at zero
temperature; the horizontal sides are insulated. To find this equilibrium temperature distribution,
solve the potential equation,
uxx + uyy = f (x, y), 0 < x < a, 0 < y < b,
u(0, y) = u(a, y) = 0, uy (x, 0) = uy (x, b) = 0,
with separation of variables.
Exercise 37.20
Consider the vibrations of a stiff beam of length L. More precisely, consider the transverse vibrations
of an unloaded beam, whose weight can be neglected compared to its stiffness. The beam is simply
supported at x = 0, L. (That is, it is resting on fulcrums there. u(0, t) = 0 means that the beam
is resting on the fulcrum; uxx (0, t) = 0 indicates that there is no bending force at that point.) The
beam has initial displacement f (x) and velocity g(x). To determine the motion of the beam, solve
utt + a2 uxxxx = 0, 0 < x < L, t > 0,
u(x, 0) = f (x), ut (x, 0) = g(x),
u(0, t) = uxx (0, t) = 0, u(L, t) = uxx (L, t) = 0,
with separation of variables.
Exercise 37.21
The temperature along a magnet winding of length L carrying a current I satisfies, (for some > 0):
ut = uxx + I 2 u.
The ends of the winding are kept at zero, i.e.,
u(0, t) = u(L, t) = 0;
and the initial temperature distribution is
u(x, 0) = g(x).
Find u(x, t) and determine the critical current ICR which is defined as the least current at which the
winding begins to heat up exponentially. Suppose that < 0, so that the winding has a negative
coefficient of resistance with respect to temperature. What can you say about the critical current
in this case?
1050
Exercise 37.22
The e-folding time of a decaying function of time is the time interval, e , in which the magnitude
of the function is reduced by at least 1e . Thus if u(x, t) = et f (x) + et g(x) with > > 0 then
e = 1 . A body with heat conductivity has its exterior surface maintained at temperature zero.
Initially the interior of the body is at the uniform temperature T > 0. Find the e-folding time of
the body if it is:
a) An infinite slab of thickness a.
b) An infinite cylinder of radius a.
c) A sphere of radius a.
Note that in (a) the temperature varies only in the z direction and in time; in (b) and (c) the
temperature varies only in the radial direction and in time.
u
d) What are the e-folding times if the surfaces are perfectly insulated, (i.e., n = 0, where n is
the exterior normal at the surface)?
Exercise 37.23
Solve the heat equation with a time-dependent diffusivity in the rectangle 0 < x < a, 0 < y < b.
The top and bottom sides are held at temperature zero; the lateral sides are insulated. We have the
initial-boundary value problem:
ut = (t) (uxx + uyy ) , 0 < x < a, 0 < y < b, t > 0,
u(x, 0, t) = u(x, b, t) = 0,
ux (0, y, t) = ux (a, y, t) = 0,
u(x, y, 0) = f (x, y).
The diffusivity, (t), is a known, positive function.
Exercise 37.24
A semi-circular rod of infinite extent is maintained at temperature T = 0 on the flat side and at
T = 1 on the curved surface:
x2 + y 2 = 1, y > 0.
Find the steady state temperature in a cross section of the rod using separation of variables.
Exercise 37.25
Use separation of variables to find the steady state temperature u(x, y) in a slab: x 0, 0 y 1,
which has zero temperature on the faces y = 0 and y = 1 and has a given distribution: u(y, 0) = f (y)
on the edge x = 0, 0 y 1.
Exercise 37.26
Find the solution of Laplaces equation subject to the boundary conditions.
u = 0, 0 < < , a < r < b,
u(r, 0) = u(r, ) = 0, u(a, ) = 0, u(b, ) = f ().
Exercise 37.27
1051
Find the ensuing motion.
c) Compare the kinetic energies of each harmonic in the two solutions. Where should the string
be struck in order to maximize the energy in the nth harmonic in each case?
Exercise 37.28
If the striking hammer is not perfectly rigid, then its effect must be included as a time dependent
forcing term of the form:
(
v cos (x) sin t
2d , for |x | < d, 0 < t < ,
s(x, t) =
0 otherwise.
Find the motion of the string for t > . Discuss the effects of the width of the hammer and duration
of the blow with regard to the energy in overtones.
Exercise 37.29
Find the propagating modes in a square waveguide of side L for harmonic signals of frequency
when the propagation speed of the medium is c. That is, we seek those solutions of
utt c2 u = 0,
where u = u(x, y, z, t) has the form u(x, y, z, t) = v(x, y, z) et , which satisfy the conditions:
u(x, y, z, t) = 0 for x = 0, L, y = 0, L, z > 0,
lim |u| =
6 and 6= 0.
z
Indicate in terms of inequalities involving k = /c and appropriate eigenvalues, n,m say, for which
n and m the solutions un,m satisfy the conditions.
Exercise 37.30
Find the modes of oscillation and their frequencies for a rectangular drum head of width a and
height b. The modes of oscillation are eigensolutions of
utt = c2 u, 0 < x < a, 0 < y < b,
u(0, y) = u(a, y) = u(x, 0) = u(x, b) = 0.
Exercise 37.31
Using separation of variables solve the heat equation
t = a2 (xx + yy )
in the rectangle 0 < x < lx , 0 < y < ly with initial conditions
(x, y, 0) = 1,
and boundary conditions
(0, y, t) = (lx , y, t) = 0, y (x, 0, t) = y (x, ly , t) = 0.
Exercise 37.32
Using polar coordinates and separation of variables solve the heat equation
t = a2 2
in the circle 0 < r < R0 with initial conditions
(r, , 0) = V
where V is a constant, and boundary conditions
(R0 , , t) = 0.
1052
1. Show that for t > 0,
!
X a2 j0,n
2
J0 (j0,n r/R0 )
(r, , t) = 2V exp 2 t ,
n=1
R0 j0,n J1 (j0,n )
J0 (j0,n ) = 0, n = 1, 2, . . .
2. For any fixed r, 0 < r < R0 , use the asymptotic approximation for the Jn Bessel functions
for large argument (this can be found in any standard math tables) to determine the rate of
decay of the terms of the series solution for at time t = 0.
Exercise 37.33
Consider the solution of the diffusion equation in spherical coordinates given by
x = r sin cos ,
y = r sin sin ,
z = r cos ,
where r is the radius, is the polar angle, and is the azimuthal angle. We wish to solve the
equation on the surface of the sphere given by r = R, 0 < < , and 0 < < 2. The diffusion
equation for the solution (, , t) in these coordinates on the surface of the sphere becomes
a2 1 2
1
= 2 sin + . (37.7)
t R sin sin2 2
1. Using separation of variables show that a solution can be found in the form
(, , t) = T (t)()(),
where T ,, obey ordinary differential equations in t,, and respectively. Derive the ordinary
differential equations for T and , and show that the differential equation obeyed by is given
by
d2
c = 0,
d2
where c is a constant.
2. Assuming that (, , t) is determined over the full range of the azimuthal angle, 0 < < 2,
determine the allowable values of the separation constant c and the corresponding allowable
functions . Using these values of c and letting x = cos rewrite in terms of the variable x the
differential equation satisfied by . What are appropriate boundary conditions for ? The
resulting equation is known as the generalized or associated Legendre equation.
1053
3. Assume next that the initial conditions for are chosen such that
(, , t = 0) = f (),
where f () is a specified function which is regular at the north and south poles (that is = 0
and = ). Note that the initial condition is independent of the azimuthal angle . Show
that in this case the method of separation of variables gives a series solution for of the form
X
(, t) = Al exp(2l t)Pl (cos ),
l=0
where Pl (x) is the lth Legendre polynomial, and determine the constants l as a function of
the index l.
Useful facts:
d dPl (x)
(1 x2 ) + l(l + 1)Pl (x) = 0
dx dx
P0 (x) = 1
P1 (x) = x
3 2 1
P2 (x) = x
2 2
if l 6= m
(
Z 1 0
dxPl (x)Pm (x) = 2
1 if l = m
2l + 1
Exercise 37.34
Let (x, y) satisfy Laplaces equation
xx + yy = 0
in the rectangle 0 < x < 1, 0 < y < 2, with (x, 2) = x(1 x), and with = 0 on the other three
sides. Use a series solution to determine inside the rectangle. How many terms are required to
give ( 21 , 1) with about 1% (also 0.1%) accuracy; how about x ( 12 , 1)?
Exercise 37.35
Let (r, , ) satisfy Laplaces equation in spherical coordinates in each of the two regions r < a,
r > a, with 0 as r . Let
where m and n m are integers. Find in r < a and r > a. In electrostatics, this problem
corresponds to that of determining the potential of a spherical harmonic type charge distribution
over the surface of the sphere. In this way one can determine the potential due to an arbitrary
surface charge distribution since any charge distribution can be expressed as a series of spherical
harmonics.
Exercise 37.36
Obtain a formula analogous to the Poisson formula to solve the Neumann problem for the circular
region 0 r < R, 0 < 2. That is, determine a solution (r, ) to Laplaces equation
2 = 0
1054
in polar coordinates given r (R, ). Show that
2
r2
Z
R 2r
(r, ) = r (R, ) ln 1 cos( ) + 2 d
2 0 R R
Exercise 37.37
Investigate solutions of
t = a2 xx
obtained by setting the separation constant C = ( + )2 in the equations obtained by assuming
= X(x)T (t):
T0 X 00 C
= C, = 2.
T X a
1055
37.9 Hints
Hint 37.1
Hint 37.2
Hint 37.3
Hint 37.4
Hint 37.5
Hint 37.6
Hint 37.7
Impose the boundary conditions
Hint 37.8
Apply the separation of variables u(x, y) = X(x)Y (y). Solve an eigenvalue problem for X(x).
Hint 37.9
Hint 37.10
Hint 37.11
Hint 37.12
There are two ways to solve the problem. For the first method, expand the solution in a series of
the form
X nx
u(x, t) = an (t) sin .
n=1
L
Because of the inhomogeneous boundary conditions, the convergence of the series will not be uniform.
You can differentiate the series with respect to t, but not with respect to x. Multiply the partial
differential equation by the eigenfunction sin(nx/L) and integrate from x = 0 to x = L. Use
integration by parts to move derivatives in x from u to the eigenfunctions. This process will yield a
first order, ordinary differential equation for each of the an s.
For the second method: Make the change of variables v(x, t) = u(x, t) (x), where (x) is the
equilibrium temperature distribution to obtain a problem with homogeneous boundary conditions.
Hint 37.13
Hint 37.14
1056
Hint 37.15
Hint 37.16
Hint 37.17
Hint 37.18
Use separation of variables to find eigen-solutions of the partial differential equation that satisfy the
homogeneous boundary conditions. There will be two eigen-solutions for each eigenvalue. Expand
u(x, t) in a series of the eigen-solutions. Use the two initial conditions to determine the constants.
Hint 37.19
Expand the solution in a series of eigenfunctions in x. Determine these eigenfunctions by using
separation of variables on the homogeneous partial differential equation. You will find that the
answer has the form,
X nx
u(x, y) = un (y) sin .
n=1
a
Substitute this series into the partial differential equation to determine ordinary differential equations
for each of the un s. The boundary conditions on u(x, y) will give you boundary conditions for the
un s. Solve these ordinary differential equations with Green functions.
Hint 37.20
Solve this problem by expanding the solution in a series of eigen-solutions that satisfy the par-
tial differential equation and the homogeneous boundary conditions. Use the initial conditions to
determine the coefficients in the expansion.
Hint 37.21
Use separation of variables to find eigen-solutions that satisfy the partial differential equation and
the homogeneous boundary conditions. The solution is a linear combination of the eigen-solutions.
The whole solution will be exponentially decaying if each of the eigen-solutions is exponentially
decaying.
Hint 37.22
For parts (a), (b) and (c) use separation of variables. For part (b) the eigen-solutions will involve
Bessel functions. For part (c) the eigen-solutions will involve spherical Bessel functions. Part (d) is
trivial.
Hint 37.23
The solution is a linear combination of eigen-solutions of the partial differential equation that satisfy
the homogeneous boundary conditions. Determine the coefficients in the expansion with the initial
condition.
Hint 37.24
The problem is
1 1
urr + ur + 2 u = 0, 0 < r < 1, 0 < <
r r
u(r, 0) = u(r, ) = 0, u(0, ) = 0, u(1, ) = 1
The solution is a linear combination of eigen-solutions that satisfy the partial differential equation
and the three homogeneous boundary conditions.
1057
Hint 37.25
Hint 37.26
Hint 37.27
Hint 37.28
Hint 37.29
Hint 37.30
Hint 37.31
Hint 37.32
Hint 37.33
Hint 37.34
Hint 37.35
Hint 37.36
Hint 37.37
1058
37.10 Solutions
Solution 37.1
We expand the solution in eigenfunctions in x and y which satify the boundary conditions.
X mx ny
u= umn (t) sin sin
m,n=1
a b
We solve the ordinary differential equations for the coefficients umn (t) subject to their initial condi-
tions.
Z t
m 2 n 2 m 2 n 2
umn (t) = exp + (t ) qmn ( ) d +fmn exp + t
0 a b a b
Solution 37.2
After looking at this problem for a minute or two, it seems like the answer would have the form
u = sin(x)T (t).
This form satisfies the boundary conditions. We substitute it into the heat equation and the initial
condition to determine T
1059
Now we have the solution of the heat equation.
A
sin(x) 1 et
u=
Solution 37.3
First we write the Laplacian in polar coordinates.
1 1
urr + ur + 2 u = 0
r r
1. We introduce the separation of variables u(r, ) = R(r)().
1 1
R00 + R0 + 2 R00 = 0
r r
R00 R0 00
r2 +r = =
R R
We have a regular Sturm-Liouville problem for and a differential equation for R.
First we solve the problem for to determine the eigenvalues and eigenfunctions. The Rayleigh
quotient is
R /2 0 2
( ) d
= 0R /2
0
2 d
Immediately we see that the eigenvalues are non-negative. If 0 = 0, then the right boundary
condition implies that = 0. Thus = 0 is not an eigenvalue. We find the general solution
of Equation 37.8 for positive .
= c1 cos + c2 sin
Now we solve the differential equation for R. Since this is an Euler equation, we make the
substitition R = r .
Rn = r2n1 .
1060
The solution of Laplaces equation is a linear combination of the eigensolutions.
X
u= un r2n1 cos ((2n 1))
n=1
1 1
R00 + R0 + 2 R00 = 0
r r
00
2R R0 00
r +r = =
R R
We have a regular Sturm-Liouville problem for and a differential equation for R.
First we solve the problem for to determine the eigenvalues and eigenfunctions. We recognize
this problem as the generator of the Fourier cosine series.
n = (2n)2 , n Z0+ ,
1
0 = , n = cos (2n) , n Z+
2
Now we solve the differential equation for R. Since this is an Euler equation, we make the
substitition R = r .
Rn = r2n .
1061
Note that the constant term is missing in this cosine series. g() has such a series expansion
only if
Z /2
g() d = 0.
0
This is the condition for the existence of a solution of the problem. If this is satisfied, we can
solve for the coefficients in the expansion. u0 is arbitrary.
4 /2
Z
un = g() cos (2n) d, n Z+
0
Solution 37.4
1.
ut = (uxx + uyy )
XY T 0 = (X 00 Y T + XY 00 T )
T0 X 00 Y 00
= + =
T X Y
00 00
X Y
= =
X Y
We have boundary value problems for X(x) and Y (y) and a differential equation for T (t).
X 00 + X = 0, X 0 (0) = X 0 (1) = 0
Y 00 + ( )Y = 0, Y (0) = Y (1) = 0
T 0 = T
Solution 37.5
We use the separation of variables u(x, t) = X(x)T (t) to find eigensolutions of the heat equation
that satisfy the boundary conditions at x = 0, .
ut = uxx
XT 0 = X 00 T
T0 X 00
= =
T X
1062
The problem for X(x) is
X 00 + X = 0, X 0 (0) = X 0 () = 0.
The eigenfunctions form the familiar cosine series.
1
n = n2 , n Z0+ , X0 = , Xn = cos(nx)
2
Next we solve the differential equation for T (t).
Tn0 = n2 Tn
2
T0 = 1, Tn = en t
Solution 37.6
We expand the solution in a Fourier series.
1 X X
= a0 (r) + an (r) cos(n) + bn (r) sin(n)
2 n=1 n=1
We substitute the series into the Laplaces equation to determine ordinary differential equations for
the coefficients.
1 2
r + 2 2 =0
r r r
1 1 1
a000 + a00 = 0, a00n + a0n n2 an = 0, b00n + b0n n2 bn = 0
r r r
The solutions that are bounded at r = 0 are, (to within multiplicative constants),
a0 (r) = 1, an (r) = rn , bn (r) = rn .
Thus (r, ) has the form
1 X X
(r, ) = c0 + cn rn cos(n) + dn rn sin(n)
2 n=1 n=1
1063
The coefficients are
1 2 2 2
Z Z Z
1 1
c0 = (R, ) d, cn = (R, ) cos(n) d, dn = (R, ) sin(n) d.
0 Rn 0 Rn 0
Solution 37.7
In order that the solution is continuously differentiable, (which it must be in order to satisfy the
differential equation), we impose the boundary conditions
ut = u
T 0 = 00 T
T0 00
= =
T
1
n = n2 , n = en , n Z.
2
Now we solve the problems for Tn (t) to obtain eigen-solutions of the heat equation.
Tn0 = n2 Tn
2
Tn = en t
1064
We use the initial conditions to determine the coefficients.
X 1
u(, 0) = un en = f ()
n=
2
Z 2
1
un = en f () d
2 0
Solution 37.8
Substituting u(x, y) = X(x)Y (y) into the partial differential equation yields
X 00 Y 00
= = .
X Y
With the homogeneous boundary conditions, we have the two problems
X 00 + X = 0, X(0) = X(1) = 0,
Y 00 Y = 0, Y (1) = 0.
The eigenvalues and orthonormal eigenfunctions for X(x) are
n = (n)2 , Xn = 2 sin(nx).
X
u(x, y) = un 2 sin(nx) sinh(n(1 y))
n=1
Z 1
un = 2 sin(n)f () d
0
Solution 37.9
We substitute u(r, ) = R(r)() into the partial differential equation.
2 u 1 u 1 2u
2
+ + 2 2 =0
r r r r
1 0 1
R + R + 2 R00 = 0
00
r r
00
2R R0 00
r +r = =
R R
r2 R00 + rR0 R = 0, 00 + = 0
We assume that u is a strong solution of the partial differential equation and is thus twice contin-
uously differentiable, (u C 2 ). In particular, this implies that R and are bounded and that
1065
is continuous and has a continuous first derivative along = 0. This gives us a boundary value
problem for and a differential equation for R.
00 + = 0, (0) = (2), 0 (0) = 0 (2)
r2 R00 + rR0 R = 0, R is bounded
The eigensolutions for form the familiar Fourier series.
n = n2 , n Z0+
(1) 1
0 = , (1)
n = cos(n), n Z+
2
(2)
n = sin(n), n Z+
Now we find the bounded solutions for R. The equation for R is an Euler equation so we use the
substitution R = r .
r2 Rn00 + rRn0 n Rn = 0
( 1) + n = 0
p
= n
First we consider the case 0 = 0. The solution is
R = a + b ln r.
Boundedness demands that b = 0. Thus we have the solution
R = 1.
Now we consider the case n = n2 > 0. The solution is
Rn = arn + brn .
Boundedness demands that b = 0. Thus we have the solution
Rn = r n .
The solution for u is a linear combination of the eigensolutions.
a0 X
u(r, ) = + (an cos(n) + bn sin(n)) rn
2 n=1
Solution 37.10
A normal mode of frequency is periodic in time.
v(r, , t) = u(r, ) et
We substitute this form into the wave equation to obtain a Helmholtz equation, (also called a reduced
wave equation).
1 2u 2
1 u
r + 2 2 = 2 u, u(1, ) = 0,
r r r r c
2 u 1 u 1 2u
+ + + k 2 u = 0, u(1, ) = 0
r2 r r r2 2
1066
Here we have defined k = c. We apply the separation of variables u = R(r)() to the Helmholtz
equation.
r2 R00 + rR0 + R00 + k 2 r2 R = 0,
R00 R0 00
r2 +r + k2 r2 = = 2
R R
Now we have an ordinary differential equation for R(r) and an eigenvalue problem for ().
2
00 1 0 2
R + R + k 2 R = 0, R(0) is bounded, R(1) = 0,
r r
00 + 2 = 0, () = (), 0 () = 0 ().
We compute the eigenvalues and eigenfunctions for .
n = n, n Z0+
1
0 = , (1)
n = cos(n), (2)
n = sin(n), n Z+
2
The differential equations for the Rn are Bessel equations.
n2
1
Rn00 + Rn0 + k 2 2 Rn = 0, Rn (0) is bounded, Rn (1) = 0
r r
The general solution is a linear combination of order n Bessel functions of the first and second kind.
Rn (r) = c1 Jn (kr) + c2 Yn (kr)
Since the Bessel function of the second kind, Yn (kr), is unbounded at r = 0, the solution has the
form
Rn (r) = cJn (kr).
Applying the second boundary condition gives us the admissable frequencies.
Jn (k) = 0
knm = jnm , Rnm = Jn (jnm r), n Z0+ , m Z+
Here jnm is the mth positive root of Jn . We combining the above results to obtain the normal modes
of oscillation.
1
v0m = J0 (j0m r) ecj0m t , m Z+
2
vnm = cos(n + )Jnm (jnm r) ecjnm t , n, m Z+
Some normal modes are plotted in Figure 37.2. Note that cos(n+) represents a linear combination
of cos(n) and sin(n). This form is preferrable as it illustrates the circular symmetry of the problem.
Solution 37.11
We will expand the solution in a complete, orthogonal set of functions {Xn (x)}, where the coefficients
are functions of t. X
= Tn (t)Xn (x)
n
We will use separation of variables to determine a convenient set {Xn }. We substitite = T (t)X(x)
into the diffusion equation.
t = a2 xx
XT 0 = a2 X 00 T
T0 X 00
= =
a2 T X
T 0 = a2 T, X 00 + X = 0
1067
Figure 37.2: The Normal Modes u01 through u34
Note that in order to satisfy (0, t) = (l, t) = 0, the Xn must satisfy the same homogeneous
boundary conditions, Xn (0) = Xn (l) = 0. This gives us a Sturm-Liouville problem for X(x).
X 00 + X = 0, X(0) = X(l) = 0
n 2 nx
n = , Xn = sin , n Z+
l l
Thus we seek a solution of the form
X nx
= Tn (t) sin . (37.10)
n=1
l
This solution automatically satisfies the boundary conditions. We will assume that we can differen-
tiate it. We will substitite this form into the diffusion equation and the initial condition to determine
1068
the coefficients in the series, Tn (t). First we substitute Equation 37.10 into the partial differential
equation for to determine ordinary differential equations for the Tn .
t = a2 xx
X nx X n 2 nx
Tn0 (t) sin = a2 Tn (t) sin
n=1
l n=1
l l
an 2
Tn0 = Tn
l
Now we substitute Equation 37.10 into the initial condition for to determine initial conditions for
the Tn .
X nx
Tn (0) sin = (x, 0)
n=1
l
Rl
sin nx
0 l (x, 0) dx
Tn (0) = R l 2 nx
0
sin l dx
Z l
2 nx
Tn (0) = sin (x, 0) dx
l 0 l
2 l/2 2 l/2
Z nx Z nx
Tn (0) = sin x dx + sin (l x) dx
l 0 l l 0 l
4l n
Tn (0) = 2 2 sin
n 2
4l
T2n1 (0) = (1)n , T2n (0) = 0, n Z+
(2n 1)2 2
We solve the ordinary differential equations for Tn subject to the initial conditions.
2 !
4l a(2n 1)
T2n1 (t) = (1)n exp t , T2n (t) = 0, n Z+
(2n 1)2 2 l
From the initial condition, we know that the the solution at t = 0 is C 0 . That is, it is continuous,
but not differentiable. The series representation of the solution at t = 0 is
2
4X l (2n 1)x
= (1)n sin .
l n=1 (2n 1) l
1069
Solution 37.12
Because of the inhomogeneous boundary conditions, the convergence of the series will not be uniform.
We can differentiate the series with respect to t, but not with respect to x. We multiply the partial
differential equation by an eigenfunction and integrate from x = 0 to x = L. We use integration by
parts to move derivatives from u to the eigenfunction.
ut uxx = 0
Z L mx
(ut uxx ) sin dx = 0
0 L
Z L X !
m L
nx mx h mx iL Z mx
0
an (t) sin sin dx ux sin + ux cos dx = 0
0 n=1
L L L 0 L 0 L
L 0 m h mx iL m 2 Z L mx
am (t) + u cos + u sin dx = 0
2 L L 0 L 0 L
!
L 0 m m 2 Z L X nx mx
m
am (t) + ((1) u(L, t) u(0, t)) + an (t) sin sin dx = 0
2 L L 0 n=1
L L
L 0 m L m 2
am (t) + ((1)m T1 T0 ) + am (t) = 0
2 L 2 L
m 2 2m
a0m (t) + am (t) = 2 (T0 (1)m T1 )
L L
Now we have a first order differential equation for each of the an s. We obtain initial conditions for
each of the an s from the initial condition for u(x, t).
u(x, 0) = f (x)
X nx
an (0) sin = f (x)
n=1
L
2 L
Z nx
an (0) = f (x) sin dx fn
L 0 L
1070
By solving the first order differential equation for an (t), we obtain
Note that the series does not converge uniformly due to the 1/n term.
Method 2. For our second method we transform the problem to one with homogeneous bound-
ary conditions so that we can use the partial differential equation to determine the time dependence
of the eigen-solutions. We make the change of variables v(x, t) = u(x, t) (x) where (x) is some
function that satisfies the inhomogeneous boundary conditions. If possible, we want (x) to satisfy
the partial differential equation as well. For this problem we can choose (x) to be the equilibrium
solution which satisfies
00 (x) = 0, (0)T0 , (L) = T1 .
This has the solution
T1 T0
(x) = T0 + x.
L
With the change of variables,
T1 T0
v(x, t) = u(x, t) T0 + x ,
L
Now we substitute the separation of variables v(x, t) = X(x)T (t) into the partial differential equa-
tion.
T 0 = 2 T,
X 00 = 2 X, X(0) = X(L) = 0.
The problem for X is a regular Sturm-Liouville problem and has the solutions
n nx
n = , Xn = sin , n N.
L L
The ordinary differential equation for T becomes,
n 2
Tn0 = Tn ,
L
which, (up to a multiplicative constant), has the solution,
2
Tn = e(n/L) t .
Thus the eigenvalues and eigen-solutions of the partial differential equation are,
n nx 2
n = , vn = sin e(n/L) t , n N.
L L
1071
Let v(x, t) have the series expansion,
nx
X 2
v(x, t) = an sin e(n/L) t .
n=1
L
2 L
Z
T1 T0 nx
an = f (x) T0 + x sin dx
L 0 L L
2(T0 (1)n T1 )
an = fn
n
With the coefficients defined above, the solution for u(x, t) is
2(T0 (1)n T1 )
T1 T0 X nx 2
u(x, t) = T0 + x+ fn sin e(n/L) t .
L n=1
n L
Since the coefficients in the sum decay exponentially for t > 0, we see that the series is uniformly
convergent for positive t. It is clear that the two solutions we have obtained are equivalent.
Solution 37.13
First we solve the eigenvalue problem for (x), which is the problem we would obtain if we applied
separation of variables to the partial differential equation, t = xx . We have the eigenvalues and
orthonormal eigenfunctions
2 r
(2n 1) 2 (2n 1)x
n = , n (x) = sin , n Z+ .
2l l 2l
We expand the solution and inhomogeneity in Equation 37.5 in a series of the eigenvalues.
X
(x, t) = Tn (t)n (x)
n=1
X Z l
w(x, t) = wn (t)n (x), wn (t) = n (x)w(x, t) dx
n=1 0
Since satisfies the same homgeneous boundary conditions as , we substitute the series into
Equation 37.5 to determine differential equations for the Tn (t).
X
X
X
Tn0 (t)n (x) = a2 Tn (t)(n )n (x) + wn (t)n (x)
n=1 n=1 n=1
2
(2n 1)
Tn0 (t) = a2 Tn (t) + wn (t)
2l
Now we substitute the series for into its initial condition to determine initial conditions for the
Tn .
X
(x, 0) = Tn (0)n (x) = f (x)
n=1
Z l
Tn (0) = n (x)f (x) dx
0
1072
We solve for Tn (t) to determine the solution, (x, t).
2 ! Z t 2 ! !
(2n 1)a (2n 1)a
Tn (t) = exp t Tn (0) + wn ( ) exp d
2l 0 2l
Solution 37.14
Separation of variables leads to the eigenvalue problem
First we consider the case = 0. A set of solutions of the differential equation is {1, x}. The solution
that satisfies the left boundary condition is (x) = x. The right boundary condition imposes the
constraint l + c = 0. Since c is positive, this has no solutions. = 0 is not an eigenvalue.
Now we consider 6= 0. A set of solutions of the differential equation
is {cos( x), sin( x)}.
The solution that satisfies the left boundary condition is = sin( x). The right boundary condi-
tion imposes the constraint
c sin l + cos l = 0
tan l =
c
For large , the we can determine approximate solutions.
p (2n 1)
n l , n Z+
2
2
(2n 1)
n , n Z+
2l
The eigenfunctions are
sin n x
n (x) = qR , n Z+ .
l
sin2
0
n x dx
We expand (x, t) and w(x, t) in series of the eigenfunctions.
X
(x, t) = Tn (t)n (x)
n=1
X Z l
w(x, t) = wn (t)n (x), wn (t) = n (x)w(x, t) dx
n=1 0
Since satisfies the same homgeneous boundary conditions as , we substitute the series into
Equation 37.5 to determine differential equations for the Tn (t).
X
X
X
Tn0 (t)n (x) = a2 Tn (t)(n )n (x) + wn (t)n (x)
n=1 n=1 n=1
Tn0 (t) = a2 n Tn (t) + wn (t)
Now we substitute the series for into its initial condition to determine initial conditions for the
Tn .
X
(x, 0) = Tn (0)n (x) = f (x)
n=1
Z l
Tn (0) = n (x)f (x) dx
0
1073
We solve for Tn (t) to determine the solution, (x, t).
Z t
2 2
Tn (t) = exp a n t Tn (0) + wn ( ) exp a n d
0
Solution 37.15
First we seek a function u(x, t) that satisfies the boundary conditions u(0, t) = t, ux (l, t) = cu(l, t).
We try a function of the form u = (ax + b)t. The left boundary condition imposes the constraint
b = 1. We then apply the right boundary condition no determine u.
at = c(al + 1)t
c
a=
1 + cl
cx
u(x, t) = 1 t
1 + cl
Now we define to be the difference of and u.
(x, t) = (x, t) u(x, t)
satisfies an inhomogeneous diffusion equation with homogeneous boundary conditions.
( + u)t = a2 ( + u)xx + 1
t = a2 xx + 1 + a2 uxx ut
cx
t = a2 xx +
1 + cl
The initial and boundary conditions for are
(x, 0) = 0, (0, t) = 0, x (l, t) = c(l, t).
We solved this system in problem 2. Just take
cx
w(x, t) = , f (x) = 0.
1 + cl
The solution is
X
(x, t) = Tn (t)n (x),
n=1
Z t
wn exp a2 n (t ) d,
Tn (t) =
0
Z l
cx
wn (t) = n (x) dx.
0 1 + cl
This determines the solution for .
Solution 37.16
First we solve this problem with a series expansion. We transform the problem to one with homo-
geneous boundary conditions. Note that
x
u(x) =
l+1
satisfies the boundary conditions. (It is the equilibrium solution.) We make the change of variables
= u. The problem for is
t = a2 xx ,
x
(0, t) = (l, t) + x (l, t) = 0, (x, 0) = .
l+1
1074
This is a particular case of what we solved in Exercise 37.14. We apply the result of that problem.
The solution for (x, t) is
x X
(x, t) = + Tn (t)n (x)
l + 1 n=1
sin n x
n (x) = qR , n Z+
l 2
0
sin n x dx
tan l =
Tn (t) = Tn (0) exp a2 n t
Z l
x
Tn (0) = n (x) dx
0 l+1
This expansion is useful for large t because the coefficients decay exponentially with increasing t.
Now we solve this problem with the Laplace transform.
t = a2 xx , (0, t) = 0, (l, t) + x (l, t) = 1, (x, 0) = 0
1
s = a2 xx , (0, s) = 0, (l, s) + x (l, s) =
s
s 1
xx 2 = 0, (0, s) = 0, (l, s) + x (l, s) =
a s
The solution that satisfies the left boundary condition is
sx
= c sinh .
a
We apply the right boundary condition to determine the constant.
sinh asx
=
s sinh asl + as cosh asl
exp asx exp asx 1
=
s sl 1 s/a
s 1 + a exp a 1 1+s/a exp 2 asl
exp s(xl)
exp s(xl) n
a a X 1 s/a 2 sln
= exp
s 1+ s n=0
1 + s/a a
a
(1 s/a)n
1 X s((2n + 1)l x)
= exp
s n=0
(1 + s/a)n+1 a
!
X (1 s/a)n s((2n + 1)l + x)
exp
n=0
(1 + s/a)n+1 a
1075
By expanding
(1 s/a)n
(1 + s/a)n+1
in binomial series all the terms would be of the form
s((2n 1)l x)
sm/23/2 exp .
a
We take the inverse Laplace transform to obtain an appoximation of the solution for t 1.
2
2
exp (lx)
4a2t exp (l+x)
4a2t
(x, t) 2a2 t
lx l+x
lx l+x
erfc erfc , for t 1
2a t 2a t
Solution 37.17
We apply the separation of variables (x, t) = X(x)T (t).
t = A2 x2 x
x
0
XT 0 = T A2 x2 X 0
T0 (x2 X 0 )0
= =
A2 T X
This gives us a regular Sturm-Liouville problem.
0
x2 X 0 + X = 0, X(1) = X(2) = 0
x2 X 00 + 2xX 0 + X = 0 (37.11)
( 1) + 2 + = 0
1 1 4
=
2
1 p
= 1/4
2
First we consider the case of a double root when = 1/4. The solutions of Equation 37.11 are
{x1/2 , x1/2 ln x}. The solution that satisfies the left boundary condition is X = x1/2 ln x. Since
this does not satisfy the right boundary condition, = 1/4 is not an eigenvalue.
Now we consider 6= 1/4. The solutions of Equation 37.11 are
1 p 1 p
cos 1/4 ln x , sin 1/4 ln x .
x x
1 p
sin 1/4 ln x .
x
1076
The right boundary condition imposes the constraint
p
1/4 ln 2 = n, n Z+ .
We substitute the expansion for into the initial condition to determine the coefficients.
X
(x, 0) = n Xn (x) = f (x)
n=1
Z 2
n = Xn (x)f (x) dx
1
Solution 37.18
We substitute the separation of variables u(x, t) = X(x)T (t) into the partial differential equation.
T 00 = c2 2 T,
X 00 = 2 X, X(0) = X 0 (L) = 0.
The problem for X is a regular Sturm-Liouville eigenvalue problem. From the Rayleigh quotient,
L RL RL 0 2
2 [0 ]0 + 0 (0 )2 dx ( ) dx
= RL = R0 L
2
dx 2 dx
0 0
1077
we see that there are only positive eigenvalues. For 2 > 0 the general solution of the ordinary
differential equation is
X = a1 cos(x) + a2 sin(x).
The solution that satisfies the left boundary condition is
X = a sin(x).
For non-trivial solutions, the right boundary condition imposes the constraint,
cos (L) = 0,
1
= n , n N.
L 2
The eigenvalues and eigenfunctions are
(2n 1) (2n 1)x
n = , Xn = sin , n N.
2L 2L
The differential equation for T becomes
2
(2n 1)
T 00 = c2 T,
2L
which has the two linearly independent solutions,
(1) (2n 1)ct (2) (2n 1)ct
Tn = cos , Tn = sin .
2L 2L
The eigenvalues and eigen-solutions of the partial differential equation are,
(2n 1)
n = , n N,
2L
(2n 1)x (2n 1)ct (2n 1)x (2n 1)ct
u(1)
n = sin cos , u(2)
n = sin sin .
2L 2L 2L 2L
We expand u(x, t) in a series of the eigen-solutions.
X (2n 1)x (2n 1)ct (2n 1)ct
u(x, t) = sin an cos + bn sin .
n=1
2L 2L 2L
bn = 0.
The initial condition u(x, 0) = f (x) allows us to determine the remaining coefficients,
X (2n 1)x
u(x, 0) = an sin = f (x),
n=1
2L
L
(2n 1)x
Z
2
an = f (x) sin dx.
L 0 2L
The series solution for u(x, t) is,
X (2n 1)x (2n 1)ct
u(x, t) = an sin cos .
n=1
2L 2L
1078
Solution 37.19
We will solve this problem with an eigenfunction expansion in x. To determine a suitable set of
eigenfunctions, we substitute the separation of variables u(x, y) = X(x)Y (y) into the homogeneous
partial differential equation.
uxx + uyy = 0
(XY )xx + (XY )yy = 0
X 00 Y 00
= = 2
X Y
With the boundary conditions at x = 0, a, we have the regular Sturm-Liouville problem,
X 00 = 2 X, X(0) = X(a) = 0,
We substitute this series into the partial differential equation and boundary conditions at y = 0, b.
nx
X n 2 nx
un (y) sin + u00n (y) sin = f (x)
n=1
a a a
X nx
X nx
u0n (0) sin = u0n (b) sin =0
n=1
a n=1
a
We obtain the ordinary differential equations for the coefficients in the expansion.
n 2
u00n (y) un (y) = fn (y), u0n (0) = u0n (b) = 0, n Z+ .
a
We will solve these ordinary differential equations with Green functions.
Consider the Green function problem,
n 2
gn00 (y; ) gn (y; ) = (y ), gn0 (0; ) = gn0 (b; ) = 0.
a
The homogeneous solutions
ny n(y b)
cosh and cosh
a a
1079
satisfy the left and right boundary conditions, respectively. We compute the Wronskian of these two
solutions.
cosh(ny/a) cosh(n(y b)/a)
W (y) = n
n
a sinh(ny/a) a sinh(n(y b)/a)
n ny n(y b) ny n(y b)
= cosh sinh sinh cosh
a a a a a
n nb
= sinh
a a
The Green function is
Solution 37.20
We will solve this problem by expanding the solution in a series of eigen-solutions that satisfy the
partial differential equation and the homogeneous boundary conditions. We will use the initial
conditions to determine the coefficients in the expansion. We substitute the separation of variables,
u(x, t) = X(x)T (t) into the partial differential equation.
T 00 = a2 4 T,
X = c1 + c2 x + c3 x2 + c4 x3 .
Only the trivial solution satisfies the boundary conditions. = 0 is not an eigenvalue. For 6= 0, a
set of linearly independent solutions is
{ex , ex , ex , ex }.
Another linearly independent set, (which will be more useful for this problem), is
1080
Both sin(x) and sinh(x) satisfy the left boundary conditions. Consider the linear combination
c1 cos(x) + c2 cosh(x). The left boundary conditions impose the two constraints c1 + c2 = 0,
c1 c2 = 0. Only the trivial linear combination of cos(x) and cosh(x) can satisfy the left
boundary condition. Thus the solution has the form,
X = c1 sin(x) + c2 sinh(x).
The right boundary conditions impose the constraints,
(
c1 sin(L) + c2 sinh(L) = 0,
c1 2 sin(L) + c2 2 sinh(L) = 0
(
c1 sin(L) + c2 sinh(L) = 0,
c1 sin(L) + c2 sinh(L) = 0
This set of equations has a nontrivial solution if and only if the determinant is zero,
sin(L) sinh(L)
sin(L) sinh(L) = 2 sin(L) sinh(L) = 0.
Since sinh(z) is nonzero in 0 arg(z) < /2, z 6= 0, and sin(z) has the zeros z = n, n N in this
domain, the eigenvalues and eigenfunctions are,
n nx
n = , Xn = sin , n N.
L L
The differential equation for T becomes,
n 4
T 00 = a2 T,
L
which has the solutions,
n 2 n 2
cos a t , sin a t .
L L
The eigen-solutions of the partial differential equation are,
nx n 2 nx n 2
u(1)
n = sin cos a t , u (2)
n = sin sin a t , n N.
L L L L
We expand the solution of the partial differential equation in a series of the eigen-solutions.
nx
X n 2 n 2
u(x, t) = sin cn cos a t + dn sin a t
n=1
L L L
The initial condition for u(x, t) and ut (x, t) allow us to determine the coefficients in the expansion.
X nx
u(x, 0) = cn sin = f (x)
n=1
L
X n 2 nx
ut (x, 0) = dn a sin = g(x)
n=1
L L
1081
Solution 37.21
X 00 = 2 X, X(0) = X(L) = 0
The eigenvalues and eigenfunctions for X are
n nx
n = , Xn = sin , n N.
L L
The differential equation for T becomes,
0 n 2 2
Tn = I Tn ,
L
which has the solution,
n 2
Tn = c exp I 2 t .
L
From this solution, we see that the critical current is
r
ICR = .
L
If I is greater that this, then the eigen-solution for n = 1 will be exponentially growing. This
would make the whole solution exponentially growing. For I < ICR , each of the Tn is exponentially
decaying. The eigen-solutions of the partial differential equation are,
n 2 2
nx
un = exp I t sin , n N.
L L
We expand u(x, t) in its eigen-solutions, un .
X n 2 nx
u(x, t) = an exp I 2 t sin
n=1
L L
If < 0, then the solution is exponentially decaying regardless of current. Thus there is no critical
current.
1082
Solution 37.22
a) The problem is
ut (x, y, z, t) = u(x, y, z, t), < x < , < y < , 0 < z < a, t > 0,
u(x, y, z, 0) = T, u(x, y, 0, t) = u(x, y, a, t) = 0.
Because of symmetry, the partial differential equation in four variables is reduced to a problem
in two variables,
We will solve this problem with an expansion in eigen-solutions of the partial differential
equation that satisfy the homogeneous boundary conditions. We substitute the separation of
variables u(z, t) = Z(z)T (t) into the partial differential equation.
ZT 0 = Z 00 T
T0 Z 00
= = 2
T Z
With the boundary conditions at z = 0, a we have the Sturm-Liouville eigenvalue problem,
Z 00 = 2 Z, Z(0) = Z(a) = 0,
The solution for u is a linear combination of the eigen-solutions. The slowest decaying eigen-
solution is z 2
u1 (z, t) = sin exp t .
a a
Thus the e-folding time is
a2
e = .
2
b) The problem is
ut (r, , z, t) = u(r, , z, t), 0 < r < a, 0 < < 2, < z < , t > 0,
u(r, , z, 0) = T, u(0, , z, t) is bounded, u(a, , z, t) = 0.
1083
The Laplacian in cylindrical coordinates is
1 1
u = urr + ur + 2 u + uzz .
r r
Because of symmetry, the solution does not depend on or z.
1
ut (r, t) = urr (r, t) + ur (r, t) , 0 < r < a, t > 0,
r
u(r, 0) = T, u(0, t) is bounded, u(a, t) = 0.
We will solve this problem with an expansion in eigen-solutions of the partial differential
equation that satisfy the homogeneous boundary conditions at r = 0 and r = a. We substitute
the separation of variables u(r, t) = R(r)T (t) into the partial differential equation.
0 00 1 0
RT = R T + R T
r
0 00 0
T R R
= + = 2
T R rR
We have the eigenvalue problem,
1
R00 + R0 + 2 R = 0, R(0) is bounded, R(a) = 0.
r
Recall that the Bessel equation,
2
00 1 0 2
y + y + 2 y = 0,
x x
has the general solution y = c1 J (x)+c2 Y (x). We discard the Bessel function of the second
kind, Y , as it is unbounded at the origin. The solution for R(r) is
R(r) = J0 (r).
Applying the boundary condition at r = a, we see that the eigenvalues and eigenfunctions are
n n r
n = , R n = J0 , n N,
a a
where {n } are the positive roots of the Bessel function J0 .
The differential equation for T becomes,
2
n
Tn0 = Tn ,
a
which has the solutions,
2 !
n
Tn = exp t .
a
The eigen-solutions of the partial differential equation for u(r, t) are,
2 !
n r n
un (r, t) = J0 exp t .
a a
The solution u(r, t) is a linear combination of the eigen-solutions, un . The slowest decaying
eigenfunction is,
2 !
1 r 1
u1 (r, t) = J0 exp t .
a a
1084
Thus the e-folding time is
a2
e = .
12
c) The problem is
ut (r, , , t) = u(r, , , t), 0 < r < a, 0 < < 2, 0 < < , t > 0,
u(r, , , 0) = T, u(0, , , t) is bounded, u(a, , , t) = 0.
We will solve this problem with an expansion in eigen-solutions of the partial differential
equation that satisfy the homogeneous boundary conditions at r = 0 and r = a. We substitute
the separation of variables u(r, t) = R(r)T (t) into the partial differential equation.
0 00 2 0
RT = R T + R T
r
T0 R00 2 R0
= + = 2
T R r R
We have the eigenvalue problem,
2
R00 + R0 + 2 R = 0, R(0) is bounded, R(a) = 0.
r
Recall that the equation,
2 0 ( + 1)
y 00 + y + 2 y = 0,
x x2
has the general solution y = c1 j (x) + c2 y (x), where j and y are the spherical Bessel
functions of the first and second kind. We discard y as it is unbounded at the origin. (The
spherical Bessel functions are related to the Bessel functions by
r
j (x) = J+1/2 (x).)
2x
The solution for R(r) is
Rn = j0 (r).
Applying the boundary condition at r = a, we see that the eigenvalues and eigenfunctions are
n r
n
n = , Rn = j0 , n N.
a a
The problem for T becomes
2
n
Tn0 = Tn ,
a
which has the solutions, 2
n
Tn = exp t .
a
1085
The eigen-solutions of the partial differential equation are,
r 2
n n
un (r, t) = j0 exp t .
a a
d) If the edges are perfectly insulated, then no heat escapes through the boundary. The temper-
ature is constant for all time. There is no e-folding time.
Solution 37.23
We will solve this problem with an eigenfunction expansion. Since the partial differential equation is
homogeneous, we will find eigenfunctions in both x and y. We substitute the separation of variables
u(x, y, t) = X(x)Y (y)T (t) into the partial differential equation.
XY T 0 = (t) (X 00 Y T + XY 00 T )
T0 X 00 Y 00
= + = 2
(t)T X Y
X 00 Y 00
= 2 = 2
X Y
First we have a Sturm-Liouville eigenvalue problem for X,
X 00 = 2 X, X 0 (0) = X 0 (a) = 0,
1086
m=0, n=1 m=0, n=2 m=0, n=3
mx ny
Figure 37.3: The eigenfunctions cos a sin b
Z a Z b
2 n
c0n = f (x, y) sin dy dx
ab 0 0 b
Z a Z b
4 m n
cmn = f (x, y) cos sin dy dx
ab 0 0 a b
Solution 37.24
The steady state temperature satisfies Laplaces equation, u = 0. The Laplacian in cylindrical
coordinates is,
1 1
u(r, , z) = urr + ur + 2 u + uzz .
r r
Because of the homogeneity in the z direction, we reduce the partial differential equation to,
1 1
urr + ur + 2 u = 0, 0 < r < 1, 0 < < .
r r
1087
The boundary conditions are,
We will solve this problem with an eigenfunction expansion. We substitute the separation of variables
u(r, ) = R(r)T () into the partial differential equation.
1 1
R00 T + R0 T + 2 RT 00 = 0
r r
00
2R R0 T 00
r +r = = 2
R R T
We have the regular Sturm-Liouville eigenvalue problem,
T 00 = 2 T, T (0) = T () = 0,
( 1) + n2 = 0,
= n.
Rn = c1 rn + c2 rn .
un = rn sin(n).
4 X n
u(r, ) = r sin(n).
n=1
odd n
1088
Solution 37.25
The problem is
We substitute the separation of variables u(x, y) = X(x)Y (y) into the partial differential equation.
X 00 Y + XY 00 = 0
X 00 Y 00
= = 2
X Y
We have the regular Sturm-Liouville problem,
Y 00 = 2 Y, Y (0) = Y (1) = 0,
Xn = c enx .
un = enx sin(ny), n N.
Z 1
an = 2 f (y) sin(ny) dy
0
Solution 37.26
The Laplacian in polar coordinates is
1 1
u urr + ur + 2 u .
r r
Since we have homogeneous boundary conditions at = 0 and = , we will solve this problem
with an eigenfunction expansion. We substitute the separation of variables u(r, ) = R(r)() into
Laplaces equation.
1 1
R00 + R0 + 2 R00 = 0
r r
R00 R0 00
r2 +r = = 2 .
R R
1089
We have a regular Sturm-Liouville eigenvalue problem for .
00 = 2 , (0) = () = 0
n n
n = , n = sin , n Z+ .
Solution 37.27
Because we are interest in the harmonics of the motion, we will solve this problem with an eigen-
function expansion in x. We substitute the separation of variables u(x, t) = X(x)T (t) into the wave
equation.
XT 00 = c2 X 00 T
T 00 X 00
= = 2
c2 T X
The eigenvalue problem for X is,
X 00 = 2 X, X(0) = X(L) = 0,
1090
The ordinary differential equation for the Tn are,
nc 2
Tn00 = Tn ,
L
which have the linearly independent solutions,
nct nct
cos , sin .
L L
Since the string initially has zero displacement, each of the an are zero.
nx
X nct
u(x, t) = bn sin sin
n=1
L L
Now we use the initial velocity to determine the coefficients in the expansion. Because the position
is a continuous function of x, and there is a jump discontinuity in the velocity as a function of x,
the coefficients in the expansion will decay as 1/n2 .
(
X nc nx v for |x | < d
ut (x, 0) = bn sin =
n=1
L L 0 for |x | > d.
Z L
nc 2 nx
bn = ut (x, 0) sin dx
L L 0 L
Z +d
2 nx
bn = v sin dx
nc
d L
4Lv nd n
= 2 2 sin sin
n c L L
4Lv X 1 nd n nx nct
u(x, t) = sin sin sin sin .
2 c n=1 n2 L L L L
1091
Z +d
2 (x ) nx
bn = v cos sin dx
nc d 2d L
2
2 8dL v nd n L
2 2
n c(L 4d n ) 2 cos L sin L for d 6= 2n ,
bn = n
v 2nd L
2 2 2nd + L sin sin L for d =
n c L 2n
v X 1 2nd n nx nct L
u(x, t) = 2 2nd + L sin sin sin sin for d = .
c n=1 n2 L L L L 2n
We note that the kinetic energies of the nth harmonic decay as 1/n2 .
L
Curved Hammer. We assume that d 6= 2n . The nth harmonic is
8dL2 v
nd n nx nct
un = cos sin sin sin .
n 2 c(L2 4d2 n2 ) L L L L
The kinetic energy of the nth harmonic is
2
L un 16d2 L3 v 2
Z
2 nd 2 n 2 nct
En = dx = 2 2 cos sin cos .
2 0 t (L 4d2 n2 )2 L L L
This will be maximized if
n
sin2 = 1,
L
(2m 1)L
= , m = 1, . . . , n
2n
We note that the kinetic energies of the nth harmonic decay as 1/n4 .
1092
Solution 37.28
In mathematical notation, the problem is
Since this is an inhomogeneous partial differential equation, we will expand the solution in a series
of eigenfunctions in x for which the coefficients are functions of t. The solution for u has the form,
X nx
u(x, t) = un (t) sin .
n=1
L
Substituting this expression into the inhomogeneous partial differential equation will give us ordinary
differential equations for each of the un .
2
2 n
X nx
00
un + c un sin = s(x, t).
n=1
L L
Since the initial position and velocity of the string is zero, we have
First we solve the differential equation on the range 0 < t < . The homogeneous solutions are
nct nct
cos , sin .
L L
Since the right side of the ordinary differential equation is a constant times sin(t/), which is an
eigenfunction of the differential operator, we can guess the form of a particular solution, pn (t).
t
pn (t) = d sin
We substitute this into the ordinary differential equation to determine the multiplicative constant
d.
8d 2 L3 v
nd n t
pn (t) = 3 2 cos sin sin
(L c2 2 n2 )(L2 4d2 n2 ) L L
1093
The general solution for un (t) is
8d 2 L3 v
nct nct nd n t
un (t) = a cos +b sin 3 2 2 2 2 2 2 2
cos sin sin .
L L (L c n )(L 4d n ) L L
We use the initial conditions to determine the constants a and b. The solution for 0 < t < is
8d 2 L3 v
nd n L nct t
un (t) = 3 2 cos sin sin sin .
(L c2 2 n2 )(L2 4d2 n2 ) L L cn L
The solution for t > , the solution is a linear combination of the homogeneous solutions. This
linear combination is determined by the position and velocity at t = . We use the above solution
to determine these quantities.
8d 2 L4 v
nd n nc
un () = 3 cos sin sin
cn(L2 c2 2 n2 )(L2 4d2 n2 ) L L L
2 3
0 8d L v nd n nc
un () = 2 cos sin 1 + cos
(L2 c2 2 n2 )(L2 4d2 n2 ) L L L
From the initial conditions at t = , we see that the solution for t > is
8d 2 L3 v
nd n
un (t) = 3 2 cos sin
(L c2 2 n2 )(L2 4d2 n2 ) L L
L nc nc(t ) nc nc(t )
sin cos + 1 + cos sin .
cn L L L L
Width of the Hammer. The nth harmonic has the width dependent factor,
d nd
cos .
L2 4d2 n2 L
Differentiating this expression and trying to find zeros to determine extrema would give us an
equation with both algebraic and transcendental terms. Thus we dont attempt to find the maxima
exactly. We know that d < L. The cosine factor is large when
nd
m, m = 1, 2, . . . , n 1,
L
mL
d , m = 1, 2, . . . , n 1.
n
Substituting d = mL/n into the width dependent factor gives us
d
(1)m .
L2 (1 4m2 )
Thus we see that the amplitude of the nth harmonic and hence its kinetic energy will be maximized
for
L
d
n
1094
The cosine term in the width dependent factor vanishes when
(2m 1)L
d= , m = 1, 2, . . . , n.
2n
The kinetic energy of the nth harmonic is minimized for these widths.
L
For the lower harmonics, n 2d , the kinetic energy is proportional to d2 ; for the higher har-
L
monics, n 2d , the kinetic energy is proportional to 1/d2 .
Duration of the Blow. The nth harmonic has the duration dependent factor,
2
L nc nc(t ) nc nc(t )
sin cos + 1 + cos sin .
L2 n2 c2 2 nc L L L L
Solution 37.29
Substituting u(x, y, z, t) = v(x, y, z) et into the wave equation will give us a Helmholtz equation.
We find the propagating modes with separation of variables. We substitute v = X(x)Y (y)Z(z) into
the Helmholtz equation.
X 00 Y Z + XY 00 Z + XY Z 00 + k 2 XY Z = 0
X 00 Y 00 Z 00
= + + k2 = 2
X Y Z
The eigenvalue problem in x is
X 00 = 2 X, X(0) = X(L) = 0,
Y 00 Z 00 n 2
= + k2 = 2
Y Z L
The eigenvalue problem in y is
Y 00 = 2 Y, Y (0) = Y (L) = 0,
1095
which has the solutions,
m my
n = , Ym = sin .
L L
Now we have an ordinary differential equation for Z,
2
Z 00 + k 2 n2 + m2 Z = 0.
L
We define the eigenvalues,
2
2n,m = k 2 n2 + m2 .
L
2
If k 2 L n2 + m2 < 0, then the solutions for Z are,
s !
2
exp (n2 + m2 ) k 2 z .
L
{1, z}
The solution Z = 1 satisfies the boundedness and nonzero condition at infinity. This corresponds to
a standing wave.
2
If k 2 L n2 + m2 > 0, then the solutions for Z are,
en,m z .
These satisfy the boundedness and nonzero conditions at infinity. For values of n, m satisfying
2
k2 L n2 + m2 0, there are the propagating modes,
nx my
un,m = sin sin e(tn,m z) .
L L
Solution 37.30
We substitute the separation of variables u(x, y, t) = X(x)Y (y)T (t) into Equation 37.12.
T 00 X 00 Y 00
2
= + =
c T X Y
X 00 Y 00
= =
X Y
This gives us differential equations for X(x), Y (y) and T (t).
X 00 = X, X(0) = X(a) = 0
00
Y = ( )Y, Y (0) = Y (b) = 0
T 00 = c2 T
1096
Then we solve the problem for Y .
m 2 n 2 ny
m,n = + , Ym,n = sin
a b b
Finally we determine T . r !
cos m 2 n 2
Tm,n = c + t
sin a b
The modes of oscillation are
r !
mx ny cos m 2 n 2
um,n = sin sin c + t .
a b sin a b
Solution 37.31
We substitute the separation of variables = X(x)Y (y)T (t) into the differential equation.
t = a2 (xx + yy ) (37.13)
0 2 00 00
XY T = a (X Y T + XY T )
T0 X 00 Y 00
= + =
a2 T X Y
T0 X 00 Y 00
2
= , = =
a T X Y
First we solve the eigenvalue problem for X.
X 00 + X = 0, X(0) = X(lx ) = 0
2
m mx
m = , Xm (x) = sin , m Z+
lx lx
1097
Then we solve the eigenvalue problem for Y .
Y 00 + ( m )Y = 0, Y 0 (0) = Y 0 (ly ) = 0
2
n ny
mn = m + , Ymn (y) = cos , n Z0+
ly ly
Next we solve the differential equation for T , (up to a multiplicative constant).
T 0 = a2 mn T
T (t) = exp a2 mn t
cmn = 0, m Z+ , n Z+
X
(x, y, t) = cm0 m0 (x, y, t)
m=1
p
X 2 2lx ly
sin(m x) exp a2 mn t
(x, y, t) =
m=1
m
odd m
1098
Here we have done an even periodic continuation of the problem in the y variable. Thus the boundary
conditions
y (x, 0, t) = y (x, ly , t) = 0
are automatically satisfied. Note that this problem does not depend on y. Thus we only had to
solve
t = a2 xx , 0 < x < lx
(x, 0) = 1, (0, t) = (ly , t) = 0.
Solution 37.32
1. Since the initial and boundary conditions do not depend on , neither does . We apply the
separation of variables = u(r)T (t).
t = a2 (37.14)
1
t = a2 (rr )r (37.15)
r
T0 1
= (ru0 )0 = (37.16)
a2 T r
We solve the eigenvalue problem for u(r).
The Bessel function of the second kind, Y0 , is not bounded at r = 0, so c2 = 0. We use the
boundary condition at r = R to determine the eigenvalues.
2
j0,n j0,n r
n = , un (r) = cJ0
R R
We choose the constant c so that the eigenfunctions are orthonormal with respect to the
weighting function r.
j r
J0 0,n R
un (r) = r
R R 2 j0,n r
0
rJ0 R
2 j0,n r
= J0
RJ1 (j0,n ) R
T 0 = a2 n T
2 !
aj0,n
Tn = exp t
R2
1099
The solution is a linear combination of the eigensolutions.
2 !
X 2 j0,n r aj0,n
= cn J0 exp t
n=1
RJ1 (j0,n ) R R2
(r, , 0) = V
X 2 j0,n r
cn J0 =V
n=1
RJ1 (j0,n ) R
Z R
2 j0,n r
cn = Vr J0 dr
0 RJ1 (j0,n ) R
2 R
cn = V J1 (j0,n )
RJ1 (j0,n ) j0,n /R
2 VR
cn =
j0,n
J j0,n r 2 !
X 0 R aj0,n
(r, , t) = 2V exp t
j J (j )
n=1 0,n 1 0,n
R2
2.
r
2
J (r) cos r , r +
r 2 4
1
j,n n +
2 4
For large n, the terms in the series solution at t = 0 are
q
j r 2R j0,n r
J0 0,n R j0,n r cos R 4
q
j0,n J1 (j0,n ) j0,n j0,n cos j0,n 3
2
4
(n1/4)r
R cos R 4
.
r(n 1/4) cos ((n 1))
Solution 37.33
1. We substitute the separation of variables = T (t)()() into Equation 37.7
a2
1 1
T 0 = 2 (sin T 0 ) + T 00
R sin sin2
R2 T 0 1 00
1 0 0
= (sin ) + =
a2 T sin sin2
sin 00
(sin 0 )0 + sin2 = =
We have differential equations for each of T , and .
a2
1
T 0 = T, (sin 0 )0 + = 0, 00 + = 0
R2 sin sin2
1100
2. In order that the solution be continuously differentiable, we need the periodic boundary con-
ditions
1
n = n 2 , n = en , n Z.
2
d 1 d
x = cos , () = P (x), sin2 = 1 x2 , =
dx sin d
1 1
(sin2 0 )0 + =0
sin sin sin2
n2
0
1 x2 P 0 + P =0
1 x2
3. If the solution does not depend on , then the only one of the n that will appear in the
solution is 0 = 1/ 2. The equations for T and P become
0
1 x2 P 0 + P = 0, P (1) bounded,
2
a
T 0 = T.
R2
a2
T 0 = l(l + 1) 2 T
2 R
a l(l + 1)
Tl = exp t
R2
2
X a l(l + 1)
= Al Pl (cos ) exp t
R2
l=0
1101
4. We determine the coefficients in the expansion from the initial condition.
(, 0) = 2 cos2 1
X
Al Pl (cos ) = 2 cos2 1
l=0
3 1
A0 + A1 cos + A2 cos2 + = 2 cos2 1
2 2
1 4
A0 = , A1 = 0, A2 = , A3 = A4 = = 0
3 3
6a2
1 4
(, t) = P0 (cos ) + P2 (cos ) exp 2 t
3 3 R
2
1 2 6a
(, t) = + 2 cos2 exp 2 t
3 3 R
Solution 37.34
Since we have homogeneous boundary conditions at x = 0 and x = 1, we will expand the solution
in a series of eigenfunctions in x. We determine a suitable set of eigenfunctions with the separation
of variables, = X(x)Y (y).
xx + yy = 0 (37.17)
00 00
X Y
= =
X Y
We have differential equations for X and Y .
X 00 + X = 0, X(0) = X(1) = 0
Y 00 Y = 0, Y (0) = 0
Yn (y) = sinh(ny).
1102
The solution at x = 1/2, y = 1 is
8 X 1 sinh(n)
(1/2, 1) = .
3 n=1 n3 sinh(2n)
odd n
Since R1 0.0000693169 we see that one term is sufficient for 1% or 0.1% accuracy.
Now consider x (1/2, 1).
8 X 1 sinh(ny)
x (x, y) = 2 cos(nx)
n=1 n2 sinh(2n)
odd n
x (1/2, 1) = 0
Since all the terms in the series are zero, accuracy is not an issue.
Solution 37.35
The solution has the form
(
rn1 Pnm (cos ) sin(m), r > a
=
rn Pnm (cos ) sin(m), r < a.
The boundary condition on at r = a gives us the constraint
an1 an = 0
= a2n1 .
Then we apply the boundary condition on r at r = a.
(n + 1)an2 na2n1 an1 = 1
an+2
=
2n + 1
( n+2
a
2n+1 rn1 Pnm (cos ) sin(m), r > a
= n+1
a2n+1 rn Pnm (cos ) sin(m), r<a
Solution 37.36
We expand the solution in a Fourier series.
1 X X
= a0 (r) + an (r) cos(n) + bn (r) sin(n)
2 n=1 n=1
We substitute the series into the Laplaces equation to determine ordinary differential equations for
the coefficients.
1 2
r + 2 2 =0
r r r
1 1 1
a000 + a00 = 0, a00n + a0n n2 an = 0, b00n + b0n n2 bn = 0
r r r
1103
The solutions that are bounded at r = 0 are, (to within multiplicative constants),
1 X X
(r, ) = c0 + cn rn cos(n) + dn rn sin(n)
2 n=1 n=1
X
X
r (R, ) = ncn Rn1 cos(n) + ndn Rn1 sin(n)
n=1 n=1
In order that r (R, ) have a Fourier series of this form, it is necessary that
Z 2
r (R, ) d = 0.
0
We substitute the coefficients into our series solution to determine it up to the additive constant.
R X 1 r n 2
Z
(r, ) = r (R, ) cos(n( )) d
n=1 n R 0
R 2
Z X 1 r n
(r, ) = r (R, ) cos(n( )) d
0 n=1
n R
Z r n1
R 2
Z X
n()
(r, ) = r (R, ) d< e d
0 n=1 0
Rn
!
R 2
Z r X
n n()
Z
1
(r, ) = r (R, )< n
e d d
0 0 n=1 R
!
R 2
Z r
1 R e()
Z
(r, ) = r (R, )< () d d
0 0 1 R e
R 2
Z r
(r, ) = r (R, )< ln 1 e() d
0 R
Z 2
R r
(r, ) = r (R, ) ln 1 e() d
0 R
Z 2
r2
R r
(r, ) = r (R, ) ln 1 2 cos( ) + 2 d
2 0 R R
Solution 37.37
We will assume that both and are nonzero. The cases of real and pure imaginary have already
been covered. We solve the ordinary differential equations, (up to a multiplicative constant), to find
1104
special solutions of the diffusion equation.
T0 X 00 ( + )2
= ( + )2 , = 2
T X a
+
T = exp ( + )2 t ,
X = exp x
a
T = exp 2 2 t + 2t , X = exp x x
a a
= exp 2 2 t x + 2t x
a a
1105
1106
Chapter 38
Finite Transforms
and apply a finite cosine transform in the y direction. Integrating from 0 to b yields
Z b Z b
2
vxx + vyy + k v dy = (x )(y ) dy,
0 0
b
Z b
vy 0 + vxx + k 2 v dy = (x ),
0
Z b
vxx + k 2 v dy = (x ).
0
Substituting in Equation 38.1 and using the orthogonality of the cosines gives us
2
c000 (x) + k 2 c0 (x) = (x ).
b
Multiplying by cos(ny/b) and integrating form 0 to b yields
Z b ny Z b ny
2
vxx + vyy + k v cos dy = (x )(y ) cos dy.
0 b 0 b
The vyy term becomes
Z b ny h ny ib Z b n ny
vyy cos dy = vy cos vy sin dy
0 b b 0 0 b b
h n ny ib Z b n 2 ny
= v sin v cos dy.
b b 0 0 b b
1107
The right-hand-side becomes
Z b ny n
(x )(y ) cos dy = (x ) cos .
0 b b
Thus the partial differential equation becomes
Z b n 2 ny n
vxx v + k 2 v cos dy = (x ) cos .
0 b b b
Substituting in Equation 38.1 and using the orthogonality of the cosines gives us
n 2 2 n
c00n (x) + k 2 cn (x) = (x ) cos .
b b b
Now we need to solve for the coefficients in the expansion of v(x, y). The homogeneous solutions
for c0 (x) are ekx . The solution for u(x, y, t) must satisfy the radiation condition. The waves at
x = travel to the left and the waves at x = + travel to the right. The two solutions of that
will satisfy these conditions are, respectively,
y1 = ekx , y2 = ekx .
The Wronskian of these two solutions is 2k. Thus the solution for c0 (x) is
ekx< ekx>
c0 (x) =
bk
We need to consider three cases for the equation for cn .
p
k > n/b Let = k 2 (n/b)2 . The homogeneous solutions that satisfy the radiation condition
are
y1 = ex , y2 = ex .
The Wronskian of the two solutions is 2. Thus the solution is
ex< ex> n
cn (x) = cos .
b b
n
In the case that cos b = 0 this reduces to the trivial solution.
k = n/b The homogeneous solutions that are bounded at infinity are
y1 = 1, y2 = 1.
If the right-hand-side is nonzero there is no way to combine these solutions to satisfy both
the continuity and the derivative jump conditions. Thus if cos n
b 6= 0 there is no bounded
n
solution. If cos b = 0 then the solution is not unique.
cn (x) = const.
p
k < n/b Let = (n/b)2 k 2 . The homogeneous solutions that are bounded at infinity are
y1 = ex , y2 = ex .
The Wronskian of these solutions is 2. Thus the solution is
ex< ex> n
cn (x) = cos
b b
n
In the case that cos b = 0 this reduces to the trivial solution.
1108
38.1 Exercises
Exercise 38.1
A slab is perfectly insulated at the surface x = 0 and has a specified time varying temperature f (t)
at the surface x = L. Initially the temperature is zero. Find the temperature u(x, t) if the heat
conductivity in the slab is = 1.
Exercise 38.2
Solve
1109
38.2 Hints
Hint 38.1
Hint 38.2
1110
38.3 Solutions
Solution 38.1
The problem is
We will solve this problem with an eigenfunction expansion. We find these eigenfunction by replacing
the inhomogeneous boundary condition with the homogeneous one, u(L, t) = 0. We substitute the
separation of variables v(x, t) = X(x)T (t) into the homogeneous partial differential equation.
XT 0 = X 00 T
T0 X 00
= = 2 .
T X
This gives us the regular Sturm-Liouville eigenvalue problem,
X 00 = 2 X, X 0 (0) = X(L) = 0,
Our solution for u(x, t) will be an eigenfunction expansion in these eigenfunctions. Since the inho-
mogeneous boundary condition is a function of t, the coefficients will be functions of t.
X
u(x, t) = an (t) cos(n x)
n=1
Since u(x, t) does not satisfy the homogeneous boundary conditions of the eigenfunctions, the series
is not uniformly convergent and we are not allowed to differentiate it with respect to x. We substitute
the expansion into the partial differential equation, multiply by the eigenfunction and integrate from
x = 0 to x = L. We use integration by parts to move derivatives from u to the eigenfunctions.
ut = uxx
Z L Z L
ut cos(m x) dx = uxx cos(m x) dx
0 0
!
Z L X Z L
L
a0n (t) cos(n x) cos(m x) dx = [ux cos(m x)]0 + ux m sin(m x) dx
0 n=1 0
Z L
L 0 L
a (t) = [um sin(m x)]0 u2m cos(m x) dx
2 m 0
Z L X !
L 0 2
a (t) = m u(L, t) sin(m L) m an (t) cos(n x) cos(m x) dx
2 m 0 n=1
L 0 L
a (t) = m (1)n f (t) 2m am (t)
2 m 2
0 2 n
am (t) + m am (t) = (1) m f (t)
From the initial condition u(x, 0) = 0 we see that am (0) = 0. Thus we have a first order differential
equation and an initial condition for each of the am (t).
1111
This equation has the solution,
Z t
2
am (t) = (1)n m em (t ) f ( ) d.
0
Solution 38.2
m L
mx iL Z mx
h L
ux sin ux cos dx + u00m (y) = 0
L 0 L 0 L 2
h m mx iL m 2 Z L mx L
u cos u sin dx + u00m (y) = 0
L L 0 L 0 L 2
m m L m 2 L
h(y)(1)m + g(y) um (y) + u00m (y) = 0
L L 2 L 2
m 2
00 m
um (y) um (y) = 2m ((1) h(y) g(y))
L
Now we have an ordinary differential equation for the un (y). In order that the solution is bounded,
we require that each un (y) is bounded as y . We use the boundary condition u(x, 0) = f (x) to
determine boundary conditions for the um (y) at y = 0.
X nx
u(x, 0) = un (0) sin = f (x)
n=1
L
Z L
2 nx
un (0) = fn f (x) sin dx
L 0 L
Thus we have the problems,
n 2
u00n (y) un (y) = 2n ((1)n h(y) g(y)) , un (0) = fn , un (+) bounded,
L
for the coefficients in the expansion. We will solve these with Green functions. Consider the
associated Green function problem
n 2
G00n (y; ) Gn (y; ) = (y ), Gn (0; ) = 0, Gn (+; ) bounded.
L
The homogeneous solutions that satisfy the boundary conditions are
ny
sinh and eny/L ,
L
1112
respectively. The Wronskian of these solutions is
sinh ny ny/L
L
e n 2ny/L
sinh ny n ny/L = e .
n
L L L e L
Gn (y; ) = .
n e2n/L
Using the Green function we determine the un (y) and thus the solution of Laplaces equation.
Z
ny/L
un (y) = fn e +2n Gn (y; ) ((1)n h() g()) d
0
X nx
u(x, y) = un (y) sin .
n=1
L
1113
1114
Chapter 39
1115
39.1 Exercises
Exercise 39.1
Is the solution of the Cauchy problem for the heat equation unique?
Exercise 39.2
Consider the heat equation with a time-independent source term and inhomogeneous boundary
conditions.
ut = uxx + q(x)
u(0, t) = a, u(h, t) = b, u(x, 0) = f (x)
Exercise 39.3
Is the Cauchy problem for the backward heat equation
well posed?
Exercise 39.4
Derive the heat equation for a general 3 dimensional body, with non-uniform density (x), specific
heat c(x), and conductivity k(x). Show that
u(x, t) 1
= (ku(x, t))
t c
where u is the temperature, and you may assume there are no internal sources or sinks.
Exercise 39.5
Verify Duhamels Principal: If u(x, t, ) is the solution of the initial value problem:
Exercise 39.6
Modify the derivation of the diffusion equation
k
t = a2 xx , a2 = , (39.2)
c
so that it is valid for diffusion in a non-homogeneous medium for which c and k are functions of x
and and so that it is valid for a geometry in which A is a function of x. Show that Equation (39.2)
above is in this case replaced by
cAt = (kAx )x .
Recall that c is the specific heat, k is the thermal conductivity, is the density, is the temperature
and A is the cross-sectional area.
1116
39.2 Hints
Hint 39.1
Hint 39.2
Hint 39.3
Hint 39.4
Hint 39.5
Check that the expression for w(x, t) satisfies the partial differential equation and initial condition.
Recall that Z x Z x
h(x, ) d = hx (x, ) d + h(x, x).
x a a
Hint 39.6
1117
39.3 Solutions
Solution 39.1
Let u and v both be solutions of the Cauchy problem for the heat equation. Let w be the difference
of these solutions. w satisfies the problem
wt + 2 w = 0, w(, 0) = 0
w = 0
w=0
Since uv = 0, we conclude that the solution of the Cauchy problem for the heat equation is unique.
Solution 39.2
Let (x) be the equilibrium temperature. It satisfies an ordinary differential equation boundary
value problem.
q(x)
00 = , (0) = a, (h) = b
To solve this boundary value problem we find a particular solution p that satisfies homogeneous
boundary conditions and then add on a homogeneous solution h that satisfies the inhomogeneous
boundary conditions.
q(x)
00p = , p (0) = p (h) = 0
00h = 0, h (0) = a, h (h) = b
We find homogeneous solutions which respectively satisfy the left and right homogeneous boundary
conditions.
y1 = x, y2 = h x
Then we compute the Wronskian of these solutions and write down the Green function.
x h x
W = = h
1 1
1
G(x|) = x< (h x> )
h
The homogeneous solution that satisfies the inhomogeneous boundary conditions is
ba
h = a + x
h
Now we have the equilibrium temperature.
h
ba
Z
1 q()
=a+ x+ x< (h x> ) d
h 0 h
hx x
Z h
ba
Z
x
=a+ x+ q() d + (h )q() d
h h 0 h x
1118
Let v denote the deviation from the equilibrium temperature.
u=+v
v satisfies a heat equation with homogeneous boundary conditions and no source term.
v = X(x)T (t)
XT 0 = X 00 T
T0 X 00
= =
T X
We have a regular Sturm-Liouville problem for X and a differential equation for T .
X 00 + X = 0, X(0) = X() = 0
n 2 nx
n = , Xn = sin , n Z+
h h
T 0 = T
n 2
Tn = exp t
h
v is a linear combination of the eigensolutions.
X nx n 2
v= vn sin exp t
n=1
h h
The coefficients are determined from the initial condition, v(x, 0) = f (x) (x).
2 h
Z nx
vn = (f (x) (x)) sin dx
h 0 h
We have determined the solution of the original problem in terms of the equilibrium temperature
and the deviation from the equilibrium. u = + v.
Solution 39.3
A problem is well posed if there exists a unique solution that depends continiously on the nonho-
mogeneous data.
First we find some solutions of the differential equation with the separation of variables u =
X(x)T (t).
ut + uxx = 0, > 0
XT 0 + X 00 T = 0
T0 X 00
= =
T X
X 00 + X = 0, T 0 = T
u = cos x et , u = sin x et
Note that
u = cos x et
satisfies the Cauchy problem
ut + uxx = 0, u(x, 0) = cos x
1119
Consider 1. The initial condition is small, it satisfies |u(x, 0)| < . However the solution for any
positive time can be made arbitrarily large by choosing a sufficiently large, positive value of . We
can make the solution exceed the value M at time t by choosing a value of such that
et > M
1 M
> ln .
t
Thus we see that Equation 39.1 is ill posed because the solution does not depend continuously on
the initial data. A small change in the initial condition can produce an arbitrarily large change in
the solution for any fixed time.
Solution 39.4
Consider a Region of material, R. Let u be the temperature and be the heat flux. The amount
of heat energy in the region is Z
cu dx.
R
We equate the rate of change of heat energy in the region with the heat flux across the boundary of
the region. Z Z
d
cu dx = n ds
dt R R
We apply the divergence theorem to change the surface integral to a volume integral.
Z Z
d
cu dx = dx
dt R R
Z
u
c + dx = 0
R t
Since the region is arbitrary, the integral must vanish identically.
u
c =
t
We apply Fouriers law of heat conduction, = ku, to obtain the heat equation.
u 1
= (ku)
t c
Solution 39.5
We verify Duhamels principal by showing that the integral expression for w(x, t) satisfies the partial
differential equation and the initial condition. Clearly the initial condition is satisfied.
Z 0
w(x, 0) = u(x, 0 , ) d = 0
0
Now we substitute the expression for w(x, t) into the partial differential equation.
Z t Z t
2
u(x, t , ) d = 2 u(x, t , ) d + f (x, t)
t 0 x 0
Z t Z t
u(x, t t, t) + ut (x, t , ) d = uxx (x, t , ) d + f (x, t)
0 0
Z t Z t
f (x, t) + ut (x, t , ) d = uxx (x, t , ) d + f (x, t)
0 0
Z t
(ut (x, t , ) d uxx (x, t , )) d
0
1120
Solution 39.6
We equate the rate of change of thermal energy in the segment ( . . . ) with the heat entering the
segment through the endpoints.
Z
t cA dx = k(, ())A()x (, t) k(, ())A()x (, t)
Z
t cA dx = [kAx ]
Z Z
t cA dx = (kAx )x dx
Z
cAt (kAx )x dx = 0
cAt = (kAx )x .
1121
1122
Chapter 40
Laplaces Equation
40.1 Introduction
Laplaces equation in n dimensions is
u = 0
where
2 2
= 2 + + 2 .
x1 xn
The inhomogeneous analog is called Poissons Equation.
u = f (x)
CONTINUE
G = (x ).
1123
40.3 Exercises
Exercise 40.1
Is the solution of the following Dirichlet problem unique?
uxx + uyy = q(x, y), < x < , y>0
u(x, 0) = f (x)
Exercise 40.2
Is the solution of the following Dirichlet problem unique?
uxx + uyy = q(x, y), < x < , y>0
2 2
u(x, 0) = f (x), u bounded as x + y
Exercise 40.3
Not all combinations of boundary conditions/initial conditions lead to so called well-posed problems.
Essentially, a well posed problem is one where the solutions depend continuously on the boundary
data. Otherwise it is considered ill posed.
Consider Laplaces equation on the unit-square
uxx + uyy = 0,
with u(0, y) = u(1, y) = 0 and u(x, 0) = 0, uy (x, 0) = sin(nx).
1. Show that even as 0, you can find n so that the solution can attain any finite value for
any y > 0. Use this to then show that this problem is ill posed.
2. Contrast this with the case where u(0, y) = u(1, y) = 0 and u(x, 0) = 0, u(x, 1) = sin(nx).
Is this well posed?
Exercise 40.4
Use the fundamental solutions for the Laplace equation
2 G = (x )
in three dimensions
1
G(x|) =
4|x |
to derive the mean value theorem for harmonic functions
Z
1
u(p) = u() dA ,
4R2 SR
that relates the value of any harmonic function u(x) at the point x = p to the average of its value
on the boundary of the sphere of radius R with center at p, (SR ).
Exercise 40.5
Use the fundamental solutions for the modified Helmholz equation
2 u u = (x )
in three dimensions
1
u (x|) = e |x| ,
4|x |
to derive a generalized mean value theorem:
sinh R 1
Z
u(p) = u(x) dA
R 4R2 S
that relates the value of any solution u(x) at a point P to the average of its value on the sphere of
radius R (S) with center at P.
1124
Exercise 40.6
Consider the uniqueness of solutions of 2 u(x) = 0 in a two dimensional region R with boundary
curve C and a boundary condition n u(x) = a(x)u(x) on C. State a non-trivial condition on
the function a(x) on C for which solutions are unique, and justify your answer.
Exercise 40.7
Solve Laplaces equation on the surface of a semi-infinite cylinder of unit radius, 0 < < 2, z > 0,
where the solution, u(, z) is prescribed at z = 0: u(, 0) = f ().
Exercise 40.8
Solve Laplaces equation in a rectangle.
1125
40.4 Hints
Hint 40.1
Hint 40.2
Hint 40.3
Hint 40.4
Hint 40.5
Hint 40.6
Hint 40.7
Hint 40.8
1126
40.5 Solutions
Solution 40.1
Let u and v both be solutions of the Dirichlet problem. Let w be the difference of these solutions.
w satisfies the problem
wxx + wyy = 0, < x < , y>0
w(x, 0) = 0.
Since w = cy is a solution. We conclude that the solution of the Dirichlet problem is not unique.
Solution 40.2
Let u and v both be solutions of the Dirichlet problem. Let w be the difference of these solutions.
w satisfies the problem
wxx + wyy = 0, < x < , y>0
2 2
w(x, 0) = 0, w bounded as x + y .
We solve this problem with a Fourier transform in x.
2 w + wyy = 0, w(, 0) = 0, w bounded as y
(
c1 cosh y + c2 sinh(y), 6= 0
w =
c1 + c2 y, =0
w = 0
w=0
Since u v = 0, we conclude that the solution of the Dirichlet problem is unique.
Solution 40.3
1. We seek a solution of the form u(x, y) = sin(nx)Y (y). This form satisfies the boundary
conditions at x = 0, 1.
uxx + uyy = 0
(n) Y + Y 00 = 0, Y (0) = 0
2
Y = c sinh(ny)
Now we apply the inhomogeneous boundary condition.
uy (x, 0) = sin(nx) = cn sin(nx)
u(x, y) = sin(nx) sinh(ny)
n
For = 0 the solution is u = 0. Now consider any > 0. For any y > 0 and any finite value
M , we can choose a value of n such that the solution along y = 0 takes on all values in the
range [M . . . M ]. We merely choose a value of n such that
sinh(ny) M
.
n
Since the solution does not depend continuously on boundary data, this problem is ill posed.
2. We seek a solution of the form u(x, y) = c sin(nx) sinh(ny). This form satisfies the dif-
ferential equation and the boundary conditions at x = 0, 1 and at y = 0. We apply the
inhomogeneous boundary condition at y = 1.
u(x, 1) = sin(nx) = c sin(nx) sinh(n)
sinh(ny)
u(x, y) = sin(nx)
sinh(n)
1127
For = 0 the solution is u = 0. Now consider any > 0. Note that |u| for (x, y)
[0 . . . 1] [0 . . . 1]. The solution depends continuously on the given boundary data. This
problem is well posed.
Solution 40.4
The Green function problem for a sphere of radius R centered at the point is
G = (x ), G|x|=R = 0. (40.1)
We will solve Laplaces equation, u = 0, where the value of u is known on the boundary of the
sphere of radius R in terms of this Green function.
First we solve for u(x) in terms of the Green function.
Z Z
(uG Gu) d = u(x ) d = u(x)
S S
Z Z
G u
(uG Gu) d = G u dA
S n n
ZS
G
= u dA
S n
Z
G
u(x) = u dA
S n
We are interested in the value of u at the center of the sphere. Let = |p |
Z
G
u(p) = u() (p|) dA
S
We do not need to compute the general solution of Equation 40.1. We only need the Green
function at the point x = p. We know that the general solution of the equation G = (x ) is
1
G(x|) = + v(x),
4|x |
where v(x) is an arbitrary harmonic function. The Green function at the point x = p is
1
G(p|) = + const.
4|p |
We add the constraint that the Green function vanishes at = R. This determines the constant.
1 1
G(p|) = +
4|p | 4R
1 1
G(p|) = +
4 4R
1
G (p|) =
42
1128
Solution 40.5
The Green function problem for a sphere of radius R centered at the point is
G G = (x ), G|x|=R = 0. (40.2)
u u = 0,
where the value of u is known on the boundary of the sphere of radius R in terms of this Green
function.
in terms of this Green function.
Let L[u] = u u.
Z Z
(uL[G] GL[u]) d = u(x ) d = u(x)
S S
Z Z
(uL[G] GL[u]) d = (uG Gu) d
S
ZS
G u
= u G dA
n n
ZS
G
= u dA
S n
Z
G
u(x) = u dA
S n
such that c1 + c2 = 1. The Green function is symmetric with respect to x and . We add the
constraint that the Green function vanishes at = R. This gives us two equations for c1 and c2 .
c1 R c2 R
c1 + c2 = 1, e e =0
4R 4R
1 e2 R
c1 = , c2 =
e2 R 1 e2 R 1
sinh ( R)
G(p|) =
4 sinh R
cosh ( R) sinh ( R)
G (p|) =
4 sinh R 42 sinh R
G (p|)||=R =
4R sinh R
1129
Now we are prepared to write u(p) in terms of the Green function.
Z
u(p) = u() dA
S 4 sinh R
Z
u(p) = u(x) dA
S 4R sinh R
Solution 40.6
First we think of this problem in terms of the the equilibrium solution of the heat equation. The
boundary condition expresses Newtons law of cooling. Where a = 0, the boundary is insulated.
Where a > 0, the rate of heat loss is proportional to the temperature. The case a < 0 is non-physical
and we do not consider this scenario further. We know that if the boundary is entirely insulated,
a = 0, then the equilibrium temperature is a constant that depends on the initial temperature
distribution. Thus for a = 0 the solution of Laplaces equation is not unique. If there is any point
on the boundary where a is positive then eventually, all of the heat will flow out of the domain. The
equilibrium temperature is zero, and the solution of Laplaces equation is unique, u = 0. Therefore
the solution of Laplaces equation is unique if a is continuous, non-negative and not identically zero.
Now we prove our assertion. First note that if we substitute f = vu in the divergence theorem,
Z Z
f dx = f n ds,
R R
Since the first integral is non-negative and the last is non-positive, the integrals vanish. This implies
that u = 0. u is a constant. In order to satisfy the boundary condition where a is non-zero, u
must be zero. Thus the unique solution in this scenario is u = 0.
Solution 40.7
The mathematical statement of the problem is
u u + uzz = 0, 0 < < 2, z > 0,
u(, 0) = f ().
We have the implicit boundary conditions,
u(0, z) = u(2, z), u (0, z) = u (0, z)
and the boundedness condition,
u(, +) bounded.
We expand the solution in a Fourier series. (This ensures that the boundary conditions at = 0, 2
are satisfied.)
X
u(, z) = un (z) en
n=
1130
We substitute the series into the partial differential equation to obtain ordinary differential equations
for the un .
n2 un (z) + u00n (z) = 0
The general solutions of this equation are
(
c1 + c2 z, for n = 0,
un (z) =
c1 enz +c2 enz 6 0.
for n =
We substitute the series into the initial condition at z = 0 to determine the multiplicative constants.
X
u(, 0) = un (0) en = f ()
n=
Z 2
1
un (0) = f () en d fn
2 0
Note that Z 2
1
u(, z) f0 = f () d
2 0
as z +.
Solution 40.8
The decomposition of the problem is shown in Figure 40.1.
1131
We substitute the separation of variables u(x, y) = X(x)Y (y) into Laplaces equation.
X 00 Y 00
= = 2
X Y
We have the eigenvalue problem,
X 00 = 2 X, X(0) = X(a) = 0,
We determine the coefficients from the inhomogeneous boundary conditions. (Here we see how our
choice of solutions for Y (y) is convenient.)
X n nx nb
uy (x, 0) =
n sin cosh = g1 (x)
n=1
a a a
nb 2 a
Z
a nx
n = sech g1 (x) sin dx
n a a 0 a
X nx ny
u(x, y) = n sin cosh
n=1
a a
Z a
nb 2 nx
n = sech g2 (x) sin dx
a a 0 a
We substitute the separation of variables u(x, y) = X(x)Y (y) into Laplaces equation.
X 00 Y 00
= = 2
X Y
We have the eigenvalue problem,
Y 00 = 2 Y, Y 0 (0) = Y (b) = 0,
1132
which has the solutions,
(2n 1) (2n 1)y
n = , Yn = cos , n N.
2b 2b
1133
1134
Chapter 41
Waves
1135
41.1 Exercises
Exercise 41.1
Consider the 1-D wave equation
utt uxx = 0
on the domain 0 < x < 4 with initial displacement
(
1, 1 < x < 2
u(x, 0) =
0, otherwise,
1.
u(0, t) = u(4, t) = 0
2.
ux (0, t) = ux (4, t) = 0
In each case plot u(x, t) for t = 12 , 1, 32 , 2 and combine onto a general plot in the x, t plane (up to a
sufficiently large time) so the behavior of u is clear for arbitrary x, t.
Exercise 41.2
Sketch the solution to the wave equation:
Z x+ct
1 1
u(x, t) = (u(x + ct, 0) + u(x ct, 0)) + ut (, 0) d,
2 2c xct
Exercise 41.3
1. Consider the solution of the wave equation for u(x, t):
utt = c2 uxx
on the infinite interval < x < with initial displacement of the form
(
h(x) for x > 0,
u(x, 0) =
h(x) for x < 0,
2. Use a similar idea to explain how you could use the general solution of the wave equation
to solve the finite interval problem (0 < x < l) in which u(0, t) = u(l, t) = 0 for all t, with
u(x, 0) = h(x) and ut (x, 0) = 0. Take h(0) = h(l) = 0.
1136
Exercise 41.4
The deflection u(x, T ) = (x) and velocity ut (x, T ) = (x) for an infinite string (governed by
utt = c2 uxx ) are measured at time T , and we are asked to determine what the initial displacement
and velocity profiles u(x, 0) and ut (x, 0) must have been. An alert student suggests that this problem
is equivalent to that of determining the solution of the wave equation at time T when initial conditions
u(x, 0) = (x), ut (x, 0) = (x) are prescribed. Is she correct? If not, can you rescue her idea?
Exercise 41.5
In obtaining the general solution of the wave equation the interval was chosen to be infinite in order
to simplify the evaluation of the functions () and () in the general solution
But this general solution is in fact valid for any interval be it infinite or finite. We need only choose
appropriate functions (), () to satisfy the appropriate initial and boundary conditions. This is
not always convenient but there are other situations besides the solution for u(x, t) in an infinite
domain in which the general solution is of use. Consider the whip-cracking problem,
utt = c2 uxx ,
Hint: (From physical considerations conclude that you can take () = 0. Your solution will
corroborate this.) Use the initial conditions to determine () and () for > 0. Then use the
initial condition to determine () for < 0.
Exercise 41.6
Let u(x, t) satisfy the equation
utt = c2 uxx ;
(with c a constant) in some region of the (x, t) plane.
1. Show that the quantity (ut cux ) is constant along each straight line defined by x ct =
constant, and that (ut + cux ) is constant along each straight line of the form x + ct = constant.
These straight lines are called characteristics; we will refer to typical members of the two
families as C+ and C characteristics, respectively. Thus the line x ct = constant is a C+
characteristic.
2. Let u(x, 0) and ut (x, 0) be prescribed for all values of x in < x < , and let (x0 , t0 )
be some point in the (x, t) plane, with t0 > 0. Draw the C+ and C characteristics through
(x0 , t0 ) and let them intersect the x-axis at the points A,B. Use the properties of these curves
derived in part (a) to determine ut (x0 , t0 ) in terms of initial data at points A and B. Using a
similar technique to obtain ut (x0 , ) with 0 < < t, determine u(x0 , t0 ) by integration with
respect to , and compare this with the solution derived in class:
Z x+ct
1 1
u(x, t) = (u(x + ct, 0) + u(x ct, 0)) + ut (, 0)d.
2 2c xct
Observe that this method of characteristics again shows that u(x0 , t0 ) depends only on that
part of the initial data between points A and B.
1137
Exercise 41.7
The temperature u(x, t) at a depth x below the Earths surface at time t satisfies
ut = uxx .
u(0, t) = T cos(t).
Exercise 41.8
An infinite cylinder of radius a produces an external acoustic pressure field u satisfying:
utt = c2 u,
u(a, , t) = f () et
where f () is a known function. Note that the waves must be outgoing at infinity, (radiation
condition at infinity). Find the solution, u(r, , t). We seek a periodic solution of the form,
u(r, , t) = v(r, ) et .
Exercise 41.9
Plane waves are incident on a soft cylinder of radius a whose axis is parallel to the plane of
the waves. Find the field scattered by the cylinder. In particular, examine the leading term of
the solution when a is much smaller than the wavelength of the incident waves. If v(x, y, t) is the
scattered field it must satisfy:
Exercise 41.10
Consider the flow of electricity in a transmission line. The current, I(x, t), and the voltage, V (x, t),
obey the telegraphers system of equations:
Ix = CVt + GV,
Vx = LIt + RI,
1138
where C is the capacitance, G is the conductance, L is the inductance and R is the resistance.
a) Show that both I and V satisfy a damped wave equation.
b) Find the relationship between the physical constants, C, G, L and R such that there exist
damped traveling wave solutions of the form:
1139
41.2 Hints
Hint 41.1
Hint 41.2
Hint 41.3
Hint 41.4
Hint 41.5
From physical considerations conclude that you can take () = 0. Your solution will corroborate
this. Use the initial conditions to determine () and () for > 0. Then use the initial condition
to determine () for < 0.
Hint 41.6
Hint 41.7
a) Substitute u(x, t) = <(A etx ) into the partial differential equation and solve for . Assume
that has positive real part so that the solution vanishes as x +.
Hint 41.8
Seek a periodic solution of the form,
u(r, , t) = v(r, ) et .
You will find that the vn satisfy Bessels equation. Choose the vn so that u satisfies the boundary
condition at r = a and the radiation condition at infinity.
The Bessel functions have the asymptotic behavior,
r
2
Jn () cos( n/2 /4), as ,
r
2
Yn () sin( n/2 /4), as ,
r
2 i(n/2/4)
Hn(1) () e , as ,
r
2 i(n/2/4)
Hn(2) () e , as .
Hint 41.9
1140
Hint 41.10
1141
41.3 Solutions
Solution 41.1
1. The initial position is
1 3
x .
u(x, 0) = H
2 2
We extend the domain of the problem to ( . . . ) and add image sources in the initial
condition so that u(x, 0) is odd about x = 0 and x = 4. This enforces the boundary conditions
at these two points.
utt uxx = 0, x ( . . . ), t (0 . . . )
X 1 3 1 13
u(x, 0) = H x 8n H
x 8n , ut (x, 0) = 0
n=
2 2 2 2
0.4 0.4
0.2 0.2
1 2 3 4 1 2 3 4
-0.2 -0.2
-0.4 -0.4
0.4 0.4
0.2 0.2
1 2 3 4 1 2 3 4
-0.2 -0.2
-0.4 -0.4
Figure 41.1: The solution at t = 1/2, 1, 3/2, 2 for the boundary conditions u(0, t) = u(4, t) = 0.
1142
We use DAlemberts solution to solve this problem.
1 X 1 3 1 3
u(x, t) = H x 8n t + H
x 8n + t
2 n= 2 2 2 2
1 13 1 13
+H x 8n t + H x 8n + t
2 2 2 2
The solution for several times is plotted in Figure 41.2. Note that the solution is periodic in
time with period 8. Figure 41.3 shows the solution in the phase plane for 0 < t < 8. Note the
even reflections at the boundaries.
1 1
0.8 0.8
0.6 0.6
0.4 0.4
0.2 0.2
1 2 3 4 1 2 3 4
1 1
0.8 0.8
0.6 0.6
0.4 0.4
0.2 0.2
1 2 3 4 1 2 3 4
Figure 41.2: The solution at t = 1/2, 1, 3/2, 2 for the boundary conditions ux (0, t) = ux (4, t) = 0.
Solution 41.2
1.
Z x+ct
1 1
u(x, t) = (u(x + ct, 0) + u(x ct, 0)) + ut (, 0) d
2 2c xct
Z x+ct
1
u(x, t) = sin( ) d
2c xct
sin(x) sin(ct)
u(x, t) =
c
Figure 41.4 shows the solution for c = = 1.
2. We can write the initial velocity in terms of the Heaviside function.
1
for 0 < x < 1
ut (x, 0) = 1 for 1 < x < 0
0 for |x| > 1.
1143
t t
0 1 0 1
1/2 0 1/2 0
1/2 1/2
0 1
0 0
0 1
-1/2 1/2
-1/2 1/2
0 -1 0 0 1 0
-1/2 1/2
-1/2 1/2
0 1
0 0
0 1
1/2 1/2
1/2 1/2
0 0
0 1 0 1
x x
Figure 41.3: The solution in the phase plane for the boundary conditions u(0, t) = u(4, t) = 0 and
ux (0, t) = ux (4, t) = 0.
1
0.5
u 6
0
-0.5
-1 4
-5 t
2
0
x
5 0
Z b
H(x c) dx = min(b a, max(b c, 0)).
a
1144
Now we find an expression for the solution.
Z x+ct
1 1
u(x, t) = (u(x + ct, 0) + u(x ct, 0)) + ut (, 0) d
2 2c xct
Z x+ct
1
u(x, t) = (H( + 1) + 2H( ) H( 1)) d
2c xct
u(x, t) = min(2ct, max(x + ct + 1, 0)) + 2 min(2ct, max(x + ct, 0)) min(2ct, max(x + ct 1, 0))
Figure 41.5 shows the solution for c = 1.
1
0.5
u 3
0
-0.5
-1 2
-4 t
-2 1
0
x 2
4 0
Solution 41.3
1. The solution on the interval ( . . . ) is
1
u(x, t) =(h(x + ct) + h(x ct)).
2
Now we solve the problem on (0 . . . ). We define the odd extension of h(x).
(
h(x) for x > 0,
h(x) = = sign(x)h(|x|)
h(x) for x < 0,
Note that
d
h0 (0 ) =(h(x))x0+ = h0 (0+ ) = h0 (0+ ).
dx
2
Thus h(x) is piecewise C . Clearly
1
u(x, t) = (h(x + ct) + h(x ct))
2
satisfies the differential equation on (0 . . . ). We verify that it satisfies the initial condition
and boundary condition.
1
u(x, 0) =(h(x) + h(x)) = h(x)
2
1 1
u(0, t) = (h(ct) + h(ct)) = (h(ct) h(ct)) = 0
2 2
1145
2. First we define the odd extension of h(x) on the interval (l . . . l).
h(x) = sign(x)h(|x|), x (l . . . l)
Then we form the odd periodic extension of h(x) defined on ( . . . ).
x+l x + l
h(x) = sign x 2l h x 2l
, x ( . . . )
2l 2l
We note that h(x) is piecewise C 2 . Also note that h(x) is odd about the points x = nl, n Z.
That is, h(nl x) = h(nl + x). Clearly
1
u(x, t) = (h(x + ct) + h(x ct))
2
satisfies the differential equation on (0 . . . l). We verify that it satisfies the initial condition
and boundary conditions.
1
u(x, 0) = (h(x) + h(x))
2
u(x, 0) = h(x)
x+l x + l
u(x, 0) = sign x 2l h x 2l
2l 2l
u(x, 0) = h(x)
1 1
u(0, t) = (h(ct) + h(ct)) = (h(ct) h(ct)) = 0
2 2
1 1
u(l, t) = (h(l + ct) + h(l ct)) = (h(l + ct) h(l + ct)) = 0
2 2
Solution 41.4
Change of Variables. Let u(x, t) be the solution of the problem with deflection u(x, T ) = (x)
and velocity ut (x, T ) = (x). Define
v(x, ) = u(x, T ).
We note that u(x, 0) = v(x, T ). v( ) satisfies the wave equation.
v = c2 vxx
The initial conditions for v are
v(x, 0) = u(x, T ) = (x), v (x, 0) = ut (x, T ) = (x).
Thus we see that the student was correct.
Direct Solution. DAlemberts solution is valid for all x and t. We formally substitute t T for
t in this solution to solve the problem with deflection u(x, T ) = (x) and velocity ut (x, T ) = (x).
Z x+c(tT )
1 1
u(x, t) = ((x + c(t T )) + (x c(t T ))) + ( ) d
2 2c xc(tT )
This satisfies the wave equation, because the equation is shift-invariant. It also satisfies the initial
conditions.
Z x
1 1
u(x, T ) = ((x) + (x)) + ( ) d = (x)
2 2c x
1 1
ut (x, t) = (c0 (x + c(t T )) c0 (x c(t T ))) + ((x + c(t T )) + (x c(t T )))
2 2
1 0 0 1
ut (x, T ) = (c (x) c (x)) + ((x) + (x)) = (x)
2 2
1146
Solution 41.5
Since the solution is a wave moving to the right, we conclude that we could take () = 0. Our
solution will corroborate this.
The form of the solution is
u(x, 0) = () + () = 0, > 0
ut (x, 0) = c0 () c 0 () = 0, > 0
() + () = 0, > 0,
() () = 2k, > 0,
This determines u(x, t) for x > 0 as it depends on () only for > 0. The constant k is arbitrary.
Changing k does not change u(x, t). For simplicity, we take k = 0.
u(x, t) = (x ct)
(
0 for x ct < 0
u(x, t) =
(t x/c) for x ct > 0
u(x, t) = (t x/c)H(ct x)
Solution 41.6
1. We write the value of u along the line x ct = k as a function of t: u(k + ct, t). We differentiate
ut cux with respect to t to see how the quantity varies.
d
(ut (k + ct, t) cux (k + ct, t)) = cuxt + utt c2 uxx cuxt
dt
= utt c2 uxx
=0
Thus ut cux is constant along the line x ct = k. Now we examine ut + cux along the line
x + ct = k.
d
(ut (k ct, t) + cux (k ct, t)) = cuxt + utt c2 uxx + cuxt
dt
= utt c2 uxx
=0
1147
2. From part (a) we know
ut (x0 , t0 ) cux (x0 , t0 ) = ut (x0 ct0 , 0) cux (x0 ct0 , 0)
ut (x0 , t0 ) + cux (x0 , t0 ) = ut (x0 + ct0 , 0) + cux (x0 + ct0 , 0).
We add these equations to find ut (x0 , t0 ).
1
(ut (x0 ct0 , 0) cux (x0 ct0 , 0)ut (x0 + ct0 , 0) + cux (x0 + ct0 , 0))
ut (x0 , t0 ) =
2
Since t0 was arbitrary, we have
1
ut (x0 , ) = (ut (x0 c, 0) cux (x0 c, 0)ut (x0 + c, 0) + cux (x0 + c, 0))
2
for 0 < < t0 . We integrate with respect to to determine u(x0 , t0 ).
Z t0
1
u(x0 , t0 ) = u(x0 , 0) + (ut (x0 c, 0) cux (x0 c, 0)ut (x0 + c, 0) + cux (x0 + c, 0)) d
0 2
1 t0
Z
= u(x0 , 0) + (cux (x0 c, 0) + cux (x0 + c, 0)) d
2 0
1 t0
Z
+ (ut (x0 c, 0) + ut (x0 + c, 0)) d
2 0
1
= u(x0 , 0) + (u(x0 ct0 , 0) u(x0 , 0) + u(x0 + ct0 , 0) u(x0 , 0))
2
Z x0 ct0 Z x0 +ct0
1 1
+ ut (, 0) d + ut (, 0) d
2c x0 2c x0
Z x0 +ct0
1 1
= (u(x0 ct0 , 0) + u(x0 + ct0 , 0)) + ut (, 0) d
2 2c x0 ct0
We have DAlemberts solution.
Z x+ct
1 1
u(x, t) = (u(x ct, 0) + u(x + ct, 0)) + ut (, 0) d
2 2c xct
Solution 41.7
a) We substitute u(x, t) = A etx into the partial differential equation and take the real part
as the solution. We assume that has positive real part so the solution vanishes as x +.
A etx = 2 A etx
= 2
r
= (1 + )
2
A solution of the partial differential equation is,
r
u(x, t) = < A exp t (1 + ) x ,
2
r r
u(x, t) = A exp x cos t x .
2 2
Applying the initial condition, u(0, t) = T cos(t), we obtain,
r r
u(x, t) = T exp x cos t x .
2 2
1148
b) At a fixed depth x = h, the temperature is
r r
u(h, t) = T exp h cos t h .
2 2
c) The solution
p is an exponentially
decaying, traveling wave that propagates into the Earth with
speed / /(2) = 2. More generally, the wave
travels in the positive direction with speed /a. Figure 41.6 shows such a wave for a sequence
of times.
The phase lag, (x) is the time that it takes for the wave to reach a depth of x. It satisfies,
r
(x) x = 0,
2
x
(x) = .
2
d) Let year be the frequency for annual temperature variation, then day = 365year . If xyear is
the depth that a particular yearly temperature variation reaches and xday is the depth that
this same variation in daily temperature reaches, then
r r
year day
exp xyear = exp xday ,
2 2
r r
year day
xyear = xday ,
2 2
xyear
= 365.
xday
1149
Solution 41.8
We seek a periodic solution of the form,
u(r, , t) = v(r, ) et .
Substituting this into the wave equation will give us a Helmholtz equation for v.
2 v = c2 v
1 1 2
vrr + vr + 2 v + 2 v = 0
r r c
We have the boundary condition v(a, ) = f () and the radiation condition at infinity. We expand
v in a Fourier series in in which the coefficients are functions of r. You can check that en are the
eigenfunctions obtained with separation of variables.
X
v(r, ) = vn (r) en
n=
We substitute this expression into the Helmholtz equation to obtain ordinary differential equations
for the coefficients vn .
2
n2
X 1
vn00 + vn0 + 2
2
vn en = 0
n=
r c r
2 n2
1
vn00 + vn0 + 2
2 vn = 0.
r c r
which has as linearly independent solutions the Bessel and Neumann functions,
r r
Jn , Yn ,
c c
or the Hankel functions,
r r
Hn(1) , Hn(2) .
c c
The functions have the asymptotic behavior,
r
2
Jn () cos( n/2 /4), as ,
r
2
Yn () sin( n/2 /4), as ,
r
2 i(n/2/4)
Hn(1) () e , as ,
r
2 i(n/2/4)
Hn(2) () e , as .
u(r, , t) will be an outgoing wave at infinity if it is the sum of terms of the form ei(tconstr) . Thus
the vn must have the form r
vn (r) = bn Hn(2)
c
for some constants, bn . The solution for v(r, ) is
X r
v(r, ) = bn Hn(2) en .
n=
c
1150
We determine the constants bn from the boundary condition at r = a.
X a
v(a, ) = bn Hn(2) en = f ()
n=
c
Z 2
1
bn = (2)
f () en d
2Hn (a/c) 0
X r
u(r, , t) = et bn Hn(2) en
n=
c
Solution 41.9
We substitute the form v(x, y, t) = u(r, ) et into the wave equation to obtain a Helmholtz
equation.
c2 u + 2 u = 0
1 1
urr + ur + 2 u + k 2 u = 0
r r
We solve the Helmholtz equation with separation of variables. We expand u in a Fourier series.
X
u(r, ) = un (r) en
n=
We substitute the sum into the Helmholtz equation to determine ordinary differential equations for
the coefficients.
n2
00 1 0 2
un + un + k 2 un = 0
r r
This is Bessels equation, which has as solutions the Bessel and Neumann functions, {Jn (kr), Yn (kr)}
(1) (2)
or the Hankel functions, {Hn (kr), Hn (kr)}.
Recall that the solutions of the Bessel equation have the asymptotic behavior,
r
2
Jn () cos( n/2 /4), as ,
r
2
Yn () sin( n/2 /4), as ,
r
2 i(n/2/4)
Hn(1) () e , as ,
r
2 i(n/2/4)
Hn(2) () e , as .
From this we see that only the Hankel function of the first kink will give us outgoing waves as
. Our solution for u becomes,
X
u(r, ) = bn Hn(1) (kr) en .
n=
1151
We evaluate the integral with the identities,
Z 2
1
Jn (x) = ex cos en d,
2in 0
Jn (x) = (1)n Jn (x).
Thus we obtain,
X ()n Jn (ka)
u(r, ) = (1)
Hn(1) (kr) en .
n= Hn (ka)
When a 1/k, i.e. ka 1, the Bessel function has the behavior,
(ka/2)n
Jn (ka) .
n!
In this case, the n 6= 0 terms in the sum are much smaller than the n = 0 term. The approximate
solution is,
(1)
H0 (kr)
u(r, ) (1)
,
H0 (ka)
(1)
H0 (kr)
v(r, , t) (1)
et .
H0 (ka)
Solution 41.10
a) (
Ix = CVt + GV,
Vx = LIt + RI
First we derive a single partial differential equation for I. We differentiate the two partial differential
equations with respect to x and t, respectively and then eliminate the Vxt terms.
(
Ixx = CVtx + GVx ,
Vxt = LItt + RIt
Ixx + LCItt + RCIt = GVx
1152
Thus we see that I and V both satisfy the same damped wave equation.
b) We substitute V (x, t) = et (f (x at) + g(x + at)) into the damped wave equation for V .
RC + LG RG t RC + LG
2
+ e (f + g) + 2 + a et (f 0 + g 0 )
LC LC LC
1 t 00
+ a2 et (f 00 + g 00 ) e (f + g 00 ) = 0
LC
Since f and g are arbitrary functions, the coefficients of et (f +g), et (f 0 +g 0 ) and et (f 00 +g 00 )
must vanish. This gives us three constraints.
1 RC + LG RC + LG RG
a2 = 0, 2 + = 0, 2 + =0
LC LC LC LC
The first equation determines the wave speed to be a = 1/ LC. We substitute the value of from
the second equation into the third equation.
RC + LG RG
= , 2 + =0
2LC LC
In order for damped waves to propagate, the physical constants must satisfy,
2
RG RC + LG
= 0,
LC 2LC
4RGLC (RC + LG)2 = 0,
(RC LG)2 = 0,
RC = LG.
1153
1154
Chapter 42
Similarity Methods
We see now that if we had guessed that the solution of this partial differential equation was only
dependent on powers of x/t we could have changed variables to and f and instead solved the
ordinary differential equation
df
G , f, = 0.
d
By using similarity methods one can reduce the number of independent variables in some PDEs.
1155
This has the solution m = n. The similarity variable, , will be unchanged under the transformation
to the temporary variables. One choice is
t t0 n t0
= = 0 m = 0.
x x x
Writing the two partial derivative in terms of ,
d 1 d
= =
t t d x d
d t d
= = 2
x x d x d
du du
2 u=0
d d
du u
=
d 1 2
Thus we have reduced the partial differential equation to an ordinary differential equation that is
much easier to solve.
Z !
d
u() = exp
1 2
Z !
1/2 1/2
u() = exp + d
1 1+
1 1
u() = exp log(1 ) + log(1 + )
2 2
u() = (1 )1/2 (1 + )1/2
1/2
1 + t/x
u(x, t) =
1 t/x
Thus we have found a similarity solution to the partial differential equation. Note that the existence
of a similarity solution does not mean that all solutions of the differential equation are similarity
solutions.
d d
= = x
t t d d
d d
= = x1 t
x x d d
du du
x+1 + x1 t2 u = 0.
d d
If there is a value of such that we can write this equation in terms of , then = x t is a similarity
variable. If = 1 then the coefficient of the first term is trivially in terms of . The coefficient of
the second term then becomes x2 t2 . Thus we see = x1 t is a similarity variable.
1156
Example 42.0.2 To see another application of similarity variables, any partial differential equation
of the form ut ux
F tx, u, , =0
x t
is equivalent to the ODE
du du
F , u, , =0
d d
where = tx. Performing the change of variables,
1 u 1 du 1 du du
= = x =
x t x t d x d d
1 u 1 du 1 du du
= = t = .
t x t x d t d d
For example the partial differential equation
u x u
u + + tx2 u = 0
t t x
which can be rewritten
1 u 1 u
u + + txu = 0,
x t t x
is equivalent to
du du
u + + u = 0
d d
where = tx.
1157
42.1 Exercises
Exercise 42.1
Consider the 1-D heat equation
ut = uxx
Assume that there exists a function (x, t) such that it is possible to write u(x, t) = F ((x, t)).
Re-write the PDE in terms of F (), its derivatives and (partial) derivatives of . By guessing that
this transformation takes the form = xt , find a value of so that this reduces to an ODE for F ()
(i.e. x and t are explicitly removed). Find the general solution and use this to find the corresponding
solution u(x, t). Is this the general solution of the PDE?
Exercise 42.2
With = x t, find such that for some function f , = f () is a solution of
t = a2 xx .
Find f () as well.
1158
42.2 Hints
Hint 42.1
Hint 42.2
1159
42.3 Solutions
Solution 42.1
We write the derivatives of u(x, t) in terms of derivatives of F ().
ut = xt1 F 0 = F 0
t
0
ux = t F
2 00
uxx = t2 F 00 = F
x2
We substitite these expressions into the heat equation.
2
F 0 = 2 F 00
t x
00 x2 1 0
F = F
t
We can write this equation in terms of F and only if = 1/2. We make this substitution and
solve the ordinary differential equation for F ().
F 00
0
=
F 2
0 2
log(F ) = +c
4 2
F 0 = c exp
4
Z 2
F = c1 exp d + c2
4
We can write F in terms of the error function.
F = c1 erf + c2
2
We write this solution in terms of x and t.
x
u(x, t) = c1 erf + c2
2 t
This is not the general solution of the heat equation. There are many other solutions. Note that
since x and t do not explicitly appear in the heat equation,
!
x x0
u(x, t) = c1 erf p + c2
2 (t t0 )
is a solution.
Solution 42.2
We write the derivatives of in terms of f .
t = f = x f 0 = t1 f 0
t
x = f = x1 tf 0
x
xx = f 0 x1 t + x1 tx1 t f 0
x
xx = 2 x22 t2 f 00 + ( 1)x2 tf 0
xx = x2 2 2 f 00 + ( 1)f 0
1160
We substitute these expressions into the diffusion equation.
f 0 = x2 t 2 2 f 00 + ( 1)f 0
In order for this equation to depend only on the variable , we must have = 2. For this choice
we obtain an ordinary differential equation for f ().
f 0 = 4 2 f 00 + 6f 0
f 00 1 3
0
= 2
f 4 2
0 1 3
log(f ) = log + c
4 2
0 3/2 1/(4)
f = c1 e
Z
f () = c1 t3/2 e1/(4t) dt + c2
Z 1/(2 )
2
f () = c1 et dt + c2
1
f () = c1 erf + c2
2
1161
1162
Chapter 43
Method of Characteristics
ut + cux = 0 (43.1)
Let x(t) be some path in the phase plane. Perhaps x(t) describes the position of an observer who is
noting the value of the solution u(x(t), t) at their current location. We differentiate with respect to
t to see how the solution varies for the observer.
d
u(x(t), t) = ut + x0 (t)ux (43.2)
dt
We note that if the observer is moving with velocity c, x0 (t) = c, then the solution at their current
location does not change because ut + cux = 0. We will examine this more carefully.
By comparing Equations 43.1 and 43.2 we obtain ordinary differential equations representing the
position of an observer and the value of the solution at that position.
dx du
= c, =0
dt dt
Let the observer start at the position x0 . Then we have an initial value problem for x(t).
dx
= c, x(0) = x0
dt
x(t) = x0 + ct
du
= 0, u(0) = f (x0 )
dt
u(x(t), t) = f (x0 )
Again we see that the solution is constant along the characteristics. We substitute the equation for
the characteristics into this expression.
Now we see that the solution of Equation 43.1 is a wave moving with velocity c. The solution at
time t is the initial condition translated a distance of ct.
1163
43.2 First Order Quasi-Linear Equations
Consider the following quasi-linear equation.
We will solve this equation with the method of characteristics. We differentiate the solution along
a path x(t).
d
u(x(t), t) = ut + x0 (t)ux (43.4)
dt
By comparing Equations 43.3 and 43.4 we obtain ordinary differential equations for the character-
istics x(t) and the solution along the characteristics u(x(t), t).
dx du
= a(x, t, u), =0
dt dt
Suppose an initial condition is specified, u(x, 0) = f (x). Then we have ordinary differential equation,
initial value problems.
dx
= a(x, t, u), x(0) = x0
dt
du
= 0, u(0) = f (x0 )
dt
We see that the solution is constant along the characteristics. The solution of Equation 43.3 is a
wave moving with velocity a(x, t, u).
We write down the differential equations for the solution along a characteristic.
dx
= u, x(0) = x0
dt
du
= 0, u(0) = f (x0 )
dt
First we solve the equation for u. u = f (x0 ). Then we solve for x. x = x0 + f (x0 )t. This gives us
an implicit solution of the Burger equation.
utt = c2 uxx .
We make the change of variables, a = ux , b = ut , to obtain a coupled system of first order equations.
at bx = 0
bt c2 ax = 0
1164
The eigenvalues and eigenvectors of the matrix are
1 1
1 = c, 2 = c, 1 = , 2 = .
c c
Now we left multiply by the inverse of the matrix of eigenvectors to obtain an uncoupled system
that we can solve directly.
c 0
+ = 0.
t 0 c x
(x, t) = p(x + ct), (x, t) = q(x ct),
a(x, t) = p(x + ct) + q(x ct), b(x, t) = cp(x + ct) cq(x ct)
Here F, G C 2 are arbitrary functions. We see that u(x, t) is the sum of a waves moving to the
right and left with speed c. This is the general solution of the one-dimensional wave equation. Note
that for any given problem, F and G are only determined to whithin an additive constant. For any
constant k, adding k to F and subtracting it from G does not change the solution.
We know that the solution is the sum of right-moving and left-moving waves.
1165
R
Here Q(x) = q(x) dx. We solve the system of equations for F and G.
Z Z
1 1 1 1
F (x) = f (x) g(x) dx k, G(x) = f (x) + g(x) dx + k
2 2c 2 2c
Note that the value of the constant k does not affect the solution, u(x, t). For simplicity we take
k = 0. We substitute F and G into Equation 43.5 to determine the solution.
Z x+ct Z xct
1 1
u(x, t) = (f (x ct) + f (x + ct)) + g(x) dx g(x) dx
2 2c
Z x+ct
1 1
u(x, t) = (f (x ct) + f (x + ct)) + g() d
2 2c xct
Z x+ct
1 1
u(x, t) = (u(x ct, 0) + u(x + ct, 0)) + ut (, 0) d
2 2c xct
In order to determine the solution u(x, t) for x, t > 0 we also need to determine F () for < 0.
To do this, we substitute the form of the solution into the boundary condition at x = 0.
u(0, t) = h(t), t > 0
F (ct) + G(ct) = h(t), t > 0
F () = G() + h(/c), < 0
Z
1 1
F () = f () g() d + h(/c), <0
2 2c
We determine the solution of the wave equation for x < ct.
u(x, t) = F (x ct) + G(x + ct)
Z x+ct Z x+ct
1 1 1 1
u(x, t) = f (x + ct) g() d + h(t x/c) + f (x + ct) + g() d, x < ct
2 2c 2 2c
Z x+ct
1 1
u(x, t) = (f (x + ct) + f (x + ct)) + g() d + h(t x/c), x < ct
2 2c x+ct
1166
Finally, we collect the solutions in the two domains.
( R x+ct
1 1
2 (f (x ct) + f (x + ct)) + 2c xct
g() d, x > ct
u(x, t) = 1 1
R x+ct
2 (f (x + ct) + f (x + ct)) + 2c x+ct g() d + h(t x/c), x < ct
If f (x) and g(x) are odd about x = 0, (f (x) = f (x), g(x) = g(x)), then u(x, t) is also odd
about x = 0. We can demonstrate this with DAlemberts solution.
Z x+ct
1 1
u(x, t) = (f (x ct) + f (x + ct)) + g() d
2 2c xct
Z x+ct
1 1
u(x, t) = (f (x ct) + f (x + ct)) g() d
2 2c xct
Z xct
1 1
= (f (x + ct) + f (x ct)) g() (d)
2 2c x+ct
Z x+ct
1 1
= (f (x ct) + f (x + ct)) + g() d
2 2c xct
= u(x, t)
Thus if the initial conditions f (x) and g(x) are odd about a point then the solution of the wave
equation u(x, t) is also odd about that point. The analogous result holds if the initial conditions are
even about a point. These results are useful in solving the wave equation on a finite domain.
We extend the domain of the problem to x ( . . . ). We form the odd periodic extensions f
and g which are odd about the points x = 0, L.
If a function h(x) is defined for positive x, then sign(x)h(|x|) is the odd extension of the function.
If h(x) is defined for x (L . . . L) then its periodic extension is
x+L
h x 2L .
2L
We combine these two formulas to form odd periodic extensions.
x+L x + L
f(x) = sign x 2L
f x 2L
2L 2L
x+L x + L
g(x) = sign x 2L g x 2L
2L 2L
Now we can write the solution for the vibrations of a string with fixed ends.
Z x+ct
1 1
u(x, t) = f (x ct) + f(x + ct) + g() d
2 2c xct
1167
43.7 Envelopes of Curves
Consider the tangent lines to the parabola y = x2 . The slope of the tangent at the point (x, x2 ) is
2x. The set of tangents form a one parameter family of lines,
The parabola and some of its tangents are plotted in Figure 43.1.
1
-1 1
-1
The parabola is the envelope of the family of tangent lines. Each point on the parabola is tangent
to one of the lines. Given a curve, we can generate a family of lines that envelope the curve. We
can also do the opposite, given a family of lines, we can determine the curve that they envelope.
More generally, given a family of curves, we can determine the curve that they envelope. Let the one
parameter family of curves be given by the equation F (x, y, t) = 0. For the example of the tangents
to the parabola this equation would be y 2tx + t2 = 0.
Let y(x) be the envelope of F (x, y, t) = 0. Then the points on y(x) must lie on the family of
curves. Thus y(x) must satisfy the equation F (x, y, t) = 0. The points that lie on the envelope have
the property,
F (x, y, t) = 0.
t
We can solve this equation for t in terms of x and y, t = t(x, y). The equation for the envelope is
then
F (x, y, t(x, y)) = 0.
Consider the example of the tangents to the parabola. The equation of the one-parameter family
of curves is
F (x, y, t) y 2tx + t2 = 0.
The condition Ft (x, y, t) = 0 gives us the constraint,
2x + 2t = 0.
Solving this for t gives us t(x, y) = x. The equation for the envelope is then,
y 2xx + x2 = 0,
y = x2 .
1168
Example 43.7.1 Consider the one parameter family of curves,
(x t)2 + (y t)2 1 = 0.
These are circles of unit radius and center (t, t). To determine the envelope of the family, we first
use the constraint Ft (x, y, t) to solve for t(x, y).
-3 -2 -1 1 2 3
-1
-2
-3
1169
43.8 Exercises
Exercise 43.1
Consider the small transverse vibrations of a composite string of infinite extent, made up of two
homogeneous strings of different densities joined at x = 0. In each region 1) x < 0, 2) x > 0 we have
and we require continuity of u and ux at x = 0. Suppose for t < 0 a wave approaches the junction
x = 0 from the left, i.e. as t approaches 0 from negative values:
(
F (x c1 t) x < 0, t 0
u(x, t) =
0 x > 0, t 0
As t increases further, the wave reaches x = 0 and gives rise to reflected and transmitted waves.
Exercise 43.2
Consider a semi-infinite string, x > 0. For all time the end of the string is displaced according to
u(0, t) = f (t). Find the motion of the string, u(x, t) with the method of characteristics and then
with a Fourier transform in time. The wave speed is c.
Exercise 43.3
Solve using characteristics:
x
uux + uy = 1, ux=y = .
2
Exercise 43.4
Solve using characteristics:
(y + u)ux + yuy = x y, uy=1 = 1 + x.
1170
43.9 Hints
Hint 43.1
Hint 43.2
1. Because the left end of the string is being displaced, there will only be right-moving waves.
Assume a solution of the form
u(x, t) = F (x ct).
2. Take a Fourier transform in time. Use that there are only outgoing waves.
Hint 43.3
Hint 43.4
1171
43.10 Solutions
Solution 43.1
1.
(
F (x), x < 0
u(x, 0) =
0, x>0
(
c1 F 0 (x), x < 0
ut (x, 0) =
0, x>0
2. Regardless of the initial condition, the solution has the following form.
(
f1 (x c1 t) + g1 (x + c1 t), x < 0
u(x, t) =
f2 (x c2 t) + g1 (x + c2 t), x > 0
For x < 0, the right-moving wave is F (x c1 t) and the left-moving wave is zero for x < c1 t.
For x > 0, there is no left-moving wave and the right-moving wave is zero for x > c2 t. We
apply these restrictions to the solution.
(
F (x c1 t) + g(x + c1 t), x < 0
u(x, t) =
f (x c2 t), x>0
F (x c1 t) + cc21 c
(
+c2 F(x c1 t)H(x + c1 t), x < 0
1
u(x, t) = 2c2 c1
c1 +c2 F c2 (x c2 t) H(c2 t x), x>0
1172
Solution 43.2
1. Method of characteristics. The problem is
Because the left end of the string is being displaced, there will only be right-moving waves.
The solution has the form
u(x, t) = F (x ct).
We substitute this into the boundary condition.
F (ct) = f (t)
F () = f
c
u(x, t) = f (t x/c)
2. Fourier transform. We take the Fourier transform in time of the wave equation and the
boundary condition.
2
uxx + u = 0, u(0, ) = f()
c2
The general solution of this ordinary differential equation is
The radiation condition, (u(x, t) must be a wave traveling in the positive direction), and the
boundary condition at x = 0 will determine the constants a and b. Consider the solution
u(x, t) we will obtain by taking the inverse Fourier transform of u.
Z
u(x, t) = a() ex/c +b() ex/c et d
Z
u(x, t) = a() e(t+x/c) +b() e(tx/c) d
The first and second terms in the integrand are left and right traveling waves, respectively. In
order that u is a right traveling wave, it must be a superposition of right traveling waves. We
conclude that a() = 0. We apply the boundary condition at x = 0, we solve for u.
u(x, t) = f (t x/c)
Solution 43.3
x
uux + uy = 1, ux=y = (43.6)
2
1173
du
We form dy .
du dx
= ux + uy
dy dy
We compare this with Equation 43.6 to obtain differential equations for x and u.
dx du
= u, = 1. (43.7)
dy dy
The initial data is
x(y = ) = , u(y = ) = . (43.8)
2
We solve the differenial equation for u (43.7) subject to the initial condition (43.8).
u(x(y), y) = y
2
The differential equation for x becomes
dx
=y .
dy 2
y 2 2x
=
y2
We substitute this value for into the solution for u.
y(y 4) + 2x
u(x, y) =
2(y 2)
This solution is defined for y 6= 2. This is because at (x, y) = (2, 2), the characteristic is parallel to
the line x = y. Figure 43.3 has a plot of the solution that shows the singularity at y = 2.
x -2
0
2
10
u 0
-10
-2
0
y 2
1174
Solution 43.4
(y + u)ux + yuy = x y, uy=1 = 1 + x (43.9)
We differentiate u with respect to s.
du dx dy
= ux + uy
ds ds ds
We compare this with Equation 43.9 to obtain differential equations for x, y and u.
dx dy du
= y + u, = y, =xy
ds ds ds
We parametrize the initial data in terms of s.
y(s) = es
x(s) = ( + 1) es es , u(s) = es + es .
2 + xy y 2
u(x, y) =
y
x(s) = ( + 1) es es , y(s) = es .
Hence we see that the characteristics satisfy y(s) 0 for all real s. Figure 43.4 shows some char-
acteristics in the (x, y) plane with starting points from (5, 1) to (5, 1) and a plot of the solution.
1175
2 -2
-1
1.75 x 0
1.5 1
1.25 2
15
1 10
0.75 u 5
0.5
0
0.25
0.5 1 1.5 2
-10-7.5 -5 -2.5 2.5 5 7.5 10 y
1176
Chapter 44
Transform Methods
Taking the Fourier transform in the x variable of the equation and the boundary condition,
2
u 2u
F + = 0, F [u(x, 0)] = F [f (x)]
x2 y 2
2
2 U (, y) + 2 U (, y) = 0, U (, 0) = F ().
y
The general solution to the equation is
U (, y) = a ey +b ey .
Remember that in solving the differential equation here we consider to be a parameter. Requiring
that the solution be bounded for y [0, ) yields
U (, y) = a e||y .
U (, y) = F () e||y .
1177
Applying the convolution theorem to the equation for U ,
Z
1 f (x )2y
u(x, y) = d
2 2 + y 2
f (x )
Z
y
u(x, y) = d.
2 + y2
Since we are given the position at x = 0 we apply the Fourier sine transform.
2
ut = 2 u + u(0, t)
ut = 2 u
2
u(, t) = c() e t
U
U + U = 0,
t
Z
1
U (, 0) = F () = f (x) ex dx.
2
Now we have a first order differential equation for U (, t) with the solution
U (, t) = F () e(1+)t .
1178
Now we apply the inverse Fourier transform.
Z
u(x, t) = F () e(1+)t ex d
Z
u(x, t) = et F () e(x+t) d
u(x, t) = et f (x + t)
1179
44.4 Exercises
Exercise 44.1
Find an integral representation of the solution u(x, y), of
uxx + uyy = 0 in < x < , 0 < y < ,
subject to the boundary conditions:
u(x, 0) = f (x), < x < ;
u(x, y) 0 as x2 + y 2 .
Exercise 44.2
Solve the Cauchy problem for the one-dimensional heat equation in the domain < x < , t > 0,
ut = uxx , u(x, 0) = f (x),
with the Fourier transform.
Exercise 44.3
Solve the Cauchy problem for the one-dimensional heat equation in the domain < x < , t > 0,
ut = uxx , u(x, 0) = f (x),
with the Laplace transform.
Exercise 44.4
1. In Exercise ?? above, let f (x) = f (x) for all x and verify that (x, t) so obtained is the
solution, for x > 0, of the following problem: find (x, t) satisfying
t = a2 xx
in 0 < x < , t > 0, with boundary condition (0, t) = 0 and initial condition (x, 0) = f (x).
This technique, in which the solution for a semi-infinite interval is obtained from that for an
infinite interval, is an example of what is called the method of images.
2. How would you modify the result of part (a) if the boundary condition (0, t) = 0 was replaced
by x (0, t) = 0?
Exercise 44.5
Solve the Cauchy problem for the one-dimensional wave equation in the domain < x < ,
t > 0,
utt = c2 uxx , u(x, 0) = f (x), ut (x, 0) = g(x),
with the Fourier transform.
Exercise 44.6
Solve the Cauchy problem for the one-dimensional wave equation in the domain < x < ,
t > 0,
utt = c2 uxx , u(x, 0) = f (x), ut (x, 0) = g(x),
with the Laplace transform.
Exercise 44.7
Consider the problem of determining (x, t) in the region 0 < x < , 0 < t < , such that
t = a2 xx , (44.1)
with initial and boundary conditions
(x, 0) = 0 for all x > 0,
(0, t) = f (t) for all t > 0,
where f (t) is a given function.
1180
1. Obtain the formula for the Laplace transform of (x, t), (x, s) and use the convolution theo-
rem for Laplace transforms to show that
Z t
x2
x 1
(x, t) = f (t ) 3/2 exp 2 d.
2a 0 4a
2. Discuss the special case obtained by setting f (t) = 1 and also that in which f (t) = 1 for
0 < t < T , with f (t) = 0 for t > T . Here T is some positive constant.
Exercise 44.8
Solve the radiating half space problem:
To do this, define
v(x, t) = ux (x, t) u(x, t)
and find the half space problem that v satisfies. Solve this problem and then show that
Z
u(x, t) = e(x) v(, t) d.
x
Exercise 44.9
Show that
Z
c 2 x x2 /(4c)
e sin(x) d = 3/2 e .
0 4c
Use the sine transform to solve:
Exercise 44.10
Use the Fourier sine transform to find the steady state temperature u(x, y) in a slab: x 0,
0 y 1, which has zero temperature on the faces y = 0 and y = 1 and has a given distribution:
u(y, 0) = f (y) on the edge x = 0, 0 y 1.
Exercise 44.11
Find a harmonic function u(x, y) in the upper half plane which takes on the value g(x) on the x-axis.
Assume that u and ux vanish as |x| . Use the Fourier transform with respect to x. Express the
solution as a single integral by using the convolution formula.
Exercise 44.12
Find the bounded solution of
Exercise 44.13
The left end of a taut string of length L is displaced according to u(0, t) = f (t). The right end is
fixed, u(L, t) = 0. Initially the string is at rest with no displacement. If c is the wave speed for the
string, find its motion for all t > 0.
Exercise 44.14
Let 2 = 0 in the (x, y)-plane region defined by 0 < y < l, < x < , with (x, 0) = (x ),
(x, l) = 0, and 0 as |x| . Solve for using Fourier transforms. You may leave your
1181
answer in the form of an integral but in fact it is possible to use techniques of contour integration
to show that
1 sin(y/l)
(x, y|) = .
2l cosh[(x )/l] cos(y/l)
Note that as l we recover the result derived in class:
1 y
,
(x )2 + y 2
1182
44.5 Hints
Hint 44.1 R
The desired solution form is: u(x, y) = K(x , y)f () d. You must find the correct K. Take
the Fourier transform with respect to x and solve for u(, y) recalling that uxx = 2 u. By uxx we
denote the Fourier transform with respect to x of uxx (x, y).
Hint 44.2
Use the Fourier convolution theorem and the table of Fourier transforms in the appendix.
Hint 44.3
Hint 44.4
Hint 44.5
Use the Fourier convolution theorem. The transform pairs,
Hint 44.7
Hint 44.8
v(x, t) satisfies the same partial differential equation. You can solve the problem for v(x, t) with the
Fourier sine transform. Use the convolution theorem to invert the transform.
To show that Z
u(x, t) = e(x) v(, t) d,
x
find the solution of
ux u = v
that is bounded as x .
Hint 44.9
Note that
Z Z
2 2
ec sin(x) d = ec cos(x) d.
0 x 0
Write the integral as a Fourier transform.
Take the Fourier sine transform of the heat equation to obtain a first order, ordinary differential
equation for u(, t). Solve the differential equation and do the inversion with the convolution
theorem.
Hint 44.10
Hint 44.11
1183
Hint 44.12
Hint 44.13
Hint 44.14
1184
44.6 Solutions
Solution 44.1
1. We take the Fourier transform of the integral equation, noting that the left side is the convo-
1
lution of u(x) and x2 +a2.
1 1
2u()F 2 =F 2
x + a2 x + b2
1
We find the Fourier transform of f (x) = x2 +c 2 . Note that since f (x) is an even, real-valued
function, f () is an even, real-valued function.
Z
1 1 1
F 2 = ex dx
x + c2 2 x2 + c2
For x > 0 we close the path of integration in the upper half plane and apply Jordans Lemma
to evaluate the integral in terms of the residues.
ex
1
= 2 Res , x = c
2 (x c)(x + c)
ec
=
2c
1 c
= e
2c
Since f() is an even function, we have
1 1 c||
F 2 = e .
x + c2 2c
2. We take the Fourier transform of the partial differential equation and the boundary condtion.
This is an ordinary differential equation for u in which is a parameter. The general solution
is
u = c1 ey +c2 ey .
We apply the boundary conditions that u(, 0) = f() and u 0 and y .
u(, y) = f() ey
1185
We take the inverse transform using the convolution theorem.
Z
1
u(x, y) = e(x)y f () d
2
Solution 44.2
ut = 2 u, u(, 0) = f()
This is a first order ordinary differential equation which has the solution,
2
u(, t) = f() e t .
Using a table of Fourier transforms we can write this in a form that is conducive to applying the
convolution theorem.
r
x2 /(4t)
u(, t) = f()F e
t
Z
1 2
u(x, t) = e(x) /(4t) f () d
2 t
Solution 44.3
We take the Laplace transform of the heat equation.
ut = uxx
su u(x, 0) = uxx
s f (x)
uxx u = (44.2)
The Green function problem for Equation 44.2 is
s
G00 G = (x ), G(; ) is bounded.
The homogeneous solutions that satisfy the left and right boundary conditions are, respectively,
sa sa
exp , exp .
x x
1186
Now we solve Equation 44.2 using the Green function.
Z
f ()
u(x, s) = G(x; ) d
Z r
1 s
u(x, s) = f () exp |x | d
2 s
Finally we take the inverse Laplace transform to obtain the solution of the heat equation.
(x )2
Z
1
u(x, t) = f () exp d
2 t 4t
Solution 44.4
1. Clearly the solution satisfies the differential equation. We must verify that it satisfies the
boundary condition, (0, t) = 0.
Z
(x )2
1
(x, t) = f () exp d
2a t 4a2 t
Z 0 Z
(x )2 (x )2
1 1
(x, t) = f () exp d + f () exp d
2a t 4a2 t 2a t 0 4a2 t
Z Z
(x + )2 (x )2
1 1
(x, t) = f () exp d + f () exp d
2a t 0 4a2 t 2a t 0 4a2 t
Z Z
(x + )2 (x )2
1 1
(x, t) = f () exp d + f () exp d
2a t 0 4a2 t 2a t 0 4a2 t
Z
(x )2 (x + )2
1
(x, t) = f () exp exp d
2a t 0 4a2 t 4a2 t
Z 2
x + 2
1 x x
(x, t) = f () exp exp exp d
2a t 0 4a2 t 2a2 t 2a2 t
Z 2
x + 2
1 x
(x, t) = f () exp sinh d
a t 0 4a2 t 2a2 t
Since the integrand is zero for x = 0, the solution satisfies the boundary condition there.
2. For the boundary condition x (0, t) = 0 we would choose f (x) to be even. f (x) = f (x). The
solution is
Z 2
x + 2
1 x
(x, t) = f () exp cosh d
a t 0 4a2 t 2a2 t
Since the integrand is zero for x = 0, the solution satisfies the boundary condition there.
Solution 44.5
t 1
= ct, = = , v(x, ) = u(x, t),
t c t
1187
the problem becomes
1
v = vxx , v(x, 0) = f (x), v (x, 0) =g(x).
c
(This change of variables isnt necessary, it just gives us fewer constants to carry around.) We
take the Fourier transform in x of the equation and the initial conditions, (we consider to be a
parameter).
1
v (, ) = 2 v(, ), v(, ) = f(), v (, ) = g()
c
Now we have an ordinary differential equation for v(, ), (now we consider to be a parameter).
The general solution of this constant coefficient differential equation is,
where a and b are constants that depend on the parameter . We applying the initial conditions to
obtain v(, ).
1
v(, ) = f() cos( ) + g() sin( )
c
With the Fourier transform pairs
1
v(, ) = F[f (x)]F[((x + ) + (x ))] + F[g(x)]F[(H(x + ) H(x ))]
c
Z
1
v(x, ) = f ()((x + ) + (x )) d
2
Z
1 1
+ g()(H(x + ) H(x )) d
c 2
Z x+
1 1
v(x, ) = (f (x + ) + f (x )) + g() d
2 2c x
Finally we make the change of variables t = /c, u(x, t) = v(x, ) to obtain DAlemberts solution
of the wave equation,
Z x+ct
1 1
u(x, t) = (f (x ct) + f (x + ct)) + g() d.
2 2c xct
Solution 44.6
With the change of variables
t 1
= ct, = = , v(x, ) = u(x, t),
t c t
the problem becomes
1
v = vxx , v(x, 0) = f (x), v (x, 0) =
g(x).
c
We take the Laplace transform in of the equation, (we consider x to be a parameter),
1188
1
Vxx (x, s) s2 V (x, s) = sf (x) g(x),
c
Now we have an ordinary differential equation for V (x, s), (now we consider s to be a parameter).
We impose the boundary conditions that the solution is bounded at x = . Consider the Greens
function problem
gxx (x; ) s2 g(x; ) = (x ), g(; ) bounded.
esx is a homogeneous solution that is bounded at x = . esx is a homogeneous solution that is
bounded at x = +. The Wronskian of these solutions is
esx
sx
e
W (x) = sx = 2s.
s esx
se
es||
Z Z
1 1
V (x, s) = es|| f (x ) d + g(x )) d.
2 2c s
Now we take the inverse Laplace transform and interchange the order of integration.
Z Z s||
1 1 1 e
v(x, ) = L es|| f (x ) d + L1 g(x )) d
2 2c s
es||
Z Z
1 1
h
s||
i 1 1
v(x, ) = L e f (x ) d + L g(x )) d
2 2c s
Z Z
1 1
v(x, ) = ( ||)f (x ) d + H( ||)g(x )) d
2 2c
Z
1 1
v(x, ) = (f (x ) + f (x + )) + g(x ) d
2 2c
Z x+
1 1
v(x, ) = (f (x ) + f (x + )) + g() d
2 2c x
Z x+
1 1
v(x, ) = (f (x ) + f (x + )) + g() d
2 2c x
Now we write make the change of variables t = /c, u(x, t) = v(x, ) to obtain DAlemberts solution
of the wave equation,
Z x+ct
1 1
u(x, t) = (f (x ct) + f (x + ct)) + g() d.
2 2c xct
1189
Solution 44.7
1. We take the Laplace transform of Equation 44.1.
s (x, 0) = a2 xx
s
xx 2 = 0 (44.3)
a
We take the Laplace transform of the initial condition, (0, t) = f (t), and use that (x, s)
vanishes as x to obtain boundary conditions for (x, s).
(0, s) = f(s), (, s) = 0
Now consider the case in which f (t) = 1 for 0 < t < T , with f (t) = 0 for t > T . For t < T ,
is the same as before.
x
(x, t) = erfc , for 0 < t < T
2a t
Consider t > T .
Z t
x2
x 1
(x, t) = exp d
2a tT 3/2 4a2
Z x/(2at)
2 2
(x, t) =
e d
x/(2a tT )
x x
(x, t) = erf erf
2a t T 2a t
1190
Solution 44.8
First we find the partial differential equation that v satisfies. We start with the partial differential
equation for u,
ut = uxx .
Differentiating this equation with respect to x yields,
utx = uxxx .
Thus v satisfies the same partial differential equation as u. This is because the equation for u is
linear and homogeneous and v is a linear combination of u and its derivatives. The problem for v is,
With this new boundary condition, we can solve the problem with the Fourier sine transform. We
take the sine transform of the partial differential equation and the initial condition.
1
vt (, t) = 2 v(, t) + v(0, t) ,
v(, 0) = Fs [f 0 (x) f (x)]
vt (, t) = 2 v(, t)
v(, 0) = Fs [f 0 (x) f (x)]
Now we have a first order, ordinary differential equation for v. The general solution is,
2
v(, t) = c e t .
Now we take the inverse sine transform to find v. We utilize the Fourier cosine transform pair,
h i r
1 2 t 2
Fc e = ex /(4t) ,
t
to write v in a form that is suitable for the convolution theorem.
r
x2 /(4t)
v(, t) = Fs [f 0 (x) f (x)] Fc e
t
Recall that the Fourier sine convolution theorem is,
Z
1
Fs f () (g(|x |) g(x + )) d = Fs [f (x)]Fc [g(x)].
2 0
1191
Thus v(x, t) is
Z
1 2 2
v(x, t) = (f 0 () f ()) e|x| /(4t) e(x+) /(4t) d.
2 t 0
ux u = v.
Solution 44.9
Z Z
2 2
ec sin(x) d = ec cos(x) d
0 x 0
Z
1 2
= ec cos(x) d
2 x
Z
1 2
= ec +x d
2 x
Z
1 2 2
= ec(+x/(2c)) ex /(4c) d
2 x
1 x2 /(4c) c2
Z
= e e d
2 x
r
1 x2 /(4c)
= e
2 c x
x 2
= 3/2 ex /(4c)
4c
We take the Fourier sine transform of the partial differential equation and the initial condition.
ut (, t) = 2 u(, t) + g(t), u(, 0) = 0
Now we have a first order, ordinary differential equation for u(, t).
2 t 2
e ut (, t) = g(t) e t
t
Z t
2 2 2
u(, t) = e t g( ) e d + c() e t
0
1192
The initial condition is satisfied for c() = 0.
Z t
2
u(, t) = g( ) e (t )
d
0
Solution 44.10
The problem is
We take the Fourier sine transform of the partial differential equation and the boundary conditions.
k
2 u(, y) +u(0, y) + uyy (, y) = 0
k
uyy (, y) 2 u(, y) = f (y), u(, 0) = u(, 1) = 0
This is an inhomogeneous, ordinary differential equation that we can solve with Green functions.
The homogeneous solutions are
{cosh(y), sinh(y)}.
The homogeneous solutions that satisfy the left and right boundary conditions are
1193
With some uninteresting grunge, you can show that,
Z
sinh() sinh((y 1)) sin() sin(y)
2 sin(x) d = 2 .
0 sinh() (cosh(x) cos((y )))(cosh(x) cos((y + )))
Taking the inverse Fourier sine transform of u(, y) and interchanging the order of integration yields,
Z y
2 sin() sin(y)
u(x, y) = f () d
0 (cosh(x) cos((y )))(cosh(x) cos((y + )))
2 1
Z
sin(y) sin()
+ f () d.
y (cosh(x) cos(( y)))(cosh(x) cos(( + y)))
Z 1
2 sin() sin(y)
u(x, y) = f () d
0 (cosh(x) cos((y )))(cosh(x) cos((y + )))
Solution 44.11
The problem for u(x, y) is,
We take the Fourier transform of the partial differential equation and the boundary condition.
This is an ordinary differential equation for u(, y). So far we only have one boundary condition.
In order that u is bounded we impose the second boundary condition u(, y) is bounded as y .
The general solution of the differential equation is
(
c1 () ey +c2 () ey , for 6= 0,
u(, y) =
c1 () + c2 ()y, for = 0.
Note that ey is the bounded solution for < 0, 1 is the bounded solution for = 0 and ey is
the bounded solution for > 0. Thus the bounded solution is
Now we take the inverse Fourier transform to obtain the solution for u(x, y). To do this we use the
Fourier transform pair,
2c
F 2 = ec|| ,
x + c2
and the convolution theorem,
Z
1
F f ()g(x ) d = f()g().
2
Z
1 2y
u(x, y) = g() d.
2 (x )2 + y 2
1194
Solution 44.12
Since the derivative of u is specified at x = 0, we take the cosine transform of the partial differential
equation and the initial condition.
1
ut (, t) = u(, t) ux (0, t) a2 u(, t), u(, 0) = 0
2
ut + 2 + a2 u = f (t), u(, 0) = 0
This first order, ordinary differential equation for u(, t) has the solution,
t (2 +a2 )(t )
Z
u(, t) = e f ( ) d.
0
We take the inverse Fourier cosine transform to find the solution u(x, t).
Z t
2 2
u(x, t) = Fc1 e( +a )(t ) f ( ) d
0
Z t
h 2
i 2
u(x, t) = Fc1 e (t ) ea (t ) f ( ) d
0
t
Z r
2 2
u(x, t) = ex /(4(t )) ea (t ) f ( ) d
0 (t )
2
t
ex /(4(t ))a2 (t )
r Z
u(x, t) = f ( ) d
0 t
Solution 44.13
Mathematically stated we have
We take the Laplace transform of the partial differential equation and the boundary conditions.
1195
then we can use the convolution theorem to write u in terms of a single integral. We proceed by
expanding this function in a sum.
sinh(s(L x)/c) es(Lx)/c es(Lx)/c
=
sinh(sL/c) esL/c esL/c
esx/c es(2Lx)/c
=
1 e2sL/c
X
= esx/c es(2Lx)/c e2nsL/c
n=0
X
X
= es(2nL+x)/c es(2(n+1)Lx)/c
n=0 n=0
X X
= es(2nL+x)/c es(2nLx)/c
n=0 n=1
We can simplify this a bit. First we determine which Dirac delta functions have their singularities
in the range (0..t). For the first sum, this condition is
0 < t (2nL + x)/c < t.
The right inequality is always satisfied. The left inequality becomes
(2nL + x)/c < t,
ct x
n< .
2L
For the second sum, the condition is
0 < t (2nL x)/c < t.
Again the right inequality is always satisfied. The left inequality becomes
ct + x
. n<
2L
We change the index range to reflect the nonzero contributions and do the integration.
ctx
b ct+x
Z t b 2L c 2L c
X X
u(x, t) = f ( ) (t (2nL + x)/c) (t (2nL x)/c) d.
0 n=0 n=1
b ctx
2L c b ct+x
2L c
X X
u(x, t) = f (t (2nL + x)/c) f (t (2nL x)/c)
n=0 n=1
1196
Solution 44.14
We take the Fourier transform of the partial differential equation and the boundary conditions.
1
2 + yy = 0, (, 0) = e , (, l) = 0
2
We solve this boundary value problem.
We take the inverse Fourier transform to obtain an expression for the solution.
sinh((l y))
Z
1
(x, y) = e(x) d
2 sinh(l)
1197
1198
Chapter 45
Green Functions
1199
If we find a Green function g(x; ) that satisfies
L[g(x; )] = 0, B[g(x; )] = (x )
then the solution to Equation 45.2 is
Z
u(x) = g(x; )h() d.
We verify that this solution satisfies the equation and boundary condition.
Z
L[u(x)] = L[g(x; )]h() d
Z
= 0 h() d
=0
Z
B[u(x)] = B[g(x; )]h() d
Z
= (x )h() d
= h(x)
Example 45.2.1 Consider the Cauchy problem for the homogeneous heat equation.
ut = uxx , < x < , t > 0
u(x, 0) = h(x), u(, t) = 0
We find a Green function that satisfies
gt = gxx , < x < , t > 0
g(x, 0; ) = (x ), g(, t; ) = 0.
Then we write the solution Z
u(x, t) = g(x, t; )h() d.
To find the Green function for this problem, we apply a Fourier transform to the equation and
boundary condition for g.
gt = 2 g, g(, 0; ) = F[(x )]
2
g(, t; ) = F[(x )] e t
x2
r
g(, t; ) = F[(x )]F exp
t 4t
We invert using the convolution theorem.
Z
(x )2
r
1
g(x, t; ) = ( ) exp d
2 t 4t
(x )2
1
= exp
4t 4t
(x )2
Z
1
u(x, t) = exp h() d.
4t 4t
1200
45.3 Eigenfunction Expansions for Elliptic Equations
Consider a Green function problem for an elliptic equation on a finite domain.
L[G] = (x ), x (45.3)
B[G] = 0, x
Let the set of functions {n } be orthonormal and complete on . (Here n is the multi-index
n = n1 , . . . , nd .) Z
n (x)m (x) dx = nm
In addition, let the n be eigenfunctions of L subject to the homogeneous boundary conditions.
L [n ] = n n , B [n ] = 0
We substitute the series expansions for the Green function and the Dirac Delta function into Equa-
tion 45.3. X X
gn n n (x) = n ()n (x)
n n
We equate coefficients to solve for the gn and hence determine the Green function.
n ()
gn =
n
X n ()n (x)
G(x; ) =
n
n
Example 45.3.1 Consider the Green function for the reduced wave equation, u k 2 u in the
rectangle, 0 x a, 0 y b, and vanishing on the sides.
First we find the eigenfunctions of the operator L = k 2 = 0. Note that = X(x)Y (y) is
2 2
an eigenfunction of L if X is an eigenfunction of x 2 and Y is an eigenfunction of y 2 . Thus we
X 00 = X, X(0) = X(a) = 0
Y 00 = Y, Y (0) = Y (b) = 0
1201
to make the eigenfunctions orthonormal.
2 mx ny
mn = sin sin , m, n Z+
ab a b
The mn are eigenfunctions of L.
m 2 n 2
L [mn ] = + + k 2 mn
a b
By expanding the Green function and the Dirac Delta function in the mn and substituting into the
differential equation we obtain the solution.
2 sin m sin n 2 sin mx sin ny
X ab a b ab a b
G= 2 2
m,n=1 m a + n b + k2
m ny n
sin mx
sin sin sin
X a a b b
G(x, y; , ) = 4ab 2 + (na)2 + (kab)2
m,n=1
(mb)
Example 45.3.2 Consider the Green function for Laplaces equation, u = 0 in the disk, |r| < a,
and vanishing at r = a.
First we find the eigenfunctions of the operator
2 1 1 2
= 2
+ + 2 2.
r r r r
We will look for eigenfunctions of the form = ()R(r). We choose the to be eigenfunctions of
d2
d 2 subject to the periodic boundary conditions in .
n2
1
R00 + R0 + 2 2 R = 0, R(0) bounded, R(a) = 0
r r
The general solution for R is
R = c1 Jn (r) + c2 Yn (r).
The left boundary condition demands that c2 = 0. The right boundary condition determines the
eigenvalues.
jn,m r jn,m
Rnm = Jn , nm =
a a
Here jn,m is the mth positive root of Jn . This leads us to the eigenfunctions
jn,m r
nm = ein Jn
a
1202
We use the orthogonality relations
Z 2
eim ein d = 2mn ,
0
Z 1
1 0 2
rJ (j,m r)J (j,n r) dr = (J (j,n )) mn
0 2
to make the eigenfunctions orthonormal.
1 jn,m r
nm = ein Jn , n Z, m Z+
a|J 0 n (jn,m )| a
The nm are eigenfunctions of L.
2
jn,m
nm = nm
a
By expanding the Green function and the Dirac Delta function in the nm and substituting into the
differential equation we obtain the solution.
1 in jn,m 1 in jn,m r
X X 0
a|J n (jn,m )|
e J n a
0
a|J n (jn,m )|
e J n a
G= 2
jn,m
n= m=1 a
X X 1 jn,m jn,m r
G(r, ; , ) = ein() Jn Jn
n= m=1
(jn,m J 0 n (jn,m ))2 a a
1203
We write the solution of Poissons equation using the Green function.
Z Z
u(x, y) = G(x, y|, )f (, ) d d
0
(x )2 + (y )2
Z Z
1
u(x, y) = ln f (, ) d d
0 4 (x )2 + (y + )2
1204
45.5 Exercises
Exercise 45.1
Consider the Cauchy problem for the diffusion equation with a source.
Find the Green function for this problem and use it to determine the solution.
Exercise 45.2
Consider the 2-dimensional wave equation
1. Determine the fundamental solution for this equation. (i.e. response to source at t = , x = ).
You may find the following information useful about the Bessel function:
1 x cos
Z
J0 (x) = e d,
0
Z (
0, 0<b<a
J0 (ax) sin(bx) dx = 1
0
b2 a2
, 0<a<b
Exercise 45.3
Consider the linear wave equation
utt = c2 uxx ,
with constant c, on the infinite domain < x < .
1. By using the Fourier transform find the solution of Gtt = c2 Gxx subject to initial conditions
G(x, 0) = 0, Gt (x, 0) = (x ).
Sketch the solution in x for fixed times t < 1 and t > 1 and also indicate on the x, t (t > 0)
plane the regions of qualitatively different behavior of u.
Exercise 45.4
Consider a generalized Laplace equation with non-constant coefficients of the form:
on a region V with u = 0 on the boundary S. Suppose we find a Green function which satisfies
2 G + A(x) G + h(x)G = (x ).
Use the divergence theorem to derive an appropriate generalized Greens identity and show that
Z
u() 6= G(x|)q(x) dx.
V
What equation should the Green function satisfy? Note: this equation is called the adjoint of the
original partial differential equation.
1205
Exercise 45.5
Consider Laplaces equation in the infinite three dimensional domain with two sources of equal
strength C, opposite sign and separated by a distance .
2 u = C(x + ) C(x ),
where = ( 2 , 0, 0).
2. Now consider the limit in which the distance between sources goes to zero ( 0) and the
strength increases in such a way that C = D remains fixed. Show that the solution can be
written
Dx
u= ,
4r3
where r = |x|. This is called the response to a dipole located at the origin, with strength D,
and oriented in the positive x direction.
3. Show that in general the response to a unit (D = 1) dipole at an arbitrary point 0 and
oriented in the direction of the unit vector a is
1 1
u(x) = ~a
4 |x | =
0
Exercise 45.6
Consider Laplaces equation
2 u = 0,
inside the unit circle with boundary condition u = f (). By using the Green function for the
Dirichlet problem on the circle:
1 |x |
G(x|) = ln ,
2 |||x |
Exercise 45.7
Consider an alternate derivation of the fundamental solution of Laplaces equation
2 u = (x),
1. Convert this equation to spherical coordinates. You may define a new delta function
Z (
1 if B contains the origin
3 (r) = (x)(y)(z) such that 3 (r) dV =
B 0 otherwise
2. Show, by symmetry, that this can be reduced to an ordinary differential equation. Solve to
find the general solution of the homogeneous equation. Now determine the constants by using
the constraint that u 0 as |x| , and by integrating the partial differential equation over
a small ball around the origin (and using Gauss theorem).
1206
3. Now use similar ideas to re-derive the fundamental solution in two dimensions. Can we still
say u 0 as |x| ? Use instead the constraint that u = 0 when |x| = 1.
4. Finally derive the 2-D solution from the 3-D one using the method of descent. Consider
Laplaces equation in three dimensions with a line source at x = 0, y = 0, < z < ,
Exercise 45.8
Consider the heat equation on the bounded domain 0 < x < L with fixed temperature at each end.
Use Laplace transforms to determine the Green Function which satisfies
Gt Gxx = (x )(t),
G(0, t) = 0 G(L, t) = 0,
G(x, 0 ) = 0.
and comment on the convergence of the respective formulations for small and large time.
Exercise 45.9
Consider the Green function for the 1-D heat equation
Gt Gxx = (x )(t ),
Gx (0, t) = 0, G 0 as x ,
1207
2. (15 points) Relate this to the fundamental solution on the infinite domain, and discuss in terms
of responses to real and image sources. Give the solution for x > 0 of
Exercise 45.10
Consider the heat equation
ut = uxx + (x )(t),
on the infinite domain < x < , where we assume u 0 as x and initially u(x, 0 ) = 0.
ut = uxx
3. Finally use the Laplace inversion formula and Cauchys Theorem on an appropriate contour
to compute u(x, t). Recall
Z
1 1
f (t) = L [F (s)] = F (s) est ds,
2
Exercise 45.11
Derive the causal Green function for the one dimensional wave equation on (..). That is, solve
Use the Green function to find the solution of the following wave equation with a source term.
Exercise 45.12
By reducing the problem to a series of one dimensional Green function problems, determine G(x, )
if
2 G = (x )
(a) on the rectangle 0 < x < L, 0 < y < H and
(b) on the box 0 < x < L, 0 < y < H, 0 < z < W with G = 0 on the boundary.
(c) on the semi-circle 0 < r < a, 0 < < with G = 0 on the boundary.
1208
(d) on the quarter-circle 0 < r < a, 0 < < /2 with G = 0 on the straight sides and Gr = 0 at
r = a.
Exercise 45.13
Using the method of multi-dimensional eigenfunction expansions, determine G(x, x0 ) if
2 G = (x x0 )
and
(a) on the rectangle (0 < x < L, 0 < y < H)
G
at x = 0, G=0 at y = 0, =0
y
G G
at x = L, =0 at y = H, =0
x y
(b) on the rectangular shaped box (0 < x < L, 0 < y < H, 0 < z < W ) with G = 0 on the six
sides.
(c) on the semi-circle (0 < r < a, 0 < < ) with G = 0 on the entire boundary.
(d) on the quarter-circle (0 < r < a, 0 < < /2) with G = 0 on the straight sides and G/r = 0
at r = a.
Exercise 45.14
Using the method of images solve
2 G = (x x0 )
in the first quadrant (x 0 and y 0) with G = 0 at x = 0 and G/y = 0 at y = 0. Use the
Green function to solve in the first quadrant
2 u = 0
u(0, y) = g(y)
u
(x, 0) = h(x).
y
Exercise 45.15
Consider the wave equation defined on the half-line x > 0:
2u 2
2 u
= c + Q(x, t),
t2 x2
u(x, 0) = f (x)
u
(x, 0) = g(x)
t
u(0, t) = h(t)
(a) Determine the appropriate Greens function using the method of images.
(b) Solve for u(x, t) if Q(x, t) = 0, f (x) = 0, and g(x) = 0.
(c) For what values of t does h(t) influence u(x1 , t1 ). Interpret this result physically.
Exercise 45.16
Derive the Green functions for the one dimensional wave equation on (..) for non-homogeneous
initial conditions. Solve the two problems
gtt c2 gxx = 0, g(x, 0; , ) = (x ), gt (x, 0; , ) = 0,
2
tt c xx = 0, (x, 0; , ) = 0, t (x, 0; , ) = (x ),
using the Fourier transform.
1209
Exercise 45.17
Use the Green functions from Problem 45.11 and Problem 45.16 to solve
Exercise 45.18
Show that the Green function for the reduced wave equation, u k 2 u = 0 in the rectangle,
0 x a, 0 y b, and vanishing on the sides is:
2 X sinh(n y< ) sinh(n (y> b)) nx n
G(x, y; , ) = sin sin ,
a n=1 n sinh(n b) a a
where r
n2 2
n = k2 + .
a2
Exercise 45.19
Find the Green function for the reduced wave equation u k 2 u = 0, in the quarter plane: 0 <
x < , 0 < y < subject to the mixed boundary conditions:
u(x, 0) = 0, ux (0, y) = 0.
Exercise 45.20
Show that in polar coordinates the Green function for u = 0 in the infinite sector, 0 < < ,
0 < r < , and vanishing on the sides is given by,
r
1 cosh ln cos ( )
G(r, , , ) = ln .
4 cosh ln r cos ( + )
Use this to find the harmonic function u(r, ) in the given sector which takes on the boundary values:
(
0 for r < c
u(r, ) = u(r, ) =
1 for r > c.
Exercise 45.21
The Green function for the initial value problem,
on < x < is
1 2
G(x, t; ) = e(x) /(4t) .
4t
Use the method of images to find the corresponding Green function for the mixed initial-boundary
problems:
1210
Exercise 45.22
Find the Green function (expansion) for the one dimensional wave equation utt c2 uxx = 0 on the
interval 0 < x < L, subject to the boundary conditions:
a) u(0, t) = ux (L, t) = 0,
b) ux (0, t) = ux (L, t) = 0.
Write the final forms in terms showing the propagation properties of the wave equation, i.e., with
arguments ((x ) (t )).
Exercise 45.23
Solve, using the above determined Green function,
1211
45.6 Hints
Hint 45.1
Hint 45.2
Hint 45.3
Hint 45.4
Hint 45.5
Hint 45.6
Hint 45.7
Hint 45.8
Hint 45.9
Hint 45.10
Hint 45.11
Hint 45.12
Take a Fourier transform in x. This will give you an ordinary differential equation Green function
problem for G. Find the continuity and jump conditions at t = . After solving for G, do the inverse
transform with the aid of a table.
Hint 45.13
Hint 45.14
Hint 45.15
Hint 45.16
Hint 45.17
1212
Hint 45.18
Use Fourier sine and cosine transforms.
Hint 45.19
The the conformal mapping z = w/ to map the sector to the upper half plane. The new problem
will be
Hint 45.20
Hint 45.21
Hint 45.22
1213
45.7 Solutions
Solution 45.1
The Green function problem is
Now we have an ordinary differential equation Green function problem for G. The homogeneous
solution of the ordinary differential equation is
2
e t
We write the solution of the diffusion equation using the Green function.
Z Z Z
u= G(x, t|, )s(, ) d d + G(x, t|, 0)f () d
0
Z t Z Z
1 (x)2 /(4(t )) 1 2
u= p e s(, ) d d + e(x) /(4t)
f () d
0 4(t ) 4t
Solution 45.2
1. We apply Fourier transforms in x and y to the Green function problem.
1214
We make the change of variables = cos , = sin and do the integration in polar
coordinates.
Z 2 Z ((x) cos +(y) sin )
1 e
G= 2
sin (c(t )) d dH(t )
4 c 0 0
Next we introduce polar coordinates for x and y.
x = r cos , y = r sin
Z Z 2
1
G= er(cos cos +sin sin ) d sin (c(t )) dH(t )
4 2 c 0 0
Z Z 2
1
G= er cos() d sin (c(t )) dH(t )
4 2 c 0 0
Z
1
G= J0 (r) sin (c(t )) dH(t )
2c 0
1 1
G= p H(c(t ) r)H(t )
2c (c(t ))2 r2
H(c(t ) |x |)
G(x, t|, ) = p
2c (c(t ))2 |x |2
2. To find the 1D Green function, we consider a line source, (x)(t). Without loss of generality,
we have taken the source to be at x = 0, t = 0. We use the 2D Green function and integrate
over space and time.
gtt c2 g = (x)(t)
p
Z Z Z H c(t ) (x )2 + (y )2
g= p ()( ) d d d
2 2 2
2c (c(t )) (x ) (y )
p
Z H ct x2 + 2
1
g= p d
2c (ct)2 x2 2
Z (ct)2 x2
1 1
g= p dH (ct |x|)
2c (ct)2 x2 (ct) x2 2
2
1
g(x, t|0, 0) = H (ct |x|)
2c
1
g(x, t|, ) = H (c(t ) |x |)
2c
Solution 45.3
1.
1215
2. We can write the solution of
Z
u= G(x, t|)f () d
Z x+t
1
u(x, t) = (1 ||)H(1 ||) d
2 xt
First we consider the case t < 1/2. We will use fact that the solution is symmetric in x.
0, x + t < 1
1 x+t
R
2 R1 (1 ||) d, x t < 1 < x + t
x+t
u(x, t) = 12 xt (1 ||) d, 1 < x t, x + t < 1
1 1
R
2 xt (1 ||) d, xt<1<x+t
0, 1<xt
0, x + t < 1
1 2
x t < 1 < x + t
4 (1 + t + x)
(1 + x)t 1 < x t, x + t < 0
u(x, t) = 12 (2t t2 x2 ) xt<0<x+t
(1 x)t
0 < x t, x + t < 1
1
(1 + t x)2
xt<1<x+t
4
0, 1<xt
0, x + t < 1
1 x+t
R
2 R1 (1 ||) d, x t < 1 < x + t
x+t
u(x, t) = 12 xt (1 ||) d, 1 < x t, x + t < 1
1 1
R
2 xt (1 ||) d, x t < 1 < x + t
0, 1<xt
0, x + t < 1
1 2
1
4 (1 + t + x) <x+t<0
1 2
4 (1 t + 2t(1 x) + x(2 x)) x t < 1, 0 < x + t
u(x, t) = 12 (2t t2 x2 ) 1 < x t, x + t < 1
1 (1 t2 + 2t(1 + x) x(2 + x)) x t < 0, 1 < x + t
4
1
(1 + t x)2
0<xt<1
4
0, 1<xt
1216
Finally we consider the case 1 < t.
0, x + t < 1
1 x+t
R
2 R1 (1 ||) d, 1 < x + t < 1
1
u(x, t) = 12 1 (1 ||) d, x t < 1, 1 < x + t
1 1
R
2 xt (1 ||) d, 1 < x t < 1
0, 1<xt
0, x + t < 1
1 (1 + t + x)2
1 < x + t < 0
41
4
(1 (t + x 2)(t + x)) 0<x+t<1
1
u(x, t) = 2 x t < 1, 1 < x + t
1
(1 (t x 2)(t x)) 1 < x t < 0
4
1 2
(1 + t x) 0<xt<1
4
0, 1<xt
0.5 0.5
0.4 0.4
0.3 0.3
0.2 0.2
0.1 0.1
-2 -1 1 2 -4 -2 2 4
Figure 45.2 shows the behavior of the solution in the phase plane. There are lines emanating
form x = 1, 0, 1 showing the range of influence of these points.
u=1
u=0
u=0
x
Solution 45.4
We define
L[u] 2 u + a(x) u + h(x)u.
1217
We use the Divergence Theorem to derive a generalized Greens Theorem.
Z Z
uL[v] dx = u(2 v + a v + hv) dx
V V
Z Z
uL[v] dx = (u2 v + (uva) v (au) + huv) dx
V V
Z Z Z
2
uL[v] dx = v( u (au) + hu) dx + (uv vu + uva) n dA
V V V
Z Z
(uL[v] vL [u]) dx = (uv vu + uva) n dA
V V
We define the adjoint operator L .
L [u] = 2 u (au) + hu
We substitute the solution u and the adjoint Green function G into the generalized Greens Theo-
rem.
Z Z
(G L[u] uL [G ]) dx = (G u uG + vG a) n dA
V V
Z
(G q uL [G ]) dx = 0
V
If the adjoint Green function satisfies L [G ] = (x ) then we can write u as an integral of the
adjoint Green function and the inhomegeneity.
Z
u() = G (x|)q(x) dx
V
Thus we see that the adjoint Green function problem is the appropriate one to consider. For
L[G] = (x ), Z
u() 6= G(x|)q(x) dx
V
Solution 45.5
1.
2 u = C(x + ) C(x )
C C
u= +
4|x + | 4|x |
1218
3. Let = 0 a/2.
1
2 u = lim
(x + ) (x )
0
1 1 1 1
u= lim
4 0 |x ( 0 + a/2)| |x ( 0 a/2)|
We note that this is the definition of a directional derivative.
1 1
u(x) = ~a
4 |x | =
0
Solution 45.6
The Green function is
1 |x |
G(x|) = ln .
2 |||x |
We write this in polar coordinates. Denote x = r e and = e . Let = be the difference
in angle between x and .
p !
1 r2 + 2 2r cos
G(x|) = ln p
2 r2 + 1/2 2(r/) cos
2
r + 2 2r cos
1
G(x|) = ln
4 r2 2 + 1 2r cos
We solve Laplaces equation with the Green function.
I
u(x) = f () G(x|) n ds
Z 2
u(r, ) = f ()G (r, |1, ) d
0
1 r4 + r(r2 1)(2 + 1) cos
G =
2 (r2 + 2 2r cos )(r2 2 + 1 2r cos )
1 1 r2
G (r, |1, ) =
2 1 + r2 2r cos
2 Z 2
1r f ()
u(r, ) = 2
d
2 0 1 + r 2r cos( )
Solution 45.7
1.
G = (x )(y )(z )
2G
1 G 1 G 1
r2 + 2 sin() + 2 = 3 (r)
r2 r r r sin r sin 2
2. Since the Green function has spherical symmetry, G = G = 0. This reduces the problem to
an ordinary differential equation.
1 2 G
r = 3 (r)
r2 r r
We find the homogeneous solutions.
2
urr + ur = 0
r
ur = c e2 ln r = cr2
c1
u= + c2
r
1219
We consider the solution that vanishes at infinity.
c
u=
r
Thus we see that G = c/r. We determine the constant by integrating G over a sphere about
the origin, R.
ZZZ
G dx = 1
R
ZZ
G n ds = 1
R
ZZ
Gr ds = 1
R
Z Z 2
c 2
r sin() dd = 1
0 0 r2
4c = 1
1
c=
4
1
G=
4r
G = (x )(y )
1 2G
1 G
r + 2 = 2 (r)
r r r r 2
Since the Green function has circular symmetry, G = 0. This reduces the problem to an
ordinary differential equation.
1 G
r = 2 (r)
r r r
1
urr + ur = 0
r
ur = c e ln r = cr1
u = c1 ln r + c2
There are no solutions that vanishes at infinity. Instead we take the solution that vanishes at
r = 1.
u = c ln r
Thus we see that G = c ln r. We determine the constant by integrating G over a ball about
1220
the origin, R.
ZZ
G dx = 1
R
Z
G n ds = 1
R
Z
Gr ds = 1
R
Z 2
c
r d = 1
0 r
2c = 1
1
G= ln r
2
4.
Z Z Z
1
u= ()() d d d
4(r )
Z Z Z
1 ()()
u= p d d d
4 (x )2 + (y )2 + (z )2
Z
1 1
u= p d
4 x2 + y 2 + (z )2
Z
1 1
u= p d
4 r + 2 2
Z
1 r
ur = d
4 (r2 + 2 )3/2
1 2
ur =
4 r
1
u= ln r
2
Solution 45.8
1. We take the Laplace transform of the differential equation and the boundary conditions in x.
Gt Gxx = (x )(t )
sG Gxx = (x )
s 1
Gxx G = (x ), G(0, t) = G(L, t) = 0
Now we have an ordinary differential equation Green function problem. We find homogeneous
solutions which respectively satisfy the left and right boundary conditions and compute their
Wronskian. r r
s s
y1 = sinh x , y2 = sinh (L x)
ps ps
sinh
x sinh (L x)
W = s
s
s cosh s
p p p p
cosh x (L x)
r r r r r
s s s s s
= 2 sinh x cosh (L x) + cosh x sinh (L x)
r r
s s
= 2 sinh L
1221
We write the Green function in terms of the homogeneous solutions of the Wronskian.
r r
1 1 s s
G = ps p s sinh x< sinh (L x> )
2 sinh L
ps ps
sinh x< sinh (L x> )
G = p s
2 s sinh L
ps ps
cosh (L x> + x< ) cosh (L x> x< )
G = ps
2 s sinh L
We use the expansion of the hyperbolic cosecant in our expression for the Green function.
e s/(Lx> +x< ) + e s/(Lx> +x< ) e s/(Lx> x< ) e s/(Lx> x< )
G = ps
4 s sinh L
1 /s(Lx> +x< )
G = e + e /s(Lx> +x< )
2 s
X
e /s(Lx> x< ) e /s(Lx> x< ) e(2n+1) s/L
n=0
1 X X
G = e s/(x> +x< 2nL) + e s/(x> x< 2(n+1)L)
2 s n=0 n=0
!
X X
s/(x> x< 2nL)
e e s/(x> +x< 2(n+1)L)
n=0 n=0
1
1 X X
G = e s/(x> +x< 2nL) + e s/(x> x< +2nL)
2 s n=0 n=
1
!
X X
s/(x> x< 2nL) s/(x> +x< +2nL)
e e
n=0 n=
!
1 X
s/|x< x> 2nL|
X
s/|x< +x> 2nL|
G = e e
2 s n= n=
!
1 X
s/|x2nL|
X
s/|x+2nL|
G = e e
2 s n= n=
1222
3. We take the inverse Laplace transform to find the Green function for the diffusion equation.
!
1 X 2 X 2
G= e(x2nL) /(4t) e(x+2nL) /(4t)
2 t n= n=
X
X
G= f (x 2nL, t) f (x + 2nL, t)
n= n=
On the interval (L . . . L), there is a real source at x = and a negative image source at
x = . This pattern is repeated periodically.
The above formula is useful when approximating the solution for small time, t 1. For such
small t, the terms decay very quickly away from n = 0. A small number of terms could be
used for an accurate approximation.
The alternate formula is useful when approximating the solution for large time, t 1. For
such large t, the terms in the sine series decay exponentially Again, a small number of terms
could be used for an accurate approximation.
Solution 45.9
1. We take the Fourier cosine transform of the differential equation.
Gt Gxx = (x )(t )
1
Gt 2 G Gx (0, t) = Fc [(x )](t )
Gt + 2 G = Fc [(x )](t )
2
G = Fc [(x )] e (t ) H(t )
r
x2 /(4(t ))
G = Fc [(x )]Fc e H(t )
(t )
Z tZ
1 1 2 2
u(x, t) = e(x) /(4(t )) + e(x+) /(4(t )) q(, ) d d
4 0 0 t
Z
1 2 2
+ e(x) /(4t) + e(x+) /(4t) f () d
4t 0
1223
Solution 45.10
1. We integrate the heat equation from t = 0 to t = 0+ to determine an initial condition.
ut = uxx + (x )(t)
u(x, 0+ ) u(x, 0 ) = (x )
su u(x, 0) = uxx
s 1
uxx u = (x ), u(, s) = 0
The solutions that satisfy the left and right boundary conditions are, respectively,
u1 = e s/x , u2 = e s/x
We compute the Wronskian of these solutions and then write the solution for u.
s/x
r
e s/x e s
W = p
= 2
s/ e s/x s/ e s/x
p
1 e s/x< e s/x>
u =
2 s
p
1
u = e s/|x|
2 s
ea/t
r
2as
L1 e = .
s t
We use this result to do the inverse Laplace transform.
1 2
u(x, t) = e(x) /(4t)
2 t
Solution 45.11
Now we have an ordinary differential equation Green function problem for G. We have written
the causality condition, the Green function is zero for t < , in terms of initial conditions. The
homogeneous solutions of the ordinary differential equation are
{cos(ct), sin(ct)}.
1224
It will be handy to use the fundamental set of solutions at t = :
1
cos(c(t )), sin(c(t )) .
c
G(, 0; , + ) = 0, Gt (, 0; , + ) = F[(x )]
We write the solution for G and invert using the convolution theorem.
1
G = F[(x )]H(t ) sin(c(t ))
c
h i
G = H(t )F[(x )]F H(c(t ) |x|)
Z c
1
G = H(t ) (y )H(c(t ) |x y|) dy
c 2
1
G = H(t )H(c(t ) |x |)
2c
1
G = H(c(t ) |x |)
2c
The Green function for = = 0 and c = 1 is plotted in Figure 45.3 on the domain x (1..1),
1
t (0..1). The Green function is a displacement of height 2c that propagates out from the point
x = in both directions with speed c. The Green function shows the range of influence of a
disturbance at the point x = and time t = . The disturbance influences the solution for all
ct < x < + ct and t > .
1
0.8
0.6
t
0.4
0.2
0.4
0.2
-1 -0.5 0 0.5 0
1
x
1225
Solution 45.12
1. We expand the Green function in eigenfunctions in x.
X (2n 1)x
G(x; ) = an (y) sin
n=1
2L
2 !r
X (2n 1) 2 (2n 1)x
a00n (y) an (y) sin
n=1
2L L 2L
r r
X 2 (2n 1) 2 (2n 1)x
= (y ) sin sin
n=1
L 2L L 2L
2 r
(2n 1) 2 (2n 1)
a00n (y) an (y) = sin (y )
2L L 2L
From the boundary conditions at y = 0 and y = H, we obtain boundary conditions for the
an (y).
a0n (0) = a0n (H) = 0.
The solutions that satisfy the left and right boundary conditions are
(2n 1)y (2n 1)(H y)
an1 = cosh , an2 = cosh .
2L 2L
2 2L (2n 1) (2n 1)y<
an (y) = csch cosh
(2n 1) 2 2L
(2n 1)(H y> ) (2n 1)
cosh sin .
2L 2L
1226
2. We seek a solution of the form
X 2 mx ny
G(x; ) = amn (z) sin sin .
m=1
LH L H
n=1
X m 2 n 2 2 mx ny
a00mn (z) + amn (z) sin sin
m=1
L H LH L H
n=1
X 2 m n 2 mx ny
= (z ) sin sin sin sin
m=1
LH L H LH L H
n=1
m 2 n 2 2 m n
a00mn (z) + amn (z) = sin sin (z )
L H LH L H
From the boundary conditions on G, we obtain boundary conditions for the amn .
amn (0) = amn (W ) = 0
The solutions that satisfy the left and right boundary conditions are
r ! r !
m 2 n 2 m 2 n 2
amn1 = sinh + z , amn2 = sinh + (W z) .
L H L H
The Wronskian of these solutions is
r r !
m 2 n 2 m 2 n 2
W = + sinh + W .
L H L H
Thus the solution for amn (z) is
2 m n
amn (z) = sin sin
LH L H
q q
m 2 n 2 m 2 n 2
sinh L + H z< sinh L + H (W z> )
q q
m 2 n 2 m 2 n 2
L + H sinh L + H W
2 m n
amn (z) = csch (mn W ) sin sin
mn LH L H
sinh (mn z< ) sinh (mn (W z> )) ,
where r
m 2 n 2
mn = + .
L H
This determines the Green function.
4 X 1 m mx
G(x; ) = csch (mn W ) sin sin
LH m=1 mn L L
n=1
n ny
sin sin sinh (mn z< ) sinh (mn (W z> ))
H H
1227
3. First we write the problem in circular coordinates.
2 G = (x )
1 1 1
Grr + Gr + 2 G = (r )( ),
r r r
G(r, 0; , ) = G(r, ; , ) = G(0, ; , ) = G(a, ; , ) = 0
Because the Green function vanishes at = 0 and = we expand it in a series of the form
X
G= gn (r) sin(n).
n=1
gn (0) = gn (a) = 0
The solutions that satisfy the left and right boundary conditions are
r n a n
gn1 = rn , gn2 = .
a r
The Wronskian of these solutions is
2nan
W = .
r
Thus the solution for gn (r) is
n
n r> n a
2 r< a r>
gn (r) = sin(n) 2nan
1 r n r n a n
< >
gn (r) = sin(n) .
n a a r>
1 1 1
Grr + Gr + 2 G = (r )( ),
r r r
G(r, 0; , ) = G(r, /2; , ) = G(0, ; , ) = Gr (a, ; , ) = 0
Because the Green function vanishes at = 0 and = /2 we expand it in a series of the form
X
G= gn (r) sin(2n).
n=1
1228
We substitute the series into the differential equation.
4n2
X
00 1 0 1 X 4
gn (r) + gn (r) 2 gn (r) sin(2n) = (r ) sin(2n) sin(2n)
n=1
r r r n=1
1 4n2 4
gn00 (r) + gn0 (r) 2 gn (r) = sin(2n)(r )
r r r
From the boundary conditions on G, we obtain boundary conditions for the gn .
The solutions that satisfy the left and right boundary conditions are
r 2n a 2n
gn1 = r2n , gn2 = + .
a r
The Wronskian of these solutions is
4na2n
W = .
r
Thus the solution for gn (r) is
2n
2n r> 2n a
r< a + r>
4
gn (r) = sin(2n) 2n
4na
r 2n r 2n 2n !
1 < > a
gn (r) = sin(2n) +
n a a r>
Solution 45.13
1. The set
(2m 1)x
{Xn } = sin
2L m=1
are eigenfunctions of 2 and satisfy the boundary conditions Xn (0) = Xn0 (L) = 0. The set
n ny o
{Yn } = cos
H n=0
are eigenfunctions of 2 and satisfy the boundary conditions Yn0 (0) = Yn0 (H) = 0. The set
(2m 1)x
ny
sin cos
2L H m=1,n=0
are eigenfunctions of 2 and satisfy the boundary conditions of this problem. We expand the
Green function in a series of these eigenfunctions.
r X
X 2 (2m 1)x 2 (2m 1)x ny
G= gm0 sin + gmn sin cos
m=1
LH 2L m=1
LH 2L H
n=1
G = (x )(y )
1229
2 r
X (2m 1) 2 (2m 1)x
gm0 sin
m=1
2L LH 2L
2 !
X (2m 1) ny 2 2 (2m 1)x ny
gmn + sin cos
m=1
2L H LH 2L H
n=1
r r
X2 (2m 1) 2 (2m 1)x
= sin sin
m=1
LH 2L LH 2L
X 2 (2m 1) n 2 (2m 1)x ny
+ sin cos sin cos
m=1
LH 2L H LH 2L H
n=1
2. Note that (r )
8 kx my nz
+
sin , sin , sin : k, m, n Z
LHW L H W
G = (x )(y )(z )
2 !r
X k m 2 n 2
8 kx my nz
gkmn + + sin sin sin
L H W LHW L H W
k,m,n=1
r
X 8 k m n
= sin sin sin
LHW L H W
k,m,n=1
r
8 kx my nz
sin sin sin
LHW L H W
1230
3. The Green function problem is
1 1 1
G Grr + Gr + 2 G = (r )( ).
r r r
We seek a set of functions {n ()Rnm (r)} which are orthogonal and complete on (0 . . . a)
(0 . . . ) and which are eigenfunctions of the laplacian. For the n we choose eigenfunctions
2
of 2.
00 = 2 , (0) = () = 0
n = n, n = sin(n), n Z+
1 1
(Rn )rr + (Rn )r + 2 (Rn ) = 2 Rn
r r
1 n2
R00 n + R0 n 2 Rn = 2 Rn
r 2
r
1 n
R00 + R0 + 2 2 R = 0, R(0) = R(a) = 0
r r
1 1 1
Grr + Gr + 2 G = (r )( )
r r r
1231
2
X jn,m 2 jn,m r
gnm Jn sin(n)
n,m=1
a a|Jn+1 (jn,m )| a
X 2 jn,m 2 jn,m r
= Jn sin(n) Jn sin(n)
n,m=1
a|Jn+1 (jn,m )| a a|Jn+1 (jn,m )| a
00 = 2 , (0) = (/2) = 0
n = 2n, n = sin(2n), n Z+
the solution that satisfies the left boundary condition is R = cJ2n (r). We use the right
boundary condition to determine the eigenvalues.
j 0 2n,m
0
j 2n,m r
m = , Rnm = J2n , m, n Z+
a a
1232
to make the functions orthonormal.
2j 0 2n,m
0
j 2n,m r
+
sin(2n)J : m, n Z
a j 0 2
q 2n
2 0 a
2n,m 4n |J2n (j 2n,m )|
0 2
j 2n,m 2j 0 2n,m j 0 2n,m r
X
gnm q J2n sin(2n)
a a j02 2 0 a
n,m=1 2n,m 4n |J2n (j 2n,m )|
2j 0 2n,m j 0 2n,m
X
= q J2n sin(2n)
n,m=1 a j 0 22n,m 4n2 |J2n (j 0 2n,m )| a
2j 0 2n,m j 0 2n,m r
q J2n sin(2n)
a j 0 22n,m 4n2 |J2n (j 0 2n,m )| a
Solution 45.14
We start with the equation
2 G = (x )(y ).
We do an odd reflection across the y axis so that G(0, y; , ) = 0.
2 G = (x )(y ) (x + )(y )
1 1
ln (x )2 + (y )2 ln (x + )2 + (y )2
G=
4 4
1 1
ln (x )2 + (y + )2 ln (x + )2 + (y + )2
+
4 4
!
1 (x )2 + (y )2 (x )2 + (y + )2
G= ln
4 ((x + )2 + (y )2 ) ((x + )2 + (y + )2 )
1233
Now we solve the boundary value problem.
Z Z
G u(x, y)
u(, ) = u(x, y) G dS + Gu dV
S n n V
Z 0 Z
u(, ) = u(0, y)(Gx (0, y; , )) dy + G(x, 0; , )(uy (x, 0)) dx
0
Z Z
u(, ) = g(y)Gx (0, y; , ) dy + G(x, 0; , )h(x) dx
0 0
Z
(x )2 + 2
Z
1 1 1
u(, ) = + g(y) dy + ln h(x) dx
0 2 + (y )2 2 + (y + )2 2 0 (x + )2 + 2
x
Z
(x )2 + y 2
Z
1 1 1
u(x, y) = + g() d + ln h() d
0 x2 + (y )2 x2 + (y + )2 2 0 (x + )2 + y 2
Solution 45.15
First we find the infinite space Green function.
We write the Green function problem and the inhomogeneous differential equation for u in
terms of and .
G c2 G = (x )(t ) (45.4)
2
u c u = Q(, ) (45.5)
We take the difference of u times Equation 45.4 and G times Equation 45.5 and integrate this
1234
over the domain (0, ) (0, t+ ).
Z t+ Z Z t+ Z
uG u G c2 (uG u G) d d
(u(x )(t ) GQ) d d =
0 0 0 0
Z t+ Z Z t+ Z
2
u(x, t) = GQ d d + (uG u G) c (uG u G) d d
0 0 0 0
Z t+ Z Z Z t+
t+
u(x, t) = GQ d d + [uG u G]0 d c2 [uG u G]0 d
0 0 0 0
Z t+ Z Z Z t+
u(x, t) = GQ d d [uG u G] =0 d + c2 [uG ]=0 d
0 0 0 0
We calculate G .
1
G= (H(c(t ) |x |) H(c(t ) |x + |))
2c
1
G = ((c(t ) |x |)(1) sign(x )(1) (c(t ) |x + |)(1) sign(x + ))
2c
1
G (x, t; 0, ) = (c(t ) |x|) sign(x)
c
We are interested in x > 0.
1
G (x, t; 0, ) = (c(t ) x)
c
Now we can calculate the solution u.
Z t+
1
u(x, t) = c2 h( ) (c(t ) x) d
0 c
Z t+ x
u(x, t) = h( ) (t ) d
0 c
x
u(x, t) = h t
c
3. The boundary condition influences the solution u(x1 , t1 ) only at the point t = t1 x1 /c. The
contribution from the boundary condition u(0, t) = h(t) is a wave moving to the right with
speed c.
Solution 45.16
1235
tt c2 xx = 0, (x, 0; , ) = 0, t (x, 0; , ) = (x )
2 2
tt + c xx = 0, (x, 0; , ) = 0, t (x, 0; , ) = F[(x )]
1
= F[(x )] sin(ct)
h c i
= F[(x )]F (H(x + ct) + H(x ct))
Z c
1
= ( ) (H(x + ct) + H(x ct)) d
2 c
1
(x, t; ) = (H(x + ct) + H(x ct))
2c
Solution 45.17
Z Z Z Z
u(x, t) = G(x, t; , )f (, ) d d + g(x, t; )p() d + (x, t; )q() d
0
Z Z
1
u(x, t) = H(t )(H(x + c(t )) H(x c(t )))f (, ) d d
2c 0
1
Z Z
1
+ ((x + ct) + (x ct))p() d + (H(x + ct) + H(x ct))q() d
2 2c
Z tZ
1
u(x, t) = (H(x + c(t )) H(x c(t )))f (, ) d d
2c 0
Z x+ct
1 1
+ (p(x + ct) + p(x ct)) + q() d
2 2c xct
Z tZ x+c(t ) Z x+ct
1 1 1
u(x, t) = f (, ) d d + (p(x + ct) + p(x ct)) + q() d
2c 0 xc(t ) 2 2c xct
This solution demonstrates the domain of dependence of the solution. The first term is an integral
over the triangle domain {(, ) : 0 < < t, x c < < x + c }. The second term involves only
the points (x ct, 0). The third term is an integral on the line segment {(, 0) : x ct < < x + ct}.
In totallity, this is just the triangle domain. This is shown graphically in Figure 45.4.
x,t
Domain of
Dependence
x-ct x+ct
1236
Solution 45.18
Single Sum Representation. First we find the eigenfunctions of the homogeneous problem u
k 2 u = 0. We substitute the separation of variables, u(x, y) = X(x)Y (y) into the partial differential
equation.
X 00 Y + XY 00 k 2 XY = 0
X 00 Y 00
= k2 = 2
X Y
We have the regular Sturm-Liouville eigenvalue problem,
X 00 = 2 X, X(0) = X(a) = 0,
which has the solutions,
n nx
n = , Xn = sin , n N.
a a
We expand the solution u in a series of these eigenfunctions.
X nx
G(x, y; , ) = cn (y) sin
n=1
a
We substitute this series into the partial differential equation to find equations for the cn (y).
X n 2 nx
cn (y) + c00n (y) k 2 cn (y) sin = (x )(y )
n=1
a a
1237
Solution 45.19
We take the Fourier cosine transform in x of the partial differential equation and the boundary
condition along y = 0.
1
2 G(, ) + G(, 0) (k 2 + 2 )G(, ) = 2 cos() sin()
cos() sin()
G = 2 2
(k + 2 + 2 )
We take two inverse transforms to find the solution. For one integral representation of the Green
function we take the inverse sine transform followed by the inverse cosine transform.
sin() 1
G = cos()
(k 2 + 2 + 2 )
1 2 2
G = cos()Fs [(y )]Fc e k + y
k 2 + 2
Z
1 1 p p
G(, y) = cos() (z ) exp k 2 + 2 |y z| exp k 2 + 2 (y + z) dz
2 0 k 2 + 2
cos() p p
G(, y) = exp k 2 + 2 |y | exp k 2 + 2 (y + )
2 k 2 + 2
1 cos()
Z p p
G(x, y; , ) = exp k 2 + 2 |y | exp k 2 + 2 (y + ) d
0 k 2 + 2
For another integral representation of the Green function, we take the inverse cosine transform
followed by the inverse sine transform.
cos() 1
G(, ) = sin()
(k 2 + 2 + 2 )
" #
1 k2 + 2 x
G(, ) = sin()Fc [(x )]Fc p e
k2 + 2
1
Z
1 2 2 2 2
G(x, ) = sin() (z ) p e k + |xz| + e k + (x+z) dz
2 0 k2 + 2
1 1 2 2 2 2
G(x, ) = sin() p e k + |x| + e k + (x+)
2 k 2 + 2
sin(y) sin() k2 + 2 |x| 2 2
Z
1
G(x, y; , ) = p e + e k + (x+) d
0 k2 + 2
1238
Solution 45.20
The problem is:
1 1 (r )( )
Grr + Gr + 2 G = , 0 < r < , 0 < < ,
r r r
G(r, 0, , ) = G(r, , , ) = 0,
G(0, , , ) = 0
G(r, , , ) 0 as r .
Let w = r ei and z = x + iy. We use the conformal mapping, z = w/ to map the sector to the
upper half z plane. The problem is (x, y) space is
We will solve this problem with the method of images. Note that the solution of,
satisfies the condition, G(x, 0, , ) = 0. Since the infinite space Green function for the Laplacian in
two dimensions is
1
ln (x )2 + (y )2 ,
4
the solution of this problem is,
1 1
ln (x )2 + (y )2 ln (x )2 + (y + )2
G(x, y, , ) =
4 4
1 (x )2 + (y )2
= ln .
4 (x )2 + (y + )2
z = w/ = (r ei )/
x + iy = r/ (cos(/) + i sin(/))
x = r/ cos(/), y = r/ sin(/)
1239
Now recall that the solution of
u = f (x),
subject to the boundary condition,
u(x) = g(x),
is Z Z I
u(x) = f ()G(x; ) dA + g() G(x; ) n ds .
The normal directions along the lower and upper edges of the sector are and , respectively. The
gradient in polar coordinates is
= + .
We only need to compute the component of the gradient of G. This is
Along = 0, this is
1 sin(/)
G (r, , , 0) = .
2 cosh
ln r cos(/)
Along = , this is
1 sin(/)
G (r, , , ) = .
2 cosh ln r + cos(/)
Solution 45.21
First consider the Green function for
Gt = Gxx , G(x, 0; ) = (x ).
1240
The Green function is a solution of the homogeneous heat equation for the initial condition of a
unit amount of heat concentrated at the point x = . You can verify that the Green function is a
solution of the heat equation for t > 0 and that it has the property:
Z
G(x, t; ) dx = 1, for t > 0.
This property demonstrates that the total amount of heat is the constant 1. At time t = 0 the heat
is concentrated at the point x = . As time increases, the heat diffuses out from this point.
The solution for u(x, t) is the linear combination of the Green functions that satisfies the initial
condition u(x, 0) = f (x). This linear combination is
Z
u(x, t) = G(x, t; )f () d.
G(x, t; 1) and G(x, t; 1) are plotted in Figure 45.5 for the domain t [1/100..1/4], x [2..2] and
= 1.
2 2
1
1
0
0
0.1
-1
0.2
-2
satisfies the boundary condition G(0, t; ) = 0. We write the solution as the difference of infinite
space Green functions.
1 2 1 2
G(x, t; ) = e(x) /(4t) e(x+) /(4t)
4t 4t
1 (x)2 /(4t) 2
= e e(x+) /(4t)
4t
1241
1 2 2 x
G(x, t; ) = e(x + )/(4t) sinh
4t 2t
Next we consider the problem
satisfies the boundary condition Gx (0, t; ) = 0. We write the solution as the sum of infinite space
Green functions.
1 2 1 2
G(x, t; ) = e(x) /(4t) + e(x+) /(4t)
4t 4t
1 (x2 + 2 )/(4t) x
G(x, t; ) = e cosh
4t 2t
The Green functions for the two boundary conditions are shown in Figure 45.6.
2 1 2 1
1 0.8 1 0.
0 0.6 0 0.
0.05 0.4 0.05 0.4
0.1 0.2 0.1 0.2
0.15 0.15
0.2 0 0.2 0
0.25 0.25
Figure 45.6: Green functions for the boundary conditions u(0, t) = 0 and ux (0, t) = 0.
Solution 45.22
The condition that G is zero for t < makes this a causal Green function. We solve this problem
by expanding G in a series of eigenfunctions of the x variable. The coefficients in the expansion will
1242
be functions of t. First we find the eigenfunctions of x in the homogeneous problem. We substitute
the separation of variables u = X(x)T (t) into the homogeneous partial differential equation.
XT 00 = c2 X 00 T
T 00 X 00
= = 2
c2 T X
The eigenvalue problem is
X 00 = 2 X, X(0) = X 0 (L) = 0,
which has the solutions,
(2n 1) (2n 1)x
n = , Xn = sin , n N.
2L 2L
X (2n 1)x
G(x, t; , ) = gn (t) sin .
n=1
2L
We determine the coefficients by substituting the expansion into the Green function differential
equation.
We need to expand the right side of the equation in the sine series
X (2n 1)x
(x )(t ) = dn (t) sin
n=1
2L
Z L
2 (2n 1)x
dn (t) = (x )(t ) sin dx
L 0 2L
2 (2n 1)
dn (t) = sin (t )
L 2L
By equating coefficients in the sine series, we obtain ordinary differential equation Green function
problems for the gn s.
2
(2n 1)c 2 (2n 1)
gn00 (t; ) + gn (t; ) = sin (t )
2L L 2L
From the causality condition for G, we have the causality conditions for the gn s,
1243
Since the continuity and jump conditions are given at the point t = , a handy set of solutions to
use for this problem is the fundamental set of solutions at that point:
(2n 1)c(t ) 2L (2n 1)c(t )
cos , sin
2L (2n 1)c 2L
The solution that satisfies the causality condition and the continuity and jump conditions is,
4 (2n 1) (2n 1)c(t )
gn (t; ) = sin sin H(t ).
(2n 1)c 2L 2L
1 X 1 (2n 1)((x ) c(t ))
G(x, t; , ) = H(t ) sin
c n=1
2n 1 2L
(2n 1)((x ) + c(t )) (2n 1)((x + ) c(t ))
+ sin sin
2L 2L
!
(2n 1)((x + ) + c(t ))
sin
2L
ux (0, t) = ux (L, t) = 0.
First we find the eigenfunctions in x of the homogeneous problem. The eigenvalue problem is
X 00 = 2 X, X 0 (0) = X 0 (L) = 0,
0 = 0, X0 = 1,
n nx
n = , Xn = cos , n = 1, 2, . . . .
L L
The series expansion of the Green function for t > has the form,
1 X nx
G(x, t; , ) = g0 (t) + gn (t) cos .
2 n=1
L
(Note the factor of 1/2 in front of g0 (t). With this, the integral formulas for all the coefficients are
the same.) We determine the coefficients by substituting the expansion into the partial differential
equation.
1244
We expand the right side of the equation in the cosine series.
1 X nx
(x )(t ) = d0 (t) + dn (t) cos
2 n=1
L
Z L
2 nx
dn (t) = (x )(t ) cos dx
L 0 L
2 n
dn (t) = cos (t )
L L
By equating coefficients in the cosine series, we obtain ordinary differential equations for the gn .
nc 2
2 n
gn00 (t; ) + gn (t; ) = cos (t ), n = 0, 1, 2, . . .
L L L
From the causality condition for G, we have the causality condiions for the gn ,
gn (t; ) = gn0 (t; ) = 0 for t < .
The continuity and jump conditions for the gn are
2 n
+
gn ( ; ) = 0, gn0 ( + ; ) = cos .
L L
The homogeneous solutions of the ordinary differential equation for n = 0 and n > 0 are respectively,
nct nct
{1, t}, cos , sin .
L L
Since the continuity and jump conditions are given at the point t = , a handy set of solutions to
use for this problem is the fundamental set of solutions at that point:
nc(t ) L nc(t )
{1, t }, cos , sin .
L nc L
The solutions that satisfy the causality condition and the continuity and jump conditions are,
2
g0 (t) = (t )H(t ),
L
2 n nc(t )
gn (t) = cos sin H(t ).
nc L L
Substituting this into the sum yields,
!
t 2 X1 n nc(t ) nx
G(x, t; , ) = H(t ) + cos sin cos .
L c n=1 n L L L
t 1 X 1 n((x ) c(t ))
G(x, t; , ) = H(t ) + H(t ) sin
L 2c n=1
n 2L
n((x ) + c(t )) n((x + ) c(t ))
+ sin sin
2L 2L
!
n((x + ) + c(t ))
+ sin
2L
1245
Solution 45.23
First we derive Greens identity for this problem. We consider the integral of uL[v] L[u]v on the
domain 0 < x < 1, 0 < t < T .
Z T Z 1
(uL[v] L[u]v) dx dt
0 0
Z T Z 1
u(vtt c2 vxx (utt c2 uxx )v dx dt
0 0
Z T Z 1
c2 (uvx ux v), uvt ut v
, dx dt
0 0 x t
Now we can use the divergence theorem to write this as an integral along the boundary of the
domain. I
c2 (uvx ux v), uvt ut v n ds
The domain and the outward normal vectors are shown in Figure 45.7.
n=(0,1)
t=T
n=(-1,0) n=(1,0)
t=0
x=0 x=1
n=(0,-1)
Writing out the boundary integrals, Greens identity for this problem is,
Z T Z 1 Z 1
u(vtt c2 vxx ) (utt c2 uxx )v dx dt =
(uvt ut v)t=0 dx
0 0 0
Z 0 Z T Z 1
2 2
+ (uvt ut v)t=T dx c (uvx ux v)x=1 dt + c (uvx ux v)x=0 dt
1 0 T
G c2 G = (x )(t ),
G (x, t; 0, ) = G (x, t; 1, ) = 0, > 0, G(x, t; , ) = 0 for > t.
1246
Now we apply Greens identity for u = u(, ), (the solution of the wave equation), and v =
G(x, t; , ), (the Green function), and integrate in the (, ) variables. The left side of Greens
identity becomes:
Z TZ 1
u(G c2 G ) (u c2 u )G d d
0 0
Z T Z 1
(u((x )(t )) (0)G) d d
0 0
u(x, t).
Since the normal derivative of u and G vanish on the sides of the domain, the integrals along = 0
and = 1 in Greens identity vanish. If we take T > t, then G is zero for = T and the integral
along = T vanishes. The one remaining integral is
Z 1
(u(, 0)G (x, t; , 0) u (, 0)G(x, t; , 0) d.
0
Thus Greens identity allows us to write the solution of the inhomogeneous problem.
Z 1
u(x, t) = (u (, 0)G(x, t; , 0) u(, 0)G (x, t; , 0)) d.
0
Now we substitute in the Green function that we found in the previous exercise. The Green function
and its derivative are,
X 2
G(x, t; , 0) = t + cos(n) sin(nct) cos(nx),
n=1
nc
X
G (x, t; , 0) = 1 2 cos(n) cos(nct) cos(nx).
n=1
1247
Note that the summand is nonzero only for even terms.
53 3 X 1
u(3/4, 7/2) = cos(3n) cos(14n)
15 16 4 n=1 n4
53 3 X (1)n
=
15 16 4 n=1 n4
53 3 7 4
=
15 16 4 720
12727
u(3/4, 7/2) =
3840
1248
Chapter 46
Conformal Mapping
1249
46.1 Exercises
Exercise 46.1
Use an appropriate conformal map to find a non-trivial solution to Laplaces equation
uxx + uyy = 0,
on the wedge bounded by the x-axis and the line y = x with boundary conditions:
1. u = 0 on both sides.
du
2. = 0 on both sides (where n is the inward normal to the boundary).
dn
Exercise 46.2
Consider
uxx + uyy = (x )(y ),
on the quarter plane x, y > 0 with u(x, 0) = u(0, y) = 0 (and , > 0).
1. Use image sources to find u(x, y; , ).
2. Compare this to the solution which would be obtained using conformal maps and the Green
function for the upper half plane.
3. Finally use this idea and conformal mapping to discover how image sources are arrayed when
the domain is now the wedge bounded by the x-axis and the line y = x (with u = 0 on both
sides).
Exercise 46.3
= + is an analytic function of z, = (z). We assume that 0 (z) is nonzero on the domain of
interest. u(x, y) is an arbitrary smooth function of x and y. When expressed in terms of and ,
u(x, y) = (, ). In Exercise 8.15 we showed that
2 2
2 2 d u 2u
+ 2 = + 2 .
2 dz x2 y
1. Show that if u satisfies Laplaces equation in the z-plane,
uxx + uyy = 0,
then satisfies Laplaces equation in the -plane,
+ = 0,
1250
4. Show that if in the z-plane, u satisfies the Green function problem,
+ = ( 0 )( 0 ).
Exercise 46.4
A semi-circular rod of infinite extent is maintained at temperature T = 0 on the flat side and at
T = 1 on the curved surface:
x2 + y 2 = 1, y > 0.
Use the conformal mapping
1+z
w = + = , z = x + y,
1z
to formulate the problem in terms of and . Solve the problem in terms of these variables. This
problem is solved with an eigenfunction expansion in Exercise ??. Verify that the two solutions
agree.
Exercise 46.5
Consider Laplaces equation on the domain < x < , 0 < y < , subject to the mixed boundary
conditions,
u = 1 on y = 0, x > 0,
u = 0 on y = , x > 0,
uy = 0 on y = 0, y = , x < 0.
Because of the mixed boundary conditions, (u and uy are given on separate parts of the same
boundary), this problem cannot be solved with separation of variables. Verify that the conformal
map,
= cosh1 (ez ),
with z = x + y, = + maps the infinite interval into the semi-infinite interval, > 0, 0 < < .
Solve Laplaces equation with the appropriate boundary conditions in the plane by inspection.
Write the solution u in terms of x and y.
1251
46.2 Hints
Hint 46.1
Hint 46.2
Hint 46.3
Hint 46.4
Show that w = (1 + z)/(1 z) maps the semi-disc, 0 < r < 1, 0 < < to the first quadrant of the
w plane. Solve the problem for v(, ) by taking Fourier sine transforms in and .
To show that the solution for v(, ) is equivalent to the series expression for u(r, ), first find an
analytic function g(w) of which v(, ) is the imaginary part. Change variables to z to obtain the
analytic function f (z) = g(w). Expand f (z) in a Taylor series and take the imaginary part to show
the equivalence of the solutions.
Hint 46.5
To see how the boundary is mapped, consider the map,
z = log(cosh ).
1252
46.3 Solutions
Solution 46.1
We map the wedge to the upper half plane with the conformal transformation = z 4 .
1. We map the wedge to the upper half plane with the conformal transformation = z 4 . The
new problem is
u + u = 0, u(, 0) = 0.
This has the solution u = . We transform this problem back to the wedge.
u(x, y) = = z 4
2. We dont need to use conformal mapping to solve the problem with Neumman boundary
conditions. u = c is a solution to
du
uxx + uyy = 0, =0
dn
on any domain.
Solution 46.2
1. We add image sources to satisfy the boundary conditions.
1 p p
u= ln (x )2 + (y )2 ln (x + )2 + (y )2
2 p p
ln (x )2 + (y + )2 + ln (x + )2 + (y + )2
!
1 (x )2 + (y )2 (x + )2 + (y + )2
u= ln
4 ((x + )2 + (y )2 ) ((x )2 + (y + )2 )
c = z2, c = a + b.
2 2
a=x y , b = 2xy
1253
We transform the problem to the upper half plane, solve the problem there, and then transform
back to the first quadrant.
3. First consider
u = (x )(y ), u(x, 0) = u(x, x) = 0.
Enforcing the boundary conditions will require 7 image sources obtained from 4 odd reflections.
Refer to Figure 46.1 to see the reflections pictorially. First we do a negative reflection across the
line y = x, which adds a negative image source at the point (, ) This enforces the boundary
condition along y = x.
Now we take the negative image of the reflection of these two sources across the line y = 0 to
enforce the boundary condition there.
The point sources are no longer odd symmetric about y = x. We add two more image sources
to enforce that boundary condition.
Now sources are no longer odd symmetric about y = 0. Finally we add two more image sources
to enforce that boundary condition. Now the sources are odd symmetric about both y = x
and y = 0.
Solution 46.3
2 2
2 2 d u 2u
+ = + .
2 2 dz x2 y 2
1254
Figure 46.1: Odd reflections to enforce the boundary conditions.
1.
uxx + uyy = 0
2
d
( + ) = 0
dz
+ = 0
2.
uxx + uyy = u
2
d
( + ) =
dz
2
dz
+ =
d
3.
uxx + uyy = f (x, y)
2
d
( + ) = (, )
dz
2
dz
+ = (, )
d
1255
2
Next we show that |dz/d| has the same value as the Jacobian.
2
dz
= (x + y )(x y ) = x2 + y2
d
+ = ( 0 )( 0 )
Solution 46.4
The mapping,
1+z
w= ,
1z
maps the unit semi-disc to the first quadrant of the complex plane.
We write the mapping in terms of r and .
1 + r e 1 r2 + 2r sin
+ =
=
1re 1 + r2 2r cos
1 r2
=
1 + r2 2r cos
2r sin
=
1 + r2 2r cos
Consider a semi-circle of radius r. The image of this under the conformal mapping is a semi-circle
2r 1+r 2
of radius 1r 2 and center 1r 2 in the first quadrant of the w plane. This semi-circle intersects the
axis at 1r 1+r
1+r and 1r . As r ranges from zero to one, these semi-circles cover the first quadrant of
the w plane. (See Figure 46.2.)
1 5
4
3
2
1
-1 1 1 2 3 4 5
1+z
Figure 46.2: The conformal map, w = 1z .
We also note how the boundary of the semi-disc is mapped to the boundary of the first quadrant
of the w plane. The line segment = 0 is mapped to the real axis > 1. The line segment = is
mapped to the real axis 0 < < 1. Finally, the semi-circle r = 1 is mapped to the positive imaginary
axis.
The problem for v(, ) is,
v + v = 0, > 0, > 0,
v(, 0) = 0, v(0, ) = 1.
1256
We will solve this problem with the Fourier sine transform. We take the Fourier sine transform of
the partial differential equation, first in and then in .
2 v(, ) + v(0, ) + v(, ) = 0, v(, 0) = 0
2 v(, ) + + v(, ) = 0, v(, 0) = 0
2 ) + v(, 0) = 0
v(, ) + 2 2 v(,
v(, ) = 2
(2 + 2 )
Now we utilize the Fourier sine transform pair,
/
Fs ecx =
,
2 + c2
to take the inverse sine transform in .
1
v(, ) = e
With the Fourier sine transform pair,
h x i 1
Fs 2 arctan = ec ,
c
we take the inverse sine transform in to obtain the solution.
2
v(, ) = arctan
Since v is harmonic, it is the imaginary part of an analytic function g(w). By inspection, we see
that this function is
2
g(w) = log(w).
We change variables to z, f (z) = g(w).
2 1+z
f (z) = log
1z
We expand f (z) in a Taylor series about z = 0,
4 X zn
f (z) = ,
n=1 n
oddn
This demonstrates that the solutions obtained with conformal mapping and with an eigenfunction
expansion in Exercise ?? agree.
1257
Solution 46.5
Instead of working with the conformal map from the z plane to the plane,
= cosh1 (ez ),
z = log(cosh ),
which maps the semi-infinite strip to the infinite one. We determine how the boundary of the domain
is mapped so that we know the appropriate boundary conditions for the semi-infinite strip domain.
A { : > 0, = 0}
7 {log(cosh()) : > 0} = {z : x > 0, y = 0}
B { : > 0, = } 7 {log( cosh()) : > 0} = {z : x > 0, y = }
C { : = 0, 0 < < /2} 7 {log(cos()) : 0 < < /2} = {z : x < 0, y = 0}
D { : = 0, /2 < < } 7 {log(cos()) : /2 < < } = {z : x < 0, y = }
From the mapping of the boundary, we see that the solution v(, ) = u(x, y), is 1 on the bottom of
the semi-infinite strip, 0 on the top. The normal derivative of v vanishes on the vertical boundary.
See Figure 46.3.
y
B D B
D z=log(cosh( ))
C x
A C A
y
v=0 uy=0 u=0
v=0
x
v=1 uy=0 u=1
1258
where z = x + y. We will find the imaginary part of cosh1 (ez ) in order to write this explicitly in
terms of x and y. Recall that we can write the cosh1 in terms of the logarithm.
p
cosh1 (w) = log w + w2 1
p
cosh1 (ez ) = log ez + e2z 1
p
= log ez 1 + 1 e2z
p
= z + log 1 + 1 e2z
Now we need to find the imaginary part. Well work from the inside out. First recall,
r
p y p y
x + y = x2 + y 2 exp tan1 = 4 x2 + y 2 exp tan1 ,
x 2 x
so that we can write the innermost factor as,
p p
1 e2z = 1 e2x cos(2y) + e2x sin(2y)
2x
p
4 2x 2 2x 2
1 e sin(2y)
= (1 e cos(2y)) + (e sin(2y)) exp tan
2 1 e2x cos(2y)
p
4 2x 4x
1 sin(2y)
= 1 2e cos(2y) + e exp tan
2 e2x cos(2y)
1259
1260
Chapter 47
Non-Cartesian Coordinates
x = r cos sin
y = r sin sin
z = r cos .
The Jacobian is
cos sin r sin sin r cos cos
sin sin r cos sin r sin cos
cos 0 r sin
cos sin sin cos cos
= r2 sin sin sin cos
sin cos
cos 0 sin
= r sin ( cos2 sin2 sin2 cos2 cos2 cos2 sin2 sin2 )
2
1 2u
1 u
r + = 0, 0r1
r r r r2 2
1. u(1, ) = f ()
2. ur (1, ) = g().
1261
We separate variables with u(r, ) = R(r)T ().
1 0 1
(R T + rR00 T ) + 2 RT 00 = 0
r r
R00 R0 T 00
r2 +r = =
R R T
Thus we have the two ordinary differential equations
(I chose T0 = 1/2 so that all the eigenfunctions have the same norm.)
R = c1 + c2 log r.
R0 = 1.
R = c1 rn + c2 rn .
Rn = r n .
a0 X n
u(r, ) = + r [an cos(n) + bn sin(n)] .
2 n=1
1262
For the boundary condition ur (1, ) = g() we have the equation
X
g() = n [an cos(n) + bn sin(n)] .
n=1
1 2u
2 1 u
u= r + = 0, 0 r < a, < ,
r r r r2 2
So far this problem only has one boundary condition. By requiring that the solution be finite,
we get the boundary condition
|u(0, )| < .
By specifying that the solution be C 1 , (continuous and continuous first derivative) we obtain
u u
u(r, ) = u(r, ) and (r, ) = (r, ).
We will use the method of separation of variables. We seek solutions of the form
u(r, ) = R(r)().
2 u 1 u 1 2u
+ + =0
r2 r r r2 2
1 1
R00 + R0 = 2 R00
r r
r2 R00 rR0 00
+ = =
R R
Now we have the boundary value problem for ,
00 () + () = 0, < ,
subject to
() = () and 0 () = 0 ()
1263
< 0. No linear combination of the solutions, = exp( ), exp( ), can satisfy the
boundary conditions. Thus there are no negative eigenvalues.
0 = 0, A0 = 1.
> 0. The general solution is = a cos( ) + b sin( ). Applying the boundary conditions
yields the eigenvalues
n = n2 , n = 1, 2, 3, . . .
with the associated eigenfunctions
1
R00 = R0
r
a
R0 =
r
R = a log r + b
Requiring that the solution be bounded at r = 0 yields (to within a constant multiple)
R0 = 1.
For n = n2 , n 1, we have
r2 R00 + rR0 n2 R = 0
( 1) + n2 = 0
= n
R = arn + brn .
requiring that the solution be bounded at r = 0 we obtain (to within a constant multiple)
Rn = r n
The general solution to the partial differential equation is a linear combination of the eigenfunc-
tions
X
u(r, ) = c0 + [cn rn cos n + dn rn sin n] .
n=1
1264
We note that the eigenfunctions 1, cos n, and sin n are orthogonal on . Integrating
the boundary condition from to yields
Z Z
2
d = c0 d
2
c0 = .
3
Multiplying the boundary condition by cos m and integrating gives
Z Z
2 m
cos m d = cm a cos2 m d
(1)m 8
cm = .
m2 am
We multiply by sin m and integrate to get
Z Z
2
sin m d = dm am
sin2 m d
dm = 0
1265
1266
Part VI
Calculus of Variations
1267
Chapter 48
Calculus of Variations
1269
48.1 Exercises
Exercise 48.1 R
Discuss the problem of minimizing 0 ((y 0 )4 6(y 0 )2 ) dx, y(0) = 0, y() = . Consider both C 1 [0, ]
and Cp1 [0, ], and comment (with reasons) on whether your answers are weak or strong minima.
Exercise 48.2
Consider
Rx
1. x01 (a(y 0 )2 + byy 0 + cy 2 ) dx, y(x0 ) = y0 , y(x1 ) = y1 , a 6= 0,
Rx
2. x01 (y 0 )3 dx, y(x0 ) = y0 , y(x1 ) = y1 .
Can these functionals have broken extremals, and if so, find them.
Exercise 48.3
Discuss finding a weak extremum for the following:
R1
1. 0 (y 00 )2 2xy dx, y(0) = y 0 (0) = 0, y(1) = 1
120
R1
2. 0 12 (y 0 )2 + yy 0 + y 0 + y dx
Rb
3. a (y 2 + 2xyy 0 ) dx, y(a) = A, y(b) = B
R1
4. 0 (xy + y 2 2y 2 y 0 ) dx, y(0) = 1, y(1) = 2
Exercise 48.4
Find the natural boundary conditions associated with the following functionals:
RR
1. D F (x, y, u, ux , uy ) dx dy
2. D p(x, y)(u2x + u2y ) q(x, y)u2 dx dy + (x, y)u2 ds
RR R
Here D represents a closed boundary domain with boundary , and ds is the arc-length differential.
p and q are known in D, and is known on .
Exercise 48.5
The equations for water waves with free surface y = h(x, t) and bottom y = 0 are
where the fluid motion is described by (x, y, t) and g is the acceleration of gravity. Show that all
these equations may be obtained by varying the functions (x, y, t) and h(x, t) in the variational
principle
ZZ Z h(x,t) !
1 2 1 2
t + x + y + gy dy dx dt = 0,
R 0 2 2
where R is an arbitrary region in the (x, t) plane.
Exercise 48.6 Rb
Extremize the functional a F (x, y, y 0 ) dx, y(a) = A, y(b) = B given that the admissible curves can
not penetrate the interior of a given region R in the (x, y) plane. Apply your results to find the
R 10
curves which extremize 0 (y 0 )3 dx, y(0) = 0, y(10) = 0 given that the admissible curves can not
penetrate the interior of the circle (x 5)2 + y 2 = 9.
1270
Exercise 48.7 R p
Consider the functional y ds where ds is the arc-length differential (ds = (dx)2 + (dy)2 ). Find
the curve or curves from a given vertical line to a given fixed point B = (x1 , y1 ) which minimize this
functional. Consider both the classes C 1 and Cp1 .
Exercise 48.8
A perfectly flexible uniform rope of length L hangs in equilibrium with one end fixed at (x1 , y1 ) so
that it passes over a frictionless pin at (x2 , y2 ). What is the position of the free end of the rope?
Exercise 48.9
The drag on a supersonic airfoil of chord c and shape y = y(x) is proportional to
Z c 2
dy
D= dx.
0 dx
Find the shape for minimum drag if the moment of inertia of the contour with respect to the x-axis
is specified; that is, find the shape for minimum drag if
Z c
y 2 dx = A, y(0) = y(c) = 0, (c, A given).
0
Exercise 48.10
The deflection y of a beam executing free (small) vibrations of frequency satisfies the differential
equation
d2
dy
EI 2 y = 0,
dx2 dx
where EI is the flexural rigidity and is the linear mass density. Show that the deflection modes
are extremals of the problem
RL !
2 0
EI(y 00 )2 dx
RL = 0, (L = length of beam)
0
y 2 dx
when appropriate homogeneous end conditions are prescribed. Show that stationary values of the
ratio are the squares of the natural frequencies.
Exercise 48.11
A boatman wishes to steer his boat so as to minimize the transit time required to cross a river of
width l. The path of the boat is given parametrically by
x = X(t), y = Y (t),
for 0 t T . The river has no cross currents, so the current velocity is directed downstream in
the y-direction. v0 is the constant boat speed relative to the surrounding water, and w = w(x, y, t)
denotes the downstream river current at point (x, y) at time t. Then,
where (t) is the steering angle of the boat at time t. Find the steering control function (t) and
the final time T that will transfer the boat from the initial state (X(0), Y (0)) = (0, 0) to the final
state at X(t) = l in such a way as to minimize T .
Exercise 48.12
Two particles of equal mass m are connected by an inextensible string which passes through a hole in
p
a smooth horizontal table. The first particle is on the table moving with angular velocity = g/
in a circular path, of radius , around the hole. The second particle is suspended vertically and is
in equilibrium. At time t = 0, the suspended mass is pulled downward a short distance and released
while the first mass continues to rotate.
1271
1. If x represents the distance of the second mass below its equilibrium at time t and represents
the angular position of the first particle at time t, show that the Lagrangian is given by
2 1 2 2
L = m x + ( x) + gx
2
2. In the case where the displacement of the suspended mass from equilibrium is small, show that
the suspended mass performs small vertical oscillations and find the period of these oscillations.
Exercise 48.13
A rocket is propelled vertically upward so as to reach a prescribed height h in minimum time while
using a given fixed quantity of fuel. The vertical distance x(t) above the surface satisfies,
mx = mg + mU (t), x(0) = 0,
(x)(0) = 0,
where U (t) is the acceleration provided by engine thrust. We impose the terminal constraint x(T ) =
h, and we wish to find the particular thrust function U (t) which will minimize T assuming that the
total thrust of the rocket engine over the entire thrust time is limited by the condition,
Z T
U 2 (t) dt = k 2 .
0
Here k is a given positive constant which measures the total amount of fuel available.
Exercise 48.14
A space vehicle moves along a straight path in free space. x(t) is the distance to its docking pad,
and a, b are its position and speed at time t = 0. The equation of motion is
where the control function V (t) is related to the rocket acceleration U (t) by U = M sin V , M = const.
We wish to dock the vehicle in minimum time; that is, we seek a thrust function U (t) which will
minimize the final time T while bringing the vehicle to rest at the origin with x(T ) = 0, x(T ) = 0.
Find U (t), and in the (x, x)-plane plot the corresponding trajectory which transfers the state of the
system from (a, b) to (0, 0). Account for all values of a and b.
Exercise 48.15 Rm p
Find a minimum for the functional I(y) = 0 y + h 1 + (y 0 )2 dx in which h > 0, y(0) = 0,
y(m) = M > h. Discuss the nature of the minimum, (i.e., weak, strong, . . . ).
Exercise 48.16 R p
Show that for the functional n(x, y) 1 + (y 0 )2 dx, where n(x, y) 0 in some domain D, the
Weierstrass E function E(x, y, q, y 0 ) is non-negative for arbitrary finite p and y 0 at any point of D.
What is the implication of this for Fermats Principle?
Exercise 48.17 2
Consider the integral 1+y
R
(y 0 )2 dx between fixed limits. Find the extremals, (hyperbolic sines), and
discuss the Jacobi, Legendre, and Weierstrass conditions and their implications regarding weak and
strong extrema. Also consider the value of the integral on any extremal compared with its value on
the illustrated strong variation. Comment!
Pi Qi are vertical segments, and the lines Qi Pi+1 are tangent to the extremal at Pi+1 .
Exercise 48.18
Rx
Consider I = x01 y 0 (1 + x2 y 0 ) dx, y(x0 ) = y0 , y(x1 ) = y1 . Can you find continuous curves which will
1272
minimize I if
(i) x0 = 1, y0 = 1, x1 = 2, y1 = 4,
(ii) x0 = 1, y0 = 3, x1 = 2, y1 = 5,
(iii) x0 = 1, y0 = 1, x1 = 2, y1 = 1.
Exercise 48.19
Starting from ZZ Z
(Qx Py ) dx dy = (P dx + Q dy)
D
prove that
ZZ ZZ Z
(a) xx dx dy = xx dx dy + (x x ) dy,
Z ZD Z ZD Z
(b) yy dx dy = yy dx dy
(y y ) dx,
D D
ZZ ZZ Z Z
1 1
(c) xy dx dy = xy dx dy (x x ) dx + (y y ) dy.
D D 2 2
Then, consider
Z t1 ZZ
(uxx + uyy )2 + 2(1 )(uxx uyy u2xy ) dx dy dt.
I(u) =
t0 D
Show that
Z t1 ZZ Z t1 Z
(u)
I = (4 u)u dx dy dt + P (u)u + M (u) ds dt,
t0 D t0 n
where P and M are the expressions we derived in class for the problem of the vibrating plate.
Exercise 48.20
For the following functionals use the Rayleigh-Ritz method to find an approximate solution of the
problem of minimizing the functionals and compare your answers with the exact solutions.
Z 1
(y 0 )2 y 2 2xy dx,
y(0) = 0 = y(1).
0
y = x(1 x) (a0 + a1 x + + an xn ) ,
2
x2 1 2
Z
0 2
x(y ) y 2x2 y dx, y(1) = 0 = y(2)
1 x
Exercise 48.21
Let K(x) belong to L1 (, ) and define the operator T on L2 (, ) by
Z
T f (x) = K(x y)f (y) dy.
1273
1. Show that the spectrum of T consists of the range of the Fourier transform K of K, (that is,
the set of all values K(y) with < y < ), plus 0 if this is not already in the range. (Note:
From the assumption on K it follows that K is continuous and approaches zero at .)
2. For in the spectrum of T , show that is an eigenvalue if and only if K takes on the value
on at least some interval of positive length and that every other in the spectrum belongs to
the continuous spectrum.
3. Find an explicit representation for (T I)1 f for not in the spectrum, and verify directly
that this result agrees with that givenby the Neumann series if is large enough.
Exercise 48.22
Let U be the space of twice continuously differentiable functions f on [1, 1] satisfying f (1) =
d2
f (1) = 0, and W = C[1, 1]. Let L : U 7 W be the operator dx 2 . Call in the spectrum of L
if the following does not occur: There is a bounded linear transformation T : W 7 U such that
(L I)T f = f for all f W and T (L I)f = f for all f U . Determine the spectrum of L.
Exercise 48.23
Solve the integral equations
Z 1
x2 y y 2 (y) dy
1. (x) = x +
0
Z x
2. (x) = x + K(x, y)(y) dy
0
where (
sin(xy) for x 1 and y 1,
K(x, y) =
0 otherwise
In both cases state for which values of the solution obtained is valid.
Exercise 48.24
1. Suppose that K = L1 L2 , where L1 L2 L2 L1 = I. Show that if x is an eigenvector of
K corresponding to the eigenvalue , then L1 x is an eigenvector of K corresponding to the
eigenvalue 1, and L2 x is an eigenvector corresponding to the eigenvalue + 1.
2
d
2. Find the eigenvalues and eigenfunctions of the operator K dt + t4 in the space of functions
2
u L2 (, ). (Hint: L1 = 2t + dtd
, L2 = 2t dt
d
. et /4 is the eigenfunction corresponding
to the eigenvalue 1/2.)
Exercise 48.25
Prove that if the value of = 1 is in the residual spectrum of T , then 1 is in the discrete spectrum
of T .
Exercise 48.26
Solve
1. Z 1
u00 (t) + sin(k(s t))u(s) ds = f (t), u(0) = u0 (0) = 0.
0
2. Z
u(x) = K(x, s)u(s) ds
0
where
sin x+s
1 2
X sin nx sin ns
K(x, s) = log =
2 sin xs n
2 n=1
1274
3.
2
1 h2
Z
1
(s) = (t) dt, |h| < 1
0 2 1 2h cos(s t) + h2
4. Z
(x) = cosn (x )() d
Exercise 48.27
Let K(x, s) = 2 2 6|x s| + 3(x s)2 .
1. Find the eigenvalues and eigenfunctions of
Z 2
(x) = K(x, s)(s) ds.
0
2. Do the eigenfunctions form a complete set? If not, show that a complete set may be obtained
by adding a suitable set of solutions of
Z 2
K(x, s)(s) ds = 0.
0
Exercise 48.28
Let K(x, s) be a bounded self-adjoint kernel on the finite interval (a, b), and let T be the integral
operator on L2 (a, b) with kernel K(x, s). For a polynomial p(t) = a0 + a1 t + + an tn we define
the operator p(T ) = a0 I + a1 T + + an T n . Prove that the eigenvalues of p(T ) are exactly the
numbers p() with an eigenvalue of T .
Exercise 48.29
Show that if f (x) is continuous, the solution of
Z
(x) = f (x) + cos(2xs)(s) ds
0
is R
f (x) + 0
f (s) cos(2xs) ds
(x) = .
1 2 /4
Exercise 48.30
Consider
Lu = 0 in D, u = f on C,
where
Lu uxx + uyy + aux + buy + cu.
Here a, b and c are continuous functions of (x, y) on D + C. Show that the adjoint L is given by
and that Z Z
(vLu uL v) = H(u, v), (48.1)
D C
1275
where
where u satisfies Lu = 0, (G = in D, G = 0 on C). Show that (48.2) can be put into the forms
Z
u+ (c ax by )G aGx bGy u dx dy = U (48.3)
D
and Z
u+ (aux + buy + cu)G dx dy = U, (48.4)
D
where U is the known harmonic function in D with assumes the boundary values prescribed for
u. Finally, rigorously show that the integrodifferential equation (48.4) can be solved by successive
approximations when the domain D is small enough.
Exercise 48.31
Find the eigenvalues and eigenfunctions of the following kernels on the interval [0, 1].
1.
K(x, s) = min(x, s)
2.
K(x, s) = emin(x,s)
(Hint: 00 + 0 + ex = 0 can be solved in terms of Bessel functions.)
Exercise 48.32
Use Hilbert transforms to evaluate
Z
sin(kx) sin(lx)
1. dx
x2 z 2
Z
cos(px) cos(qx)
2. dx
x2
Z
(x2 ab) sin x + (a + b)x cos x
3. dx
x(x2 + a2 )(x2 + b2 )
Exercise 48.33
Show that
Z
(1 t2 )1/2 log(1 + t)
dt = x log 2 1 + (1 x2 )1/2 arcsin(x) .
tx 2
1276
Exercise 48.34
Let C be a simple closed contour. Let g(t) be a given function and consider
Z
1 f (t) dt
= g(t0 ) (48.5)
C t t0
Note that the left side can be written as F + (t0 ) + F (t0 ). Define a function W (z) such that
W (z) = F (z) for z inside C and W (z) = F (z) for z outside C. Proceeding in this way, show that
the solution of (48.5) is given by Z
1 g(t) dt
f (t0 ) = .
C t t0
Exercise 48.35
If C is an arc with endpoints and , evaluate
Z
1 1
(i) d, where 0 < < 1
C ( )1 ( ) ( )
n
Z
1
(ii) d, where 0 < < 1, integer n 0.
C
Exercise 48.36
Solve
Z 1
(y)
dy = f (x).
1 y 2 x2
Exercise 48.37
Solve
Z 1
1 f (t)
dt = f (x), where 1 < < 1.
0 t x
Are there any solutions for > 1? (The operator on the left is self-adjoint. Its spectrum is
1 1.)
Exercise 48.38
Show that the general solution of
tan(x) 1 f (t)
Z
dt = f (x)
0 tx
is
k sin(x)
f (x) = .
(1 x)1x/ xx/
Exercise 48.39
Show that the general solution of
Z
0 f (t)
f (x) + dt = 1
C x
t
is given by
1
f (x) = + k ex ,
(k is a constant). Here C is a simple closed contour, a constant and f (x) a differentiable function
on C. Generalize the result to the case of an arbitrary function g(x) on the right side, where g(x)
is analytic inside C.
1277
Exercise 48.40
Show that the solution of
Z
1
+ P (t x) f (t) dt = g(x)
C tx
is given by Z Z
1 g( ) 1
f (t) = d g( )P ( t) d.
2 C t 2 C
Here C is a simple closed curve, and P (t) is a given entire function of t.
Exercise 48.41
Solve
Z 1 Z 3
f (t) f (t)
dt + dt = x
0 t x 2 t x
where this equation is to hold for x in either (0, 1) or (2, 3).
Exercise 48.42
Solve
Z x Z 1
f (t) f (t)
dt + A dt = 1
0 xt x tx
where A is a real positive constant. Outline briefly the appropriate method of A is a function of x.
1278
48.2 Hints
Hint 48.1
Hint 48.2
Hint 48.3
Hint 48.4
Hint 48.5
Hint 48.6
Hint 48.7
Hint 48.8
Hint 48.9
Hint 48.10
Hint 48.11
Hint 48.12
Hint 48.13
Hint 48.14
Hint 48.15
Hint 48.16
Hint 48.17
Hint 48.18
1279
Hint 48.19
Hint 48.20
Hint 48.21
Hint 48.22
Hint 48.23
Hint 48.24
Hint 48.25
Hint 48.26
Hint 48.27
Hint 48.28
Hint 48.29
Hint 48.30
Hint 48.31
Hint 48.32
Hint 48.33
Hint 48.34
Hint 48.35
Hint 48.36
1280
Hint 48.37
Hint 48.38
Hint 48.39
Hint 48.40
Hint 48.41
Hint 48.42
1281
48.3 Solutions
Solution 48.1
C 1 [0, ] Extremals
Admissible Extremal. First we consider continuously differentiable extremals. Because the
Lagrangian is a function of y 0 alone, we know that the extremals are straight lines. Thus the
admissible extremal is
y = x.
Legendre Condition.
Fy0 y0 = 12(y 0 )2 12
2 !
= 12 1
< 0 for |/| < 1
= 0 for |/| = 1
> 0 for |/| > 1
Thus we see that x may be a minimum for |/| 1 and may be a maximum for |/| 1.
Jacobi Condition. Jacobis accessory equation for this problem is
(F,y0 y0 h0 )0 = 0
2 ! !0
12 1 h0 = 0
h00 = 0
The problem h00 = 0, h(0) = 0, h(c) = 0 has only the trivial solution for c > 0. Thus we
see that there are no conjugate points and the admissible extremal satisfies the strengthened
Legendre condition.
A Weak Minimum. For |/| > 1 the admissible extremal x is a solution of the Euler
equation, and satisfies the strengthened Jacobi and Legendre conditions. Thus it is a weak
minima. (For |/| < 1 it is a weak maxima for the same reasons.)
We can find the stationary points of the excess function by examining its derivative. (Let
= /.)
2
E 0 (w) = 4w3 12w + 4 () 3 = 0
1 p 1 p
w1 = , w2 = 4 2 w3 = + 4 2
2 2
1282
The excess function evaluated at these points is
E(w1 ) = 0,
3 4
E(w2 ) = 3 62 6 3(4 2 )3/2 ,
2
3 4
E(w3 ) = 3 62 6 + 3(4 2 )3/2 .
2
E(w2 ) is negative for 1 < < 3 and E(w3 ) is negative for 3 < < 1. This implies that
the weak minimum y = x/ is not a strong local minimum for || < 3|. Since E(w1 ) = 0,
we cannot use the Weierstrass
excess function to determine if y = x/ is a strong local
minima for |/| > 3.
Cp1 [0, ] Extremals
Erdmanns Corner Conditions. Erdmanns corner conditions require that
and
F y 0 F,y0 = (y 0 )4 6(y 0 )2 y 0 (4(y 0 )3 12y 0 )
are continuous at corners. Thus the quantities
(y 0 )3 3y 0 and (y 0 )4 2(y 0 )2
0 0
are continuous. Denoting p = y and q = y+ , the first condition has the solutions
1 p
p = q, p = q 3 4 q 2 .
2
The second condition has the solutions,
p
p = q, p = 2 q2
Case 1, = 3. Notice the the Lagrangian is minimized point-wise if y 0 = 3. For
this case the unique, strong global minimum is
y = 3 sign()x.
Case 2, || < 3||. For this case there are an infinite number
of strong minima. Any
0 0
piecewise linear curve satisfying y (x) = 3 and y+ (x) = 3 and y(0) = 0, y() = is
a strong minima.
Case 3, || > 3||. First note that the extremal cannot have corners. Thus the unique
extremal is y = x. We know that this extremal is a weak local minima.
Solution 48.2
1.
Z x1
(a(y 0 )2 + byy 0 + cy 2 ) dx, y(x0 ) = y0 , y(x1 ) = y1 , a 6= 0
x0
Erdmanns First Corner Condition. Fy0 = 2ay 0 + by must be continuous at a corner. This
implies that y must be continuous, i.e., there are no corners.
1283
The functional cannot have broken extremals.
2. Z x1
(y 0 )3 dx, y(x0 ) = y0 , y(x1 ) = y1
x0
Erdmanns First Corner Condition. Fy0 = 3(y 0 )2 must be continuous at a corner. This
0 0
implies that y = y+ .
Erdmanns Second Corner Condition. F y 0 Fy0 = (y 0 )3 y 0 3(y 0 )2 = 2(y 0 )3 must be
continuous at a corner. This implies that y is continuous at a corner, i.e. there are no corners.
Solution 48.3
1.
Z 1
1
(y 00 )2 2xy dx, y(0) = y 0 (0) = 0,
y(1) =
0 120
Eulers Differential Equation. We will consider C 4 extremals which satisfy Eulers DE,
From the given boundary conditions we have y(0) = y 0 (0) = y(1) = 0. Using Eulers DE,
we have,
Z 1
J = ((Fy0 (F,y00 )0 )0 y + F,y0 y 0 + Fy00 y 00 ) dx.
0
In order that the first variation vanish, we need the natural boundary condition F,y00 (1) = 0.
For the given Lagrangian, this condition is
y 00 (1) = 0.
1284
The general solution of the differential equation is
1 5
y = c0 + c1 x + c2 x2 + c3 x3 + x .
120
Applying the boundary conditions, we see that the unique admissible extremal is
x2 3
y = (x 5x + 5).
120
This may be a weak extremum for the problem.
Legendres Condition. Since
F,y00 y00 = 2 > 0,
the strengthened Legendre condition is satisfied.
Jacobis Condition. The second variation for F (x, y, y 00 ) is
Z b
d2 J
2
= F,y00 y00 (h00 )2 + 2F,yy00 hh00 + F,yy h2 dx
d =0
a
(h00 )00 = 0
Since the boundary value problem,
has only the trivial solution for all c > 0 the strengthened Jacobi condition is satisfied.
x2 3
y = (x 5x + 5),
120
satisfies the strengthened Legendre and Jacobi conditions, we conclude that it is a weak
minimum.
2. Z 1
1 0 2
(y ) + yy 0 + y 0 + y dx
0 2
Boundary Conditions. Since no boundary conditions are specified, we have the Euler
boundary conditions,
F,y0 (0) = 0, F,y0 (1) = 0.
The derivatives of the integrand are,
F,y = y 0 + 1, F,y0 = y 0 + y + 1.
1285
must be continuous at a corner. This implies that y 0 (x) is continuous at corners, which means
that there are no corners.
Eulers Differential Equation. Eulers DE is
0
(F,y0 ) = Fy ,
y 00 + y 0 = y 0 + 1,
y 00 = 1.
The general solution is
1
y = c0 + c1 x + x2 .
2
The boundary conditions give us the constraints,
c0 + c1 + 1 = 0,
5
c0 + 2c1 + = 0.
2
The extremal that satisfies the Euler DE and the Euler BCs is
1 2
y = x 3x + 1 .
2
0
(h0 ) ((1)0 ) h = 0, h(0) = h(c) = 0,
00
h = 0, h(0) = h(c) = 0,
Since this has only trivial solutions for c > 0 we conclude that there are no conjugate points.
The extremal satisfies the strengthened Jacobi condition.
3. Z b
(y 2 + 2xyy 0 ) dx, y(a) = A, y(b) = B
a
(F,y0 )0 = Fy ,
(2xy)0 = 2y + 2xy 0 ,
2y + 2xy 0 = 2y + 2xy 0 ,
is trivial. Every C 1 function satisfies the Euler DE.
1286
Erdmanns Corner Conditions. The expressions,
are continuous at a corner. The conditions are trivial and do not restrict corners in the
extremal.
Extremal. Any piecewise smooth function that satisfies the boundary conditions y(a) = A,
y(b) = B is an admissible extremal.
An Exact Derivative. At this point we note that
Z b Z b
2 0 d
(y + 2xyy ) dx = (xy 2 ) dx
a a dx
2 b
= xy a
= bB 2 aA2 .
The integral has the same value for all piecewise smooth functions y that satisfy the boundary
conditions.
Since the integral has the same value for all piecewise smooth functions that satisfy the
boundary conditions, all such functions are weak extrema.
4. Z 1
(xy + y 2 2y 2 y 0 ) dx, y(0) = 1, y(1) = 2
0
F y 0 F,y0 = xy + y 2 2y 2 y 0 y 0 (2y 2 ) = xy + y 2
is continuous. This condition is also trivial. Thus the extremal may have corners at any point.
Eulers Differential Equation. Eulers DE is
(F,y0 )0 = F,y ,
(2y 2 )0 = x + 2y 4yy 0
x
y=
2
Extremal. There is no piecewise smooth function that satisfies Eulers differential equation
on its smooth segments and satisfies the boundary conditions y(0) = 1, y(1) = 2. We
conclude that there is no weak extremum.
Solution 48.4
1. We require that the first variation vanishes
ZZ
Fu h + Fux hx + Fuy hy dx dy = 0.
D
1287
ZZ ZZ
Fu (Fux )x (Fuy )y h dx dy + (Fux h)x + (Fuy h)y dx dy = 0.
D D
Using the Divergence theorem, we obtain,
ZZ Z
Fu (Fux )x (Fuy )y h dx dy + (Fux , Fuy ) n h ds = 0.
D
In order that the line integral vanish we have the natural boundary condition,
Fu (Fux )x (Fuy )y = 0.
pu n + u = 0 for (x, y) .
We can also denote this as
u
p + u = 0 for (x, y) .
n
Solution 48.5
First we vary .
!
ZZ Z h(x,t)
1 1
() = t + t + (x + x )2 + (y + y )2 + gy dy dx dt
R 0 2 2
!
ZZ Z h(x,t)
0
(0) = (t + x x + y y ) dy dx dt = 0
R 0
1288
Since vanishes on the boundary of R, we have
ZZ Z h(x,t)
0 (0) = [(ht x hx y )]y=h(x,t) [y ]y=0 (xx + yy ) dy dx dt = 0.
R 0
2 = 0.
ht x hx y = 0 on y = h(x, t).
Finally we have
y = 0 on y = 0.
1 1
t + 2x + 2y + gy = 0 on y = h(x, t).
2 2
Solution 48.6
The parts of the extremizing curve which lie outside the boundary of the region R must be extremals,
(i.e., solutions of Eulers equation) since if we restrict our variations to admissible curves outside
of R and its boundary, we immediately obtain Eulers equation. Therefore an extremum can be
reached only on curves consisting of arcs of extremals and parts of the boundary of region R.
Thus, our problem is to find the points of transition of the extremal to the boundary of R. Let
the boundary of R be given by (x). Consider an extremum that starts at the point (a, A), follows
an extremal to the point (x0 , (x0 )), follows the R to (x1 , (x1 )) then follows an extremal to the
point (b, B). We seek transversality conditions for the points x0 and x1 . We will extremize the
expression,
Z x0 Z x1 Z b
I(y) = F (x, y, y 0 ) dx + F (x, , 0 ) dx + F (x, y, y 0 ) dx.
a x0 x1
Let c be any point between x0 and x1 . Then extremizing I(y) is equivalent to extremizing the two
functionals, Z x0 Z c
0
I1 (y) = F (x, y, y ) dx + F (x, , 0 ) dx,
a x0
Z x1 Z b
I2 (y) = F (x, , 0 ) dx + F (x, y, y 0 ) dx,
c x1
I = 0 I1 = I2 = 0.
1289
We will extremize I1 (y) and then use the derived transversality condition on all points where the
extremals meet R. The general variation of I1 is,
Z x0
d x x
I1 (y) = Fy Fy0 dx + [Fy0 y]a0 + [(F y 0 Fy0 )x]a0
a dx
c c
+ [F0 (x)]x0 + [(F 0 F0 )x]x0 = 0
Note that x = y = 0 at x = a, c. That is, x = x0 is the only point that varies. Also note that
(x) is not independent of x. (x) 0 (x)x. At the point x0 we have y 0 (x)x.
Z x0
d
Fy0 dx + (Fy0 0 x) + ((F y 0 Fy0 )x)
I1 (y) = Fy
a dx
x0 x0
Z x0
d
dx + ((F (x, y, y 0 ) F (x, , 0 ) + (0 y 0 )Fy0 )x)
I1 (y) = Fy Fy 0 =0
a dx x0
Since I1 vanishes for those variations satisfying x0 = 0 we obtain the Euler differential equation,
d
Fy Fy0 = 0.
dx
Then we have
((F (x, y, y 0 ) F (x, , 0 ) + (0 y 0 )Fy0 )x)
=0
x0
Transversality condition. If Fy0 is not identically zero, the extremal must be tangent to
R at the points of contact.
R 10
Now we apply this result to to find the curves which extremize 0 (y 0 )3 dx, y(0) = 0, y(10) = 0
given that the admissible curves can not penetrate the interior of the circle (x 5)2 + y 2 = 9. Since
the Lagrangian is a function of y 0 alone, the extremals are straight lines.
The Erdmann corner conditions require that
are continuous at corners. This implies that y 0 is continuous. There are no corners.
We see that the extrema are
3
p
4 x, for 0 x 16
5 ,
y(x) = 9 (x 5) , for 5 x 34
2 16
5 ,
3
for 34
4 x, 5 x 10.
Note that the extremizing curves neither minimize nor maximize the integral.
1290
Solution 48.7
C1 Extremals. Without loss of generality, we take the vertical line to be the y axis. We will
p
consider x1 , y1 > 1. With ds = 1 + (y 0 )2 dx we extremize the integral,
Z x1
p
y 1 + (y 0 )2 dx.
0
Since the Lagrangian is independent of x, we know that the Euler differential equation has a first
integral.
d
Fy0 Fy = 0
dx
y 0 Fy0 y + y 00 Fy0 y0 Fy = 0
d 0
(y Fy0 F ) = 0
dx
y 0 Fy0 F = const
For the given Lagrangian, this is
y0 p
y0 y p y 1 + (y 0 )2 = const,
0
1 + (y ) 2
p
(y 0 )2 y y(1 + (y 0 )2 ) = const 1 + (y 0 )2 ,
p
y = const 1 + (y 0 )2
y = const is one solution. To find the others we solve for y 0 and then solve the differential equation.
y = a(1 + (y 0 )2 )
r
0 ya
y =
a
r
a
dx = dy
ya
p
x + b = 2 a(y a)
x2 bx b2
y= + +a
4a 2a 4a
The natural boundary condition is
0
yy
Fy0 x=0 = p
= 0,
1 + (y 0 )2 x=0
y 0 (0) = 0
The extremal that satisfies this boundary condition is
x2
y= + a.
4a
Now we apply y(x1 ) = y1 to obtain
1
q
a= 2 2
y1 y1 x1
2
for y1 x1 . The value of the integral is
Z x1 s 2
x1 (x21 + 12a2 )
x 2
x
+a 1+ dx = .
0 4a 2a 12a3/2
1291
By denoting y1 = cx1 , c 1 we have
1 p
a= cx1 x1 c2 1
2
The values of the integral for these two values of a are
+ 3c2 3c c2 1
3/2 1
2(x1 ) .
3(c c2 1)3/2
The values are equal only when c = 1. These values, (divided by x1 ), are plotted in Figure 48.1 as
a function of c. The former and latter are fine and coarse dashed lines, respectively. The extremal
with
1
q
a= y1 + y12 x21
2
has the smaller performance index. The value of the integral is
p
x1 (x21 + 3(y1 + y12 x21 )2
p .
3 2(y1 + y12 x21 )3
The function y = y1 is an admissible extremal for all x1 . The value of the integral for this
extremal is x1 y1 which is larger than the integral of the quadratic we analyzed before for y1 > x1 .
3.5
2.5
Figure 48.1:
is the extremal with the smaller integral and is the minimizing curve in C 1 for y1 x1 . For y1 < x1
the C 1 extremum is,
y = y1 .
d d
fx0 fx = 0 and fy0 fy = 0.
dt dt
1292
If one of the equations is satisfied, then the other is automatically satisfied, (or the extremal is
straight). With either of these equations we could derive the quadratic extremal and the y = const
extremal that we found previously. We will find one more extremal by considering the first parametric
Euler differential equation.
d
fx0 fx = 0
dt
p !
d y(t)x0 (t)
p =0
dt (x0 (t))2 + (y 0 (t))2
p
y(t)x0 (t)
p = const
(x0 (t))2 + (y 0 (t))2
Note that x(t) = const is a solution. Thus the extremals are of the three forms,
x = const,
y = const,
x2 bx b2
y= + + + a.
4a 2a 4a
The Erdmann corner conditions require that
yy 0
Fy 0 = p ,
1 + (y 0 )2
0 2
p y(y ) y
F y 0 Fy0 = y 1 + (y 0 )2 p =p
1 + (y ) 0 2 1 + (y 0 )2
Equating the performance indices of the quadratic extremum and the piecewise smooth extremum,
p
x1 (x21 + 3(y1 + y12 x21 )2 2
p = (y1 )3/2 ,
2
3 2(y1 + y1 x1 ) 2 3 3
p
32 3
y1 = x1 .
3
The only real positive solution is
p
3+2 3
y1 = x1 1.46789 x1 .
3
The piecewise smooth extremal has the smaller performance index for y1 smaller than this value
and the quadratic extremal has the smaller performance index for y1 greater than this value.
p
The Cp1 extremum is the piecewise smooth extremal for y1 x1 3 + 2 3/ 3 and is the
p
quadratic extremal for y1 x1 3 + 2 3/ 3.
1293
Solution 48.8
The shape of the rope will be a catenary between x1 and x2 and be a vertically hanging segment after
that. Let the length of the vertical segment be z. Without loss of generality we take x1 = y2 = 0.
The potential energy, (relative to y = 0), of a length of rope ds in 0 x x2 is mgy = gy ds. The
total potential energy of the vertically hanging rope is m(center of mass)g = z(z/2)g. Thus we
seek to minimize, Z x2
1
g y ds gz 2 , y(0) = y1 , y(x2 ) = 0,
0 2
subject to the isoperimetric constraint,
Z x2
ds z = L.
0
p
Writing the arc-length differential as ds = 1 + (y 0 )2 dx we minimize
Z x2 p
1
g y 1 + (y 0 )2 ds gz 2 , y(0) = y1 , y(x2 ) = 0,
0 2
subject to, Z x2 p
1 + (y 0 )2 dx z = L.
0
Consider the more general problem of finding functions y(x) and numbers z which extremize
Rb Rb
I a F (x, y, y 0 ) dx + f (z) subject to J a G(x, y, y 0 ) dx + g(z) = L.
Suppose y(x) and z are the desired solutions and form the comparison families, y(x) + 1 1 (x) +
2 2 (x), z + 1 1 + 2 2 . Then, there exists a constant such that
(I + J) , =0 = 0
1 1 2
(I + J)1 ,2 =0 = 0.
2
These equations are
Z b
d
H,y0 Hy 1 dx + h0 (z)1 = 0,
a dx
and Z b
d
H,y0 Hy 2 dx + h0 (z)2 = 0,
a dx
where H = F + G and h = f + g. From this we conclude that
d
H,y0 Hy = 0, h0 (z) = 0
dx
with determined by
Z b
J= G(x, y, y 0 ) dx + g(z) = L.
a
Now we apply these results to our problem. Since f (z) = 12 gz 2 and g(z) = z we have
gz = 0,
z= .
g
It was shown in class that the solution of the Euler differential equation is a family of catenaries,
x c2
y = + c1 cosh .
g c1
1294
One can find c1 and c2 in terms of by applying the end conditions y(0) = y1 and y(x2 ) = 0. Then
the expression for y(x) and z = /g are substituted into the isoperimetric constraint to determine
.
Consider the special case that (x1 , y1 ) = (0, 0) and (x2 , y2 ) = (1, 0). In this case we can use the
fact that y(0) = y(1) to solve for c2 and write y in the form
x 1/2
y = + c1 cosh .
g c1
which we cant solve in closed form. Since we ran into a dead end in applying the boundary condition,
we turn to the isoperimetric constraint.
Z 1p
1 + (y 0 )2 dx z = L
0
1
x 1/2
Z
cosh dx z = L
0 c1
1
2c1 sinh z =L
2c1
With the isoperimetric constraint, the algebraic-transcendental equation and z = /g we now
have
1
z = c1 cosh ,
2c1
1
z = 2c1 sinh L.
2c1
For any fixed L, we can numerically solve for c1 and thus obtain z. You can derive that there are
no solutions unless L is greater than about 1.9366. If L is smaller than this, the rope would slip off
the pin. For L = 2, c1 has the values 0.4265 and 0.7524. The larger value of c1 gives the smaller
potential energy. The position of the end of the rope is z = 0.9248.
Solution 48.9 Rc
Using the method of Lagrange multipliers, we look for stationary values of 0 ((y 0 )2 + y 2 ) dx,
Z c
((y 0 )2 + y 2 ) dx = 0.
0
y 00 y = 0, y(0) = y(c) = 0,
1295
Now we determine the constants an with the moment of inertia constraint.
Z c nx ca2
a2n sin2 dx = n = A
0 c 2
Thus we have the extremals,
r
2A nx
yn = sin , n Z+ .
c c
The drag for these extremals is
c
An2 2
Z n 2
2A nx
D= cos2 dx = .
c 0 c c c2
We see that the drag is minimum for n = 1. The shape for minimum drag is
r
2A nx
y = sin .
c c
Solution 48.10
Consider the general problem of determining the stationary values of the quantity 2 given by
Rb
2 F (x, y, y 0 , y 00 ) dx I
= Rab .
G(x, y, y 0 , y 00 ) dx J
a
2
The variation of is
JI IJ
2 = 2
J
1 I
= I J
J J
1
I 2 J .
=
J
The the values of y and y 0 are specified on the boundary, then the variations of I and J are
Z b 2 Z b 2
d d d d
I = F,y
00 F,y + F,y y dx,
0 J = G,y
00 G,y + G,y y dx
0
a dx2 dx a dx2 dx
d2 d
2
H,y00 H,y0 + H,y = 0 where H F 2 G.
dx dx
For our problem we have F = EI(y 00 )2 and G = y so that the extremals are solutions of
d2
dy
EI 2 y = 0,
dx2 dx
With homogeneous boundary conditions we have an eigenvalue problem with deflections modes yn (x)
and corresponding natural frequencies n .
1296
Solution 48.11
We assume that v0 > w(x, y, t) so that the problem has a solution for any end point. The crossing
time is Z l Z l
1 1
T = X(t) dx = sec (t) dx.
0 v0 0
Note that
dy w + v0 sin
=
dx v0 cos
w
= sec + tan
v0
w p
= sec + sec2 1.
v0
We solve this relation for sec .
2
w
y0 sec = sec2 1
v0
w 0 w2
(y 0 )2 2 y sec + 2 sec2 = sec2 1
v0 v0
(v02 w2 ) sec2 + 2v0 wy 0 sec v02 ((y 0 )2 + 1) = 0
p
2v0 wy 0 4v02 w2 (y 0 )2 + 4(v02 w2 )v02 ((y 0 )2 + 1)
sec =
2(v02 w2 )
p
wy 0 v02 ((y 0 )2 + 1) w2
sec = v0
(v02 w2 )
Since the steering angle satisfies /2 /2 only the positive solution is relevant.
p
wy 0 + v02 ((y 0 )2 + 1) w2
sec = v0
(v02 w2 )
Time Independent Current. If we make the assumption that w = w(x, y) then we can write
the crossing time as an integral of a function of x and y.
Z l p
wy 0 + v02 ((y 0 )2 + 1) w2
T (y) = dx
0 (v02 w2 )
A necessary condition for a minimum is T = 0. The Euler differential equation for this problem is
d
F,y0 F,y = 0
dx
!! !
d 1 v02 y 0 wy w(v 2 (1 + 2(y 0 )2 ) w2 ) 0 2 2
2 w + p 2 2 y (v0 + w )
v0 w2 (v0 w2 )2
p
dx v0 ((y 0 )2 + 1) w2 v02 ((y 0 )2 + 1) w2
By solving this second order differential equation subject to the boundary conditions y(0) = 0,
y(l) = y1 we obtain the path of minimum crossing time.
Current w = w(x). If the current is only a function of x, then the Euler differential equation
can be integrated to obtain,
!
1 v02 y 0
w + p 2 = c0 .
v02 w2 v0 ((y 0 )2 + 1) w2
1297
Solving for y 0 ,
w + c0 (v02 w2 )
y0 = p .
v0 1 2c0 w c20 (v02 w2 )
Since y(0) = 0, we have
x
w() + c0 (v02 (w())2 )
Z
y(x) = p .
0 v0 1 2c0 w() c20 (v02 (w())2 )
For any given w(x) we can use the condition y(l) = y1 to solve for the constant c0 .
Constant Current. If the current is constant then the Lagrangian is a function of y 0 alone.
The admissible extremals are straight lines. The solution is then
y1 x
y(x) = .
l
Solution 48.12
1. The kinetic energy of the first particle is 12 m(( x))2 . Its potential energy, relative to the
table top, is zero. The kinetic energy of the second particle is 12 mx2 . Its potential energy,
relative to its equilibrium position is mgx. The Lagrangian is the difference of kinetic and
potential energy.
2 1 2 2
L = m x + ( x) + gx
2
The Euler differential equations are the equations of motion.
d d
L,x Lx = 0, L L = 0
dt dt ,
d d
(2mx) + m( x)2 mg = 0, m( x)2 2 = 0
dt dt
2x + ( x)2 g = 0, ( x)2 2 = const
p
When x = 0, = = g/. This determines the constant in the equation of motion for .
g
=
( x)2
Now we substitute the expression for into the equation of motion for x.
3 g
2x + ( x) g =0
( x)4
3
2x + 1 g =0
( x)3
1
2x + 1 g=0
(1 x/)3
2. For small oscillations, x 1. Recall the binomial expansion,
a
X a
(1 + z) = zn, for |z| < 1,
n=0
n
1298
We make the approximation,
1 x
3
1+3 ,
(1 x/)
to obtain the linearized equation of motion,
3g
2x + x = 0.
This is the equation of a harmonic oscillator with solution
p
x = a sin 3g2(t b) .
Solution 48.13
We write the equation of motion and boundary conditions,
x = 0, x(0) = 0, x(T ) = h,
y = U (t) g, y(0) = 0.
We seek to minimize,
Z T
T = dt,
0
subject to the constraints,
x y = 0,
y U (t) + g = 0,
Z T
U 2 (t) dt = k 2 .
0
(T ) = 0.
The first Euler differential equation is
d
H,x H,x = 0,
dt
d
(t) = 0.
dt
We see that (t) = is constant. The next Euler DE is
d
H,y H,y = 0,
dt
1299
d
(t) + = 0.
dt
(t) = t + const
With the natural boundary condition, (T ) = 0, we have
(t) = (T t).
2 T 3
= k2 ,
12 2
3k
U (t) = 3/2 (T t).
T
The equation of motion for x is
3k
x = U (t) g = (T t).
T 3/2
Integrating and applying the initial conditions x(0) = x(0) = 0 yields,
kt2 (3T t) 1 2
x(t) = gt .
2 3T 3/2 2
k 1
T 3/2 gT 2 = h,
3 2
1 2 4 k 3
g T T + ghT 2 + h2 = 0.
4 3
p
If k 4 2/3g 3/2 h then this fourth degree polynomial has positive, real solutions for Tp
. With strict
inequality, the minimum time is the smaller of the two positive, real solutions. If k < 4 2/3g 3/2 h
then there is not enough fuel to reach the target height.
Solution 48.14
We have x = U (t) where U (t) is the acceleration furnished by the thrust of the vehicles engine. In
practice, the engine will be designed to operate within certain bounds, say M U (t) M , where
M is the maximum forward/backward acceleration. To account for the inequality constraint we
write U = M sin V (t) for some suitable V (t). More generally, if we had (t) U (t) (t), we could
write this as U (t) = +
2 + 2 sin V (t).
We write the equation of motion as a first order system,
x = y, x(0) = a, x(T ) = 0,
y = M sin V, y(0) = b, y(T ) = 0.
1300
Thus we minimize Z T
T = dt
0
subject to the constraints,
x y = 0
y M sin V = 0.
Consider
H = 1 + (t)(x y) + (t)(y M sin V ).
The Euler differential equations are
d d
H,x H,x = 0 (t) = 0 (t) = const
dt dt
d d
H,y H,y = 0 (t) + = 0 (t) = t + const
dt dt
d
H H,V = 0 (t)M cos V (t) = 0 V (t) = + n.
dt ,V 2
Thus we see that
U (t) = M sin + n = M.
2
Therefore, if the rocket is to be transferred from its initial state to is specified final state in
minimum time with a limited source of thrust, (|U | M ), then the engine should operate at full
power at all times except possibly for a finite number of switching times. (Indeed, if some power
were not being used, we would expect the transfer would be speeded up by using the additional
power suitably.)
To see how this bang-bang process works, well look at the phase plane. The problem
x = y, x(0) = c,
y = M, y(0) = d,
Solution 48.15
Since the integrand does not explicitly depend on x, the Euler differential equation has the first
integral,
F y 0 Fy0 = const.
0
0 y y+h
p p
0 2
y + h 1 + (y ) y p = const
1 + (y 0 )2
y+h
p = const
1 + (y 0 )2
1301
Figure 48.2:
y + h = c21 (1 + (y 0 )2 )
q
y + h c21 = c1 y 0
c1 dy
p = dx
y + h c21
q
2c1 y + h c21 = x c2
4c21 (y + h c21 ) = (x c2 )2
Introduce as a parameter the slope of the extremal at the origin; that is, y 0 (0) = . Then differenti-
h
ating (48.6) at x = 0 yields 4c21 = 2c2 . Together with c22 = 4c21 (h c21 ) we obtain c21 = 1+ 2 and
2h
c2 = 1+2 . Thus the equation of the pencil (48.6) will have the form
1 + 2 2
y = x + x . (48.7)
4h
2
To find the envelope of this family we differentiate ( 48.7) with respect to to obtain 0 = x + 2h x
and eliminate between this and ( 48.7) to obtain
x2
y = h + .
4h
See Figure 48.3 for a plot of some extremals and the envelope.
All extremals (48.7) lie above the envelope which in ballistics is called the parabola of safety. If
2
(m, M ) lies outside the parabola, M < h + m 4h , then it cannot be joined to (0, 0) by an extremal.
If (m, M ) is above the envelope then there are two candidates. Clearly we rule out the one that
touches the envelope because of the occurrence of conjugate points. For the other extremal, problem
2 shows that E 0 for all y 0 . Clearly we can embed this extremal in an extremal pencil, so Jacobis
test is satisfied. Therefore the parabola that does not touch the envelope is a strong minimum.
1302
y
x
h 2h
-h
Solution 48.16
Since E 0, light traveling on extremals follow the time optimal path as long as the extremals do
not intersect.
Solution 48.17
Extremals. Since the integrand does not depend explicitly on x, the Euler differential equation
has the first integral,
F y 0 F,y0 = const.
1 + y2 2(1 + y 2 )
0
y0 = const
(y )2 (y 0 )3
dy
p = const dx
1 + (y 0 )2
arcsinh(y) = c1 x + c2
y = sinh(c1 x + c2 )
Jacobi Test. We can see by inspection that no conjugate points exist. Consider the central
field through (0, 0), sinh(cx), (See Figure 48.4).
1303
3
-3 -2 -1 1 2 3
-1
-2
-3
We can also easily arrive at this conclusion analytically as follows: Solutions u1 and u2 of the
Jacobi equation are given by
y
u1 = = cosh(c1 x + c2 ),
c2
y
u2 = = x cosh(c1 x + c2 ).
c1
For p = p(x, y) bounded away from zero, E is one-signed for values of y 0 close to p. However, since
the factor (p + 2y 0 ) can have any sign for arbitrary values of y 0 , the conditions for a strong minimum
are not satisfied.
Furthermore, since the extremals are y = sinh(c1 x + c2 ), the slope function p(x, y) will be of one
sign only if the range of integration is such that we are on a monotonic piece of the sinh. If we span
both an increasing and decreasing section, E changes sign even for weak variations.
Legendre Condition.
6(1 + y 2 )
F,y0 y0 = >0
(y 0 )4
Note that F cannot be represented in a Taylor series for arbitrary values of y 0 due to the presence
of a discontinuity in F when y 0 = 0. However, F,y0 y0 > 0 on an extremal implies a weak minimum
is provided by the extremal.
2
Strong Variations. Consider 1+y
R
(y 0 )2 dx on both an extremal and on the special piecewise
2
continuous variation in the figure. On P Q we have y 0 = with implies that 1+y (y 0 )2 = 0 so that there
is no contribution to the integral from P Q.
On QR the value of y 0 is greater than its value along the extremal P R while the value of y on
2
QR is less than the value of y along P R. Thus on QR the quantity 1+y (y 0 )2 is less than it is on the
1304
extremal P R.
1 + y2 1 + y2
Z Z
dx < dx
QR (y 0 )2 PR (y 0 )2
Thus the weak minimum along the extremal can be weakened by a strong variation.
Solution 48.18
The Euler differential equation is
d
F,y0 F,y = 0.
dx
d
(1 + 2x2 y 0 ) = 0
dx
1 + 2x2 y 0 = const
1
y 0 = const
x2
c1
y= + c2
x
(ii) The continuous extremal that satisfies the boundary conditions is y = 7 x4 . Since F,y0 y0 =
2x2 0 has a Taylor series representation for all y 0 , this extremal provides a strong minimum.
(iii) The continuous extremal that satisfies the boundary conditions is y = 1. This is a strong
minimum.
Solution 48.19
For identity (a) we take P = 0 and Q = x x . For identity (b) we take P = y y and
Q = 0. For identity (c) we take P = 12 (x x ) and Q = 12 (y y ).
ZZ Z
1 1 1 1
(y y )x (x x )y dx dy = (x x ) dx + (y y ) dy
D 2 2 2 2
ZZ
1 1
(x y + xy x y xy ) + (y x xy y x xy ) dx dy
D 2 2
Z Z
1 1
= (x x ) dx + (y y ) dy
2 2
ZZ ZZ Z Z
1 1
xy dx dy = xy dx dy (x x ) dx + (y y ) dy
D D 2 2
The variation of I is
Z t1 Z Z
I = (2(uxx + uyy )(uxx + uyy ) + 2(1 )(uxx uyy + uyy uxx 2uxy uxy )) dx dy dt.
t0 D
1305
From (b) we have
ZZ ZZ
2(uxx + uyy )uyy dx dy = 2(uxx + uyy )yy u dx dy
D D
Z
2((uxx + uyy )uy (uxx + uyy )y u) dy.
Using c gives us
ZZ ZZ
2(1 )(2uxy uxy ) dx dy = 2(1 )(2uxyxy u) dx dy
D D
Z
+ 2(1 )(uxy ux uxyx u) dx
Z
2(1 )(uxy uy uxyy u) dy.
Note that
u
ds = ux dy uy dx.
n
Using the above results, we obtain
Z t1 Z Z t1
(2 u)
Z Z
4 (u)
I = 2 ( u)u dx dy dt + 2 u + (2 u) ds dt
t0 D t0 n n
Z t1 Z
+ 2(1 ) (uyy ux uxy uy ) dy + (uxy ux uxx uy ) dx dt.
t0
Solution 48.20
1. Exact Solution. The Euler differential equation is
d
F,y0 = F,y
dx
d
[2y 0 ] = 2y 2x
dx
y 00 + y = x.
y = c1 cos x + c2 sin x x.
sin x
y= x.
sin 1
The value of the integral for this extremal is
sin x 2
J x = cot(1) 0.0245741.
sin 1 3
1306
n = 0. We consider an approximate solution of the form y(x) = ax(1 x). We substitute this
into the functional.
Z 1
3 2 1
(y 0 )2 y 2 2xy dx =
J(a) = a a
0 10 6
The only stationary point is
3 1
J 0 (a) = a =0
5 6
5
a= .
18
Since
00 5 3
J = > 0,
18 5
we see that this point is a minimum. The approximate solution is
5
y(x) = x(1 x).
18
This one term approximation and the exact solution are plotted in Figure 48.5. The value of
the functional is
5
J = 0.0231481.
216
0.07
0.06
0.05
0.04
0.03
0.02
0.01
1307
Since the Hessian matrix
3 3
Jaa Jab 5 10
H= = 3 26 ,
Jba Jbb 10 105
is positive definite,
3 41
> 0, det(H) = ,
5 700
we see that this point is a minimum. The approximate solution is
71 7
y(x) = x(1 x) + x .
369 41
This two term approximation and the exact solution are plotted in Figure 48.6. The value of
the functional is
136
J = 0.0245709.
5535
0.07
0.06
0.05
0.04
0.03
0.02
0.01
y = c1 cosh x + c2 sinh x x.
2 sinh x
y= x.
sinh 2
2(e4 13)
J = 0.517408.
3(e4 1)
1308
Polynomial Approximation. Consider an approximate solution of the form
5
y(x) = x(2 x).
14
This one term approximation and the exact solution are plotted in Figure 48.7. The value of
the functional is
10
J = 0.47619.
21
0.5 1 1.5 2
-0.05
-0.1
-0.15
-0.2
-0.25
-0.3
-0.35
1309
0.5 1 1.5 2
-0.05
-0.1
-0.15
-0.2
-0.25
-0.3
-0.35
0.5 1 1.5 2
-0.05
-0.1
-0.15
-0.2
-0.25
-0.3
-0.35
Figure 48.9: One Term Sine Series Approximation and Exact Solution.
0.5 1 1.5 2
-0.1
-0.2
-0.3
Figure 48.10: Two Term Sine Series Approximation and Exact Solution.
This two term approximation and the exact solution are plotted in Figure 48.10. The value of
the functional is
4(17 2 + 20)
J = 2 4 0.504823.
( + 5 2 + 4)
1310
3. Exact Solution. The Euler differential equation is
d
F,y0 = F,y
dx
d x2 1
[2xy 0 ] = 2 y 2x2
dx x
1 1
y 00 + y 0 + 1 2 y = x
x x
y = c1 J1 (x) + c2 Y1 (x) x
J 0.310947
23
y(x) = (x 1)(2 x)
6(40 log 2 23)
This one term approximation and the exact solution are plotted in Figure 48.11. The one term
approximation is a surprisingly close to the exact solution. The value of the functional is
529
J = 0.310935.
360(40 log 2 23)
0.2
0.15
0.1
0.05
1311
Solution 48.21
1. The spectrum of T is the set,
{ : (T I) is not invertible.}
(T I)f = g
Z
K(x y)f (y) dy f (x) = g
We may not be able to solve for f(), (and hence invert T I), if = K(). Thus all values
of K() are in the spectrum. If K() is everywhere nonzero we consider the case = 0. We
have the equation, Z
K(x y)f (y) dy = 0
Since there are an infinite number of L2 (, ) functions which satisfy this, (those which
are nonzero on a set of measure zero), we cannot invert the equation. Thus = 0 is in the
spectrum. The spectrum of T is the range of K() plus zero.
2. Let be a nonzero eigenvalue with eigenfunction .
(T I) = 0, x
Z
K(x y)(y) dy (x) = 0, x
If (x) is absolutely integrable, then () is continous. Since (x) is not identically zero, ()
is not identically zero. Continuity implies that () is nonzero on some interval of positive
length, (a, b). From the above equation we see that K() = for (a, b).
Now assume that K() = in some interval (a, b). Any function () that is nonzero only
for (a, b) satisfies
K() () = 0, .
By taking the inverse Fourier transform we obtain an eigenfunction (x) of the eigenvalue .
3. First we use the Fourier transform to find an explicit representation of u = (T I)1 f .
u = (T I)1 f (T I)u = f
Z
K(x y)u(y) dy u = f
2 K u u = f
f
u =
2 K
1 f
u =
1 2 K/
1312
For || > |2 K| we can expand the denominator in a geometric series.
!n
1 X 2 K
u = f
n=0
Z
1X 1
u= Kn (x y)f (y) dy
n=0 n
Here Kn is the nth iterated kernel. Now we form the Neumann series expansion.
1
u = (T I) f
1
1 1
= I T f
1X 1 n
= T f
n=0 n
1X 1 n
= T f
n=0 n
Z
1X 1
= Kn (x y)f (y) dy
n=0 n
The Neumann series is the same as the series we derived with the Fourier transform.
Solution 48.22
We seek a transformation T such that
(L I)T f = f.
This problem has a unique solution if and only if the homogeneous adjoint problem has only the
trivial solution.
u00 u = 0, u(1) = u(1) = 0.
This homogeneous problem has the eigenvalues and eigenfunctions,
n 2 n
n = , un = sin (x + 1) , n N.
2 2
The inhomogeneous problem has the unique solution
Z 1
u(x) = G(x, ; )f () d
1
where
sin( (x< +1)) sin( (1x> ))
sin(2 )
, < 0,
1
G(x, ; ) = 2 (x< + 1)(1 x> ), = 0,
sinh( (x< +1)) sinh( (1x> ))
, > 0,
sinh(2 )
1313
and note that since the kernel is continuous this is a bounded linear transformation. If f W , then
Z 1
(L I)T f = (L I) G(x, ; )f () d
1
Z 1
= (L I)[G(x, ; )]f () d
1
Z 1
= (x )f () d
1
= f (x).
If f U then
Z 1
G(x, ; ) f 00 () f () d
T (L I)f =
1
Z 1 Z 1
1
= [G(x, ; )f 0 ()]1 G0 (x, ; )f 0 () d G(x, ; )f () d
1 1
Z 1 Z 1
1
= [G0 (x, ; )f ()]1 + G00 (x, ; )f () d G(x, ; )f () d
1 1
Z 1
G00 (x, ; ) G(x, ; ) f () d
=
1
Z1
= (x )f () d
1
= f (x).
L has the point spectrum n = (n/2)2 , n N.
Solution 48.23
1. We see that the solution is of the form (x) = a + x + bx2 for some constants a and b. We
substitute this into the integral equation.
Z 1
x2 y y 2 (y) dy
(x) = x +
0
Z 1
a + x + bx2 = x + x2 y y 2 (a + x + bx2 ) dy
0
2
(15 + 20a + 12b) + (20 + 30a + 15b)x2
a + bx =
60
By equating the coefficients of x0 and x2 we solve for a and b.
( + 60) 5( 60)
a= , b=
4(2 + 5 + 60) 6(2 + 5 + 60)
Thus the solution of the integral equation is
5( 24) 2 + 60
(x) = x 2 x + .
+ 5 + 60 6 4
(x) = x.
1314
We could solve this problem by writing down the Neumann series. Instead we will use an
eigenfunction expansion. Let {n } and {n } be the eigenvalues and orthonormal eigenfunctions
of Z 1
(x) = sin(xy)(y) dy.
0
We expand (x) and x in terms of the eigenfunctions.
X
(x) = an n (x)
n=1
X
x= bn n (x), bn = hx, n (x)i
n=1
We determine the coefficients an by substituting the series expansions into the Fredholm
equation and equating coefficients of the eigenfunctions.
Z 1
(x) = x + sin(xy)(y) dy
0
X
X Z 1
X
an n (x) = bn n (x) + sin(xy) an n (y) dy
n=1 n=1 0 n=1
X X X 1
an n (x) = bn n (x) + an n (x)
n=1 n=1 n=1
n
an 1 = bn
n
If is not an eigenvalue then we can solve for the an to obtain the unique solution.
bn n bn bn
an = = = bn +
1 /n n n
X bn
(x) = x + n (x), for x 1.
n=1 n
If = m , and hx, m i =
6 0 then there is no solution.
Solution 48.24
1.
Kx = L1 L2 x = x
1315
2.
d t d t d t d t
L1 L2 L2 L1 = + + + +
dt 2 dt 2 dt 2 dt 2
2
t2
d t d 1 t d t d t d 1 t d
= + + I + I I+ + I
dt 2 dt 2 2 dt 4 dt 2 dt 2 2 dt 4
=I
d 1 t2 1
L1 L2 =
+ I+ I=K+ I
dt 2 4 2
2
We note that et /4 is an eigenfunction corresponding to the eigenvalue = 1/2. Since
2
L1 et /4 = 0 the result of this problem does not produce any negative eigenvalues. However,
2 2
Ln2 et /4 is the product of et /4 and a polynomial of degree n in t. Since this function is
square integrable it is and eigenfunction. Thus we have the eigenvalues and eigenfunctions,
n1
1 t d 2
n = n , n = et /4 , for n N.
2 2 dt
Solution 48.25
Since 1 is in the residual spectrum of T , there exists a nonzero y such that
h(T 1 I)x, yi = 0
for all x. Now we apply the definition of the adjoint.
hx, (T 1 I) yi = 0, x
hx, (T 1 I)yi = 0, x
(T 1 I)y = 0
y is an eigenfunction of T corresponding to the eigenvalue 1 .
Solution 48.26
1.
Z 1
00
u (t) + sin(k(s t))u(s) ds = f (t), u(0) = u0 (0) = 0
0
Z 1 Z 1
00
u (t) + cos(kt) sin(ks)u(s) ds sin(kt) cos(ks)u(s) ds = f (t)
0 0
00
u (t) + c1 cos(kt) c2 sin(kt) = f (t)
u00 (t) = f (t) c1 cos(kt) + c2 sin(kt)
The solution of
u00 (t) = g(t), u(0) = u0 (0) = 0
using Green functions is Z t
u(t) = (t )g( ) d.
0
Thus the solution of our problem has the form,
Z t Z t Z t
u(t) = (t )f ( ) d c1 (t ) cos(k ) d + c2 (t ) sin(k ) d
0 0 0
t
1 cos(kt) kt sin(kt)
Z
u(t) = (t )f ( ) d c1 2
+ c2
0 k k2
We could determine the constants by multiplying in turn by cos(kt) and sin(kt) and integrating
from 0 to 1. This would yields a set of two linear equations for c1 and c2 .
1316
2.
Z
X
sin nx sin ns
u(x) = u(s) ds
0 n=1
n
! !
Z
X X sin nx sin ns X
an sin nx = am sin ms ds
n=1 0 n=1
n m=1
Z
X X sin nx X
an sin nx = am sin ns sin ms ds
n=1 n=1
n m=1 0
X X sin nx X
an sin nx = am mn
n=1 n=1
n m=1 2
X X sin nx
an sin nx = an
n=1
2 n=1 n
2n
n = , un = sin nx, n N.
3.
2
1 r2
Z
1
() = (t) dt, |r| < 1
0 2 1 2r cos( t) + r2
where u(r, ) is harmonic in the unit disk and satisfies, u(1, ) = (). For a solution we need
= 1 and that u(r, ) is independent of r. In this case u() satisfies
= 1, = c1 + c2 .
4.
Z
(x) = cosn (x )() d
We expand the kernel in a Fourier series. We could find the expansion by integrating to find
the Fourier coefficients, but it is easier to expand cosn (x) directly.
n
1 x
cosn (x) = (e + ex )
2
1 n nx n (n2)x n n nx
= n e + e + + e(n2)x + e
2 0 1 n1 n
1317
If n is odd,
"
1 n nx n
n
cos (x) = n nx
(e + e )+ (e(n2)x + e(n2)x ) +
2 0 1
#
n x x
+ (e + e )
(n 1)/2
1 n n n
= n 2 cos(nx) + 2 cos((n 2)x) + + 2 cos(x)
2 0 1 (n 1)/2
(n1)/2
1 X n
= n1 cos((n 2m)x)
2 m=0
m
n
1 X n
= n1 cos(kx).
2 (n k)/2
k=1
odd k
If n is even,
"
1 n nx n
cosn (x) = n nx
(e + e )+ (e(n2)x + e(n2)x ) +
2 0 1
#
n i2x i2x n
+ (e + e )+
n/2 1 n/2
1 n n n n
= n 2 cos(nx) + 2 cos((n 2)x) + + 2 cos(2x) +
2 0 1 n/2 1 n/2
(n2)/2
1 n 1 X n
= n + n1 cos((n 2m)x)
2 n/2 2 m=0
m
n
1 n 1 X n
= n + n1 cos(kx).
2 n/2 2 (n k)/2
k=2
even k
We will denote,
n
a0 X
cosn (x ) = ak cos(k(x )),
2
k=1
where
1 + (1)nk 1
n
ak = .
2 2n1 (n k)/2
We substitute this into the integral equation.
Z n
!
a0 X
(x) = ak cos(k(x )) () d
2
k=1
n
a0
Z X Z Z
(x) = () d + ak cos(kx) cos(k)() d + sin(kx) sin(k)() d
2
k=1
For even n, substituting (x) = 1 yields = a1 0 . For n and m both even or odd, substituting
(x) = cos(mx) or (x) = sin(mx) yields = a1m . For even n we have the eigenvalues and
eigenvectors,
1
0 = , 0 = 1,
a0
1
m = , (1)
m = cos(2mx), (2)
m = sin(2mx), m = 1, 2, . . . , n/2.
a2m
1318
For odd n we have the eigenvalues and eigenvectors,
1
m = , (1)
m = cos((2m1)x), (2)
m = sin((2m1)x), m = 1, 2, . . . , (n+1)/2.
a2m1
Solution 48.27
1. First we shift the range of integration to rewrite the kernel.
Z 2
2 2 6|x s| + 3(x s)2 (s) ds
(x) =
0
Z x+2
2 2 6|y| + 3y 2 (x + y) dy
(x) =
x
n2 1 1
n = , (1)
n = cos(nx), (2)
n = sin(nx), n N.
12
2. The set of eigenfunctions do not form a complete set. Only those functions with a vanishing
integral on [0, 2] can be represented. We consider the equation
Z 2
K(x, s)(s) ds = 0
0
Z 2 X !
12
cos(nx) cos(ns) + sin(nx) sin(ns) (s) ds = 0
0 n=1
n2
1 1 1
0 = , (1)
n = cos(nx), (2)
n = sin(nx), n N,
2
1
n = enx , n Z.
2
1319
3. We consider the problem
u T u = f.
For 6= , ( not an eigenvalue), we can obtain a unique solution for u.
Z 2
u(x) = f (x) + (x, s, )f (s) ds
0
X cos(n(x s))
(x, s, ) = 12
n=1
n2 12
Solution 48.28
First assume that is an eigenvalue of T , T = .
n
X
p(T ) = an T n
k=0
Xn
= an n
k=0
= p()
p(T ) =
X X
p(T ) cn n = cn n
X X
cn p(n )n = cn n
p(n ) = , n such that cn 6= 0
Thus all eigenvalues of p(T ) are of the form p() with an eigenvalue of T .
Solution 48.29
The Fourier cosine transform is defined,
1
Z
f() = f (x) cos(x) dx,
0
Z
f (x) = 2 f() cos(x) d.
0
1320
We can write the integral equation in terms of the Fourier cosine transform.
Z
(x) = f (x) + cos(2xs)(s) ds
0
4 4
(x) = f(2x) + (2x) (48.9)
We eliminate between (48.8) and (48.9).
2
1 (x) = f (x) + f(2x)
4
R
f (x) + 0
f (s) cos(2xs) ds
(x) =
1 2 /4
Solution 48.30
Z Z
vLu dx dy = v(uxx + uyy + aux + buy + cu) dx dy
D
ZD
= (v2 u + avux + bvuy + cuv) dx dy
D
Z Z
2
= (vu uv) n ds
(u v + avux + bvuy + cuv) dx dy +
ZD Z C
Z
2 x y u v
= (u v auvx buvy uvax uvby + cuv) dx dy + auv + buv ds + v u d
D C n n C n n
where
L v = vxx + vyy avx bvy + (c ax by )v
and
u v x y
H(u, v) = v u + auv + buv .
n n n n
G = in D, G = 0 on C.
1321
Let u satisfy Lu = 0.
Z Z
(GLu uL G) dx dy = H(u, G) ds
D C
Z Z
uL G dx dy = H(u, G) ds
D C
Z Z Z
uG dx dy u(L )G dx dy = H(u, G) ds
D D C
Z Z Z
u(x )(y ) dx dy u(L )G dx dy = H(u, G) ds
D D C
Z Z
u(, ) u(L )G dx dy = H(u, G) ds
D C
Solution 48.31
1. First we differentiate to obtain a differential equation.
Z 1 Z x Z 1
(x) = min(x, s)(s) ds = es (s) ds + ex (s) ds
0 0 x
Z 1 Z 1
0 (x) = x(x) + (s) ds x(x) = (s) ds
x x
00
(x) = (x)
00 + = 0, (0) = 0 (1) = 0.
1322
The general solution of the differential equation is
a + bx for = 0
(x) = a cos x + b sin x for > 0
a cosh x + b sinh x
for < 0
We see that for = 0 and < 0 only the trivial solution satisfies the homogeneous boundary
conditions. For positive the left boundary condition demands that a = 0. The right boundary
condition is then
b cos =0
00 0 + ex = 0.
Thus we see that there are only positive eigenvalues. The differential equation has the general
solution
(x) = ex/2 aJ1 2 ex/2 + bY1 2 ex/2
1323
We define the functions,
u(x; ) = ex/2 J1 2 ex/2 , v(x; ) = ex/2 Y1 2 ex/2 .
We write the solution to automatically satisfy the right boundary condition, 0 (1) = 0,
We determine the eigenvalues from the left boundary condition, (0) 0 (0) = 0. The first
few are
1 0.678298
2 7.27931
3 24.9302
4 54.2593
5 95.3057
Solution 48.32
1. First note that
sin(kx) sin(lx) = sign(kl) sin(ax) sin(bx)
where
a = max(|k|, |l|), b = min(|k|, |l|).
Consider the analytic function,
e(ab)x e(a+b)
= sin(ax) sin(bx) cos(ax) sin(bx).
2
Z Z
sin(kx) sin(lx) sin(ax) sin(bx)
2 z2
dx = sign(kl) dx
x x2 z 2
Z
1 sin(ax) sin(bx) sin(ax) sin(bx)
= sign(kl) dx
2z xz x+z
1
= sign(kl) ( cos(az) sin(bz) + cos(az) sin(bz))
2z
Z
sin(kx) sin(lx)
2 z2
dx = sign(kl) cos(az) sin(bz)
x z
1324
3. We use the analytic function,
(x a)(x b) ex (x2 ab) sin x + (a + b)x cos x + ((x2 ab) cos x + (a + b)x sin x)
=
(x2 + a2 )(x2 + b2 ) (x2 + a2 )(x2 + b2 )
Z
(x2 ab) sin x + (a + b)x cos x (x2 ab) cos x + (a + b)x sin x
= lim
x(x2 + a2 )(x2 + b2 ) x0 (x2 + a2 )(x2 + b2 )
ab
= 2 2
a b
Z
(x2 ab) sin x + (a + b)x cos x
2 2 2 2
=
(x + a )(x + b ) ab
Solution 48.33
We consider the function
G(z) = (1 z 2 )1/2 + z log(1 + z).
so that there is a branch cut on the interval (1, 1). With this choice of branch, G(z) vanishes at
infinity. For the logarithm we choose the principal branch,
p
G+ (t) G (t) = 2 1 t2 log(1 + t),
1
G+ (t) + G (t) = t log(1 + t).
2
For t (, 1),
p
G+ (t) = 1 t2 + t (log(t 1) + ) ,
p
G (t) = 1 t2 + t (log(t 1) ) ,
p
G+ (t) G (t) = 2 t2 1 + t .
1
G+ (x) + G (x)
G(x) =
2
= x log(1 + x)
Z 1
2 2 1 t2 log(1 + t)
1 2( t 1 + t)
Z
1 1
= dt + dt
2 tx 2 1 tx
1325
From this we have
1
1 t2 log(1 + t)
Z
dt
1 tx
t2 1t
Z
= x log(1 + x) + dt
1 t+x
p p
= x log(1 + x) 1 + 1 x2 1 x2 arcsin(x) + x log(2) + x log(1 + x)
2
1
1 t2 log(1 + t)
Z p
dt = x log x 1 + 1 x2 arcsin(x)
1 tx 2
Solution 48.34
Let F (z) denote the value of the integral.
Z
1 f (t) dt
F (z) =
C t z
Z
1 f (t)
F + (t0 ) + F (t0 ) = dt,
C t t0
f (t0 ) = F + (t0 ) F (t0 ).
and also
W + (t) W (t)
Z
+ 1
W (t0 ) + W (t0 ) = dt
C t t0
F + (t) + F (t)
Z
1
= dt
t t0
ZC
1 g(t)
= dt.
C t0
t
Z
1 g(t)
f (t0 ) = dt.
C t t0
1326
Solution 48.35
(i)
1
G( ) = ( )
+ 1
G () = ( )
G () = e2 G+ ()
G+ () G () = (1 e2 )( )1
G+ () + G () = (1 + e2 )( )1
2
(1 e
Z
1 ) d
G+ () + G () =
C ( )1 ( ) ( )
( )1
Z
1 d
1
= cot()
C ( ) ( ) ( ) ( )
z
z
that tends to unity as z . We find a series expansion of this function about infinity.
z
= 1 1
z z z
j !
k k
X X
j
= (1) (1)
j=0
j z k z
k=0
j !
X X jk k
= (1) j
z j
j=0
j k k
k=0
j
n !
X X
j jk k
Q(z) = (1) z nj .
j=0
jk k
k=0
z
G(z) = z n Q(z)
z
1327
vanishes at infinity.
G+ () = n Q()
G () = e2 n Q()
+
n 1 e2
G () G () =
+
n 1 + e2 2Q()
G () + G () =
Z
1 2
1
n
n 1 + e2 2Q()
1e d =
i C
n
Z
1
d = cot() n (1 cot())Q()
i C
n
Z
1 n
d = cot() Q() Q()
i C
Solution 48.36
Z 1 Z 1 Z 1
(y) 1 (y) 1 (y)
dy = dy dy
1 y 2 x2 2x 1 yx 2x 1 y+x
Z 1 Z 1
1 (y) 1 (y)
= dy + dy
2x 1 yx 2x 1 yx
Z 1
1 (y) + (y)
= dy
2x 1 yx
Z 1
1 (y) + (y)
dy = f (x)
2x 1 yx
Z 1
1 (y) + (y) 2x
dy = f (x)
1 yx
Z 1
1 2y p 1 k
(x) + (x) = f (y) 1 y 2 dy +
1 x 1 2 y x 1 x2
Z 1 p
1 2yf (y) 1 y 2 k
(x) + (x) = dy +
2
1 x 1 2 yx 1 x2
Z 1 p
1 yf (y) 1 y 2 k
(x) = dy + + g(x)
2
1 x 1 2 y x 1 x2
Solution 48.37
We define
Z 1
1 f (t)
F (z) = dt.
2 0 t z
The Plemelj formulas and the integral equation give us,
F + (x) F (x) = f (x)
F + (x) + F (x) = f (x).
1328
We solve for F + and F .
By writing
F + (x) +1
=
F (x) 1
we seek to determine F to within a multiplicative constant.
+1
log F + (x) log F (x) = log
1
+ 1 +
log F (x) log F (x) = log +
1
log F + (x) log F (x) = +
We have left off the additive term of 2n in the above equation, which will introduce factors of z k
and (z 1)m in F (z). We will choose these factors so that F (z) has integrable algebraic singularites
and vanishes at infinity. Note that we have defined to be the real parameter,
1+
= log .
1
By the discontinuity theorem,
Z 1
1 +
log F (z) = dz
2 0 t z
1 1z
= log
2 2 z
1/2/(2) !
z1
= log
z
1/2/(2)
z1
F (z) = z k (z 1)m
z
/(2)
1 z1
F (z) = p
z(z 1) z
e(/(2)) 1 x /(2)
F (x) = p
x(1 x) x
/(2)
e/2
1x
F (x) = p
x(1 x) x
Define /(2)
1 1x
f (x) = p .
x(1 x) x
We apply the Plemelj formulas.
Z 1
1 f (t)
e/2 e/2 dt = e/2 + e/2 f (x)
0 tx
Z 1
1 f (t)
dt = tanh f (x)
0 t x 2
1329
Thus we see that the eigenfunctions are
tanh1 ()/
1 1x
(x) = p
x(1 x) x
Solution 48.38
Z 1
1 f (t)
dt = f (x)
0 t x tan(x)
We define the function, Z 1
1 f (t)
F (z) = dt.
2 0 t z
The Plemelj formula are,
We replace e1/ by a multiplicative constant and multiply by (z 1)1 to give F (z) the desired
properties.
c
F (z) = 1z/
(z 1) z z/
We evaluate F (z) above and below the branch cut.
c c ex
F (x) = =
e(x) (1 x)1x/ xx/ (1 x)1x/ xx/
Finally we use the Plemelj formulas to determine f (x).
k sin(x)
f (x) = F + (x) F (x) =
(1 x)1x/ xx/
1330
Solution 48.39
Consider the equation,
Z
f (t)
f 0 (z) + dt = 1.
C tz
Since the integral is an analytic function of z off C we know that f (z) is analytic off C. We use
Cauchys theorem to evaluate the integral and obtain a differential equation for f (x).
Z
f (t)
f 0 (x) + dt = 1
C tx
f 0 (x) + f (x) = 1
1
f (x) = + c ex
Solution 48.40
Z
1
+ P (t x) f (t) dt = g(x)
C tx
Z Z
1 f (t) 1 1
dt = g(x) P (t x)f (t) dt
C t x C
We know that if Z
1 f ( )
d = g()
C
then Z
1 g( )
f () = d.
C
We apply this theorem to the integral equation.
Z Z Z
1 g(t) 1 1
f (x) = dt + P ( t)f ( ) d dt
2 C tx 2 C C t x
Z Z
P ( t)
Z
1 g(t) 1
= 2 dt + dt f ( ) d
C tx 2 C C t x
Z Z
1 g(t) 1
= 2 dt P (t x)f (t) dt
C tx C
1331
Now we substitute the non-analytic part of f (t) into the integral. (The analytic part integrates to
zero.)
Z Z Z
1 g(t) 1 1 g( )
= dt P (t x) 2 d dt
2 C tx C C t
Z
P (t x)
Z Z
1 g(t) 1 1
= 2 dt dt g( ) d
C tx 2 C C t
Z Z
1 g(t) 1
= 2 dt P ( x)g( ) d
C tx 2 C
Z Z
1 g(t) 1
f (x) = dt P (t x)g(t) dt
2 C t x 2 C
Solution 48.41
Solution 48.42
1332
Part VII
1333
Chapter 49
1335
49.1 Exercises
Exercise 49.1
A model set of equations to describe an epidemic, in which x(t) is the number infected, y(t) is the
number susceptible, is
dx dy
= rxy x, = rxy + ,
dt dt
where r > 0, 0, 0. Initially x = x0 , y = y0 at t = 0. Directly from the equations, without
using the phase plane:
2. Show for the case = 0, 6= 0 that x(t) first decreases or increases according as ry0 < or
ry0 > . Show that x(t) 0 as t in both cases. Find x as a function of y.
3. In the phase plane: Find the position of the singular point and its type when > 0, > 0.
Exercise 49.2
Find the singular points and their types for the system
du
= ru + v(1 v)(p v), r > 0, 0 < p < 1,
dx
dv
= u,
dx
which comes from one of our nonlinear diffusion problems. Note that there is a solution with
u = (1 v)
for special values of and r. Find v(x) for this special case.
Exercise 49.3
Check that r = 1 is a limit cycle for
dx
= y + x(1 r2 )
dt
dy
= x + y(1 r2 )
dt
Exercise 49.4
Consider
y = f (y) x
x = y
x = R cos
1
y = R sin
and obtain the exact differential equations for R(t), (t). Show that R(t) continually increases with
t when R 6= 0. Show that (t) continually decreases when R > 1.
1336
Exercise 49.5
One choice of the Lorenz equations is
x = 10x + 10y
y = Rx y xz
8
z = z + xy
3
Where R is a positive parameter.
1. Invistigate the nature of the sigular point at (0, 0, 0) by finding the eigenvalues and their
behavior for all 0 < R < .
2. Find the other singular points when R > 1.
3. Show that the appropriate eigenvalues for these other singular points satisfy the cubic
33 + 412 + 8(10 + R) + 160(R 1) = 0.
4. There is a special value of R, call it Rc , for which the cubic has two pure imaginary roots,
say. Find Rc and ; then find the third root.
Exercise 49.6
In polar coordinates (r, ), Einsteins equations lead to the equation
d2 v 1
+ v = 1 + v 2 , v= ,
d2 r
for planetary orbits. For Mercury, = 8 108 . When = 0 (Newtonian theory) the orbit is given
by
v = 1 + A cos , period 2.
Introduce = and use perturbation expansions for v() and in powers of to find the corrections
proportional to .
[A is not small; is the small parameter].
Exercise 49.7
Consider the problem
x + 02 x + x2 = 0, x = a, x = 0 at t = 0
Use expansions
x = a cos + a2 x2 () + a3 x3 () + , = t
= 0 + a2 2 + ,
to find a periodic solution and its natural frequency .
Note that, with the expansions given, there are no secular term troubles in the determination
of x2 (), but x2 () is needed in the subsequent determination of x3 () and .
Show that a term a1 in the expansion for would have caused trouble, so 1 would have to be
taken equal to zero.
Exercise 49.8
Consider the linearized traffic problem
dpn (t)
= [pn1 (t) pn (t)] , n 1,
dt
pn (0) = 0, n 1,
p0 (t) = aet , t > 0.
(We take the imaginary part of pn (t) in the final answers.)
1337
1. Find p1 (t) directly from the equation for n = 1 and note the behavior as t .
2. Find the generating function
X
G(s, t) = pn (t)sn .
n=1
3. Deduce that
pn (t) An et , as t ,
and find the expression for An . Find the imaginary part of this pn (t).
Exercise 49.9
1. For the equation modified with a reaction time, namely
d
pn (t + ) = [pn1 (t) pn (t)] n 1,
dt
find a solution of the form in 1(c) by direct substitution in the equation. Again take its
imaginary part.
2. Find a condition that the disturbance is stable, i.e. pn (t) remains bounded as n .
3. In the stable case show that the disturbance is wave-like and find the wave velocity.
1338
49.2 Hints
Hint 49.1
Hint 49.2
Hint 49.3
Hint 49.4
Hint 49.5
Hint 49.6
Hint 49.7
Hint 49.8
Hint 49.9
1339
49.3 Solutions
Solution 49.1
1. When = = 0 the equations are
dx dy
= rxy, = rxy.
dt dt
Adding these two equations we see that
dx dy
= .
dt dt
Integrating and applying the initial conditions x(0) = x0 and y(0) = y0 we obtain
x = x0 + y0 y
dy
= r(x0 + y0 y)y
dt
dy
= r(x0 + y0 )y + ry 2 .
dt
dy
y 2 = r(x0 + y0 )y 1 r
dt
du
= r(x0 + y0 )u r
dt
d
er(x0 +y0 )t u = rer(x0 +y0 )t
dt
Z t
u = er(x0 +y0 )t rer(x0 +y0 )t dt + cer(x0 +y0 )t
1
u= + cer(x0 +y0 )t
x0 + y0
1
1
y= + cer(x0 +y0 )t
x0 + y0
1340
2. For = 0, 6= 0, the equation for x is
x = rxy x.
At t = 0,
x(0) = x0 (ry0 ).
Thus we see that if ry0 < , x is initially decreasing. If ry0 > , x is initially increasing.
Now to show that x(t) 0 as t . First note that if the initial conditions satisfy x0 , y0 > 0
then x(t), y(t) > 0 for all t 0 because the axes are a seqaratrix. y(t) is is a strictly decreasing
function of time. Thus we see that at some time the quantity x(ry ) will become negative.
Since y is decreasing, this quantity will remain negative. Thus after some time, x will become
a strictly decreasing quantity. Finally we see that regardless of the initial conditions, (as long
as they are positive), x(t) 0 as t .
Taking the ratio of the two differential equations,
dx
= 1 + .
dy ry
x = y + ln y + c
r
Applying the intial condition,
x0 = y0 + ln y0 + c
r
c = x0 + y0 ln y0 .
r
Thus the solution for x is
y
x = x0 + (y0 y) + ln .
r y0
r
u = v + ruv
r
v = u v ruv
1341
The linearized system is
r
u = v
r
v = u v
Finding the eigenvalues of the linearized system,
r
2 r
r = + + r = 0
+
q
r
( r 2
) 4r
=
2
Since both eigenvalues have negative real part, we see that the singular point is asymptotically
stable. A plot of the vector field for r = = = 1 is attached. We note that there appears to
be a stable singular point at x = y = 1 which corroborates the previous results.
Solution 49.2
The singular points are
u = 0, v = 0, u = 0, v = 1, u = 0, v = p.
du
= ru + (1 p)w
dx
dw
=u
dx
r (p 1)
= 2 r + p 1 = 0
1
p
r r2 4(p 1)
=
2
Thus we see that this point is a saddle point. The critical point is unstable.
1342
The point u = 0, v = p. Linearizing the system about u = 0, v = p, we make the substitution
w = v p.
du
= ru + (w + p)(1 p w)(w)
dx
dw
=u
dx
du
= ru + p(p 1)w
dx
dw
=u
dx
r p(1 p)
= 2 r + p(1 p) = 0
1
p
r r2 4p(1 p)
=
2
Thus we see that this point is a source. The critical point is unstable.
The solution of for special values of and r. Differentiating u = v(1 v),
du
= 2v.
dv
Taking the ratio of the two differential equations,
du v(1 v)(p v)
=r+
dv u
v(1 v)(p v)
=r+
v(1 v)
(p v)
=r+
Equating these two expressions,
p v
2v = r + .
Equating coefficients of v, we see that = 1 .
2
1
= r + 2p
2
Thus we have the solution u = 1 v(1 v) when r = 1 2p. In this case, the differential equation
2 2
for v is
dv 1
= v(1 v)
dx 2
dv 1 1
v 2 = v 1 +
dx 2 2
We make the change of variablles y = v 1 .
dy 1 1
= y +
dx 2 2
d x/2 ex/ 2
e y =
dx 2
Z x x/2
e
y = ex/ 2 dx + cex/ 2
2
y = 1 + cex/ 2
1343
The solution for v is
1
v(x) = .
1 + cex/ 2
Solution 49.3
We make the change of variables
x = r cos
y = r sin .
x = r cos r sin
y = r sin + r cos .
Multiplying the equations by cos and sin and taking their sum and difference yields
r = r(1 r2 )
r = r.
r = r(1 r2 )
= t + 0
At this point we could note that r > 0 in (0, 1) and r < 0 in (1, ). Thus if r is not initially zero,
then the solution tends to r = 1.
Alternatively, we can solve the equation for r exactly.
r = r r3
r 1
= 2 1
r3 r
1
u = u 1
2
u + 2u = 2
Z t
u = e2t 2e2t dt + ce2t
u = 1 + ce2t
1
r=
1 + ce2t
1344
Solution 49.4
The set of differential equations is
y = f (y) x
x = y.
x = R cos
1
y = R sin
Differentiating x and y,
x = R cos R sin
1 1
y = R sin + R cos .
The pair of differential equations become
1
R sin + R cos = f R sin R cos
1
R cos R sin = R sin .
1 1 1
R sin + R cos = R cos f R sin
1
R cos R sin = R sin .
Multiplying by cos and sin and taking the sum and difference of these differential equations yields
1 1
R = sin f R sin
1 1 1
R = R + cos f R sin .
Dividing by R in the second equation,
1 1
R = sin f R sin
1 1 cos 1
= + f R sin .
R
We make the assumptions that 0 < < 1 and that f (y) is an odd function that is nonnegative
for positive y and satisfies |f (y)| 1 for all y.
Since sin is odd,
1
sin f R sin
is nonnegative. Thus R(t) continually increases with t when R 6= 0.
If R > 1 then
cos 1 f 1 R sin
R f R sin
1.
1345
Thus the value of ,
1 1 cos 1
+ f R sin ,
R
is always nonpositive. Thus (t) continually decreases with t.
Solution 49.5
1. Linearizing the Lorentz equations about (0, 0, 0) yields
x 10 10 0 x
y = R 1 0 y
z 0 0 8/3 z
There are three cases for the eigenvalues of the linearized system.
R < 1. There are three negative, real eigenvalues. In the linearized and also the nonlinear
system, the origin is a stable, sink.
R = 1. There are two negative, real eigenvalues and one zero eigenvalue. In the linearized
system the origin is stable and has a center manifold plane. The linearized system does
not tell us if the nonlinear system is stable or unstable.
R > 1. There are two negative, real eigenvalues, and one positive, real eigenvalue. The origin
is a saddle point.
yields
10 10 q 0
X X
8
1 1 3 (R 1) Y
Y =
q q
Z 8
(R 1) 8
(R 1) 8 Z
3 3 3
41 2 8(10 + R) 160
3 + + + (R 1).
3 3 3
Thus the eigenvalues of the matrix satisfy the polynomial,
1346
Linearizing about the point
r r !
8 8
(R 1), (R 1), R 1
3 3
yields
10 10 q 0
X X
8
1 1 (R 1)
Y
Y = 3
q q
Z 8 8
3 (R 1) 3 (R 1) 38 Z
4. If the characteristic polynomial has two pure imaginary roots and one real root, then it
has the form
( r)(2 + 2 ) = 3 r2 + 2 r2 .
Equating the 2 and the term with the characteristic polynomial yields
r
41 8
r= , = (10 + R).
3 3
Equating the constant term gives us the equation
41 8 160
(10 + Rc ) = (Rc 1)
3 3 3
which has the solution
470
Rc = .
19
For this critical value of R the characteristic polynomial has the roots
41
1 =
3
4
2 = 2090
19
4
3 = 2090.
19
Solution 49.6
The form of the perturbation expansion is
1347
Substituting these expressions into the differential equation for v(),
1
u00 + u = 1 + 2(1 + 1 )A cos + A2 (cos 2 + 1)
2
1 1
u00 + u = (1 + A2 ) + 2(1 + 1 )A cos + A2 cos 2.
2 2
To avoid secular terms, we must have 1 = 1. A particular solution for u is
1 1
u = 1 + A2 A2 cos 2.
2 6
The the solution for v is
1 2 1 2
v() = 1 + A cos((1 )) + 1 + A A cos(2(1 )) + O(2 ).
2 6
Solution 49.7
Substituting the expressions for x and into the differential equations yields
2 2
2 2 d x2 2 3 2 d x3
a 0 + x2 + cos + a 0 + x3 20 2 cos + 2x2 cos + O(a4 ) = 0
d2 d2
d2 x2
+ x2 = 2 (1 + cos 2).
d2 20
2
x3 = (48 + 29 cos + 16 cos 2 + 3 cos 3).
14404
2
2 3
x(t) = a cos + a (3 + 2 cos + cos 2) + a (48 + 29 cos + 16 cos 2 + 3 cos 3) + O(a4 )
602 14404
1348
52
where = 0 a2 120
t.
Now to see why we didnt need an a1 term. Assume that
x = a cos + a2 x2 () + O(a3 ); = t
2
= 0 + a1 + O(a ).
1
x002 + x2 = 2 cos (1 + cos 2).
0 202
In order to eliminate secular terms, we need 1 = 0.
Solution 49.8
1. The equation for p1 (t) is
dp1 (t)
= [p0 (t) p1 (t)].
dt
dp1 (t)
= [aet p1 (t)]
dt
d t
e p1 (t) = aet et
dt
a t
p1 (t) = e + cet
+
a
et et
p1 (t) =
+
dpn (t)
= [pn1 (t) pn (t)]
dt
Multiply by sn and sum from n = 1 to .
X
X
p0n (t)sn = [pn1 (t) pn (t)]sn
n=1 n=1
G(s, t) X
= pn sn+1 G(s, t)
t n=0
G(s, t) X
= sp0 + pn sn+1 G(s, t)
t n=1
G(s, t)
= aset + sG(s, t) G(s, t)
t
G(s, t)
= aset + (s 1)G(s, t)
t
(1s)t
e G(s, t) = ase(1s)t et
t
as
G(s, t) = et + C(s)e(s1)t
(1 s) +
1349
The initial condition is
X
G(s, 0) = pn (0)sn = 0.
n=1
as
G(s, t) = et e(s1)t .
(1 s) +
Thus we have
a
pn (t) et as t .
(1 + /)n
a t
=(pn (t)) = e
(1 + /)n
n
1 /
=a [cos(t) + sin(t)]
1 + (/)2
a
= [cos(t)=[(1 /)n ] + sin(t)<[(1 /)n ]]
(1 + (/)2 )n
n n
a X j
X j
(1)(j+1)/2 (1)j/2
= 2 n
cos(t) + sin(t)
(1 + (/) )
j=1
j=0
odd j even j
Solution 49.9
1. Substituting pn = An et into the differential equation yields
An e(t+ ) = [An1 et An et ]
An ( + e ) = An1
rn ( + e ) = rn1
r=
+ e
Thus we have
n
1
pn (t) = et .
1 + e /
1350
Taking the imaginary part,
n
1 t
=(pn (t)) = = e
1 + e
" n #
1 e
== cos(t) + sin(t)
1 + (e e ) + ( )2
n
1 sin( ) cos( )
== cos(t) + sin(t)
1 2 sin( ) + ( )2
n h n i
1 h
= cos(t)= 1 sin( ) cos( )
1 2 sin( ) + ( )2
h n i i
+ sin(t)< 1 sin( ) cos( )
n
1
=
1 2 sin( ) + ( )2
n h ij h inj
h X
cos(t) (1)(j+1)/2 cos( ) 1 sin( )
j=1
odd j
n h ij h inj i
X
+ sin(t) (1)j/2 cos( ) 1 sin( )
j=0
even j
3.
1351
1352
Chapter 50
1353
50.1 Exercises
Exercise 50.1
Consider the nonlinear PDE
ut + uux = 0.
The solution u is constant along lines (characteristics) such that x ut = k for any constant k. Thus
the slope of these lines will depend on the initial data u(x, 0) = f (x).
1. In terms of this initial data, write down the equation for the characteristic in the x, t plane
which goes through the point (x, t) = (, 0).
2. State a criteria on f such that two characteristics will intersect at some positive time t. As-
suming intersections do occur, what is the time of the first intersection? You may assume that
f is everywhere continuous and differentiable.
2
3. Apply this to the case where f (x) = 1 ex to indicate where and when a shock will form
and sketch (roughly) the solution both before and after this time.
Exercise 50.2
Solve the equation
t + (1 + x)x + = 0 in < x < , t > 0,
with initial condition (x, 0) = f (x).
Exercise 50.3
Solve the equation
t + x + =0
1+x
in the region 0 < x < , t > 0 with initial condition (x, 0) = 0, and boundary condition (0, t) =
g(t). [Here is a positive constant.]
Exercise 50.4
Solve the equation
t + x + 2 = 0
in < x < , t > 0 with initial condition (x, 0) = f (x). Note that the solution could become
infinite in finite time.
Exercise 50.5
Consider
ct + ccx + c = 0, < x < , t > 0.
1. Use the method of characteristics to solve the problem with
c = F (x) at t = 0.
( is a positive constant.)
2. Find equations for the envelope of characteristics in the case F 0 (x) < 0.
3. Deduce an inequality relating max |F 0 (x)| and which decides whether breaking does or does
not occur.
Exercise 50.6
For water waves in a channel the so-called shallow water equations are
ht + (hv)x = 0 (50.1)
2 1 2
(hv)t + hv + gh = 0, g = constant. (50.2)
2 x
1354
Investigate whether there are solutions with v = V (h), where V (h) is not posed in advance but is
obtained from requiring consistency between the h equation obtained from (1) and the h equation
obtained from (2).
There will be two possible choices for V (h) depending on a choice of sign. Consider each case
separately. In each case fix the arbitrary constant that arises in V (h) by stipulating that before the
waves arrive, h is equal to the undisturbed depth h0 and V (h0 ) = 0.
Find the h equation and the wave speed c(h) in each case.
Exercise 50.7
After a change of variables, the chemical exchange equations can be put in the form
+ =0 (50.3)
t x
= ; , , = positive constants. (50.4)
t
1. Investigate wave solutions in which = (X), = (X), X = x U t, U = constant, and
show that (X) must satisfy an ordinary differential equation of the form
d
= quadratic in .
dX
2. Discuss ths smooth shock solution as we did for a different example in class. In particular
find the expression for U in terms of the values of as X , and find the sign of d/dX.
Check that
2 1
U=
2 1
in agreement with the discontinuous theory.
Exercise 50.8
Find solitary wave solutions for the following equations:
1. t + x + 6x xxt = 0. (Regularized long wave or B.B.M. equation)
2. utt uxx 32 u2 xx uxxxx = 0. (Boussinesq)
1355
50.2 Hints
Hint 50.1
Hint 50.2
Hint 50.3
Hint 50.4
Hint 50.5
Hint 50.6
Hint 50.7
Hint 50.8
1356
50.3 Solutions
Solution 50.1
1.
x = + u(, 0)t
x = + f ()t
2. Consider two points 1 and 2 where 1 < 2 . Suppose that f (1 ) > f (2 ). Then the two
characteristics passing through the points (1 , 0) and (2 , 0) will intersect.
1 + f (1 )t = 2 + f (2 )t
2 1
t=
f (1 ) f (2 )
We see that the two characteristics intersect at the point
2 1 2 1
(x, t) = 1 + f (1 ) , .
f (1 ) f (2 ) f (1 ) f (2 )
We see that if f (x) is not a non-decreasing function, then there will be a positive time when
characteristics intersect.
Assume that f (x) is continuously differentiable and is not a non-decreasing function. That
is, there are points where f 0 (x) is negative. We seek the time T of the first intersection of
characteristics.
2 1
T = min
1 <2 f (1 ) f (2 )
f (1 )>f (2 )
(f (2 ) f (1 ))/(2 1 ) is the slope of the secant line on f (x) that passes through the points
1 and 2 . Thus we seek the secant line on f (x) with the minimum slope. This occurs for the
tangent line where f 0 (x) is minimum.
1
T =
min f 0 ()
3. First we find the time when the characteristics first intersect. We find the minima of f 0 (x)
with the derivative test.
2
f (x) = 1 ex
2
f 0 (x) = 2x ex
2
f 00 (x) = 2 4x2 ex = 0
1
x =
2
The minimum slope occurs at x = 1/ 2.
1 e1/2
T = = 1.16582
2 e1/2 / 2 2
Figure 50.1 shows the solution at various times up to the first collision of characteristics, when
a shock forms. After this time, the shock wave moves to the right.
Solution 50.2
The method of characteristics gives us the differential equations
x0 (t) = (1 + x) x(0) =
d
= (, 0) = f ()
dt
1357
1 1
0.8 0.8
0.6 0.6
0.4 0.4
0.2 0.2
-3 -2 -1 1 2 3 -3 -2 -1 1 2 3
1 1
0.8 0.8
0.6 0.6
0.4 0.4
0.2 0.2
-3 -2 -1 1 2 3 -3 -2 -1 1 2 3
Solution 50.3
d
= t + x0 (t)x =
dt 1+x
The characteristic curves x(t) satisfy x0 (t) = 1, so x(t) = t + c. The characteristic curve that
separates the region with domain of dependence on the x axis and domain of dependence on the t
axis is x(t) = t. Thus we consider the two cases x > t and x < t.
x > t. x(t) = t + .
x < t. x(t) = t .
Now we solve the differential equation for in the two domains.
x > t.
d
= , (, 0) = 0, =xt
dt 1+x
d
=
dt 1+t+
Z t
1
= c exp dt
t++1
= cexp ( log(t + + 1))
= c(t + + 1)
1358
applying the initial condition, we see that
=0
x < t.
d
= , (0, ) = g( ), =tx
dt 1+x
d
=
dt 1+t
= c(t + 1 )
= g( )(t + 1 )
= g(t x)(x + 1)
Solution 50.4
The method of characteristics gives us the differential equations
x0 (t) = 1 x(0) =
d
= 2 (, 0) = f ()
dt
Solving the first differential equation,
x(t) = t + .
The second differential equation is then
d
= 2 , (, 0) = f (), =xt
dt
2 d = dt
1 = t + c
1
=
tc
1
=
t + 1/f ()
1
= .
t + 1/f (x t)
Solution 50.5
1. Taking the total derivative of c with respect to t,
dc dx
= ct + cx .
dt dt
Equating terms with the partial differential equation, we have the system of differential equa-
tions
dx
=c
dt
dc
= c.
dt
1359
subject to the initial conditions
c(, t) = c1 et
c(, t) = F ()et
The solution to the problem at the point (x, t) is found by first solving
F ()
x= (1 et ) +
for and then using this value to compute
c(x, t) = F ()et .
1360
Thus the equations that describe the envelope are
F ()
x= +
F 0 ()
1
t = log 1 + 0 .
F ()
3. The second equation for the envelope has a solution for positive t if there is some x that
satisfies
1 < 0 < 0.
F (x)
This is equivalent to
< F 0 (x) < .
So in the case that F 0 (x) < 0, there will be breaking iff
Solution 50.6
With the substitution v = V (h), the two equations become
ht + (V + hV 0 )hx = 0
(V + hV 0 )ht + (V 2 + 2hV V 0 + gh)hx = 0.
g
(V 0 )2 = .
h
There are two choices depending on which sign we choose when taking the square root of the above
equation.
Positive V0 .
r
g
V0 =
h
p
V = 2 gh + const
p
V = 2 g( h h0 )
1361
Negative V0 .
r
0 g
V =
h
p
V = 2 gh + const
p
V = 2 g( h0 h)
Solution 50.7
1. Making the substitutions, = (X), = (X), X = x U t, the system of partial differential
equations becomes
U 0 + 0 = 0
U 0 = .
U + = c
= c + U .
Now we substitute the expression for into the second partial differential equation.
U 0 = (c + U ) (c + U )
c c
0 = + + + +
U U U
Thus (X) satisfies the ordinary differential equation
c c
0 = 2 + + .
U U U
2. Assume that
(X) 1 as X +
(X) 2 as X
0 (X) 0 as X .
We see that the roots of the denominator of the integrand must be 1 and 2 . Thus we can
write the ordinary differential equation for (X) as
0 (X) = ( 1 )( 2 ) = 2 (1 + 2 ) + 1 2 .
1362
Equating coefficients in the polynomial with the differential equation for part 1, we obtain the
two equations
c c
= 1 2 , + = (1 + 2 ).
U U U
Solving the first equation for c,
U 1 2
c= .
Now we substitute the expression for c into the second equation.
U 1 2
+ = (1 + 2 )
U U
2 1 2
=+ (1 + 2 )
U
Thus we see that U is
U= .
2 + 2 1 2 (1 + 2 )
Since the quadratic polynomial in the ordinary differential equation for (X) is convex, it is
negative valued between its two roots. Thus we see that
d
< 0.
dX
1363
Solution 50.8
1.
t + x + 6x xxt = 0
We make the substitution
(x, t) = z(X), X = x U t.
(1 U )z 0 + 6zz 0 + U z 000 = 0
(1 U )z + 3z 2 + U z 00 = 0
1 1
(1 U )z 2 + z 3 + U (z 0 )2 = 0
2 2
0 2 U 1 2 2
(z ) = z z3
U U !
r
U 1 2 1 U 1
z(X) = sech X
2 2 U
r !!
U 1 2 1 U 1 p
(x, t) = sech x (U 1)U t
2 2 U
2 = 0
= .
1 2
We set
U 1
2 = .
U
is then
=
1 2
p
(U 1)/U
=
1 (U 1)/U )
p
(U 1)U
=
U (U 1)
p
= (U 1)U .
1364
00
3 2
(U 2 1)z 00 z z 0000 = 0
2
0
03 2
2
(U 1)z z z 000 = 0
2
3
(U 2 1)z z 2 z 00 = 0
2
1 2 1 1
(U 1)z 2 z 3 (z 0 )2 = 0
2 2 2
(z 0 )2 = (U 2 1)z 2 z 3
p
2 2 1 2
z = (U 1) sech U 1X
2
p
2 2 1 2
p
2
u(x, t) = (U 1) sech U 1x U U 1t
2
2 2 4 = 0
2 = 2 (2 + 1).
We set p
= U 2 1.
is then
2 = 2 (2 + 1)
= (U 2 1)U 2
p
= U U 2 1.
3.
tt xx + 2x xt + xx t xxxx
We make the substitution
(x, t) = z(X), X = x U t.
(U 2 1)z 00 2U z 0 z 00 U z 00 z 0 z 0000 = 0
(U 2 1)z 00 3U z 0 z 00 z 0000 = 0
3
(U 2 1)z 0 (z 0 )2 z 000 = 0
2
1365
Multiply by z 00 and integrate.
1 2 1 1
(U 1)(z 0 )2 (z 0 )3 (z 00 )2 = 0
2 2 2
(z 00 )2 = (U 2 1)(z 0 )2 (z 0 )3
p
0 2 2 1 2
z = (U 1) sech U 1X
2
p
2 2 1 2
p
2
x (x, t) = (U 1) sech U 1x U U 1t .
2
2 = 2 (2 + 1).
4.
ut + 30u2 u1 + 20u1 u2 + 10uu3 + u5 = 0
We make the substitution
u(x, t) = z(X), X = x U t.
1 5 1
U z 2 + z 4 + 5z(z 0 )2 (z 00 )2 + z 0 z 000 = 0
2 2 2
Assume that
(z 0 )2 = P (z).
Differentiating this relation,
2z 0 z 00 = P 0 (z)z 0
1
z 00 = P 0 (z)
2
1
z = P 00 (z)z 0
000
2
1
z z = P 00 (z)P (z).
000 0
2
1366
Substituting this expressions into the differential equation for z,
1 5 11 0 1
U z 2 + z 4 + 5zP (z) (P (z))2 + P 00 (z)P (z) = 0
2 2 24 2
4U z 2 + 20z 4 + 40zP (z) (P 0 (z))2 + 4P 00 (z)P (z) = 0
5 = 0.
We set
= U 1/4 .
The solution for u(x, t) becomes
2
x t
sech2
2 2
where
= 5 .
1367
1368
Part VIII
Appendices
1369
Appendix A
Greek Letters
The following table shows the greek letters, (some of them have two typeset variants), and their
corresponding Roman letters.
1371
1372
Appendix B
Notation
Y (x) Bessel function of the second kind and order , Neumann function
Z set of integers
Z+ set of positive integers
1373
1374
Appendix C
Analytic Functions. A function f (z) is analytic in a domain if the derivative f 0 (z) exists in that
domain.
If f (z) = u(x, y) + v(x, y) is defined in some neighborhood of z0 = x0 + y0 and the partial
derivatives of u and v are continuous and satisfy the Cauchy-Riemann equations
ux = vy , uy = vx ,
then f 0 (z0 ) exists.
Residue Theorem. Let C be a positively oriented, simple, closed contour. If f (z) is analytic in
and on C except for isolated singularities at z1 , z2 , . . . , zN inside C then
I X N
f (z) dz = 2 Res(f (z), zn ).
C n=1
Jordans Lemma. Z
eR sin d <
.
0 R
Let a be a positive constant. If f (z) vanishes as |z| then the integral
Z
f (z) eaz dz
C
1375
Taylor Series. Let f (z) be a function that is analytic and single valued in the disk |z z0 | < R.
X f (n) (z0 )
f (z) = (z z0 )n
n=0
n!
Laurent Series. Let f (z) be a function that is analytic and single valued in the annulus r <
|z z0 | < R. In this annulus f (z) has the convergent series,
X
f (z) = cn (z z0 )n ,
n=
where I
1 f (z)
cn = dz
2 (z z0 )n+1
and the path of integration is any simple, closed, positive contour around z0 and lying in the annulus.
The path of integration is shown in Figure C.1.
Im(z)
r
Re(z)
1376
Appendix D
Table of Derivatives
d c
f = cf c1 f 0
dx
d
f (g) = f 0 (g)g 0
dx
d2
f (g) = f 00 (g)(g 0 )2 + f 0 g 00
dx2
dn
n n1 n2 2 n
n d f n d f dg n d fd g n d g
(f g) = g + + + + f
dxn 0 dxn 1 dxn1 dx 2 dxn2 dx2 n dxn
d 1
ln x =
dx |x|
d x
c = cx ln c
dx
d g df dg
f = gf g1 + f g ln f
dx dx dx
d
sin x = cos x
dx
d
cos x = sin x
dx
d
tan x = sec2 x
dx
d
csc x = csc x cot x
dx
d
sec x = sec x tan x
dx
1377
d
cot x = csc2 x
dx
d 1
arcsin x = , arcsin x
dx 1 x2 2 2
d 1
arccos x = , 0 arccos x
dx 1 x2
d 1
arctan x = , arctan x
dx 1 + x2 2 2
d
sinh x = cosh x
dx
d
cosh x = sinh x
dx
d
tanh x = sech2 x
dx
d
csch x = csch x coth x
dx
d
sech x = sech x tanh x
dx
d
coth x = csch2 x
dx
d 1
arcsinh x =
dx 2
x +1
d 1
arccosh x = , x > 1, arccosh x > 0
dx 2
x 1
d 1
arctanh x = , x2 < 1
dx 1 x2
Z x
d
f () d = f (x)
dx c
Z c
d
f () d = f (x)
dx x
Z h Z h
d f (, x)
f (, x) d = d + f (h, x)h0 f (g, x)g 0
dx g g x
1378
Appendix E
Table of Integrals
Z Z
dv du
u dx = uv v dx
dx dx
f 0 (x)
Z
dx = log f (x)
f (x)
f 0 (x)
Z p
p dx = f (x)
2 f (x)
x+1
Z
x dx = for 6== 1
+1
Z
1
dx = log x
x
eax
Z
eax dx =
a
abx
Z
abx dx = for a > 0
b log a
Z
log x dx = x log x x
Z
1 1 x
dx = arctan
x2
+a2 a a
(
1
log ax for x2 < a2
Z
1
dx = 2a1
a+x
xa
2
x a 2
2a log x+a for x2 > a2
Z
1 x x
dx = arcsin = arccos for x2 < a2
a2 x2 |a| |a|
Z
1 p
dx = log(x + x2 a2 )
x2 a2
Z
1 1 x
dx = sec1
x x2 a2 |a| a
1379
!
a2 x2
Z
1 1 a+
dx = log
2
x a x2 a x
Z
1
sin(ax) dx = cos(ax)
a
Z
1
cos(ax) dx = sin(ax)
a
Z
1
tan(ax) dx = log cos(ax)
a
Z
1 ax
csc(ax) dx = log tan
a 2
Z
1 ax
sec(ax) dx = log tan +
a 4 2
Z
1
cot(ax) dx = log sin(ax)
a
Z
1
sinh(ax) dx = cosh(ax)
a
Z
1
cosh(ax) dx = sinh(ax)
a
Z
1
tanh(ax) dx = log cosh(ax)
a
Z
1 ax
csch(ax) dx = log tanh
a 2
Z
i i ax
sech(ax) dx = log tanh +
a 4 2
Z
1
coth(ax) dx = log sinh(ax)
a
Z
1 x
x sin ax dx = 2 sin ax cos ax
a a
a2 x2 2
Z
2x
x2 sin ax dx = sin ax cos ax
a2 a3
Z
1 x
x cos ax dx = cos ax + sin ax
a2 a
2x cos ax a2 x2 2
Z
x2 cos ax dx = + sin ax
a2 a3
1380
Appendix F
Definite Integrals
Integrals from to . Let f (z) be analytic except for isolated singularities, none of which
lie on the real axis. Let a1 , . . . , am be the singularities of f (z) in the upper half plane; and CR be
the semi-circle from R to R in the upper half plane. If
lim R max |f (z)| = 0
R zCR
then Z m
X
f (x) dx = 2 Res (f (z), aj ) .
j=1
Let b1 , . . . , bn be the singularities of f (z) in the lower half plane. Let CR be the semi-circle from R
to R in the lower half plane. If
lim R max |f (z)| = 0
R zCR
then Z n
X
f (x) dx = 2 Res (f (z), bj ) .
j=1
Integrals from 0 to . Let f (z) be analytic except for isolated singularities, none of which lie
on the positive real axis, [0, ). Let z1 , . . . , zn be the singularities of f (z). If f (z) z as z 0
for some > 1 and f (z) z as z for some < 1 then
Z n
X
f (x) dx = Res (f (z) log z, zk ) .
0 k=1
Z n n
1X X
Res f (z) log2 z, zk +
f (x) log dx = Res (f (z) log z, zk )
0 2
k=1 k=1
Assume that a is not an integer. If z f (z) z as z 0 for some > 1 and z a f (z) z as
a
n n
2 a X
Z
2 X
xa f (x) log x dx = Res (z a
f (z) log z, z k ) , + Res (z a f (z), zk )
0 1 e2a
k=1
sin2 (a) k=1
1381
Fourier Integrals. Let f (z) be analytic except for isolated singularities, none of which lie on the
real axis. Suppose that f (z) vanishes as |z| . If is a positive real number then
Z n
X
f (x) ex dx = 2 Res(f (z) ez , zk ),
k=1
where z1 , . . . , zn are the singularities of f (z) in the upper half plane. If is a negative real number
then Z Xn
f (x) ex dx = 2 Res(f (z) ez , zk ),
k=1
1382
Appendix G
Table of Sums
X r
rn = , for |r| < 1
n=1
1r
N
X r rN +1
rn =
n=1
1r
b
X (a + b)(b + 1 a)
n=
n=a
2
N
X N (N + 1)
n=
n=1
2
b
X b(b + 1)(2b + 1) a(a 1)(2a 1)
n2 =
n=a
6
N
X N (N + 1)(2N + 1)
n2 =
n=1
6
X (1)n+1
= log(2)
n=1
n
X 1 2
2
=
n=1
n 6
X (1)n+1 2
2
=
n=1
n 12
X 1
3
= (3)
n=1
n
X (1)n+1 3(3)
3
=
n=1
n 4
1383
X 1 4
=
n=1
n4 90
X (1)n+1 7 4
=
n=1
n4 720
X 1
= (5)
n=1
n5
X (1)n+1 15(5)
5
=
n=1
n 16
X 1 6
=
n=1
n6 945
X (1)n+1 31 6
=
n=1
n6 30240
1384
Appendix H
X
(1 z)1 = zn |z| < 1
n=0
X
(1 z)2 = (n + 1)z n |z| < 1
n=0
X
(1 + z) = zn |z| < 1
n=0
n
X zn
ez = |z| <
n=0
n!
X zn
log(1 z) = |z| < 1
n=1
n
z 2n1
1+z X
log =2 |z| < 1
1z n=1
2n 1
X (1)n z 2n
cos z = |z| <
n=0
(2n)!
X (1)n z 2n+1
sin z = |z| <
n=0
(2n + 1)!
z3 2z 5 17z 7
tan z = z + + + + |z| < 2
3 15 315
z3 1 3z 5 1 3 5z 7
cos1 z = z + + + + |z| < 1
2 23 245 2467
z3 1 3z 5 1 3 5z 7
sin1 z = z + + + + |z| < 1
23 245 2467
X (1)n+1 z 2n1
tan1 z = |z| < 1
n=1
2n 1
1385
X z 2n
cosh z = |z| <
n=0
(2n)!
X z 2n+1
sinh z = |z| <
n=0
(2n + 1)!
z3 2z 5 17z 7
tanh z = z + + |z| < 2
3 15 315
X (1)n z +2n
J (z) = |z| <
n=0
n!( + n + 1) 2
z +2n
X 1
I (z) = |z| <
n=0
n!( + n + 1) 2
1386
Appendix I
d
f (t) sf(s) f (0)
dt
d2
f (t) s2 f(s) sf (0) f 0 (0)
dt2
dn
f (t) sn f(s) sn1 f (0)
dtn
sn2 f 0 (0) f (n1) (0)
t
f(s)
Z
f ( ) d
0 s
f(s)
Z tZ
f (s) ds d
0 0 s2
1387
d
tf (t) f (s)
ds
dn
tn f (t) (1)n f (s)
dsn
Z 1 Z
f (t) f (t)
, dt exists f(t) dt
t 0 t s
Z t
f ( )g(t ) d, f, g C 0 f(s)g(s)
0
RT
0
est f (t) dt
f (t), f (t + T ) = f (t)
1 esT
RT
0
est f (t) dt
f (t), f (t + T ) = f (t)
1 + esT
1
1
s
1
t
s2
n!
tn , for n = 0, 1, 2, . . .
sn+1
3/2
t1/2 s
2
1/2
t1/2 s
(1)(3)(5) (2n 1) n1/2
tn1/2 , n Z+ s
2n
( + 1)
t , <() > 1
s+1
Log s
Log t
s
( + 1)
t Log t, <() > 1 (( + 1) Log s)
sn+1
(t) 1 s>0
1388
(n) (t), n Z0+ sn s>0
1
ect s>c
sc
1
t ect s>c
(s c)2
tn1 ect 1
, n Z+ s>c
(n 1)! (s c)n
c
sin(ct)
s2 + c2
s
cos(ct)
s2 + c2
c
sinh(ct) s > |c|
s2 c2
s
cosh(ct) s > |c|
s2 c2
2cs
t sin(ct)
(s2 + c2 )2
s2 c2
t cos(ct)
(s2 + c2 )2
n!
tn ect , n Z+
(s c)n+1
c
edt sin(ct) s>d
(s d)2 + c2
sd
edt cos(ct) s>d
(s d)2 + c2
(
0 for c < 0
(t c) sc
e for c > 0
(
0 for t < c 1 cs
H(t c) = e
1 for t > c s
cn
J (ct) > 1
s2 + c2 s + s2 + c2
cn
I (ct) <(s) > c, > 1
s2 c2 s s2 + c2
1389
1390
Appendix J
Z
1
f (x) f (x) ex dx
2
Z
F() ex d F()
xn f (x) n F (n) ()
f (x + c) ec F ()
ecx f (x) F ( + c)
2 1 2
ecx , c>0 e /4c
4c
c/
ec|x| , c>0
2 + c2
2c
, c>0 ec||
x2 + c2
(
1 0 for > 0
, >0
x e for < 0
(
1 e for > 0
, <0
x 0 for < 0
1391
1
sign()
x 2
(
0 for x < c 1
H(x c) = ec
1 for x > c 2
1
ecx H(x), <(c) > 0
2(c + )
1
ecx H(x), <(c) > 0
2(c )
1 ()
1
(x ) e
2
((x + ) + (x )) cos()
((x + ) (x )) sin()
(
1 for |x| < c sin(c)
H(c |x|) = ,c>0
0 for |x| > c
1392
Appendix K
Z
1
f (x) n f (x) ex dx
(2) Rn
Z
F() ex d F()
Rn
1393
1394
Appendix L
Z
1
f (x) f (x) cos (x) dx
0
Z
2 C() cos (x) d C()
0
1
f 0 (x) S() f (0)
1 0
f 00 (x) 2 C() f (0)
xf (x) Fs [f (x)]
1
f (cx), c>0 C
c c
2c
ec
x2 + c2
c/
ecx
2 + c2
2 1 2
ecx e /(4c)
4c
r
x2 /(4c) 2
e ec
c
1395
1396
Appendix M
Z
1
f (x) f (x) sin (x) dx
0
Z
2 S() sin (x) d S()
0
f 0 (x) C()
1
f 00 (x) 2 S() + f (0)
xf (x) Fc [f (x)]
1
f (cx), c>0 S
c c
2x
ec
x2 + c2
/
ecx
2 + c2
x 1 c
2 arctan e
c
1 cx 1
e arctan
x c
1
1
2
1
x
2 2
x ecx e /(4c)
4c3/2
x x2 /(4c) 2
e ec
2c3/2
1397
1398
Appendix N
Table of Wronskians
W [x a, x b] ba
W [cos(ax), sin(ax)] a
W [cosh(ax), sinh(ax)] a
1399
1400
Appendix O
Sturm-Liouville Eigenvalue
Problems
y 00 + 2 y = 0, y(a) = y(b) = 0
n n(x a)
n = , yn = sin , nN
ba ba
ba
hyn , yn i =
2
y 00 + 2 y = 0, y(a) = y 0 (b) = 0
(2n 1) (2n 1)(x a)
n = , yn = sin , nN
2(b a) 2(b a)
ba
hyn , yn i =
2
y 00 + 2 y = 0, y 0 (a) = y(b) = 0
(2n 1) (2n 1)(x a)
n = , yn = cos , nN
2(b a) 2(b a)
ba
hyn , yn i =
2
y 00 + 2 y = 0, y 0 (a) = y 0 (b) = 0
n n(x a)
n = , yn = cos , n = 0, 1, 2, . . .
ba ba
ba
hy0 , y0 i = b a, hyn , yn i = for n N
2
1401
1402
Appendix P
G0 + p(x)G = (x ), G( : ) = 0
Z x
G(x|) = exp p(t) dt H(x )
y 00 = 0, y(a) = y(b) = 0
(x< a)(x> b)
G(x|) =
ba
y 00 = 0, y(a) = y 0 (b) = 0
G(x|) = a x<
y 00 = 0, y 0 (a) = y(b) = 0
G(x|) = x> b
y 00 c2 y = 0, y(a) = y(b) = 0
sinh(c(x< a)) sinh(c(x> b))
G(x|) =
c sinh(c(b a))
y 00 c2 y = 0, y(a) = y 0 (b) = 0
sinh(c(x< a)) cosh(c(x> b))
G(x|) =
c cosh(c(b a))
y 00 c2 y = 0, y 0 (a) = y(b) = 0
cosh(c(x< a)) sinh(c(x> b))
G(x|) =
c cosh(c(b a))
npi
y 00 + c2 y = 0, y(a) = y(b) = 0, c 6= ba , nN
(2n1)pi
y 00 + c2 y = 0, y(a) = y 0 (b) = 0, c 6= 2(ba) , nN
1403
(2n1)pi
y 00 + c2 y = 0, y 0 (a) = y(b) = 0, c 6= 2(ba) , nN
1404
Appendix Q
Trigonometric Identities
1405
Function Product Identities
1 1
sin x sin y = cos(x y) cos(x + y)
2 2
1 1
cos x cos y = cos(x y) + cos(x + y)
2 2
1 1
sin x cos y = sin(x + y) + sin(x y)
2 2
1 1
cos x sin y = sin(x + y) sin(x y)
2 2
Exponential Identities
ex ex ex + ex
ex = cos x + sin x, sin x = , cos x =
2 2
ex ex ex + ex
sinh x = , cosh x =
2 2
sinh x ex ex
tanh x = = x
cosh x e + ex
Reciprocal Identities
1 1 1
csch x = , sech x = , coth x =
sinh x cosh x tanh x
Pythagorean Identities
1406
Function Sum and Difference Identities
1 1
sinh x sinh y = 2 sinh (x y) cosh (x y)
2 2
1 1
cosh x + cosh y = 2 cosh (x + y) cosh (x y)
2 2
1 1
cosh x cosh y = 2 sinh (x + y) sinh (x y)
2 2
sinh(x y)
tanh x tanh y =
cosh x cosh y
sinh(x y)
coth x coth y =
sinh x sinh y
1
3
2 0.5
1
-2 -1 1 2 -2 -1 1 2
-1
-2 -0.5
-3 -1
1407
1408
Appendix R
Bessel Functions
1409
1410
Appendix S
A~x = ~b.
This equation has a unique solution if and only if det(A) 6= 0. If the determinant vanishes then
there are either no solutions or an infinite number of solutions. If the determinant is nonzero, the
solution for each xj can be written
det Aj
xj =
det A
where Aj is the matrix formed by replacing the j th column of A with b.
1411
1412
Appendix T
Vector Analysis
Rectangular Coordinates
f = f (x, y, z), ~g = gx i + gy j + gz k
f f f
f = i+ j+ k
x y z
gx gy gz
~g = + +
x y z
i j k
~g = x y z
gx gy gz
2f 2f 2f
f = 2 f = + +
x2 y 2 z 2
Spherical Coordinates
f = f (r, , ), ~g = gr r + g + g
Divergence Theorem. ZZ I
u dx dy = u n ds
Stokes Theorem. ZZ I
( u) ds = u dr
1413
1414
Appendix U
Partial Fractions
where the ak s are constants and the last ellipses represents the partial fractions expansion of the
roots of r(x). The coefficients are
1 dk p(x)
ak = .
k! dxk r(x) x=
1 + x + x2
.
(x 1)3
1 + x + x2
.
x2 (x 1)2
1415
The expansion has the form
a0 a1 b0 b1
2
+ + 2
+ .
x x (x 1) x1
The coefficients are
1 + x + x2
1
a0 = = 1,
0! (x 1)2
x=0
1 d 1 + x + x2 2(1 + x + x2 )
1 + 2x
a1 = = = 3,
1! dx (x 1)2
x=0 (x 1)2 (x 1)3
x=0
1 1 + x + x2
b0 = = 3,
0! x2
x=1
1 d 1 + x + x2 1 + 2x 2(1 + x + x2 )
b1 = = = 3,
1! dx x2
x=1 x2 x3
x=1
Thus we have
1 + x + x2 1 3 3 3
2 2
= 2+ + 2
.
x (x 1) x x (x 1) x1
If the rational function has real coefficients and the denominator has complex roots, then you
can reduce the work in finding the partial fraction expansion with the following trick: Let and
be complex conjugate pairs of roots of the denominator.
p(x) a0 a1 an1
= + + +
(x )n (x )n r(x) (x )n (x )n1 x
a0 a1 an1
+ + + + + ( )
(x )n (x )n1 x
Thus we dont have to calculate the coefficients for the root at . We just take the complex conjugate
of the coefficients for .
1416
Appendix V
Finite Math
1417
1418
Appendix W
Physics
In order to reduce processing costs, a chicken farmer wished to acquire a plucking machine. Since
there was no such machine on the market, he hired a mechanical engineer to design one. After
extensive research and testing, the professor concluded that it was impossible to build such a machine
with current technology. The farmer was disappointed, but not wanting to abandon his dream of an
automatic plucker, he consulted a physicist. After a single afternoon of work, the physicist reported
that not only could a plucking machine be built, but that the design was simple. The elated farmer
asked him to describe his method. The physicist replied, First, assume a spherical chicken . . . .
The problems in this text will implicitly make certain simplifying assumptions about chickens.
For example, a problem might assume a perfectly elastic, frictionless, spherical chicken. In two-
dimensional problems, we will assume that chickens are circular.
1419
1420
Appendix X
Probability
1421
will usually suffice. Its quite likely that some kind of symmetry is involved. And if it isnt your
response will puzzle the professor. They may continue with the next topic, not wanting to admit that
they dont see the symmetry in such an elementary problem. If they press further, start mumbling
to yourself. Pretend that you are lost in thought, perhaps considering some generalization of the
result. They may be a little irked that you are ignoring them, but its better than divulging your
true method.
1422
Appendix Y
Economics
There are two important concepts in economics. The first is Buy low, sell high, which is self-
explanitory. The second is opportunity cost, the highest valued alternative that must be sacrificed
to attain something or otherwise satisfy a want. I discovered this concept as an undergraduate
at Caltech. I was never very interested in computer games, but one day I found myself randomly
playing tetris. Out of the blue I was struck by a revelation: I could be having sex right now. I
havent played a computer game since.
1423
1424
Appendix Z
Glossary
Phrases often have different meanings in mathematics than in everyday usage. Here I have col-
lected definitions of some mathematical terms which might confuse the novice.
beyond the scope of this text: Beyond the comprehension of the author.
difficult: Essentially impossible. Note that mathematicians never refer to problems they have
solved as being difficult. This would either be boastful, (claiming that you can solve difficult
problems), or self-deprecating, (admitting that you found the problem to be difficult).
interesting: This word is grossly overused in math and science. It is often used to describe any
work that the author has done, regardless of the works significance or novelty. It may also
be used as a synonym for difficult. It has a completely different meaning when used by the
non-mathematician. When I tell people that I am a mathematician they typically respond
with, That must be interesting., which means, I dont know anything about math or what
mathematicians do. I typically answer, No. Not really.
non-obvious or non-trivial: Real fuckin hard.
one can prove that . . . : The one that proved it was a genius like Gauss. The phrase literally
means you havent got a chance in hell of proving that . . .
simple: Mathematicians communicate their prowess to colleagues and students by referring to all
problems as simple or trivial. If you ever become a math professor, introduce every example
as being really quite trivial. 1
Here are some less interesting words and phrases that you are probably already familiar with.
corollary: a proposition inferred immediately from a proved proposition with little or no additional
proof
lemma: an auxiliary proposition used in the demonstration of another proposition
theorem: a formula, proposition, or statement in mathematics or logic deduced or to be deduced
from other formulas or propositions
1 For even more fun say it in your best Elmer Fudd accent. This next pwobwem is weawy quite twiviaw.
1425
1426
Index
1427
comparison test, 323 linear, 467
Gauss test, 327 order, 467
in the mean, 782 ordinary, 467
integral test, 324 scale-invariant, 604
of integrals, 893 separable, 471
Raabes test, 327 without explicit dep. on y, 573
ratio test, 325 differential operator
root test, 326 linear, 544
sequences, 321 Dirac delta function, 631, 783
series, 321 direction
uniform, 327 negative, 152
convolution theorem positive, 152
and Fourier transform, 944 directional derivative, 99
for Laplace transforms, 906 discontinuous functions, 36, 810
convolutions, 906 discrete derivative, 703
counter-clockwise, 152 discrete integral, 703
curve, 151 disjoint sets, 4
closed, 151 domain, 4
continuous, 151
Jordan, 152 economics, 1423
piecewise smooth, 151 eigenfunctions, 807
simple, 151 eigenvalue problems, 807
smooth, 151 eigenvalues, 807
elements
definite integral, 76 of a set, 3
degree empty set, 3
of a differential equation, 467 entire, 221
del, 99 equidimensional differential equations, 569
delta function equidimensional-in-x D.E., 601
Kronecker, 24 equidimensional-in-y D.E., 602
derivative Euler differential equations, 569
complex, 221 Eulers formula, 122
determinant Eulers notation
derivative of, 546 i, 118
difference Eulers theorem, 475
of sets, 4 Euler-Mascheroni constant, 978
difference equations exact differential equations, 572
constant coefficient equations, 707 exact equations, 472
exact equations, 704 exchanging dep. and indep. var., 598
first order homogeneous, 705 extended complex plane, 153
first order inhomogeneous, 706 extremum modulus theorem, 305
differential calculus, 33
differential equations Fibonacci sequence, 711
autonomous, 599 fluid flow
constant coefficient, 563 ideal, 234
degree, 467 formally self-adjoint operators, 796
equidimensional-in-x, 601 Fourier coefficients, 779, 810
equidimensional-in-y, 602 behavior of, 819
Euler, 569 Fourier convolution theorem, 944
exact, 472, 572 Fourier cosine series, 816
first order, 467, 478 Fourier cosine transform, 948
homogeneous, 467 of derivatives, 950
homogeneous coefficient, 475 table of, 1395
inhomogeneous, 467 Fourier series, 807
1428
and Fourier transform, 935 homogeneous coefficient equations, 475
uniform convergence, 822 homogeneous differential equations, 467
Fourier Sine series, 817 homogeneous functions, 475
Fourier sine series, 868 homogeneous solution, 479
Fourier sine transform, 949 homogeneous solutions
of derivatives, 950 of differential equations, 544
table of, 1397
Fourier transform i
alternate definitions, 937 Eulers notation, 118
closure relation, 942 ideal fluid flow, 234
convolution theorem, 944 identity map, 4
of a derivative, 943 ill-posed problems, 484
Parsevals theorem, 946 linear differential equations, 549
shift property, 946 image
table of, 1391, 1393 of a mapping, 4
Fredholm alternative theorem, 670 imaginary number, 118
Fredholm equations, 620 imaginary part, 118
Frobenius series improper integrals, 81
first order differential equation, 488 indefinite integral, 73, 287
function indicial equation, 725
bijective, 5 infinity
injective, 5 complex, 153
inverse of, 5 first order differential equation, 491
multi-valued, 5 point at, 153
single-valued, 4 inhomogeneous differential equations, 467
surjective, 5 initial conditions, 480
function elements, 265 inner product
functional equation, 237 of functions, 777
fundamental set of solutions integers
of a differential equation, 551 set of, 3
fundamental theorem of algebra, 304 integral bound
fundamental theorem of calculus, 78 maximum modulus, 284
integral calculus, 73
gamblers ruin problem, 703, 708 integral equations, 620
Gamma function, 975 boundary value problems, 620
difference equation, 975 initial value problems, 620
Eulers formula, 975 integrals
Gauss formula, 977 improper, 81
Hankels formula, 976 integrating factor, 479
Weierstrass formula, 978 integration
Gauss test, 327 techniques of, 79
generating function intermediate value theorem, 37
for Bessel functions, 991 intersection
geometric series, 322 of sets, 4
Gibbs phenomenon, 825 interval
gradient, 99 closed, 3
Gramm-Schmidt orthogonalization, 774 open, 3
greatest integer function, 5 inverse function, 5
Greens formula, 553, 795 inverse image, 4
irregular singular points, 733
harmonic conjugate, 228 first order differential equations, 490
harmonic series, 323, 343
Heaviside function, 481, 631 j electrical engineering, 118
holomorphic, 221 Jordan curve, 152
1429
Jordans lemma, 1375 of a set, 4
ordinary points
Kramers rule, 1411 first order differential equations, 485
Kronecker delta function, 24 of linear differential equations, 715
orthogonal series, 779
LHospitals rule, 49 orthogonality
Lagranges identity, 553, 574, 795 weighting functions, 778
Laplace transform orthonormal, 777
inverse, 898
Laplace transform pairs, 900 Parsevals equality, 813
Laplace transforms, 897 Parsevals theorem
convolution theorem, 906 for Fourier transform, 946
of derivatives, 906 partial derivative, 98
Laurent expansions, 379, 1375 particular solution, 479
Laurent series, 338, 1376 of an ODE, 641
first order differential equation, 488 particular solutions
leading order behavior of differential equations, 544
for differential equations, 757 periodic extension, 810
least integer function, 5 piecewise continuous, 37
least squares fit point at infinity, 153
Fourier series, 811 differential equations, 733
Legendre polynomials, 775 polar form, 121
limit potential flow, 234
left and right, 34 power series
limits of functions, 33 definition of, 329
line integral, 282 differentiation of, 333
complex, 282 integration of, 333
linear differential equations, 467 radius of convergence, 330
linear differential operator, 544 uniformly convergent, 329
linear space, 771 principal argument, 120
Liouvilles theorem, 303 principal branch, 6
principal root, 127
magnitude, 119 principal value, 383, 940
maximum modulus integral bound, 284 pure imaginary number, 118
maximum modulus theorem, 305
Mellin inversion formula, 898 Raabes test, 327
minimum modulus theorem, 305 range
modulus, 119 of a mapping, 4
multi-valued function, 5 ratio test, 325
rational numbers
nabla, 99 set of, 3
natural boundary, 265 Rayleighs quotient, 866
Newtons binomial formula, 1417 minimum property, 866
norm real numbers
of functions, 777 set of, 3
normal form real part, 118
of differential equations, 617 rectangular unit vectors, 18
null vector, 18 reduction of order, 573
and the adjoint equation, 574
one-to-one mapping, 5
difference equations, 709
open interval, 3
region
opportunity cost, 1423
connected, 151
optimal asymptotic approximations, 765
multiply-connected, 151
order
simply-connected, 151
of a differential equation, 467
1430
regular, 221 uniform convergence, 327
regular singular points of Fourier series, 822
first order differential equations, 487 of integrals, 893
regular Sturm-Liouville problems, 863 union
properties of, 868 of sets, 4
residuals
of series, 322 variation of parameters
residue theorem, 381, 1375 first order equation, 480
principal values, 388 vector
residues, 379, 1375 components of, 18
of a pole of order n, 379, 1375 rectangular unit, 18
Riccati equations, 596 vector calculus, 97
Riemann zeta function, 323 vector field, 97
Riemann-Lebesgue lemma, 894 vector-valued functions, 97
root test, 326 Volterra equations, 620
Rouches theorem, 307
wave equation
scalar field, 97 DAlemberts solution, 1187
scale-invariant D.E., 604 Fourier transform solution, 1187
separable equations, 471 Laplace transform solution, 1188
sequences Webers function, 1000
convergence of, 321 Weierstrass M-test, 328
series, 321 well-posed problems, 484
comparison test, 323 linear differential equations, 549
convergence of, 321 Wronskian, 547
Gauss test, 327
geometric, 322 zero vector, 18
integral test, 324
Raabes test, 327
ratio test, 325
residuals, 322
root test, 326
tail of, 322
set, 3
similarity transformation, 1155
single-valued function, 4
singularity, 231
branch point, 231
spherical chicken, 1419
stereographic projection, 153
Stirlings approximation, 979
subset, 4
proper, 4
uniform continuity, 37
1431