Bessel Functions
1. Introduction
This notebook has two purposes: to give a brief introduction to Bessel functions, and to illustrate how Mathematica can be used in working with Bessel functions. We begin with a summary of the origin of Bessel's equation in
our course.
By separating variables for the Laplace equation in cylindrical coordinates, we found solutions of the form
F(r)G(z)cos(mθ)  and  F(r)G(z)sin(mθ),
where m = 0, 1, 2, ... . The radial function F satisfied Bessel's equation of order m with a parameter λ:
$$\frac{d}{dr}\!\left(r\,\frac{dF}{dr}\right) + \left(\lambda r - \frac{m^2}{r}\right)F = 0. \tag{1}$$
Here λ is a constant arising in the separation process. By introducing a new independent variable x = √λ r, this becomes
$$\frac{d}{dx}\!\left(x\,\frac{dF}{dx}\right) + \left(x - \frac{m^2}{x}\right)F = 0. \tag{2}$$
The special case m = 0 for axisymmetric solutions is
$$\frac{d}{dx}\!\left(x\,\frac{dF}{dx}\right) + xF = 0. \tag{3}$$
The separation process also produces the companion equation
$$\frac{d}{dr}\!\left(r\,\frac{dF}{dr}\right) - \left(\lambda r + \frac{m^2}{r}\right)F = 0. \tag{4}$$
This is called the modified Bessel's equation of order m with a parameter λ. The scaling x = √λ r reduces this to
$$\frac{d}{dx}\!\left(x\,\frac{dF}{dx}\right) - \left(x + \frac{m^2}{x}\right)F = 0. \tag{5}$$
Solutions of this equation are discussed in section 5. The special case m = 0 for axisymmetric solutions is
$$\frac{d}{dx}\!\left(x\,\frac{dF}{dx}\right) - xF = 0. \tag{6}$$
The solution of equation (3) that is well-behaved at x = 0 is standardized as J0(x), with the series
$$J_0(x) = 1 - \frac{(x/2)^2}{(1!)^2} + \frac{(x/2)^4}{(2!)^2} - \frac{(x/2)^6}{(3!)^2} + \cdots = \sum_{k=0}^{\infty}\frac{(-1)^k\,(x/2)^{2k}}{(k!)^2}. \tag{7}$$
Because the equation has no other singular points, we expect this series to converge for all x, a conclusion easily
verified by the ratio test.
The derivation of a series for Y0(x) is much more difficult, and the resulting series is more complicated:
$$Y_0(x) = \frac{2}{\pi}\left[\ln\!\left(\frac{x}{2}\right)+\gamma\right]J_0(x) + \frac{2}{\pi}\left\{\frac{(x/2)^2}{(1!)^2} - \left(1+\frac{1}{2}\right)\frac{(x/2)^4}{(2!)^2} + \left(1+\frac{1}{2}+\frac{1}{3}\right)\frac{(x/2)^6}{(3!)^2} - \cdots\right\}, \tag{8}$$
where γ is Euler's constant.
Fortunately for us, we do not have to use these series directly to get values for these functions. Bessel functions are built into Mathematica, and we consider that next.
Thus four terms are sufficient to give six-place accuracy. For x = 2 and 3, we expect that we will need more terms in
the series.
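The helper Jseries used below is the nth partial sum of series (7); its definition does not appear above, but one consistent with the values that follow is:
(* nth partial sum of series (7) for J0 *)
Jseries[x_, n_] := Sum[(-1)^k (x/2)^(2 k)/(k!)^2, {k, 0, n}]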
BesselJ[0, 2.0]
0.223891
Table[Jseries[2.0, n], {n, 4, 6}]
{0.223958, 0.223889, 0.223891}
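The plot of J0 shown next can be produced with a command along these lines (a sketch; the name graphJ0 is reused later with Show, and Mathematica is left to select the vertical plot range):
(* plot J0 on [0, 20], letting Mathematica choose the vertical range *)
graphJ0 = Plot[BesselJ[0, x], {x, 0, 20}, AxesLabel -> {"x", "J0"}]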
[Plot of J0(x) on 0 ≤ x ≤ 20.]
An interesting graph. We see that J0 is an oscillatory function. In fact it looks a lot like a damped trig function, an
observation which we will make more precise in section 2.6. We also see that J0 has many zeros (in fact it has
infinitely many). As we have already seen in class, we need to find these zeros to find the eigenvalues for solutions of
Laplace's equation by separation of variables. In section 2.5, we will see how to do this.
Now let's plot Y0 . We will have to be a little careful because of the singularity at x = 0, so we start our plot a
little away from 0. Let's use the same plot range that Mathematica selected above for J0 .
graphY0 =
 Plot[BesselY[0, x], {x, 0.01, 20}, PlotRange -> {-0.4, 1.0}, AxesLabel -> {"x", "Y0"}]
[Plot of Y0(x) on 0.01 ≤ x ≤ 20, diverging downward near x = 0 and oscillating for larger x.]
The function Y0, like J0, is oscillatory, and again looks like a damped trig function, apart from the divergence to −∞ at x = 0. Let's plot the two functions on the same graph:
Show[graphJ0, graphY0]
[Combined plot of J0(x) and Y0(x) on 0 ≤ x ≤ 20.]
Now we see that for larger x they look like damped trig functions differing only by a phase shift. We will make that statement precise in section 2.6.
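The first zero of J0 can be located with FindRoot. The call itself is not shown above; it would have been something like:
(* find the first positive zero of J0, starting from the guess x = 2 *)
root1 = FindRoot[BesselJ[0, x] == 0, {x, 2}]
{x -> 2.40483}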
The answer is in the form of a replacement rule. Let's check it by evaluating J0 at that value:
BesselJ[0, x] /. root1
-9.51574*10^-17
That's close enough to zero for most purposes. We could continue finding the other roots of J0 by altering the initial guess and repeating the calculation. Fortunately, there is a much easier way: Mathematica has built-in functions for the zeros of Bessel functions. For the J Bessel function, BesselJZero[n, k] returns the kth positive zero of Jn. For convenience we use this function to construct a list (Table) of the first 40 zeros of J0. We assign the list to zerolist.
zerolist = N[Table[BesselJZero[0, i], {i, 1, 40}]]
{2.40483, 5.52008, 8.65373, 11.7915, 14.9309, 18.0711, 21.2116, 24.3525, 27.4935, 30.6346,
 33.7758, 36.9171, 40.0584, 43.1998, 46.3412, 49.4826, 52.6241, 55.7655, 58.907, 62.0485,
 65.19, 68.3315, 71.473, 74.6145, 77.756, 80.8976, 84.0391, 87.1806, 90.3222, 93.4637,
 96.6053, 99.7468, 102.888, 106.03, 109.171, 112.313, 115.455, 118.596, 121.738, 124.879}
The zeros are then available for any subsequent calculation. For example, the first zero is given by
root1 = zerolist[[1]]
2.40483
This is the same as the value found by FindRoot. As before, we check this by evaluating J0 for this value:
BesselJ[0, x] /. x -> root1
-9.51574*10^-17
We can learn something interesting about the zeros by calculating the spacing between successive zeros:
Table[zerolist[[n + 1]] - zerolist[[n]], {n, 1, 39}]
{3.11525, 3.13365, 3.13781, 3.13938, 3.14015, 3.14057, 3.14083, 3.14101, 3.14113, 3.14121,
 3.14128, 3.14133, 3.14137, 3.1414, 3.14142, 3.14144, 3.14146, 3.14147, 3.14149, 3.1415,
 3.1415, 3.14151, 3.14152, 3.14152, 3.14153, 3.14153, 3.14154, 3.14154, 3.14155, 3.14155,
 3.14155, 3.14155, 3.14156, 3.14156, 3.14156, 3.14156, 3.14156, 3.14157, 3.14157}
We see that although the zeros are not exactly equally spaced, they are apparently asymptotically equally spaced, and the spacing looks suspiciously like π! More on this later.
The main point in this section is that the roots of Bessel functions are easily and instantly available.
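The asymptotic approximations discussed next are the standard leading-order large-x formulas. In Mathematica form, matching the names J0Large and Y0Large used below:
(* standard large-x asymptotic forms; J0Large[3.0] reproduces the value quoted below *)
J0Large[x_] := Sqrt[2/(Pi x)] Cos[x - Pi/4]
Y0Large[x_] := Sqrt[2/(Pi x)] Sin[x - Pi/4]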
We see that these are indeed damped trig functions for large x, but the amplitude decrease is very gradual. It goes like 1/√x, which is much more gradual than exponential damping. We also see that for large x, J0 and Y0 differ only by a phase shift. Finally, we see that the larger zeros of J0 are approximately the zeros of cos(x − π/4) -- that is, (n − 1/4)π -- and this explains our earlier observation that the interval between zeros appears to approach π.
We compare the exact and asymptotic functions graphically, showing the exact as a solid curve, and the
asymptotic as a dashed curve.
compJgraph = Plot[{BesselJ[0, x], J0Large[x]}, {x, 0, 10},
  PlotRange -> {-0.41, 1.}, PlotStyle -> {Dashing[{0.01, 0.}], Dashing[{0.01, 0.01}]},
  AxesLabel -> {"x", "J0(x)"}, PlotLabel -> "Exact (solid)    Asymptotic (dashed)"]
[Plot comparing the exact J0(x) (solid) with J0Large[x] (dashed) on 0 ≤ x ≤ 10.]
We see that the approximation is quite good, especially for x about 3 or larger. Let's look at the numbers:
BesselJ[0, 3.0]
-0.260052
J0Large[3.0]
-0.276507
[A similar plot compares the exact Y0(x) (solid) with Y0Large[x] (dashed) on 0 < x ≤ 10.]
Again excellent agreement for x greater than about 3. Here are the numbers for x = 3:
BesselY[0, 3.0]
0.37685
Y0Large[3.0]
0.368443
The Frobenius series for Jm(x), the solution of equation (2) that is well-behaved at x = 0, is
$$J_m(x) = \left(\frac{x}{2}\right)^{m}\left[\frac{1}{m!} - \frac{(x/2)^2}{(1!)(1+m)!} + \frac{(x/2)^4}{(2!)(2+m)!} - \frac{(x/2)^6}{(3!)(3+m)!} + \cdots\right] = \left(\frac{x}{2}\right)^{m}\sum_{k=0}^{\infty}\frac{(-1)^k\,(x/2)^{2k}}{(k!)\,(k+m)!}. \tag{9}$$
Because the equation has no other singular points, the series converges for all x, a conclusion easily verified by the
ratio test.
The derivation of a series for Ym(x) is much more difficult, and the resulting series is complicated:
$$Y_m(x) = \frac{2}{\pi}\ln\!\left(\frac{x}{2}\right)J_m(x) - \frac{(x/2)^{-m}}{\pi}\sum_{k=0}^{m-1}\frac{(m-k-1)!}{k!}\left(\frac{x}{2}\right)^{2k} - \frac{(x/2)^{m}}{\pi}\sum_{k=0}^{\infty}\left\{\psi(k+1)+\psi(m+k+1)\right\}\frac{(-1)^k\,(x/2)^{2k}}{k!\,(m+k)!}, \tag{10}$$
where ψ is the Digamma function, given by ψ(1) = −γ (γ is Euler's constant as before) and ψ(n) = −γ + Σ_{k=1}^{n−1} 1/k.
Notice that the logarithm is not the worst singularity in Ym at x = 0. There are negative powers of x also, the highest being x^(−m).
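In Mathematica the Digamma function is available as PolyGamma; a quick check of the values just quoted:
(* PolyGamma[n] is ψ(n); both comparisons return True *)
{PolyGamma[1] == -EulerGamma, PolyGamma[3] == -EulerGamma + 1 + 1/2}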
We do not have to use these series directly because these functions are built into Mathematica. We consider that next.
Thus five terms are sufficient to give six-place accuracy for x = 2. Now we try m = 2.
BesselJ[2, 2.0]
0.352834
Table[Jseries[2.0, 2, n], {n, 3, 6}]
{0.352778, 0.352836, 0.352834, 0.352834}
Thus again five terms are sufficient for six-place accuracy for x = 2. Finally, we try m = 3.
BesselJ[3, 2.0]
0.128943
Table[Jseries[2.0, 3, n], {n, 3, 6}]
{0.128935, 0.128943, 0.128943, 0.128943}
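Here Jseries carries a third argument for the order m. A definition consistent with the values above is the nth partial sum of series (9):
(* nth partial sum of series (9) for Jm *)
Jseries[x_, m_, n_] := (x/2)^m Sum[(-1)^k (x/2)^(2 k)/(k! (k + m)!), {k, 0, n}]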
At this point, we don't have a good picture of how the Bessel functions depend on the order m. We next look
at some graphs to illustrate this.
[Plot of Jm(x) for m = 0 through 5 on 0 ≤ x ≤ 20.]
An interesting graph, although a little busy. The graph illustrates the fact that the larger m is, the more slowly the Bessel function starts up. This happens because Jm is proportional to x^m for small x; thus at x = 0, Jm and its first m − 1 derivatives vanish. After this initial sluggish start for the larger m's, we see that the Jm's have similar behavior -- that is, they all look like damped trig functions (which we look at in more detail in section 3.6) and they all have infinitely many zeros (which we learn how to find in section 3.5).
Now let's plot Ym . We will have to be a little careful because of the singularity at x = 0, so we start our plot a
little away from 0. Let's use the same m-values as above for the J's.
graphYm = Plot[{BesselY[0, x], BesselY[1, x], BesselY[2, x], BesselY[3, x], BesselY[4, x],
   BesselY[5, x]}, {x, 0.01, 20}, PlotRange -> {-0.41, 1.}, AxesLabel -> {"x", "Ym"}]
[Plot of Ym(x) for m = 0 through 5 on 0.01 ≤ x ≤ 20.]
The functions with the higher values of m diverge to infinity more rapidly as x → 0, because of the x^(−m) term, and this makes it easy to tell which curve is which on the graph. After the initial divergence, the Y's settle into the by-now familiar damped oscillation. You may have noticed that the above graph took longer to produce than the previous one for the J's. That's because the computational effort to find values of the Y's is much greater than that for the J's.
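For what follows, zerolist has evidently been recomputed for J5 (its first zero is 8.77148). A sketch of the corresponding command:
(* first 40 zeros of J5, used in the spacing calculation below *)
zerolist = N[Table[BesselJZero[5, i], {i, 1, 40}]]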
The zeros are then available for any subsequent calculation. For example, the first zero is given by
root1 = zerolist[[1]]
8.77148
Excellent accuracy. As we did before for J0 , let's check the spacing between these zeros:
Table[zerolist[[n + 1]] - zerolist[[n]], {n, 1, 39}]
{3.56712, 3.36157, 3.27996, 3.23767, 3.21254, 3.19628, 3.1851, 3.17706, 3.17109, 3.16651,
 3.16294, 3.16008, 3.15777, 3.15586, 3.15428, 3.15294, 3.15181, 3.15084, 3.15, 3.14927,
 3.14863, 3.14807, 3.14757, 3.14713, 3.14674, 3.14638, 3.14607, 3.14578, 3.14552, 3.14528,
 3.14506, 3.14487, 3.14469, 3.14452, 3.14437, 3.14422, 3.14409, 3.14397, 3.14386}
We see again that although the zeros are not exactly equally spaced, they seem to be approaching equal spacing for the large roots, and as before the spacing seems to be approaching π. More on this later.
From our graph earlier of Jm for m = 0, 1, 2, 3, 4, and 5, we would expect the first root of Jm to increase with
m. Let's check this by finding the first root of each of the first 11 Bessel functions:
N[Table[BesselJZero[m, 1], {m, 0, 10}]]
{2.40483, 3.83171, 5.13562, 6.38016, 7.58834,
 8.77148, 9.93611, 11.0864, 12.2251, 13.3543, 14.4755}
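The large-x asymptotic formulas for Jm and Ym are not displayed above; in Mathematica form, matching the names JLarge and YLarge used below, they are the standard leading-order approximations:
(* standard large-x asymptotic forms for Jm and Ym *)
JLarge[m_, x_] := Sqrt[2/(Pi x)] Cos[x - m Pi/2 - Pi/4]
YLarge[m_, x_] := Sqrt[2/(Pi x)] Sin[x - m Pi/2 - Pi/4]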
These formulas include J0 and Y0 as special cases. As before, we see that the amplitude decrease is very gradual. It goes like 1/√x and is much more gradual than exponential damping. We also see that for large x, Jm and Ym differ only by a phase shift. Finally, we see that the large zeros of Jm are approximately the zeros of cos(x − mπ/2 − π/4) -- that is, (n + m/2 − 1/4)π -- and this explains our earlier observation that the interval between zeros approaches π.
We compare the exact and asymptotic functions graphically, showing the exact as a solid curve, and the
asymptotic as a dashed curve. By way of example, we do this for m = 5.
[Plot comparing the exact J5(x) (solid) with JLarge[5, x] (dashed) on 0 ≤ x ≤ 40.]
We see that the approximation eventually joins up with the exact function, but we have to go to quite large x for this to
happen. This is in marked contrast with the case for J0 , where the results were good for x as small as 3. This is typical
of the higher order Bessel functions. For larger m it is even worse.
Now we do the same comparison for Y5 :
compY5graph = Plot[{BesselY[5, x], YLarge[5, x]}, {x, 0.01, 40},
  PlotRange -> {-0.4, 1.0}, PlotStyle -> {Dashing[{0.01, 0.0}], Dashing[{0.01, 0.01}]},
  AxesLabel -> {"x", "Y5(x)"}, PlotLabel -> "Exact (solid)    Asymptotic (dashed)"]
[Plot comparing the exact Y5(x) (solid) with YLarge[5, x] (dashed) on 0.01 ≤ x ≤ 40.]
Again we must go to very large x to get good agreement. For both functions one can derive more accurate multi-term
approximations for large x. We will look at examples of such approximations in our discussion of the modified Bessel
functions.
The recurrence formulas for the Bessel functions are
$$J_{m-1}(x) + J_{m+1}(x) = \frac{2m}{x}\,J_m(x) \tag{11}$$
and
$$J_{m-1}(x) - J_{m+1}(x) = 2\,J_m'(x). \tag{12}$$
The exact same formulas are also valid for Ym(x), or even for any fixed linear combination of Jm and Ym. The formulas are valid even for negative indices, provided we use the standard definitions of Jm and Ym for negative order:
$$J_{-m}(x) = (-1)^m J_m(x), \qquad Y_{-m}(x) = (-1)^m Y_m(x). \tag{13}$$
The recurrence formulas have surprisingly many uses. For example, a computational scheme for evaluating Jm(x) for fixed x and many different values of m can be designed around equation (11). The formulas are also sometimes used in the evaluation of integrals involving Bessel functions, as we shall see in the next section.
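As a small illustration (not part of the original notebook), here is equation (11) used as an upward recurrence starting from the built-in J0 and J1. Upward recurrence becomes numerically unstable once m exceeds x, which is why practical schemes recurse downward instead.
(* build J0..J5 at x = 2.5 from J(m+1) = (2m/x) J(m) - J(m-1),
   then subtract the built-in values; all differences are near zero *)
With[{x = 2.5},
 Module[{j = {BesselJ[0, x], BesselJ[1, x]}},
  Do[AppendTo[j, (2 m/x) j[[m + 1]] - j[[m]]], {m, 1, 4}];
  j - Table[BesselJ[m, x], {m, 0, 5}]]]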
The recurrence formulas lead to the differentiation identity
$$\frac{d}{dx}\left[x\,J_1(x)\right] = x\,J_0(x), \tag{14}$$
or, in integral form,
$$\int x\,J_0(x)\,dx = x\,J_1(x). \tag{15}$$
The integral ∫ x³ J0(x) dx can be evaluated by letting u = x² and dv = x J0(x) dx, hence v = x J1(x). Integration by parts then gives
$$\int x^3 J_0(x)\,dx = x^3 J_1(x) - 2x^2 J_2(x). \tag{16}$$
Mathematica knows how to do these integrals analytically. Consider the integral given in (15):
Integrate[x*BesselJ[0, x], x]
1/2 x^2 Hypergeometric0F1Regularized[2, -x^2/4]
Not what we expected! Let's see what FullSimplify does for us.
FullSimplify[%]
x BesselJ[1, x]
That's better.
Here is the Mathematica version of the integral in (16):
Integrate[x^3*BesselJ[0, x], x]
-x^2 (-2 BesselJ[2, x] + x BesselJ[3, x])
By using equation (11), you can show that this is equivalent to equation (16).
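One way to carry out that check (added here as an illustration) is to let FullSimplify, which knows the recurrence relations, reduce the difference of the two forms:
(* the difference between Mathematica's antiderivative and equation (16);
   this should simplify to 0 *)
FullSimplify[-x^2 (-2 BesselJ[2, x] + x BesselJ[3, x]) -
  (x^3 BesselJ[1, x] - 2 x^2 BesselJ[2, x])]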
The functions I0 and K0 considered in this section are solutions of equation (6), the modified Bessel's equation
of order 0. In principle, all of the development for J0 and Y0 could be repeated here with the minor variations required
by the switch from J0 and Y0 to I0 and K0 . In practice, the functions I0 and K0 are not used as much in our course as J0
and Y0 , so we will keep the discussion very brief.
The modified Bessel's equation has a regular singular point at x = 0. The application of the method of Frobenius shows that the indicial equation has repeated roots 0, 0, so there is one solution of the Frobenius form, and a second solution with a logarithmic singularity. The solution well-behaved at x = 0 is standardized as I0(x), and the Frobenius series is
$$I_0(x) = 1 + \frac{(x/2)^2}{(1!)^2} + \frac{(x/2)^4}{(2!)^2} + \frac{(x/2)^6}{(3!)^2} + \cdots = \sum_{k=0}^{\infty}\frac{(x/2)^{2k}}{(k!)^2}. \tag{17}$$
The series for the second solution K0 is more difficult to obtain and more intricate. It is given by
$$K_0(x) = -\left[\ln\!\left(\frac{x}{2}\right)+\gamma\right]I_0(x) + \frac{(x/2)^2}{(1!)^2} + \left(1+\frac{1}{2}\right)\frac{(x/2)^4}{(2!)^2} + \left(1+\frac{1}{2}+\frac{1}{3}\right)\frac{(x/2)^6}{(3!)^2} + \cdots. \tag{18}$$
These functions are both built in. The Mathematica function BesselI[0, x] gives I0(x) and the function BesselK[0, x] gives K0(x). Let's see what they look like.
graphI0 = Plot[BesselI[0, x], {x, 0, 4},
  PlotRange -> {{0, 4}, {-0.1, 11.5}}, AxesLabel -> {"x", "I0(x)"}]
[Plot of I0(x) on 0 ≤ x ≤ 4, rising steeply from 1.]
We see that I0 starts at 1 and appears to increase exponentially. Now we plot K0 , avoiding the singularity at x = 0.
graphK0 = Plot[BesselK[0, x], {x, 0.01, 4}, AxesLabel -> {"x", "K0(x)"}]
[Plot of K0(x) on 0.01 ≤ x ≤ 4, decreasing steeply away from the singularity at x = 0.]
We see that K0 appears to drop exponentially. As with J0 and Y0, it is possible to derive simple asymptotic approximations for large argument. One can show that for large x, I0(x) is approximated by I0Large[x] and K0(x) by K0Large[x], where
I0Large[x_] := Exp[x]/Sqrt[2*Pi*x]
K0Large[x_] := Exp[-x]*Sqrt[Pi/(2*x)]
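A quick numerical spot check, added here for illustration, compares each function with its approximation at x = 4:
(* exact vs. asymptotic at x = 4; each pair agrees to a few percent *)
{{BesselI[0, 4.0], I0Large[4.0]}, {BesselK[0, 4.0], K0Large[4.0]}}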
We see that the approximation is adequate although not spectacular. It is possible to develop more accurate approximations which essentially provide a small correction to I0Large. Let's look at an example of such an approximation here.
From Abramowitz and Stegun (p. 377, formula 9.7.1) we get
I0LargeMod[x_] := (Exp[x]/Sqrt[2 Pi x]) (1 + 1/(8 x))
compIgraph = Plot[{BesselI[0, x], I0LargeMod[x]}, {x, 0.01, 4},
  PlotRange -> {0, 15.0}, PlotStyle -> {Dashing[0], Dashing[{0.01, 0.01}]},
  AxesLabel -> {"x", "I0(x)"}, PlotLabel -> "Exact (solid)    Asymptotic (dashed)"]
[Plot comparing the exact I0(x) (solid) with I0LargeMod[x] (dashed) on 0.01 ≤ x ≤ 4.]
Again we see a good but not great approximation, and again it is possible to develop more accurate approximations. For large x, I0(x) behaves like e^x/√x and K0(x) behaves like e^(−x)/√x.
We now consider the modified Bessel functions of general order m. For small x, Im is given by the Frobenius series
$$I_m(x) = \left(\frac{x}{2}\right)^{m}\left[\frac{1}{m!} + \frac{(x/2)^2}{(1!)(1+m)!} + \frac{(x/2)^4}{(2!)(2+m)!} + \frac{(x/2)^6}{(3!)(3+m)!} + \cdots\right] = \left(\frac{x}{2}\right)^{m}\sum_{k=0}^{\infty}\frac{(x/2)^{2k}}{(k!)\,(k+m)!}. \tag{19}$$
The second solution, Km, has a logarithmic singularity at x = 0, as well as a singularity of the form x^(−m). The series for Km, which is difficult to derive, is given by
$$K_m(x) = \frac{1}{2}\left(\frac{x}{2}\right)^{-m}\sum_{k=0}^{m-1}\frac{(-1)^k\,(m-k-1)!}{k!}\left(\frac{x}{2}\right)^{2k} + (-1)^{m+1}\ln\!\left(\frac{x}{2}\right)I_m(x) + \frac{(-1)^m}{2}\left(\frac{x}{2}\right)^{m}\sum_{k=0}^{\infty}\left\{\psi(k+1)+\psi(m+k+1)\right\}\frac{(x/2)^{2k}}{k!\,(m+k)!}, \tag{20}$$
where ψ is the Digamma function defined earlier (just after equation (10)).
These functions are both built in. The Mathematica function BesselI[m, x] gives Im(x), and the function BesselK[m, x] gives Km(x). Let's see what these functions look like. We first plot Im for m equal to 0 through 5.
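The plot command itself does not appear above; by analogy with the graphKm command below, it would be something like (the PlotRange here is a guess):
(* plot I0 through I5 on [0, 4], capping the exponential growth *)
graphIm = Plot[{BesselI[0, x], BesselI[1, x], BesselI[2, x], BesselI[3, x],
   BesselI[4, x], BesselI[5, x]}, {x, 0, 4}, PlotRange -> {0, 11.5},
  AxesLabel -> {"x", "Im"}]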
We see the apparent exponential growth as before, but we also see that the higher order functions are slower to get started. The reason is that Im(x) has a factor x^m, which means that the function and the first m − 1 derivatives vanish at x = 0.
Now let's take a look at a plot of Km for m = 0 through 5. We start with a slight offset in x to avoid the singularity at x = 0.
graphKm = Plot[{BesselK[0, x], BesselK[1, x], BesselK[2, x], BesselK[3, x], BesselK[4, x],
   BesselK[5, x]}, {x, 0.01, 4}, PlotRange -> {0, 5}, AxesLabel -> {"x", "Km"}]
[Plot of Km(x) for m = 0 through 5 on 0.01 ≤ x ≤ 4.]
We see the apparent exponential decay for large x as before, and the singularity at x = 0, which becomes stronger as m
increases.
There are simple asymptotic approximations for Im(x) and Km(x) for large x, given by
ILarge[m_, x_] := Exp[x]/Sqrt[2*Pi*x]
KLarge[m_, x_] := Exp[-x]*Sqrt[Pi/(2*x)]
By way of example, we compare the exact and asymptotic expressions for m = 3, beginning with Im .
[Plot comparing the exact I3(x) (solid) with ILarge[3, x] (dashed) for x up to about 5.]
We see that the agreement is good qualitatively, but only fair quantitatively.
Finally, we look at the exact and asymptotic functions for K3 . We choose the x and y ranges to best illustrate
the comparison.
compK3graph = Plot[{BesselK[3, x], KLarge[3, x]},
  {x, 0.01, 4}, PlotStyle -> {Dashing[0], Dashing[{0.01, 0.01}]},
  AxesLabel -> {"x", "K3(x)"}, PlotLabel -> "Exact (solid)    Asymptotic (dashed)"]
[Plot comparing the exact K3(x) (solid) with KLarge[3, x] (dashed) on 0.01 ≤ x ≤ 4.]
Again the right qualitative behavior, but only fair quantitative agreement. For both Im and Km we can find more
accurate multi-term asymptotic approximations for large x.
The eigenfunctions here come from equation (1) with m = 0 on [0, a]: (d/dr)(r dy/dr) + λry = 0, with y bounded at r = 0 and y(a) = 0.
We have yn(r) = J0(αn r/a), where αn is the nth root of J0. These eigenfunctions are orthogonal on [0, a] with respect to the weight function r. Any piecewise smooth function f(r) on [0, a] can be expanded in these eigenfunctions. The coefficients in the expansion are calculated using orthogonality. The result is
$$f(r) = \sum_{n=1}^{\infty} C_n\, y_n(r), \qquad C_n = \frac{\int_0^a f(r)\, y_n(r)\, r\, dr}{\int_0^a \{y_n(r)\}^2\, r\, dr}.$$
6.2 Example: f(r) = a² − r² on 0 ≤ r ≤ a
We now illustrate this theory by expanding the function
f[r_] := a^2 - r^2
We find the first 21 zeros of J0 and assign them to the list named α.
α = N[Table[BesselJZero[0, i], {i, 1, 21}]]
{2.40483, 5.52008, 8.65373, 11.7915, 14.9309, 18.0711,
 21.2116, 24.3525, 27.4935, 30.6346, 33.7758, 36.9171, 40.0584,
 43.1998, 46.3412, 49.4826, 52.6241, 55.7655, 58.907, 62.0485, 65.19}
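Two definitions needed below do not appear above: the radius a and the eigenfunctions y[r, n]. Consistent with what follows (the plot labels f[r] = 9 − r² imply a = 3), they would be:
(* radius of the interval; the value 3 is inferred from the plot labels below *)
a = 3;
(* eigenfunctions yn(r) = J0(αn r/a) *)
y[r_, n_] := BesselJ[0, α[[n]] r/a]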
We call the nth expansion coefficient coeff[[n]]. The theoretical expression for the coefficients is given above. We define expressions for both the numerator and the denominator numerically in terms of the Mathematica function NIntegrate. We also check the coefficients below against an analytical expression derived in class. With more complicated functions f[r], numerical integration may be the only choice.
num[n_] := NIntegrate[f[r]*y[r, n]*r, {r, 0, a}]
den[n_] := NIntegrate[(y[r, n])^2*r, {r, 0, a}]
We calculate the first 21 coefficients, and assign them to a list named coeff.
coeff = Module[{ans, c, n}, ans = {};
  Do[(c = num[n]/den[n]; ans = Append[ans, c]), {n, 1, 21}]; ans]
{9.9722, -1.258, 0.409288, -0.188918, 0.104726, -0.0649906, 0.0435408, -0.0308311,
 0.0227658, -0.0173713, 0.0136099, -0.0108969, 0.00888466, -0.00735654, 0.00617251,
 -0.00523902, 0.00449182, -0.0038857, 0.00338819, -0.00297548, 0.00262987}
In class, we also derived an analytical expression for these coefficients. Let's verify that we get the same result
that way.
analcoeff[n_] := (8 a^2)/((α[[n]])^3 BesselJ[1, α[[n]]]);
analycoefflist =
 Module[{ans, c, n}, ans = {}; Do[(c = analcoeff[n]; ans = Append[ans, c]), {n, 1, 21}]; ans];
We get excellent agreement. Note that the fourth coefficient is only about 2% of the first coefficient, and the rest are
even smaller. This suggests rapid convergence, which we are about to verify with some graphs. This is not so surprising given that the function being expanded satisfies the same zero boundary condition at the edge as the eigenfunctions.
Now we define the kth partial sum of our Fourier-Bessel series.
fourbess[r_, k_] := Sum[coeff[[n]]*y[r, n], {n, 1, k}]
Finally, we define graph[k], which produces a graph of f[r] in blue and the kth partial sum of the Fourier-Bessel series
in red. We use pltrange as a variable containing the plot range. We use SetOptions to set the value of the plotting
option Imagesize. We set it to 250, a good value for printing. For computer visualization, a value of 350 is better.
pltrange = {0, 9.2};
SetOptions[Plot, ImageSize -> 250];
graph[k_] := Plot[{f[r], fourbess[r, k]}, {r, 0, a},
  PlotRange -> pltrange, AxesLabel -> {"r", SequenceForm["f[r] = ", f[r]]},
  PlotLabel -> Row[{"k =", PaddedForm[k, 3]}], PlotStyle ->
   {{RGBColor[0, 0, 1], Thickness[0.004]}, {RGBColor[1, 0, 0], Thickness[0.004]}}]
[Panels for k = 2 through 6, each showing f[r] = 9 − r² in blue and the kth partial sum in red on 0 ≤ r ≤ 3.]
6.3 Example: f(r) = 1 on 0 ≤ r ≤ a
Let's do one more example with a different function. We must re-define f, and then re-execute the calculation of the coefficients. We choose f to be a constant.
f[r_] := 1
Again we had an analytic expression for these coefficients in class, so again we use those in a consistency check.
analcoeff[n_] := 2/(α[[n]] BesselJ[1, α[[n]]])
analycoefflist =
 Module[{ans, c}, ans = {}; Do[(c = analcoeff[n]; ans = Append[ans, c]), {n, 1, 21}]; ans];
Again we see that the agreement is excellent. We continue our calculations using the numerical coefficients. We don't expect the convergence to be as good here, because the function doesn't vanish at the endpoint r = a as the eigenfunctions do. Let's do 21 partial sums for this one. We use the command Manipulate to produce this graph sequence: we first produce the graphs, and then hand the finished graphs to Manipulate for display.
pltrange = {0, 1.5};
DynamicModule[{mangraph, k}, Do[mangraph[k] = graph[k], {k, 1, 21, 1}];
 Manipulate[mangraph[k], {k, 1, 21, 1}]]
[Manipulate panel showing f[r] = 1 in blue and the kth partial sum in red on 0 ≤ r ≤ 3; displayed here at k = 4.]
For visualization in the printed version of the notebook, we construct and display every fourth graph in this
sequence.
[Panels for k = 5, 9, 13, 17, and 21, each showing f[r] = 1 in blue and the kth partial sum in red on 0 ≤ r ≤ 3.]
The series is clearly converging, but the struggle is very reminiscent of the Fourier series for a square wave,
complete with the Gibbs phenomenon. The resemblance is more than accidental. Remember that the Bessel functions
for large argument (i.e., large n in the series) behave like damped trig functions, so the convergence issues are mathematically related to those of the Fourier series.
The above calculation has been very inefficient. In the sequence of 21 partial sums, we re-computed at each step all of the terms that appeared in the preceding graph, plus one new term. It would be far more efficient to save the nth partial sum, and then create the (n+1)st partial sum by computing only the new term and adding it to the nth partial sum. This is exactly what is done in the notebook Convergence of Bessel Expansions. In that notebook we will also see how to send the graph output to a Manipulate panel for convenience in visualization.
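The idea can be sketched in a few lines (an illustration only, not the code from that notebook): memoize each partial sum as a symbolic expression in r, so the kth sum is built from the (k−1)st by adding a single new term.
(* memoized partial sums: each is computed once and then reused *)
partialSum[1] := partialSum[1] = coeff[[1]] y[r, 1];
partialSum[k_] := partialSum[k] = partialSum[k - 1] + coeff[[k]] y[r, k]
(* usage, e.g.: Plot[partialSum[10], {r, 0, a}] *)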
8. References
The literature on Bessel functions is enormous. There is a famous 800-page treatise on Bessel functions first
published in 1922 and still very useful:
A Treatise on the Theory of Bessel Functions, by G.N. Watson, Cambridge University Press, 2nd edition,
1962.
The single most useful practical reference for users of Bessel functions is
Handbook of Mathematical Functions, ed. M. Abramowitz and I. A. Stegun, National Bureau of Standards,
1964.
This reference has many useful formulas, graphs and tables for a wide variety of special functions. You can no longer
buy a hardbound copy for $6.50 as you could in 1964, but a relatively inexpensive paperback reprint is still available.
A recent update of the above handbook is
NIST Handbook of Mathematical Functions, ed. Frank W. J. Olver, Daniel W. Lozier, Ronald F. Boisvert, and Charles W. Clark, Cambridge University Press, 2010. The complete version of the NIST handbook is available online at http://dlmf.nist.gov/.
The ultimate reference on special functions, including Bessel functions, is
Higher Transcendental Functions, Vol. I, II, and III, The Bateman Manuscript Project, ed. A. Erdelyi,
W. Magnus, F. Oberhettinger, and F.C. Tricomi, McGraw-Hill, 1953.
Volume II contains over 100 pages of formulas with Bessel functions.
Almost all texts on advanced calculus or mathematical methods for scientists and engineers have some useful
information on Bessel functions. There are hundreds of such books. Here are two by way of example -- the first
elementary and the second quite advanced:
Advanced Engineering Mathematics, E. Kreyszig, 7th and earlier editions, John Wiley.
Methods of Theoretical Physics, P.M. Morse and H. Feshbach, McGraw-Hill, 1953.
The historical information on Bessel given in section 6 came from the CD version of the Encyclopædia Britannica, and from a web site at St. Andrews University devoted to the history of mathematics. The picture of Bessel also came from that web site. The web site is called the MacTutor History of Mathematics Archive, and its address is
http://www-groups.dcs.st-andrews.ac.uk/~history/