Equations Analysis and Numerics Assignment Help

For any help-related queries, call us at +1 678 648 4277,
mail us at support@mathhomeworksolver.com, or
reach us at https://www.mathhomeworksolver.com/

Problem 1:

Consider the space of three-component vector fields u(x) on some
finite-volume 3d domain Ω ⊂ ℝ³. One linear operator on these
fields is the curl ∇×, which is important in electromagnetism (which
we will study in more detail later in 18.303). Define the inner
product of two vector fields u and v by the volume integral
(u, v) = ∫_Ω ū · v.

(a) An 18.02 exercise: derive the identity ∇ · (u × v) = (∇ × u) · v − u · (∇ × v).

(b) Figure out how to do integration by parts with the curl: show
that (u, ∇ × v) = (∇ × u, v) + ∮_∂Ω w · dS, where dS is the usual
outward surface-normal area element, and the w appearing
in the surface integral over the boundary ∂Ω is some vector
field to be determined. (Hint: use the identity you derived in the
previous part, combined with the divergence theorem.)
(c) Give a possible boundary condition on our space of vector
fields such that ∇× is self-adjoint with this inner product.
(Boundary conditions can only involve one vector field at a
time! No fair giving an equation that relates u to v!) You
should not have to specify all the components of u on the
boundary. It may be convenient to define a vector field n(x)
on ∂Ω to denote the outward normal vector at each point on
the boundary.
(d) Show that ∇×∇× is self-adjoint for this inner product under either
some boundary condition on u (similar to above) or some boundary
condition on derivatives of u. Is it positive or negative definite or
semidefinite?

(e) Two of Maxwell’s equations in vacuum are

∂E/∂t = c² ∇ × B  and  ∂B/∂t = −∇ × E,

where c is the speed of light. Take the curl of both sides of
the second equation to obtain a PDE in E alone. Suppose that Ω is the
interior of a hollow metal container, where the boundary conditions are
that E is perpendicular to the metal at the surface (i.e. E × n|_∂Ω = 0).
Combining these facts with the previous parts, explain why you
would expect to obtain oscillating solutions to Maxwell’s equations
(standing electromagnetic waves, essentially light bouncing around
inside the container).

(This kind of system exists, for example, in the microwave regime where
metals have high conductivity, and such containers are called
microwave resonant cavities.)
Problem 2:

In class, we solved for the eigenfunctions of ∇² in two


dimensions, in a cylindrical region r ∈ [0, R], θ ∈ [0, 2π] using
separation of variables, and obtained Bessel’s equation and
Bessel-function solutions. Although Bessel’s equation has two
solutions Jm(kr) and Ym(kr) (the Bessel functions), the second
solution (Ym) blows up as r → 0 and so for that problem we
could only have Jm(kr) solutions (although we still needed to
solve a transcendental
equation to obtain k). In this problem, you will solve for the 2d
eigenfunctions of ∇² in an annular region Ω that does not contain the
origin, as depicted schematically in Fig. 1, between radii R1 and R2, so
that you will need both the Jm and Ym solutions.

[Figure 1: Schematic of the domain Ω for problem 2: an annular region
in two dimensions, with radii r ∈ [R1, R2] and angles θ ∈ [0, 2π].]

Exactly as in class, the separation of variables ansatz u(r, θ) = ρ(r)τ(θ)
leads to functions τ(θ) spanned by sin(mθ) and cos(mθ) for integers m,
and functions ρ(r) that satisfy Bessel’s equation. Thus, the
eigenfunctions are of the form:

u(r, θ) = [α Jm(kr) + β Ym(kr)] [A cos(mθ) + B sin(mθ)]

for arbitrary constants A and B, for integers m = 0, 1, 2, . . ., and for
constants α, β, and k to be determined. For fun, we will also change
the boundary conditions somewhat. We will impose “Neumann”
boundary conditions ∂u/∂r = 0 at R1 and R2. That is, for a function u(r, θ)
in cylindrical coordinates, ∂u/∂r|r=R1 = 0 and ∂u/∂r|r=R2 = 0. The
following exact identities for the derivatives of the Bessel
functions will be helpful:

(a) Using the boundary conditions, write down two equations for α, β,
and k, of the form E (α, β)ᵀ = 0 for some 2 × 2 matrix E. This only has a
nonzero solution when det E = 0, and from this fact obtain a single
equation for k of the form fm(k) = 0 for some function fm that depends
on m. This is a transcendental equation; you can’t solve it by hand for k.
In terms of k (which is still unknown), write down a possible expression
for α and β, i.e. a basis for N(E).
(b) Assuming R1 = 1, R2 = 2, plot your function fm(k) versus k ∈ [0, 20]
for m = 0, 1, 2. Note that Julia provides the Bessel functions built-in:
Jm(x) is besselj(m,x) and Ym(x) is bessely(m,x). You can plot a function
with the plot command. See the IJulia notebook posted on the course
web page for lecture 8 for some examples of plotting and finding roots
in Julia.

(c) For m = 0, find the first three (smallest k > 0) solutions k1, k2, and k3
to f0(k) = 0. Get a rough estimate first from your graph (zooming if
necessary), and then get an accurate answer by calling the
scipy.optimize.newton function as in pset 1, and also as illustrated in
the lecture8 IJulia notebook. (Note that there is also a k = 0
eigenfunction for m = 0, corresponding to the constant function: the
nullspace of Aˆ with Neumann boundary conditions, as in class.) ´

(d) Because ∇² is self-adjoint under (u, v) = ∫_Ω ūv (we showed in class,
for general Ω, that this is still true with these boundary conditions), we
know that the eigenfunctions must be orthogonal. From class, this
implies that the radial parts must also be orthogonal when integrated
via ∫ ρ1(r) ρ2(r) r dr. Check that your Bessel solutions for k1 and k2 are
indeed orthogonal, by numerically integrating their product via the
quadgk function in Julia as in pset 2 and as in the lecture-8 IJulia
notebook.

(e) Let’s change the problem. Suppose that the domain is now 0 ≤ r ≤
R2, and the operator is Â = c(r)∇² with c(r) = 2 for r < R1 and c(r) = 1
for r ≥ R1. Suppose we impose Dirichlet boundary conditions u(R2) = 0.

(i) What is the form of the eigenfunctions? (Define them in terms of
Jm(kr) and Ym(kr) with unknown coefficients in the r < R1 and r ≥
R1 regions; don’t try to solve for the coefficients.)

(ii) If we solve for eigenfunctions Âu = λu, and u is everywhere finite,
then what continuity conditions must u satisfy at r = R1 in order for
Âu to be well defined and finite? If you combine these continuity
conditions with the boundary condition at R2, you should find that
the number of equations that u must satisfy matches the number
of unknown coefficients in the previous part.

(iii) As before, write down a condition fm(k) = 0 that must be satisfied


in order for the above equations to have a solution. The roots of
this function then give the eigenvalues. (You would have to solve it
numerically as above, but you need not do this here; just write
down fm, which you can leave in the form of a determinant.)

Problem 3:

The Bessel functions u(x) = Jm(kx), from class, solve the eigenproblem

Âu = u″ + u′/x − (m²/x²) u = λu

on [0, R], where u(R) = 0 and u(0) = 0 for m > 0.

(a) Show that this operator Â is of the form of a Sturm–Liouville
operator (from the class notes), and is therefore self-adjoint for
an appropriate inner product (which?). (Hint: rewrite the first two
terms of Â as a single term.)
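Following the hint, a brief sketch of the idea (not the official solution): since

u″ + (1/x) u′ = (1/x) (x u′)′,

the operator can be written as

Âu = (1/x) [ (x u′)′ − (m²/x) u ],

which is of Sturm–Liouville form with p(x) = x, q(x) = m²/x, and weight
w(x) = x, and is therefore self-adjoint under the weighted inner product
(u, v) = ∫₀ᴿ ū(x) v(x) x dx, given the boundary conditions above.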

(b) Show that Â is negative definite, hence λ < 0 and we are entitled
to write λ = −k² for a real k.
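A sketch of the intended argument, assuming the Sturm–Liouville form and
weighted inner product from part (a): integrating by parts,

(u, Âu) = ∫₀ᴿ ū [ (x u′)′ − (m²/x) u ] dx = [ ū x u′ ]₀ᴿ − ∫₀ᴿ ( x |u′|² + (m²/x) |u|² ) dx.

The boundary term vanishes because u(R) = 0 and x u′ → 0 as x → 0 (with
u(0) = 0 for m > 0), so (u, Âu) ≤ 0, with equality only for u ≡ 0 under these
boundary conditions. Hence Â is negative definite and λ = −k² for real k.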

(c) Write down a center-difference discretization of this operator Â
for un = u(nΔx) with n = 1, . . . , N, where Δx = R/(N + 1). Be careful of
where you evaluate the 1/r factors (both to maintain second-
order accuracy, and to avoid dividing by zero).
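One natural choice (a sketch, using the single-term form from part (a), with
the inner r evaluated at the half-grid points r_{n±1/2} = (n ± ½)Δx and the
outer 1/r at r_n = nΔx):

(Âu)_n ≈ [ r_{n+1/2}(u_{n+1} − u_n) − r_{n−1/2}(u_n − u_{n−1}) ] / (r_n Δx²) − (m²/r_n²) u_n,

for n = 1, . . . , N, with u_0 = u_{N+1} = 0 from the boundary conditions, so
no factor is ever evaluated at r = 0.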
(d) In Julia, form the matrix approximation A of Â for m = 1 (with N =
100 and R = 1) using code similar to problem 2 from pset 2.
Compare its smallest-magnitude eigenfunction to J1(k1,1r/R)
(where k1,1 is the first root of J1), evaluated with the help of the
Julia code posted in Lecture 9. They should be the same up to
some overall scale factor, within the discretization accuracy.
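A minimal Julia sketch of one way to set this up (not the official pset code; it
assumes the half-grid discretization written above and uses a dense eigensolver,
which is fine for N = 100):

using LinearAlgebra, SparseArrays

N, R, m = 100, 1.0, 1
dx = R / (N + 1)
r  = dx .* (1:N)                 # interior points r_n = n*dx
rp = dx .* ((1:N) .+ 0.5)        # half-grid points r_{n+1/2}
rm = dx .* ((1:N) .- 0.5)        # half-grid points r_{n-1/2}

# (A*u)_n = [r_{n+1/2}(u_{n+1}-u_n) - r_{n-1/2}(u_n-u_{n-1})]/(r_n*dx^2) - (m^2/r_n^2)*u_n
diag0 = -(rp .+ rm) ./ (r .* dx^2) .- m^2 ./ r.^2
diagU = rp[1:N-1] ./ (r[1:N-1] .* dx^2)   # coupling of u_n to u_{n+1}
diagL = rm[2:N]   ./ (r[2:N]   .* dx^2)   # coupling of u_n to u_{n-1}
A = spdiagm(-1 => diagL, 0 => diag0, 1 => diagU)

vals, vecs = eigen(Matrix(A))             # dense eigensolve
i = argmin(abs.(vals))                    # smallest-magnitude eigenvalue
u = real.(vecs[:, i])                     # compare (up to scale) with J_1(k_{1,1}*r/R)

The eigenvector u should match J1(k1,1 r/R) sampled at r = Δx, 2Δx, . . . , NΔx,
up to an overall scale factor and the discretization error.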
Solution 1:

(a) Writing out the derivation in 18.02 fashion, this is tedious but
straightforward:
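A sketch of the component-wise computation (filling in the standard steps):

∇ · (u × v) = ∂x(u_y v_z − u_z v_y) + ∂y(u_z v_x − u_x v_z) + ∂z(u_x v_y − u_y v_x)
            = v_x(∂y u_z − ∂z u_y) + v_y(∂z u_x − ∂x u_z) + v_z(∂x u_y − ∂y u_x)
              − u_x(∂y v_z − ∂z v_y) − u_y(∂z v_x − ∂x v_z) − u_z(∂x v_y − ∂y v_x)
            = (∇ × u) · v − u · (∇ × v),

where the middle line simply groups the product-rule terms by whether they
contain a v factor or a u factor.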

(A much more compact derivation is possible using Einstein notation
and the Levi-Civita tensor, but probably most of you haven’t seen this
notation.)

(b) Given the above identity, integration by parts is straightforward:
integrate the identity over Ω and apply the divergence theorem to the
∇ · (ū × v) term. So the surface term ∮_∂Ω w · dS is obtained for

w = ū × v.

(c) We must have ∮_∂Ω (ū × v) · dS = 0. Let dS = n dS, where n is the outward
unit normal vector at each point on ∂Ω. Then we must have ū × v ⊥ n, which is
true if, for example, both u and v are parallel to n at the boundary, i.e. if
u × n|_∂Ω = 0. It is not necessary to require u|_∂Ω = 0 on the boundary, and that
answer will not be accepted, since the problem specifically requested that you
not constrain all the components of u on the boundary.

Another way to see this is to write (ū × v) · dS = n · (ū × v) dS = v · (n × ū) dS
= ū · (v × n) dS by elementary triple-product identities, and hence we again see
that it is sufficient to have u × n = 0 on the boundary.

Although I will accept the above answer, it is actually possible to contrive a
slightly weaker condition: u and v can have components ⊥ to n on the
boundary, as long as those surface-parallel components are in the same
direction for both u and v (to obtain zero cross product). That is, suppose
p(x) is some surface-parallel (p ⊥ n) vector field on ∂Ω. Then it is sufficient
for the surface-parallel component of u to be ⊥ to p everywhere on the
boundary, or equivalently u · p|_∂Ω = 0.
[Actually, there are other possible conditions on u if we allow non-local
boundary conditions, where u at one point on the boundary is
related to u at another point. For example, if Ω is a cube and u is
periodic (i.e. u on one face equals u on the opposite face), then the
surface integral over ∂Ω vanishes because each face of the cube cancels
the opposite face, without requiring any component of u to be zero.]

(d) We just “integrate by parts” twice:

where the surface terms cancel if either

u × n|_∂Ω = 0    or    (∇ × u) × n|_∂Ω = 0,

that is, if either u or its curl is normal to the surface. (As in the
previous part, one can actually weaken this slightly, but this is
sufficient for our purposes.)

To check definiteness, carry integration by parts “halfway” through:
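A sketch of the standard half-way step (using the boundary conditions to drop
the surface term):

(u, ∇×∇×u) = (∇×u, ∇×u) = ∫_Ω |∇ × u|² ≥ 0,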

so it is positive semidefinite. It is not positive-definite, since ∇ × u = 0 for
u = ∇φ ≠ 0 for any non-constant scalar field φ, and we can easily
choose such a φ such that ∇φ satisfies the boundary conditions (e.g.,
choose φ so that it is constant in a neighborhood of ∂Ω).
(e) Taking the curl of both sides of ∂B/∂t = −∇ × E, we obtain

∂(∇ × B)/∂t = −∇ × (∇ × E).

That is, substituting ∇ × B = (1/c²) ∂E/∂t from the other equation, we have

(1/c²) ∂²E/∂t² = ÂE,

where Â = −∇×∇×. From the previous parts, for E perpendicular to the surface
(E × n|_∂Ω = 0), this Â is self-adjoint and negative semidefinite, and hence we
have a hyperbolic equation.

As in class, we therefore expect orthogonal eigenfunctions and real
λ ≤ 0, and hence oscillating “normal mode” solutions with
eigenfrequencies ω = c√(−λ).

[Technically, we also obtain λ = 0 solutions, which are non-oscillatory:
from above, these are nullspace solutions E = ∇φ for some φ, which
physically correspond to the time-independent fields of fixed charge
distributions, where −φ is the potential and ∇ · E = ∇²φ gives the charge
density. Everything else, all of the other eigenfunctions, are oscillating
solutions.]

Solution 2:

See also the IJulia notebook posted with the solutions.

(a) Setting the slopes to be zero at R1 and R2 simply gives

α J′m(kR1) + β Y′m(kR1) = 0    and    α J′m(kR2) + β Y′m(kR2) = 0

at the two radii, or E (α, β)ᵀ = 0 with

E = [ J′m(kR1)  Y′m(kR1) ;  J′m(kR2)  Y′m(kR2) ].

Hence, writing fm(k) = det E, we get

fm(k) = J′m(kR1) Y′m(kR2) − J′m(kR2) Y′m(kR1).

Given a k for which fm(k) = 0, we can then solve for the nullspace of E
by choosing a scaling such that α = 1 and solving for β from the first or
second row of E.
(b) The plot is shown in Figure 1. Note that fm(k) for m > 0 has a
divergence as k → 0, so we used the ylim command to rescale the
vertical axis (otherwise it would be hard to read the plot!); see the
solution IJulia notebook.
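A minimal Julia sketch of such a plot (not the official notebook; it assumes the
det E form of fm above, the symmetric derivative identities
J′m(x) = (Jm−1(x) − Jm+1(x))/2 and likewise for Ym, and that SpecialFunctions.jl
and Plots.jl are installed — in older Julia versions besselj/bessely were built
in, as the problem statement says):

using SpecialFunctions, Plots

R1, R2 = 1.0, 2.0

# Bessel-function derivatives via J_m'(x) = (J_{m-1}(x) - J_{m+1}(x))/2, etc.
dbesselj(m, x) = (besselj(m-1, x) - besselj(m+1, x)) / 2
dbessely(m, x) = (bessely(m-1, x) - bessely(m+1, x)) / 2

# f_m(k) = det E for the 2x2 Neumann system at r = R1 and r = R2
f(m, k) = dbesselj(m, k*R1) * dbessely(m, k*R2) - dbesselj(m, k*R2) * dbessely(m, k*R1)

ks = range(0.1, 20, length=2000)   # start slightly above k = 0, where Y_m diverges
plot(ks, [f.(m, ks) for m in 0:2], label=["m = 0" "m = 1" "m = 2"],
     xlabel="k", ylabel="f_m(k)", ylims=(-1, 1))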

(c) We’ll use the newton root-finding function, as in class, to find the
roots, with initial guesses provided by our plot in Figure 1. We find k1
≈ 3.196578, k2 ≈ 6.31234951, and k3 ≈ 9.4444649. See the solutions
notebook.
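Continuing the sketch above (reusing f; Roots.jl is assumed here as a stand-in
for whatever newton routine the class notebook provides), the roots could be
found like this:

using Roots

guesses = [3.0, 6.0, 9.5]                            # rough values read off the plot
roots_m0 = [find_zero(k -> f(0, k), g) for g in guesses]
# roots_m0 should approximate the k1, k2, k3 quoted above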

(d) See the IJulia notebook. Using our k1 and k2 from part (c) and our
α and β from part (a), we find that ∫_{R1}^{R2} ρ1(r) ρ2(r) r dr is
zero up to roundoff errors.
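A sketch of that check, continuing the code above (reusing R1, R2, dbesselj,
dbessely, and roots_m0; assumes QuadGK.jl is installed):

using QuadGK

# radial part with alpha = 1 and beta chosen so that rho'(R1) = 0
rho(m, k, r) = besselj(m, k*r) - dbesselj(m, k*R1) / dbessely(m, k*R1) * bessely(m, k*r)

k1, k2 = roots_m0[1], roots_m0[2]
overlap, err = quadgk(r -> rho(0, k1, r) * rho(0, k2, r) * r, R1, R2)
# overlap should be zero up to roundoff/quadrature error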

(e) Here, we have Â = c(r)∇², with c(r) = 2 for r < R1 and c(r) = 1 for r ≥ R1,
and we obtain:

(i) At the origin, we can’t blow up, and therefore we only have J for r
< R1, but we have both J and Y outside this. Hence

u(r, θ) = { A Jm(k1 r) for r < R1;  B Jm(k2 r) + C Ym(k2 r) for R1 ≤ r ≤ R2 } × { sin(mθ) or cos(mθ) }.

Note that the angular dependence must be the same for all r in order
to have continuity at r = R1. However, we are not guaranteed that k1 =
k2! In particular, we want Âu = λu for some λ, and plugging in the
Bessel functions and the values of c we find λ = −2k1² = −k2². Hence,
we let k2 = k and k1 = k/√2.
(ii) To get a finite Âu, we must have u and ∂u/∂r continuous at r =
R1. Combined with u = 0 at r = R2, this gives the equations

A Jm(kR1/√2) − B Jm(kR1) − C Ym(kR1) = 0,
A (k/√2) J′m(kR1/√2) − B k J′m(kR1) − C k Y′m(kR1) = 0,
B Jm(kR2) + C Ym(kR2) = 0,

i.e. Em(k) (A, B, C)ᵀ = 0, where the first two rows are the continuity
conditions and the last row is the Dirichlet condition, and we have
defined a 3 × 3 matrix Em(k).

(iii) We simply set fm(k) = det Em(k) = 0 and solve for the solutions k, and
hence the eigenvalues.
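A quick sketch of evaluating this determinant numerically (reusing R1, R2 and
the dbesselj/dbessely helpers from the part (b) sketch, and assuming the 3 × 3
system written in (ii) with unknowns (A, B, C)):

using LinearAlgebra

function f_e(m, k)
    k1 = k / sqrt(2)                       # wavenumber for r < R1, where c = 2
    E = [ besselj(m, k1*R1)      -besselj(m, k*R1)     -bessely(m, k*R1)
          k1*dbesselj(m, k1*R1)  -k*dbesselj(m, k*R1)  -k*dbessely(m, k*R1)
          0.0                     besselj(m, k*R2)      bessely(m, k*R2) ]
    return det(E)                          # f_m(k); its roots give λ = -k²
end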
