Ref Jaj 1999
1 Introduction
3 Ultrasound imaging
3.1 Fourier relation
3.2 Focusing
3.3 Fields from array transducers
3.4 Imaging with arrays
3.5 Simulation of ultrasound imaging
3.6 Synthetic phantoms
3.7 Anatomic phantoms
CHAPTER ONE
Introduction
These notes have been prepared for the international summer school on advanced ultrasound imaging sponsored
by The Danish Research Academy. The notes should be read in conjunction with the notes prepared by Anderson
and Trahey1 . The intended audience is Ph.D. students working in medical ultrasound. A knowledge of general
linear acoustics and signal processing is assumed.
The notes give a linear description of general ultrasound imaging through the use of spatial impulse responses.
It is shown in Chapter 2 how both the emitted and scattered fields for the pulsed and continuous wave case can
be calculated using this approach. Chapter 3 gives a brief overview of modern ultrasound imaging and how it is
simulated using spatial impulse responses. Finally, Chapter 4 gives a brief description of both spectral and color
flow imaging systems and their modeling and simulation.
1 M.E. Anderson and G.E. Trahey: A seminar on k-space applied to medical ultrasound, Dept. of Biomedical Engineering, Duke University,
CHAPTER TWO
This chapter gives a linear description of acoustic fields using spatial impulse responses. It is shown how both the
pulsed emitted and scattered fields can be accurately derived using spatial impulse responses, and how attenuation
and different boundary conditions can be incorporated. The chapter goes into some detail of deriving the different
results and explaining their consequences. Examples of both simulated and measured fields are given.
The chapter is based on the papers [1], [2] and [3] and on the book [4].
It is a well-known fact in electrical engineering that a linear electrical system is fully characterized by its impulse
response as shown in Fig. 2.1. Applying a delta function to the input of the circuit and measuring its output
characterizes the system. The output y(t) to any kind of input signal x(t) is then given by
y(t) = h(t) ∗ x(t) = ∫_{−∞}^{+∞} h(θ) x(t − θ) dθ,   (2.1)
where h(t) is the impulse response of the linear system and ∗ denotes time convolution. The transfer function of
the system is given by the Fourier transform of the impulse response and characterizes the system's amplification
of a time-harmonic input signal.
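As a quick numerical illustration of (2.1), the convolution integral can be approximated by a discrete convolution scaled by the sampling interval. The signals and the sampling below are illustrative choices, not taken from the notes:

```python
import numpy as np

# Discrete sketch of Eq. (2.1): y(t) = h(t) * x(t).
# Sampling interval and signals are hypothetical examples.
dt = 1e-3                        # sampling interval [s]
t = np.arange(0.0, 1.0, dt)

h = np.exp(-t / 0.1)             # example impulse response: decaying exponential
x = np.sin(2 * np.pi * 5 * t)    # example input: 5 Hz sinusoid

# Scaling the discrete convolution by dt approximates the integral.
y = np.convolve(h, x)[: len(t)] * dt
```

The Fourier transform of h gives the transfer function mentioned above; for this first-order example the steady-state output amplitude is |H(5 Hz)| ≈ 0.03.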
The same approach can be taken to characterize a linear acoustic system. The basic set-up is shown in Fig. 2.2. The
acoustic radiator (transducer) on the left is mounted in an infinite, rigid baffle and its position is denoted by ~r2 . It
radiates into a homogeneous medium with a constant speed of sound c and density ρ0 . The
point denoted by ~r1 is where the acoustic pressure from the transducer is measured by a small point hydrophone.
A voltage excitation of the transducer with a delta function will give rise to a pressure field that is measured by
the hydrophone. The measured response is the acoustic impulse response for this particular system with the given
set-up. Moving the transducer or the hydrophone to a new position will give a different response. Moving the
hydrophone closer to the transducer surface will often increase the signal1 , and moving it away from the center
1 This is not always the case. It depends on the focusing of the transducer. Moving closer to the transducer but away from its focus will decrease the signal.
Figure 2.2: A linear acoustic system.
Figure 2.3: Illustration of Huygens’ principle for a fixed time instance. A spherical wave with a radius of |~r| = ct
is radiated from each point on the aperture.
axis of the transducer will often diminish it. Thus, the impulse response depends on the relative position of both
the transmitter and receiver (~r2 −~r1 ) and hence it is called a spatial impulse response.
A perception of the sound field for a fixed time instance can be obtained by employing Huygens’ principle in which
every point on the radiating surface is the origin of an outgoing spherical wave. This is illustrated in Fig. 2.3. Each
of the outgoing spherical waves are given by
ps(~r1 ,t) = δ(t − |~r2 − ~r1|/c) = δ(t − |~r|/c)   (2.2)
where ~r1 indicates the point in space, ~r2 is the point on the transducer surface, and t is the time for the snapshot
of the spatial distribution of the pressure. The spatial impulse response is then found by observing the pressure
waves at a fixed position in space over time by having all the spherical waves pass the point of observation and
summing them. Being on the acoustical axis of the transducer gives a short response whereas an off-axis point
yields a longer impulse response as shown in Fig. 2.3.
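Huygens' construction translates directly into a numerical sketch: divide the aperture into small elements, treat each as a point source, and bin the arrivals at the field point over time. The 3 × 5 mm rectangle matches the first example later in the chapter, but the field point, grid density, and bin width are illustrative choices:

```python
import numpy as np

# Numerical sketch of Huygens' principle: the aperture is divided into
# small elements, each radiating a spherical wave, and the arrivals at a
# field point are binned in time with weight dS / (2*pi*R).
c = 1540.0              # speed of sound [m/s]
dt = 5e-8               # width of the time bins [s] (a discretization choice)
a, b = 3e-3, 5e-3       # rectangular aperture, 3 x 5 mm
n = 1000                # grid points per side (a discretization choice)

xs = (np.arange(n) + 0.5) * a / n - a / 2
ys = (np.arange(n) + 0.5) * b / n - b / 2
X, Y = np.meshgrid(xs, ys)
dS = (a / n) * (b / n)  # area of one surface element

def spatial_impulse_response(px, py, pz):
    """Histogram of spherical-wave arrival times, weighted as dS/(2*pi*R)/dt."""
    R = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    arrival = R / c
    t0 = arrival.min()
    nbins = int((arrival.max() - t0) / dt) + 2
    h = np.zeros(nbins)
    idx = ((arrival - t0) / dt).astype(int)
    np.add.at(h, idx.ravel(), (dS / (2 * np.pi * R)).ravel() / dt)
    return t0 + np.arange(nbins) * dt, h

t, h = spatial_impulse_response(0.0, 0.0, 5e-3)  # on-axis point, 5 mm away
```

Before the first aperture edge is reached, the estimate settles at h ≈ c, in agreement with the on-axis behaviour described later for Fig. 2.13.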
In this section the exact expression for the spatial impulse response will be derived more formally. The basic setup
is shown in Fig. 2.4. The triangularly shaped aperture is placed in an infinite, rigid baffle on which the velocity
normal to the plane is zero, except at the aperture. The field point is denoted by ~r1 and the aperture by ~r2 . The
pressure field generated by the aperture is then found from the Rayleigh integral [5].
It is convenient to introduce the velocity potential ψ that satisfies the equations [6]
~v(~r,t) = −∇ψ(~r,t)
p(~r,t) = ρ0 ∂ψ(~r,t)/∂t.   (2.5)
Then only a scalar quantity needs to be calculated and all field quantities can be derived from it. The surface integral
is then equal to the velocity potential:
ψ(~r1 ,t) = ∫_S vn(~r2 , t − |~r1 − ~r2|/c) / (2π |~r1 − ~r2|) dS   (2.6)
Assume now that the surface velocity is uniform over the aperture, making it independent of ~r2 . Then:

ψ(~r1 ,t) = vn(t) ∗ ∫_S δ(t − |~r1 − ~r2|/c) / (2π |~r1 − ~r2|) dS,   (2.8)

The integral in this equation,

h(~r1 ,t) = ∫_S δ(t − |~r1 − ~r2|/c) / (2π |~r1 − ~r2|) dS,   (2.9)

is called the spatial impulse response and characterizes the three-dimensional extent of the field for a particular
transducer geometry. Note that this is a function of the relative position between the aperture and the field point.
p(~r1 ,t) = ρ0 ∂vn(t)/∂t ∗ h(~r1 ,t)   (2.10)
which equals the emitted pulsed pressure for any kind of surface vibration vn (t). The continuous wave field can
be found from the Fourier transform of (2.10). The received response for a collection of scatterers can also be
found from the spatial impulse response [7], [1]. This is derived in Section 2.6. Thus, the calculation of the spatial
impulse response makes it possible to find all ultrasound fields of interest.
The calculation of the spatial impulse response assumes linearity and any complex-shaped transducer can therefore
be divided into smaller apertures and the response can be found by adding the responses from the sub-apertures.
The integral is, as mentioned before, a statement of Huygens' principle of summing contributions from all areas
of the aperture.
An alternative interpretation is found by using the acoustic reciprocity theorem [8]. This states that: "If in an
unchanging environment the locations of a small source and a small receiver are interchanged, the received signal
will remain the same." Thus, the source and receiver can be interchanged. Emitting a spherical wave from the field
point and finding the wave’s intersection with the aperture also yields the spatial impulse response. The situation
is depicted in Fig. 2.5, where an outgoing spherical wave is emitted from the origin of the coordinate system. The
dashed curves indicate the circles from the projected spherical wave.
The calculation of the impulse response is then facilitated by projecting the field point onto the plane of the aperture.
The task is thereby reduced to a two-dimensional problem and the field point is given as a (x, y) coordinate set and
a height z above the plane. The three-dimensional spherical waves are then reduced to circles in the x − y plane
with the origin at the position of the projected field point as shown in Fig. 2.6.
The spatial impulse response is, thus, determined by the relative length of the part of the arc that intersects the
aperture. Thereby it is the crossing of the projected spherical waves with the edges of the aperture that determines
the spatial impulse responses. This fact is used for deriving equations for the spatial impulse responses in the next
section.
Figure 2.5: Emission of a spherical wave from the field point and its intersection with the aperture.
Figure 2.6: Intersection of spherical waves from the field point by the aperture, when the field point is projected
onto the plane of the aperture.
The spatial impulse response is found from the Rayleigh integral derived earlier
h(~r1 ,t) = ∫_S δ(t − |~r1 − ~r2|/c) / (2π |~r1 − ~r2|) dS   (2.11)
The task is to project the field point onto the plane coinciding with the aperture, and then find the intersection of
the projected spherical wave (the circle) with the active aperture as shown in Fig. 2.6.
From the derivation in the last section it can be seen that the spatial impulse response in general can be expressed
as
h(~r1 ,t) = (c/2π) Σ_{i=1}^{N(t)} [Θ2^{(i)}(t) − Θ1^{(i)}(t)]   (2.16)

where N(t) is the number of arc segments that cross the boundary of the aperture for a given time and
Θ2^{(i)}(t), Θ1^{(i)}(t) are the associated angles of the arc. This was also noted by Stepanishen [9]. The calculation can,
thus, be formulated as finding the angles of the aperture edge’s intersections with the projected spherical wave,
sorting the angles, and then summing the arc angles that belong to the aperture. Finding the intersections can be
done from the description of the edges of the aperture. A triangle can be described by three lines, a rectangle by
four, and the intersections are then found from the intersections of the circle with the lines. This makes it possible
to devise a general procedure for calculating spatial impulse responses for any flat, bounded aperture, since the
task is just to find the intersections of the boundary with the circle.
The spatial impulse response is calculated from the time when the aperture is first intersected by a spherical wave to
the time of the intersection furthest away. The intersections are found for every time instance and the corresponding
angles are sorted. The angles lie in the interval from 0 to 2π. It is then determined whether the arc between two angles
belongs to the aperture, and if so the angle difference is added to the sum.
This yields the spatial impulse response according to Eq. (2.16). The approach can be described by the flow chart
shown in Fig. 2.8.
The only part of the algorithm specific to the aperture is the determination of the intersections and whether the
point is inside the aperture. Section 2.3.2 shows how this is done for polygons, Section 2.3.3 for circles, and
Section 2.3.6 for higher-order parametric boundaries.
Not all the intersections need to be found for all times. New intersections are only introduced when a new edge
or corner of the aperture is met. Between times when two such corners or edges are encountered the number of
intersections remains constant, and only intersections that belong to points inside the aperture need to be found.
Note that an aperture edge gives rise to a discontinuity in the spatial impulse response. Also testing whether the
point is inside the aperture is often superfluous, since this only needs to be found once after each discontinuity in the
response. These two observations can significantly reduce the number of calculations, since only the intersections
affecting the response are found.
The procedure first finds the number of discontinuities. Then only the intersections influencing the response are calcu-
lated between two discontinuity points. This can potentially make the approach faster than the traditional approach,
where the response from a number of different rectangles or triangles must be calculated.
The boundary of any polygon can be defined by a set of bounding lines as shown in Fig. 2.9.
The active aperture is then defined as lying on one side of the line as indicated by the arrows, and a point on the
aperture must be placed correctly in relation to all lines. The test whether a point is on the aperture is thus to go
through all lines and test whether the point lies in the active half space for the line, and stop if it is not. The point
is inside the aperture, if it passes the test for all the lines.
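The half-space test described above can be sketched as follows for a convex polygon; the triangle and the test points are hypothetical examples:

```python
import numpy as np

# Sketch of the half-space test: each edge of a convex polygon defines a
# line, and the aperture lies on one side of every line. The polygon and
# the test points below are illustrative, not from the notes.

def inside_convex_polygon(point, vertices):
    """Return True if `point` lies inside the convex polygon given by
    `vertices` (listed counter-clockwise). The loop stops at the first
    line for which the point is in the wrong half space."""
    p = np.asarray(point, dtype=float)
    v = np.asarray(vertices, dtype=float)
    for i in range(len(v)):
        a, b = v[i], v[(i + 1) % len(v)]
        edge = b - a
        # z-component of the cross product: negative means the point is
        # to the right of the directed edge, i.e. in the wrong half space.
        if edge[0] * (p[1] - a[1]) - edge[1] * (p[0] - a[0]) < 0:
            return False
    return True

tri = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
```

Points exactly on an edge count as inside here; whether the boundary belongs to the active aperture is a convention that does not affect the integral.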
The intersections are found from the individual intersections between the projected circle and the lines. They are
determined from the equations for the projected spherical wave and the line:
r² = (x − x0)² + (y − y0)²
y = αx + y1   (2.17)
r² = (ct)² − z_p²

Here (x0, y0) is the center of the circle, α the slope of the line, and y1 its intercept with the y-axis. The intersections
are given by the solutions to:

0 = (1 + α²)x² + (2αy1 − 2x0 − 2y0α)x + (y0² + y1² + x0² − 2y0y1 − r²)
  = Ax² + Bx + C   (2.18)
D = B² − 4AC
The angles are

Θ = arctan((y − y0)/(x − x0))   (2.19)
Intersections between the line and the circle are only found if D > 0; a discriminant D < 0 indicates that the circle
does not intersect the line. If the line has infinite slope, the solution is found from the equation:
x = x1
0 = y² − 2y0y + y0² + (x1 − x0)² − r²
  = A∞y² + B∞y + C∞   (2.20)
in which A∞ , B∞ ,C∞ replace A, B,C, respectively, and the solutions are found for y rather than x. Here x1 is the
line’s intersection with the x-axis.
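The quadratic (2.18) and the angle formula (2.19) can be sketched directly; a four-quadrant arctangent (arctan2) places each angle in the correct quadrant, as required later in the chapter. The numeric values in the test cases are illustrative:

```python
import numpy as np

# Sketch of Eqs. (2.17)-(2.19): intersections of the projected wave circle
# with a bounding line y = alpha*x + y1, and the corresponding angles.

def circle_line_intersections(x0, y0, r, alpha, y1):
    """Solve the quadratic (2.18); return the intersection angles (2.19)
    measured from the circle centre (x0, y0). An empty list is returned
    when the discriminant D is negative (no crossing)."""
    A = 1 + alpha**2
    B = 2 * (alpha * y1 - x0 - alpha * y0)
    C = x0**2 + y0**2 + y1**2 - 2 * y0 * y1 - r**2
    D = B**2 - 4 * A * C
    if D < 0:
        return []
    angles = []
    for sign in (+1, -1):
        x = (-B + sign * np.sqrt(D)) / (2 * A)
        y = alpha * x + y1
        # Four-quadrant arctangent, mapped into [0, 2*pi).
        angles.append(np.arctan2(y - y0, x - x0) % (2 * np.pi))
    return angles
```

The infinite-slope case of Eq. (2.20) would be handled analogously by solving for y instead of x.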
The times for discontinuities in the spatial impulse response are given by the intersections of the lines that define the
aperture’s edges and by the minimum distance from the projected field point to the lines. The minimum distance is
found from a line passing through the field point that is orthogonal to the bounding line. The intersection between
the orthogonal line and the bounding line is:
x = (αy_p + x_p − αy1) / (α² + 1)   (2.21)
y = αx + y1
where (x_p, y_p, z_p) is the position of the field point. For a line with infinite slope the solution is x = x1 and y = y_p. The
corresponding time is:

t_i = √((x − x_p)² + (y − y_p)² + z_p²) / c   (2.22)
The intersections of the lines are also found, and the corresponding times are calculated by (2.22) and sorted in
ascending order. They indicate the start and end time for the response and the time points for discontinuities in the
response.
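Eqs. (2.21) and (2.22) can be sketched in a few lines; the field point and line in the examples are hypothetical:

```python
import numpy as np

# Sketch of Eqs. (2.21)-(2.22): the foot of the perpendicular from the
# projected field point (xp, yp) onto a bounding line y = alpha*x + y1,
# and the corresponding discontinuity time.
c = 1540.0  # speed of sound [m/s]

def discontinuity_time(xp, yp, zp, alpha, y1):
    """Minimum-distance time for a finite-slope line; for an infinite
    slope line the text gives x = x1, y = yp instead."""
    x = (alpha * yp + xp - alpha * y1) / (alpha**2 + 1)
    y = alpha * x + y1
    return np.sqrt((x - xp) ** 2 + (y - yp) ** 2 + zp**2) / c
```

Collecting these times for all bounding lines, together with the times of the line-line intersections, and sorting them gives the discontinuity points of the response.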
The other basic transducer shape, apart from the rectangular ones, is the flat, round surface used for single-element
piston transducers and annular arrays. For these the intersections are determined by two circles as depicted
in Fig. 2.10.
Here O1 is the center of the aperture with radius ra, and the projected spherical wave is centered at O2 with radius
rb(t) = √((ct)² − z_p²). The length ha(t) is given by [10, page 66]

ha(t) = (2/a) √(p(t)(p(t) − a)(p(t) − ra)(p(t) − rb(t)))   (2.23)
a = ||O1 − O2||
p(t) = (a + ra + rb(t))/2

y = ha(t)   (2.24)
l = ±√(rb²(t) − ha²(t))
The sign for l depends on the position of the intersections. A negative sign is used if the intersections are for
negative values of x, and positive sign is used for positive x positions.
When the field point is outside the active aperture the spatial impulse response is
h(~r1 ,t) = c |Θ2 − Θ1| / (2π) = (c/π) arctan(ha(t)/l)   (2.25)

Θ2 = arctan(ha(t)/l) = −Θ1
It must be noted that a proper four-quadrant arctan should be used to give the correct response. An alternative
formula is [11, page 19]
h(~r1 ,t) = (c/2π) arcsin(2√(p(t)(p(t) − a)(p(t) − ra)(p(t) − rb(t))) / rb²(t))
         = (c/2π) arcsin(a ha(t) / rb²(t))   (2.26)
The start time ts for the response is found from

ra + rb(t) = ||O1 − O2||
ts = √(rb²(t) + z_p²)/c = √((||O1 − O2|| − ra)² + z_p²)/c   (2.27)
and the response ends at the time te when

rb(t) = ra + ||O1 − O2||
te = √(rb²(t) + z_p²)/c = √((||O1 − O2|| + ra)² + z_p²)/c   (2.28)
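As a numerical check on these expressions, the response of a flat circular aperture for a field point projected outside the aperture can be evaluated directly. The law-of-cosines form below is equivalent to the four-quadrant arctan of Eq. (2.25); the aperture radius and field point are illustrative choices:

```python
import numpy as np

# Sketch of the circular-aperture response, Eqs. (2.25), (2.27), (2.28).
# The arc inside the aperture subtends 2*Theta at the centre of the
# projected wave circle, with cos(Theta) from the law of cosines.
c = 1540.0    # speed of sound [m/s]
ra = 5e-3     # aperture radius (illustrative)
zp = 10e-3    # field point height over the aperture plane
d = 8e-3      # a = ||O1 - O2||: projected point outside the aperture

# Start and end of the response, Eqs. (2.27) and (2.28).
ts = np.sqrt((d - ra) ** 2 + zp**2) / c
te = np.sqrt((d + ra) ** 2 + zp**2) / c

def h_circle(t):
    """Spatial impulse response h(r1, t) = c * Theta / pi."""
    rb2 = (c * t) ** 2 - zp**2
    if rb2 <= 0:
        return 0.0
    rb = np.sqrt(rb2)
    # Law of cosines in the triangle (O1, O2, intersection point); the
    # clip handles times before ts and after te, where no arc crosses.
    cos_theta = np.clip((d**2 + rb2 - ra**2) / (2 * d * rb), -1.0, 1.0)
    return c * np.arccos(cos_theta) / np.pi
```

With the clip, times before ts and after te automatically give zero, consistent with Eqs. (2.27) and (2.28).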
Which part of the arc adds to or subtracts from the response is determined by the definition of the active aperture.
One ring in an annular array can be defined as consisting of an active aperture outside a circle combined
with an active aperture inside a circle for defining the inner and outer rim of the aperture. A circular aperture can
also be combined with a line for defining the active area of a split aperture used for continuous wave probing.
For reference the expression for a concave transducer, which is a type often used in medical ultrasonics, is given.
A derivation of the solution can be found in [12] and [13].
where:
η(t) = R { (1 − d/R)/sin Θ + (1/tan Θ) · (R² + r² − c²t²)/(2rR) }

σ(t) = R √(1 − ((R² + r² − c²t²)/(2rR))²)   (2.33)

r = |~r1|.
Analytical solutions for the spatial impulse response have been derived for a number of different geometries by various
authors. The response from a flat rectangle can be found in [14, 15], and for a flat triangle in [16].
For ellipses or other higher order parametric surfaces it is in general not easy to find analytic solutions for the
spatial impulse response. The boundary method described above can, however, be used for providing a simple
solution to the problem, since the intersections between the projected spherical wave and the edge of the aperture
uniquely determine the spatial impulse response. It is therefore possible to use root finding for a set of (non-linear)
equations for finding these intersections. The problem is to find when both the spherical wave and the aperture
have crossing contours in the plane of the aperture, i.e., when the projected circle crosses the curve S(x, y) = 0
that defines the boundary of the aperture. The problem of numerically finding these roots is in
general not easy, if a good initial guess on the position of the intersections is not found [17, pages 286–289]. Good
initial values are however found here, since the intersections must lie on the projected circle and the intersections
only move slightly from time point to time point. An efficient Newton-Raphson algorithm can therefore be devised
for finding the intersections, and the procedure detailed here can be used to find the spatial impulse response for
any flat transducer geometry with an arbitrary apodization and both hard and soft baffle mounting.
Figure 2.11: Definition of variables for spatial impulse response of concave transducer.
Often ultrasound transducers do not vibrate as a piston over the aperture. This can be due to the clamping of the
active surface at its edges, or intentionally to reduce side lobes in the field. Applying, for example, a Gaussian
apodization will significantly lower side lobes and generate a field with a more uniform point spread function as a
function of depth. Apodization is introduced in (2.12) by writing [18]
h(~r1 ,t) = ∫_{Θ1}^{Θ2} ∫_{d1}^{d2} a_p(r, Θ) δ(t − R/c)/(2πR) r dr dΘ   (2.35)
in which a p (r, Θ) is the apodization over the aperture. Using the same substitutions as before yields
h(~r1 ,t) = (c/2π) ∫_{Θ1}^{Θ2} ∫_{t1}^{t2} a_p1(t′, Θ) δ(t − t′) dt′ dΘ   (2.36)
where a_p1(t′, Θ) = a_p(√((ct′)² − z_p²), Θ). The inner integral is a convolution of the apodization function with a
δ-function and readily yields
h(~r1 ,t) = (c/2π) ∫_{Θ1}^{Θ2} a_p1(t, Θ) dΘ   (2.37)
as noted by several authors [18, 19, 20]. The response for a given time instance can, thus, be found by integrating
the apodization function along the fixed arc with a radius of r = √((ct)² − z_p²) over the angles of the active aperture.
Any apodization function can therefore be incorporated into the calculation by employing numerical integration.
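A minimal numerical version of Eq. (2.37) for a circular aperture: the apodization is sampled along the arc of radius rb(t), and only angles falling on the active aperture contribute. The Gaussian width reproduces the factor 1/exp(4) at the edge quoted for the example of Fig. 2.15; the off-centre field point and the angular sampling are illustrative choices:

```python
import numpy as np

# Numerical sketch of Eq. (2.37): integrate the apodization a_p along the
# active part of the arc of radius rb(t) = sqrt((c*t)^2 - zp^2).
c = 1540.0    # speed of sound [m/s]
ra = 5e-3     # circular aperture radius, centred at the origin
zp = 10e-3    # field point height over the aperture plane
d = 2e-3      # projected field point, 2 mm off centre (illustrative)

def apod(r):
    """Gaussian apodization over the aperture: 1/exp(4) at r = ra."""
    return np.exp(-4.0 * (r / ra) ** 2)

def h_apodized(t, ntheta=2000):
    rb2 = (c * t) ** 2 - zp**2
    if rb2 <= 0:
        return 0.0
    rb = np.sqrt(rb2)
    phi = np.linspace(0.0, 2.0 * np.pi, ntheta, endpoint=False)
    # Radial position (from the aperture centre) of each point on the arc.
    r = np.sqrt(d**2 + rb2 + 2.0 * d * rb * np.cos(phi))
    weights = np.where(r <= ra, apod(r), 0.0)  # only active angles count
    return c / (2.0 * np.pi) * weights.sum() * (2.0 * np.pi / ntheta)
```

Replacing apod by a constant 1 recovers the unapodized arc-angle result, which is a convenient consistency check.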
Often the assumption of an infinite rigid baffle for the transducer mounting is not appropriate and another form
of the Rayleigh integral must be used. For a soft baffle, in which the pressure on the baffle surface is zero, the
Rayleigh-Sommerfeld integral is used. This is [21, pages 46–50]
hs(~r1 ,t) = ∫_S cos ϕ · δ(t − |~r1 − ~r2|/c) / (2π |~r1 − ~r2|) dS   (2.38)
assuming that |~r1 − ~r2| ≫ λ. Here ϕ is the angle between the line through the field point orthogonal to the
aperture plane and the radius of the spherical wave, as shown in Fig. 2.12.
The angle ϕ is constant for a given radius of the projected spherical wave and thus for a given time. It is given by
cos ϕ = z_p/R = z_p/(ct)   (2.39)
Using the substitutions from Section 2.2, the Rayleigh-Sommerfeld integral can then be rewritten as
hs(~r1 ,t) = (z_p/2π) ∫_{t1}^{t2} c(Θ2 − Θ1) δ(t − t′)/(ct′) dt′   (2.40)
Evaluating the integral then gives
hs(~r1 ,t) = (z_p/(ct)) · c(Θ2 − Θ1)/(2π) = (z_p/(ct)) h(~r1 ,t).   (2.42)
The spatial impulse response can, thus, be found from the spatial impulse response for the rigid baffle case by
multiplying with z p /(ct).
The first example shows the spatial impulse responses from a 3 × 5 mm rectangle for different spatial positions 5
mm from the front face of the transducer. The responses are found from the center of the rectangle (y = 0) and
out in steps of 2 mm in the x direction to 6 mm away from the center of the rectangle. A schematic diagram of
the situation is shown in Fig. 2.13 for the on-axis response. The impulse response is zero before the first spherical
wave reaches the aperture. Then the response stays constant at a value of c. The first edge of the aperture is met,
and the response drops off. The decrease with time steepens when the next edge of the aperture is reached, and
the response becomes zero when the projected spherical waves all are outside the area of the aperture.
A plot of the results for the different lateral field positions is shown in Fig. 2.14. It can be seen how the spatial
impulse response changes as a function of relative position to the aperture.
The second example shows the response from a circular, flat transducer. Two different cases are shown in Fig.
2.15. The top graph shows the traditional spatial impulse response when no apodization is used, so that the
aperture vibrates as a piston. The field is calculated 10 mm from the front face of the transducer starting at the
center axis of the aperture. Twenty-one responses for lateral distances of 0 to 20 mm off axis are then shown. The
same calculation is repeated in the bottom graph, when a Gaussian apodization has been imposed on the aperture.
Figure 2.14: Spatial impulse response from a rectangular aperture of 4 × 5 mm for different lateral positions
The vibration amplitude is a factor of 1/exp(4) lower at the edges of the aperture than at the center. It is seen how
the apodization reduces some of the sharp discontinuities in the spatial impulse response, which can reduce the
sidelobes of the field.
In medical ultrasound, a pulsed field is emitted into the body and is scattered and reflected by density and prop-
agation velocity perturbations. The scattered field then propagates back through the tissue and is received by
the transducer. The field is converted to a voltage signal and used for the display of the ultrasound image. A
full description of a typical imaging system, using the concept of spatial impulse response, is the purpose of this
section.
The received signal can be found by solving an appropriate wave equation. This has been done in a number of
papers (e.g. [22], [23]). Gore and Leeman [22] considered a wave equation where the scattering term was a
function of the adiabatic compressibility and the density. The transducer was modeled by an axial and lateral pulse
that were separable. Fatemi and Kak [23] used a wave equation where scattering originated only from velocity
fluctuations, and the transducer was restricted to be circularly symmetric and unfocused (flat).
The scattering term for the wave equation used here is a function of density and propagation velocity perturbations,
and the wave equation is equivalent to the one used by Gore and Leeman [22]. No restrictions are enforced on
the transducer geometry or its excitation, and analytic expressions for a number of geometries can be incorporated
into the model.
The model includes attenuation due to propagation and scattering, but not the dispersive attenuation observed for
propagation in tissue. This can, however, be incorporated into the model as indicated in Section 2.6.6.
The derivation is organized as follows. The following section derives the wave equation and describes the different assumptions and approximations made.
Figure 2.15: Spatial impulse response from a circular aperture. Graphs are shown without apodization of the
aperture (top) and with a Gaussian apodization function (bottom). The radius of the aperture is 5 mm and the field
is calculated 10 mm from the transducer surface.
This section derives the wave equation. The section has been included in order to explain in detail the different
linearity assumptions and approximations made to obtain a solvable wave equation. The derivation closely follows
that developed by Chernov [24].
The first approximation states that the instantaneous acoustic pressure and density can be written

Pins = P + p1   (2.43)
ρins = ρ + ρ1   (2.44)
in which P is the mean pressure of the medium and ρ is the density of the undisturbed medium. Here p1 is the
pressure variation caused by the ultrasound wave and is considered small compared to P and ρ1 is the density
change caused by the wave. Both p1 and ρ1 are small quantities of first order.
Our second assumption is that no heat conduction or conversion of ultrasound to thermal energy takes place. Thus,
the entropy is constant for the process, so that the acoustic pressure and density satisfy the adiabatic equation [24]:
dPins/dt = c² dρins/dt   (2.45)
The equation contains total derivatives, as the relation is satisfied for a given particle of the tissue rather than at
a given point in space. This is the Lagrange description of the motion [6]. For our purpose the Euler description
is more appropriate. Here the coordinate system is fixed in space and the equation describes the properties of
whatever particle of fluid there is at a given point at a given time. Converting to an Eulerian description results in
the following constitutive equation [24], [6]:
(1/c²) ∂p1/∂t = ∂ρ1/∂t + ~u · ∇ρ   (2.46)
using that P and ρ do not depend on time and that ρ1 is small compared to ρ. Here ~u is the particle velocity, ∇ is
the gradient operator, and · symbolizes the scalar product.
The pressure, density, and particle velocity must also satisfy the hydrodynamic equations [24]:
ρins d~u/dt = −∇Pins   (2.47)
∂ρins/∂t = −∇ · (ρins ~u)   (2.48)
which are the dynamic equation and the equation of continuity. Using (2.43) and (2.44) and discarding higher
order terms we can write
ρ ∂~u/∂t = −∇p1   (2.49)
∂ρ1/∂t = −∇ · (ρ~u)   (2.50)
∂²ρ1/∂t² = −∇ · (ρ ∂~u/∂t) = −∇ · (−∇p1) = ∇²p1   (2.51)
∇²p1 − (1/c²) ∂²p1/∂t² = (1/ρ) ∇ρ · ∇p1   (2.53)
Assuming that the propagation velocity and the density only vary slightly from their mean values yields
ρ(~r) = ρ0 + ∆ρ(~r)
c(~r) = c0 + ∆c(~r)   (2.54)

∇²p1 − (1/(c0 + ∆c)²) ∂²p1/∂t² = (1/(ρ0 + ∆ρ)) ∇(ρ0 + ∆ρ) · ∇p1   (2.55)
Ignoring small quantities of second order and using the approximation (∆ ≪ 1):

1/(1 + ∆) ≈ 1 − ∆   (2.56)
gives:

∇²p1 − (1/c0² − 2∆c/c0³) ∂²p1/∂t² = ((1/ρ0) ∇(∆ρ) − (∆ρ/ρ0²) ∇(∆ρ)) · ∇p1   (2.57)
Neglecting the second order term (∆ρ/ρ0²) ∇(∆ρ) · ∇p1 yields the wave equation:
∇²p1 − (1/c0²) ∂²p1/∂t² = −(2∆c/c0³) ∂²p1/∂t² + (1/ρ0) ∇(∆ρ) · ∇p1   (2.58)
The two terms on the right side of the equation are the scattering terms which vanish for a homogeneous medium.
The wave equation was derived by Chernov [24]. It has also been considered by Gore & Leeman [22] and Morse &
Ingard [6] in a slightly different form, where the scattering terms were a function of the adiabatic compressibility
κ and the density.
Having derived a suitable wave equation, we now calculate the scattered field from a small inhomogeneity em-
bedded in a homogeneous surrounding. The scene is depicted in Fig. 2.16. The inhomogeneity is identified by~r1
and enclosed in the volume V 0 . The scattered field is calculated at the point indicated by ~r2 by integrating all the
spherical waves emanating from the scattering region V 0 using the time dependent Green’s function for unbounded
space. Thus, the scattered field is [6], [22]:
ps(~r2 ,t) = ∫_{V′} ∫_T [ (1/ρ0) ∇(∆ρ(~r1)) · ∇p1(~r1 ,t1)
        − (2∆c(~r1)/c0³) ∂²p1(~r1 ,t1)/∂t² ] G(~r1 ,t1 | ~r2 ,t) dt1 d³~r1   (2.59)
where G is the free space Green’s function:
G(~r1 ,t1 | ~r2 ,t) = δ(t − t1 − |~r2 − ~r1|/c0) / (4π |~r2 − ~r1|)   (2.60)
d³~r1 denotes integration with respect to ~r1 over the volume V′, and T denotes integration over time.
We denote the scattering operator by

Fop = (1/ρ0) ∇(∆ρ(~r1)) · ∇ − (2∆c(~r1)/c0³) ∂²/∂t²   (2.61)
where pi is the incident pressure field. As can be seen, the integral cannot be solved directly. To solve it we apply
the Born-Neumann expansion [25]. If Gi symbolizes the integral operator representing Green's function and the
integration, and Fop the scattering operator, then the first order Born approximation can be written:

ps1(~r2 ,t) = Gi Fop pi(~r1 ,t1)   (2.63)

Here ps has been set to zero in (2.62). Inserting ps1 in (2.62) and then in (2.59) we arrive at
ps2(~r2 ,t) = Gi Fop [pi(~r1 ,t1) + Gi Fop pi(~r1 ,t1)]
         = Gi Fop pi(~r1 ,t1) + [Gi Fop]² pi(~r1 ,t1)   (2.64)
It is emphasized here that Gi indicates an integral over the volume V′ and the time T, indexed by ~r1 and t1, and
not the pressure at the point ~r1 at time t1.
Terms involving [Gi Fop]^N pi(~r1 ,t1), where N > 1, describe multiple scattering of order N. Usually the scattering
from small obstacles is considered weak, so higher order terms can be neglected. Thus, a useful approximation is
to employ only the first term in the expansion. This corresponds to the first order Born approximation.
Using this (2.59) can be approximated by (note the replacement of p1 (~r1 ,t1 ) with pi (~r1 ,t1 )):
ps(~r2 ,t) ≈ ∫_{V′} ∫_T [ (1/ρ0) ∇(∆ρ(~r1)) · ∇pi(~r1 ,t1)
        − (2∆c(~r1)/c0³) ∂²pi(~r1 ,t1)/∂t² ] G(~r1 ,t1 | ~r2 ,t) dt1 d³~r1   (2.66)
So in order to calculate the scattered field, the incident field for the homogeneous medium must be calculated.
The incident field is generated by the ultrasound transducer assuming no other sources exist in the tissue. The
field is conveniently calculated by employing the velocity potential ψ(~r,t), and enforcing appropriate boundary
conditions [26], [27]. The velocity potential satisfies the following wave equation for the homogeneous medium:
∇²ψ − (1/c0²) ∂²ψ/∂t² = 0   (2.67)
and the pressure is calculated from:
p(~r,t) = ρ0 ∂ψ(~r,t)/∂t   (2.68)
The coordinate system shown in Fig. 2.17 is used in the calculation. The particle velocity normal to the transducer
surface is denoted by v(~r3 +~r4 ,t), where~r3 identifies the position of the transducer and~r4 a point on the transducer
surface relative to~r3 .
The velocity potential is

ψ(~r1 ,t) = ∫_T ∫_S v(~r3 + ~r4 , t3) g(~r1 ,t | ~r3 + ~r4 , t3) d²~r4 dt3   (2.69)

when the transducer is mounted in a rigid infinite planar baffle. S denotes the transducer surface, and the Green's function for this boundary condition is

g(~r1 ,t | ~r3 + ~r4 , t3) = δ(t − t3 − |~r1 − ~r3 − ~r4|/c0) / (2π |~r1 − ~r3 − ~r4|)   (2.70)

|~r1 − ~r3 − ~r4| is the distance from S to the point where the field is calculated and c0 the mean propagation velocity.
The field is calculated under the assumption of radiation into an isotropic, homogeneous, non-dissipative medium.
If a slightly curved transducer is used, an additional term is introduced as shown in Morse & Feshbach [28]. This
term is called the second order diffraction term in Penttinen & Luukkala [13]. It can be shown to vanish for a
planar transducer, and as long as the transducer is only slightly curved and large compared to the wavelength of
the ultrasound, the resulting expression is a good approximation to the pressure field [13].
If the particle velocity is assumed to be uniform over the surface of the transducer, (2.69) can be reduced to [29]:
\psi(\vec r_1,\vec r_3,t) = \int_T v(t_3) \int_S g(\vec r_1,t\,|\,\vec r_3+\vec r_4,t_3)\, d^2\vec r_4\, dt_3   (2.71)
The inner integral is the spatial impulse response h(~r1,~r3,t) previously derived, so that ψ(~r1,~r3,t) = v(t) ⋆ h(~r1,~r3,t), and with (2.68) the sound pressure for the incident field then is

p_i(\vec r_1,t) = \rho_0\, \frac{\partial v(t)}{\partial t} \underset{t}{\star} h(\vec r_1,\vec r_3,t)
The received signal is the scattered pressure field integrated over the transducer surface, convolved with the electro-
mechanical impulse response, Em (t), of the transducer. To calculate this we introduce the coordinate system shown
in Fig. 2.18. ~r6 +~r5 indicates a receiving element on the surface of the transducer that is located at~r5 . The received
signal is:

p_r(\vec r_5,t) = E_m(t) \underset{t}{\star} \int_S p_s(\vec r_6+\vec r_5,t)\, d^2\vec r_6   (2.74)
Combining this with (2.66) and comparing with (2.9) we see that pr includes Green’s function for bounded space
integrated over the transducer surface, which is equal to the spatial impulse response. Inserting the expression for
pi and performing the integration over the transducer surface and over time, results in:
p_r(\vec r_5,t) = E_m(t) \underset{t}{\star} \int_{V'} \frac{\rho_0}{2}\, \frac{\partial v(t)}{\partial t} \underset{t}{\star} F_{op}\!\left[ h(\vec r_1,\vec r_3,t) \underset{t}{\star} h(\vec r_5,\vec r_1,t) \right] d^3\vec r_1   (2.76)
If the position of the transmitting and the receiving transducer is the same (~r3 =~r5 ), then a simple rearrangement
of (2.76) yields:

p_r(\vec r_5,t) = \frac{\rho_0}{2}\, E_m(t) \underset{t}{\star} \frac{\partial v(t)}{\partial t} \underset{t}{\star} \int_{V'} F_{op}\!\left[ h_{pe}(\vec r_1,\vec r_5,t) \right] d^3\vec r_1   (2.77)
where
h_{pe}(\vec r_1,\vec r_5,t) = h(\vec r_1,\vec r_5,t) \underset{t}{\star} h(\vec r_5,\vec r_1,t)   (2.78)
is the pulse-echo spatial impulse response.
The calculated signal is the response measured for one given position of the transducer. For a B-mode image a number of scan lines are measured and combined into an image. To analyze this situation, the last factor in (2.77) is explicitly written out:

\int_{V'} \left[ \frac{1}{\rho_0}\,\nabla(\Delta\rho(\vec r_1)) \cdot \nabla h_{pe}(\vec r_1,\vec r_5,t) - \frac{2\,\Delta c(\vec r_1)}{c_0^{3}}\, \frac{\partial^2 h_{pe}(\vec r_1,\vec r_5,t)}{\partial t^2} \right] d^3\vec r_1   (2.79)
From Section 2.6.3 it is known that h_pe is a function of the distance between ~r1 and ~r5, while ∆ρ and ∆c are functions of ~r1 only. So when ~r5 is varied over the volume of interest, the resulting image is a spatially non-stationary convolution between ∆ρ, ∆c and a modified form of the pulse-echo spatial impulse response.
If we assume that the pulse-echo spatial impulse response is slowly varying, so that the spatial frequency content is constant over a finite volume, then (2.79) can be rewritten as

\int_{V'} \left[ \frac{1}{\rho_0}\,\Delta\rho(\vec r_1)\, \nabla^2 h_{pe}(\vec r_1,\vec r_5,t) - \frac{2\,\Delta c(\vec r_1)}{c_0^{3}}\, \frac{\partial^2 h_{pe}(\vec r_1,\vec r_5,t)}{\partial t^2} \right] d^3\vec r_1   (2.80)
h pe is a function of the distance between the transducer and the scatterer or equivalently of the corresponding time
given by
t = \frac{|\vec r_1 - \vec r_5|}{c_0}   (2.81)
The Laplace operator is the second derivative w.r.t. the distance, which can be approximated with the second
derivative w.r.t. time. So
\nabla^2 h_{pe}(\vec r_1,\vec r_5,t) = \frac{1}{c_0^2}\, \frac{\partial^2 h_{pe}(\vec r_1,\vec r_5,t)}{\partial t^2}   (2.82)
assuming only small deviations from the mean propagation velocity.
Inserting (2.82) into (2.80) and combining with (2.77), the received signal can then be written as the convolution model

p_r(\vec r_5,t) = v_{pe}(t) \underset{t}{\star} f_m(\vec r_1) \underset{r}{\star} h_{pe}(\vec r_1,\vec r_5,t)   (2.84)

where

v_{pe}(t) = \frac{\rho_0}{2 c_0^2}\, E_m(t) \underset{t}{\star} \frac{\partial^3 v(t)}{\partial t^3}   (2.85)

f_m(\vec r_1) = \frac{\Delta\rho(\vec r_1)}{\rho_0} - \frac{2\,\Delta c(\vec r_1)}{c_0}   (2.86)

h_{pe}(\vec r_1,\vec r_5,t) = h(\vec r_1,\vec r_5,t) \underset{t}{\star} h(\vec r_5,\vec r_1,t)   (2.87)
Expression (2.84) consists of three distinct terms. The interesting signal, and the one that should be displayed in
medical ultrasound, is fm (~r1 ). We, however, measure a time and spatially smoothed version of this, which obscures
the finer details in the picture. The smoothing consists of a convolution in time with a fixed wavelet v pe (t) and a
spatial convolution with a spatially varying h pe (~r1 ,~r5 ,t).
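This smoothing can be illustrated with a one-dimensional sketch. The example below is a simplification of the convolution model: a single scan line only, the spatial smoothing by h_pe is omitted, and a hypothetical Gaussian-weighted sinusoid stands in for v_pe; the sampling rate and scatterer positions are assumed values.

```python
import numpy as np

fs = 100e6                                  # sampling frequency [Hz] (assumed)
f0 = 3e6                                    # transducer center frequency [Hz]
t = np.arange(-2e-6, 2e-6, 1/fs)

# hypothetical pulse-echo wavelet v_pe: a Gaussian-weighted sinusoid
v_pe = np.sin(2*np.pi*f0*t) * np.exp(-(t/0.5e-6)**2)

# scattering strength f_m along a single scan line: two point scatterers
f_m = np.zeros(2000)
f_m[500] = 1.0
f_m[1200] = 0.5

# the measured RF line is a time-smoothed version of f_m; the fine
# details of f_m are obscured by the convolution with the wavelet
rf = np.convolve(f_m, v_pe, mode='same')
```

The two sharp scatterers appear in rf as oscillating wavelets of finite duration, which is exactly the loss of fine detail described above.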
To show that the pulse-echo response can be calculated to good accuracy, a single example is shown in Fig. 2.19
for a concave transducer (r = 8.1 mm) with a focus at R = 150 mm. The measured and simulated responses were
obtained at a distance of 60 mm from the transducer surface. The measured pressure field was acquired by moving
a needle pointing toward the transducer in steps of 0.2 mm in the lateral direction, making measurements in a
plane containing the acoustical axis of the transducer. The simulated field was calculated by measuring v pe as the
response from a planar reflector, and then using (2.32) and (2.84) to calculate the field. The envelope of the RF signals is shown as a contour plot with 6 dB between the contours. The plots span 20 mm in the lateral direction
and 4 µs in the axial direction.
The model includes attenuation of the pulse due to propagation, but not the dispersive attenuation observed when the wave propagates in tissue, which changes the pulse continuously as it travels down through the tissue. Not including dispersive attenuation is, however, not a serious drawback of the theory, as this change of the pulse can be lumped into the already spatially varying h_pe. Alternatively, in the far field and assuming a homogeneous, dispersive attenuation, an attenuation transfer function can be convolved onto v_pe to yield an attenuated pulse.
Submerging the transducer into a homogeneously attenuating medium will modify the propagation of the spherical
waves, which will change continuously as a function of distance from the transducer. The spatial impulse is then
changed to
h_{att}(t,\vec r) = \int_T \int_S a(t-\tau,\, |\vec r + \vec r_1|)\, \frac{\delta\!\left(\tau - \frac{|\vec r + \vec r_1|}{c}\right)}{|\vec r + \vec r_1|}\, dS\, d\tau   (2.88)
when linear propagation is assumed and the attenuation is the same throughout the medium. a is the attenuation
impulse response. The spherical wave is convolved with the distance dependent attenuation impulse response and
spherical waves emanating from different parts of the aperture are convolved with different attenuation impulse
responses.
Figure 2.19: Measured and simulated pulse-echo response for a concave transducer. The axial distance was 60
mm and the small scatterer was moved in the lateral direction. The envelope of the RF signal is shown as 6 dB
contours.
A model for the attenuation must be introduced in order to solve the integral. Ultrasound propagating in tissue experiences an attenuation that is nearly linear with frequency, and a commonly used attenuation amplitude transfer function is
|A0 ( f , |~r|)| = exp(−β0 f |~r|) (2.89)
where β0 is the attenuation in nepers per meter per hertz. We here prefer to split the attenuation into a frequency dependent and a frequency independent part as

|A(f, |\vec r|)| = \exp(-\alpha\, |\vec r|)\, \exp(-\beta_0\, (f - f_0)\, |\vec r|)   (2.90)

where α = β0 f0 is the frequency independent attenuation coefficient and f0 is the transducer center frequency. The phase of the attenuation must also be considered. Kak and Dines [30] introduced a phase response that is linear with frequency
Θ( f ) = 2π f τb |~r| (2.91)
where τb is the bulk propagation delay per unit length and is equal to 1/c. This, however, results in an attenuation
impulse response that is non-causal. Gurumurthy and Arthur [31] therefore suggested using a minimum phase
impulse response, where the amplitude and phase spectrum form a Hilbert transform pair. The attenuation spectrum
is then given by
The inverse Fourier transform of (2.92) must be inserted into (2.88) and the integral has to be solved for the
particular transducer geometry. This is clearly a difficult, if not impossible, task and some approximations must
be introduced.

Figure 2.20: Error of assuming a non-varying frequency dependent attenuation over the duration of the spatial impulse response.

All spherical waves arrive at nearly the same time instant, if the distance to the field point is
much larger than the transducer aperture. In this case the attenuation function is, thus, the same for all waves and
the result is a convolution between the attenuation impulse response and the spatial impulse response, which is a
Dirac impulse. A spatial impulse response other than a Dirac function indicates that the spherical waves arrive
at different times. Multiplying the arrival time with the propagation velocity gives the distance to the points on
the aperture contributing to the response at that time instance. A first approximation is, therefore, to multiply the
non-attenuated spatial impulse response with the proper frequency independent term. This approximation also
assumes that the span of values for |~r + ~r1 | is so small that both 1/|~r +~r1 | and the attenuation can be assumed to
be roughly constant.
The frequency dependent function will also change for the different values of the spatial impulse response. A
non-stationary convolution must, thus, be performed. One possible method to avoid this is to assume that the
frequency dependent attenuation impulse response is constant for the time and, thus, distances |~r +~r1 | where h is
non-zero. The mean distance is then used in (2.92) and the inverse Fourier transform of A( f , |~rmid |) is convolved
with h(t,~r). The accuracy of the approach depends on the duration of h and of the attenuation. The error in dB for
a concave transducer with a radius of 10 mm focused at 100 mm and an attenuation of 0.5 dB/[MHz cm] is shown
in Fig. 2.20. The axial distance to the transducer is 50 mm.
An example of the influence of attenuation on the point spread function (PSF) is shown in Fig. 2.21. A concave
transducer with a radius of 8 mm, center frequency of 3 MHz, and focused at 100 mm was used. Fig. 2.21 shows
point spread functions calculated under different conditions. The logarithmic envelope of the received pressure is
displayed as a function of time and lateral displacement. The left most graph shows the normalized PSF for the
transducer submerged in a non-attenuating medium. The distance to the field point is 60 mm and the function is
shown for lateral displacements from -8 to 8 mm. Introducing a 0.5 dB/[MHz cm] attenuation yields the normalized
PSF shown in the middle. The central core of the PSF does not change significantly, but the shape at -30 dB and below is somewhat different from the non-attenuated response. A slightly broader and longer function is seen,
but the overall shape is the same.
A commonly used approach to characterize the field is to include the attenuation in the basic one-dimensional pulse, and then use the non-attenuated spatial impulse response in calculating the PSF. This is the approach used in the rightmost graph in Fig. 2.21.

Figure 2.21: Contour plots of point spread functions for different media and calculation methods. a: Non-attenuating medium. b: 0.5 dB/[MHz cm] attenuation. c: 0.5 dB/[MHz cm] attenuation on the one-dimensional pulse. There is 6 dB between the contour lines. The distance to the transducer is 60 mm.

All attenuation is included in the pulse and the spatial impulse response
calculated in the leftmost graph is used for making the PSF. The similarity to the center graph is striking. Apart
from a slightly longer response, nearly all features of the field are the same. It is, thus, appropriate to estimate
the attenuated one-dimensional pulse and reconstruct the whole field from this and knowledge of the transducer
geometry.
CHAPTER
THREE
Ultrasound imaging
Modern medical ultrasound scanners are used for imaging nearly all soft tissue structures in the body. The anatomy
can be studied from gray-scale B-mode images, where the reflectivity and scattering strength of the tissues are
displayed. The imaging is performed in real time with 20 to 100 images per second. The technique is widely used
since it does not use ionizing radiation and is safe and painless for the patient.
This chapter gives a short introduction to modern ultrasound imaging using array transducers. Part of the chapter
is based on [4] and [32].
3.1 Fourier relation

This section derives a simple relation between the oscillation of the transducer surface and the ultrasound field. It is shown that the field in the far field can be found by a simple one-dimensional Fourier transform of the one-dimensional
aperture pattern. This might seem far from the actual imaging situation in the near field using pulsed excitation,
but the approach is very convenient in introducing all the major concepts like main and side lobes, grating lobes,
etc. It also very clearly reveals information about the relation between aperture properties and field properties.
Consider a simple line source as shown in Fig. 3.1 with a harmonic particle speed of U0 exp( jωt). Here U0 is
the vibration amplitude and ω is its angular frequency. The line element of length dx generates an increment in
pressure of [8]
dp = j\, \frac{\rho_0 c k}{4\pi r'}\, U_0\, a_p(x)\, e^{\,j(\omega t - k r')}\, dx,   (3.1)
where ρ0 is density, c is speed of sound, k = ω/c is the wavenumber, and a p (x) is an amplitude scaling of the
individual parts of the aperture. In the far field (r ≫ L) the distance from the radiator to the field point is (see Fig. 3.1)

r' = r - x \sin\theta   (3.2)
The emitted pressure is found by integrating over all the small elements of the aperture
p(r,\theta,t) = j\, \frac{\rho_0 c U_0 k}{4\pi} \int_{-\infty}^{+\infty} a_p(x)\, \frac{e^{\,j(\omega t - k r')}}{r'}\, dx.   (3.3)
Notice that a_p(x) = 0 if |x| > L/2. Here r' can be replaced with r, if the extent of the array is small compared to the distance to the field point (r ≫ L). Using this approximation and inserting (3.2) in (3.3) gives
p(r,\theta,t) = j\, \frac{\rho_0 c U_0 k}{4\pi r} \int_{-\infty}^{+\infty} a_p(x)\, e^{\,j(\omega t - kr + kx\sin\theta)}\, dx = j\, \frac{\rho_0 c U_0 k}{4\pi r}\, e^{\,j(\omega t - kr)} \int_{-\infty}^{+\infty} a_p(x)\, e^{\,jkx\sin\theta}\, dx,   (3.4)
Figure 3.1: Geometry for line aperture.
since ωt and kr are independent of x. Hereby the pressure amplitude of the field for a given frequency can be split
into two factors:
P_{ax}(r) = \frac{\rho_0 c U_0 k L}{4\pi r}

H(\theta) = \frac{1}{L} \int_{-\infty}^{+\infty} a_p(x)\, e^{\,jkx\sin\theta}\, dx   (3.5)

P(r,\theta) = P_{ax}(r)\, H(\theta)
The first factor P_ax(r) characterizes how the field drops off in the axial direction as a function of distance, and H(θ) gives the variation of the field as a function of angle. The first term drops off with 1/r as for a simple point source, and H(θ) is found from the aperture function a_p(x). A slight rearrangement gives
H(\theta) = \frac{1}{L} \int_{-\infty}^{+\infty} a_p(x)\, e^{\,j 2\pi \frac{\sin\theta}{c} f x}\, dx = \frac{1}{L} \int_{-\infty}^{+\infty} a_p(x)\, e^{\,j 2\pi f' x}\, dx,   (3.6)

where f' = f sin(θ)/c is a spatial frequency.
There is, thus, a Fourier relation between the radial beam pattern and the aperture function, and the normal Fourier
relations can be used for understanding the beam patterns for typical apertures.
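This Fourier relation can be checked numerically. The sketch below evaluates (3.5) by direct integration for a uniform aperture and compares it with the analytic sinc result; the aperture size, frequency, and sound speed are assumed values for illustration.

```python
import numpy as np

L = 10e-3                 # aperture width [m] (assumed)
f = 3e6                   # frequency [Hz]
c = 1540.0                # speed of sound [m/s]
k = 2*np.pi*f/c
lam = c/f

x = np.linspace(-L/2, L/2, 2001)
a_p = np.ones_like(x)                       # uniform aperture function

theta = np.deg2rad(np.linspace(-90, 90, 721))
# direct evaluation of H(theta) = (1/L) * integral a_p(x) exp(j k x sin(theta)) dx
H = np.array([np.trapz(a_p*np.exp(1j*k*x*np.sin(th)), x)
              for th in theta]) / L

# analytic far-field pattern of the uniform aperture: a sinc
H_ref = np.sinc(L/lam*np.sin(theta))        # np.sinc(u) = sin(pi u)/(pi u)
```

The numerical and analytic patterns agree, and the first zero falls at sinθ = λ/L, in agreement with (3.10).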
Figure 3.2: Angular beam pattern for a line aperture with a uniform aperture function as a function of angle (top)
and as a function of k sin(θ) (bottom).
The first example is for a simple line source, where the aperture function is constant such that
a_p(x) = \begin{cases} 1 & |x| \le L/2 \\ 0 & \text{else} \end{cases}   (3.8)

The Fourier transform of this aperture function gives the sinc-shaped beam pattern

H(\theta) = \frac{\sin\!\left(\pi \frac{L}{\lambda} \sin\theta\right)}{\pi \frac{L}{\lambda} \sin\theta}
A plot of the sinc function is shown in Fig. 3.2. A single main lobe can be seen with a number of side lobe peaks.
The peaks fall off proportionally to k or f . The angle of the first zero in the function is found at
\sin\theta = \frac{c}{L f} = \frac{\lambda}{L}.   (3.10)
The angle is, thus, dependent on the frequency and the size of the array. A large array or a high emitted frequency,
therefore, gives a narrow main lobe.
The relative sidelobe level is, thus, independent of the size of the array and of the frequency, and is solely deter-
mined by the aperture function a p (x) through the Fourier relation. The large discontinuities of a p (x), thus, give
rise to the high side lobe level, and they can be reduced by selecting an aperture function that is smoother like a
Hanning window or a Gaussian shape.
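The effect of a smooth aperture function on the sidelobe level can be demonstrated with the Fourier relation alone. The sketch below estimates the peak sidelobe level of a sampled aperture with a rectangular and a Hann weighting; the sample count and zero-padding factor are arbitrary choices.

```python
import numpy as np

N = 512
pad = 16                                     # zero-padding for a dense pattern

def peak_sidelobe_db(w):
    """Peak sidelobe level of the far-field pattern |FFT(w)| in dB."""
    W = np.abs(np.fft.fft(w, pad*len(w)))
    W /= W.max()
    i = 1
    while W[i+1] < W[i]:                     # walk down the main lobe
        i += 1
    return 20*np.log10(W[i:pad*len(w)//2].max())

rect_level = peak_sidelobe_db(np.ones(N))    # about -13 dB
hann_level = peak_sidelobe_db(np.hanning(N)) # about -31 dB
```

The rectangular aperture gives the familiar −13.3 dB first sidelobe of the sinc, while the Hann (Hanning) weighting pushes the sidelobes below −31 dB at the cost of a wider main lobe.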
Figure 3.3: Grating lobes for array transducer consisting of 8 point elements (top) and of 8 elements with a size of
1.5λ (bottom). The pitch (or distance between the elements) is 2λ.
Modern ultrasound transducers consist of a number of elements, each radiating ultrasound energy. Neglecting the phasing of the elements (see Section 3.2) due to the far-field assumption, the aperture function can be described by
a_p(x) = a_{ps}(x) * \sum_{n=-N/2}^{N/2} \delta(x - d_x n),   (3.12)
where a ps (x) is the aperture function for the individual elements, dx is the spacing (pitch) between the centers of
the individual elements, and N is the number of elements in the array. Using the Fourier relationship the angular
beam pattern can be described by
H p (θ) = H ps (θ)H per (θ), (3.13)
where
\sum_{n=-N/2}^{N/2} \delta(x - d_x n) \;\leftrightarrow\; H_{per}(\theta) = \sum_{n=-N/2}^{N/2} e^{-j n d_x k \sin\theta} = \sum_{n=-N/2}^{N/2} e^{-j 2\pi \frac{f \sin\theta}{c} n d_x}.   (3.14)
Summing the geometric series gives

H_{per}(\theta) = \frac{\sin\!\left((N+1)\, \frac{k}{2}\, d_x \sin\theta\right)}{\sin\!\left(\frac{k}{2}\, d_x \sin\theta\right)}   (3.15)

which is the Fourier transform of the series of delta functions. This function repeats itself with a period given by

\frac{k}{2}\, d_x \sin\theta = \pi \quad \Rightarrow \quad \sin\theta = \frac{2\pi}{k d_x} = \frac{\lambda}{d_x}.   (3.16)
This repetitive function gives rise to the grating lobes in the field. An example is shown in Fig. 3.3.
The grating lobes are due to the periodic nature of the array, and corresponds to sampling of a continuous time
signal. The grating lobes will be outside a ±90 deg. imaging area if
\frac{\lambda}{d_x} \ge 1 \quad \Leftrightarrow \quad d_x \le \lambda   (3.17)
An array beam can be steered in a direction by applying a time delay on the individual elements. The difference in
arrival time for a given direction θ0 is
\tau = \frac{d_x \sin\theta_0}{c}   (3.18)
Steering in a direction θ0 can, therefore, be accomplished by using
\sin\theta_0 = \frac{c\tau}{d_x}   (3.19)
where τ is the delay to apply to the signal on the element closest to the center of the array. A delay of 2τ is then
applied on the second element and so forth. The beam pattern for the grating lobe is then replaced by
H_{per}(\theta) = \frac{\sin\!\left((N+1)\, \frac{k}{2}\, d_x \left(\sin\theta - \frac{c\tau}{d_x}\right)\right)}{\sin\!\left(\frac{k}{2}\, d_x \left(\sin\theta - \frac{c\tau}{d_x}\right)\right)}.   (3.20)
Notice that the delay is independent of frequency, since it is essentially only determined by the speed of sound.
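The steering delays of (3.18) can be sketched in a few lines; the pitch, angle, and element count below are assumed values.

```python
import numpy as np

c = 1540.0                       # speed of sound [m/s]
d_x = 0.5e-3                     # element pitch [m] (assumed)
theta0 = np.deg2rad(20.0)        # desired steering angle (assumed)
N = 64

tau = d_x*np.sin(theta0)/c       # delay increment between neighbors, (3.18)
delays = np.arange(N)*tau        # element n is delayed n*tau
```

Inverting with (3.19) recovers the steering angle, and since τ does not depend on frequency the whole broadband pulse is steered in the same direction.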
3.2 Focusing
The essence of focusing an ultrasound beam is to align the pressure fields from all parts of the aperture to arrive
at the field point at the same time. This can be done through a physically curved aperture, through a lens in front
of the aperture, or by the use of electronic delays for multi-element arrays. All seek to align the arrival of the
waves at a given point through delaying or advancing the fields from the individual elements. The delay (positive
or negative) is determined using ray acoustics. The path length from the aperture to the point gives the propagation
time and this is adjusted relative to some reference point. The propagation from the center of the aperture element
to the field point is

t_i = \frac{1}{c} \sqrt{(x_i - x_f)^2 + (y_i - y_f)^2 + (z_i - z_f)^2}   (3.21)
c
where (x f , y f , z f ) is the position of the focal point, (xi , yi , zi ) is the center for the physical element number i, c is
the speed of sound, and ti is the calculated propagation time.
A point is selected on the whole aperture as a reference for the imaging process. The propagation time for this is
t_c = \frac{1}{c} \sqrt{(x_c - x_f)^2 + (y_c - y_f)^2 + (z_c - z_f)^2}   (3.22)
where (xc , yc , zc ) is the reference center point on the aperture. The delay to use on each element of the array is then
\Delta t_i = \frac{1}{c} \left( \sqrt{(x_c - x_f)^2 + (y_c - y_f)^2 + (z_c - z_f)^2} - \sqrt{(x_i - x_f)^2 + (y_i - y_f)^2 + (z_i - z_f)^2} \right)   (3.23)
Notice that there is no limit on the selection of the different points, and the beam can, thus, be steered in a preferred
direction.
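The geometric delay calculation of (3.21)–(3.23) can be sketched directly; the array geometry and focal point below are assumed values.

```python
import numpy as np

c = 1540.0                                   # speed of sound [m/s]
pitch = 0.3e-3                               # element pitch [m] (assumed)
N = 64
# element centers along x, array centered at the origin
x_i = (np.arange(N) - (N - 1)/2) * pitch
elem = np.stack([x_i, np.zeros(N), np.zeros(N)], axis=1)

focus = np.array([0.0, 0.0, 40e-3])          # focal point 40 mm in front
ref = np.zeros(3)                            # reference point: aperture center

# (3.23): delta_t_i = (|r_c - r_f| - |r_i - r_f|) / c
delays = (np.linalg.norm(ref - focus)
          - np.linalg.norm(elem - focus, axis=1)) / c
```

Outer elements have the longest path and therefore get the most negative delay (they must fire first); for an on-axis focus the delay profile is symmetric about the array center.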
The arguments here have been given for emission from an array, but they are equally valid during reception of the
ultrasound waves due to acoustic reciprocity. At reception it is also possible to change the focus as a function of time and thereby obtain a dynamic tracking focus. This is used by all modern ultrasound scanners. Beamformers based on analog technology make it possible to create several receive foci, and the newer digital scanners change the focusing continuously for every depth in receive. A single focus is only possible in transmit, and composite imaging is therefore often used in modern imaging. Here several pulse emissions with focusing at different depths in the same direction are used, and the received signals are combined to form one image focused at different depths in both transmit and receive.
For each focal zone there is an associated focal point and the time from which this focus is used. The arrival
time from the field point to the physical transducer element is used for deciding which focus is used. Another
possibility is to set the focusing to be dynamic, so that the focus is changed as a function of time and thereby
depth. The focusing is then set as a direction defined by two angles and a starting point on the aperture.
Section 3.1 showed that the side and grating lobes of the array can be reduced by employing apodization of the
elements. Again a fixed function can be used in transmit and a dynamic function in receive defined by
Here a1,1 is the amplitude scaling value multiplied onto element 1 after time instance t1 . Typically a Hamming
or Gaussian shaped function is used for the apodization. In receive the width of the function is often increased
to compensate for attenuation effects and for keeping the point spread function roughly constant. The F-number
defined by
F = \frac{D}{L}   (3.24)
where L is the total width of the active aperture and D is the distance to the focus, is often kept constant. More
of the aperture is often used for larger depths and a compensation for the attenuation is thereby partly made. An
example of the use of dynamic apodization is given in Section 3.6.
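A constant F-number expanding-aperture rule based on (3.24) can be sketched as follows; the pitch, element count, and F-number are assumed values.

```python
pitch = 0.3e-3                 # element pitch [m] (assumed)
N = 128
L_max = N * pitch              # full physical aperture [m]

def active_elements(depth, F=2.0):
    """Number of active elements at a given depth for a constant F-number.

    The active width L = depth/F grows with depth until the whole
    physical aperture is in use.
    """
    L = min(depth / F, L_max)
    return max(1, round(L / pitch))
```

With F = 2 the aperture grows linearly with depth and saturates at all 128 elements from about 77 mm, which partly compensates for the attenuation as described above.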
3.3 Fields from array transducers

Most modern scanners use arrays for generating and receiving the ultrasound fields. These fields are quite simple
to calculate, when the spatial impulse response for a single element is known. This is the approach used in the
Field II program, and this section will extend the spatial impulse response to multi element transducers and will
elaborate on some of the features derived for the fields in Section 3.1.
Since the ultrasound propagation is assumed to be linear, the individual spatial impulse responses can simply be
added. If h_e(~ri,~rp,t) denotes the spatial impulse response for the element at position ~ri and the field point ~rp, then the spatial impulse response for the array is

h_a(\vec r_p, t) = \sum_{i=0}^{N-1} h_e(\vec r_i, \vec r_p, t),   (3.25)
Figure 3.4: Geometry of linear array (from [4], Copyright Cambridge University Press).
Let us assume that the elements are very small and the field point is far away from the array, so he is a Dirac
function. Then
h_a(\vec r_p,t) = \frac{k}{R_p} \sum_{i=0}^{N-1} \delta\!\left(t - \frac{|\vec r_i - \vec r_p|}{c}\right)   (3.26)

where R_p = |~ra − ~rp|, k is a constant of proportionality, and ~ra is the position of the array. Thus, h_a is a train of Dirac pulses. If the spacing between the elements is D, then
h_a(\vec r_p,t) = \frac{k}{R_p} \sum_{i=0}^{N-1} \delta\!\left(t - \frac{|\vec r_a + i D \vec r_e - \vec r_p|}{c}\right),   (3.27)
where~re is a unit vector pointing in the direction along the elements. The geometry is shown in Fig. 3.4.
The difference in arrival time between elements far from the transducer is
\Delta t = \frac{D \sin\Theta}{c}.   (3.28)
The spatial impulse response is, thus, a series of Dirac pulses separated by ∆t.
h_a(\vec r_p,t) \approx \frac{k}{R_p} \sum_{i=0}^{N-1} \delta\!\left(t - \frac{R_p}{c} - i\,\Delta t\right).   (3.29)
The time between the Dirac pulses and the shape of the excitation determines whether signals from individual
elements add or cancel out. If the separation in arrival times corresponds to exactly one or more periods of a sine
wave, then they are in phase and add constructively. Thus, peaks in the response are found for
n\, \frac{1}{f} = \frac{D \sin\Theta}{c}.   (3.30)
The main lobe is found for Θ = 0 and the next maximum in the response is found for
\Theta = \arcsin\!\left(\frac{c}{f D}\right) = \arcsin\!\left(\frac{\lambda}{D}\right).   (3.31)
For a 3 MHz array with an element spacing of 1 mm, this amounts to Θ = 31◦ , which will be within the image
plane. The received response is, thus, affected by scatterers positioned 31◦ off the image axis, and they will appear
in the lines acquired as grating lobes. The first grating lobe can be moved outside the image plane, if the elements
are separated by less than a wavelength. Usually, half a wavelength separation is desirable, as this gives some
margin for a broad-band pulse and beam steering.
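The grating lobe position of (3.31) can be checked with a few lines; the sound speed of 1540 m/s is an assumed value.

```python
import numpy as np

c = 1540.0           # speed of sound [m/s] (assumed)
f = 3e6              # center frequency [Hz]
D = 1e-3             # element spacing [m]

lam = c / f
theta_g = np.degrees(np.arcsin(lam / D))   # first grating lobe angle, (3.31)

# with half-wavelength spacing the sine argument exceeds 1, so no
# grating lobe exists inside the +/-90 degree image plane
no_lobe = lam / (lam / 2) > 1.0
```

This reproduces the 31° figure quoted above for a 3 MHz array with 1 mm element spacing.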
Figure 3.5: Far-field continuous wave beam profile at 3 MHz for linear array consisting of 64 point sources with
an inter-element spacing of 1 mm (from [4], Copyright Cambridge University Press).
The beam pattern as a function of angle for a particular frequency can be found by Fourier transforming ha
H_a(f) = \frac{k}{R_p} \sum_{i=0}^{N-1} \exp\!\left(-j 2\pi f \left(\frac{R_p}{c} + i\, \frac{D \sin\Theta}{c}\right)\right)
       = \frac{k}{R_p}\, \exp\!\left(-j 2\pi f \frac{R_p}{c}\right) \sum_{i=0}^{N-1} \exp\!\left(-j 2\pi f \frac{D \sin\Theta}{c}\, i\right)
       = \frac{k}{R_p}\, \frac{\sin\!\left(\pi f \frac{D \sin\Theta}{c}\, N\right)}{\sin\!\left(\pi f \frac{D \sin\Theta}{c}\right)}\, \exp\!\left(-j \pi f (N-1) \frac{D \sin\Theta}{c}\right) \exp\!\left(-j 2\pi f \frac{R_p}{c}\right).   (3.32)
The terms exp(−j2π f R_p/c) and exp(−jπ f (N−1) D sinΘ/c) are constant phase shifts and play no role for the amplitude of the beam profile. Thus, the amplitude of the beam profile is
|H_a(f)| = \left| \frac{k}{R_p}\, \frac{\sin\!\left(N \pi \frac{D}{\lambda} \sin\Theta\right)}{\sin\!\left(\pi \frac{D}{\lambda} \sin\Theta\right)} \right|.   (3.33)
The beam profile at 3 MHz is shown in Fig. 3.5 for a 64-element array with D = 1 mm.
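The beam profile of (3.33) is straightforward to evaluate numerically, up to the constant k/R_p, which only scales the result; the 0/0 limit at the lobe centers equals N.

```python
import numpy as np

c, f = 1540.0, 3e6
D = 1e-3                        # element spacing [m]
N = 64
lam = c / f

theta = np.deg2rad(np.linspace(-90, 90, 3601))
u = np.pi * D / lam * np.sin(theta)

# |sin(N u)/sin(u)| with the limit N substituted where sin(u) ~ 0
H = np.abs(np.divide(np.sin(N*u), np.sin(u),
                     out=np.full_like(u, float(N)),
                     where=np.abs(np.sin(u)) > 1e-12))
H_db = 20*np.log10(H / H.max())
```

The main lobe sits at Θ = 0 and grating lobes of (almost) the same height appear near ±31°, matching Fig. 3.5.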
Several factors change the beam profile for real, pulsed arrays compared with the analysis given here. First, the
elements are not points, but rather are rectangular elements with an off-axis spatial impulse response markedly
different from a Dirac pulse. Therefore, the spatial impulse responses of the individual elements will overlap
and exact cancellation or addition will not take place. Second, the excitation pulse is broad band, which again
influences the sidelobes. Examples of simulated responses are shown in Fig. 3.6.
The top graph shows an array of 64 point sources excited with a Gaussian 3 MHz pulse with Br = 0.2. The space
between the elements is 1 mm. The maximum of the response at a radial position of 100 mm from the transducer is
taken. The bottom graph shows the response when rectangular elements of 1 × 6 mm are used. This demonstrates
the somewhat crude approximation of using the far-field point source CW response to characterize arrays.
Fig. 3.7 shows the different point spread functions encountered when a phased array is used to scan over a 15 cm
depth. The array consists of 128 elements each 0.2 × 5 mm in size, and the kerf between the elements is 0.05
mm. The transmit focus is at 70 mm, and the foci are at 30, 70, and 110 mm during reception.

Figure 3.6: Beam profiles for an array consisting of point sources (top) or rectangular elements (bottom). The excitation pulse has a frequency of 3 MHz and the element spacing is 1 mm. The distance to the field point is 100 mm (from [4], Copyright Cambridge University Press).

Figure 3.7: Point spread functions for different positions in a B-mode image. One hundred and twenty-eight 0.2 × 5 mm elements are used for generating and receiving the pulsed field. Three different receive foci are used. The contours shown are from 0 to -24 dB in steps of 6 dB relative to the maximum at the particular field point (from [4], Copyright Cambridge University Press).

Quite complicated point spread functions are encountered, and they vary substantially with depth in tissue. Notice especially the edge
waves, which dominate the response close to the transducer. The edge effect can be reduced by weighting responses
from different elements. This is also called apodization. The excitation pulses to elements at the transducer edge
are reduced, and this diminishes the edge waves. More examples are shown below in Section 3.6.
3.4 Imaging with arrays

Basically there are three different kinds of images acquired by multi-element array transducers, i.e. linear, convex,
and phased as shown in Figures 3.8, 3.10, and 3.11. The linear array transducer is shown in Fig. 3.8. It selects the
region of investigation by firing a set of elements situated over the region. The beam is moved over the imaging
region by firing sets of contiguous elements. Focusing in transmit is achieved by delaying the excitation of the
individual elements, so an initially concave beam shape is emitted, as shown in Fig. 3.9.
The beam can also be focused during reception by delaying and adding responses from the different elements. A
continuous focus or several focal zones can be maintained as explained in Section 3.2. Only one focal zone is
possible in transmit, but a composite image using a set of foci from several transmissions can be made. Often 4 to
8 zones can be individually placed at selected depths in modern scanners. The frame rate is then lowered by the
number of transmit foci.
The linear arrays acquire a rectangular image, and the arrays can be quite large to cover a sufficient region of
interest (ROI). A larger area can be scanned with a smaller array, if the elements are placed on a convex surface as
shown in Fig. 3.10. A sector scan is then obtained. The method of focusing and beam sweeping during transmit
and receive is the same as for the linear array, and a substantial number of elements (often 128 or 256) is employed.
The convex and linear arrays are often too large to image the heart when probing between the ribs. A small array
size can be used and a large field of view attained by using a phased array as shown in Fig. 3.11. All array elements
are used here both during transmit and receive. The direction of the beam is steered by electrically delaying the
signals to or from the elements, as shown in Fig. 3.9b. Images can be acquired through a small window and the beam rapidly swept over the ROI. The rapid steering of the beam compared to mechanical transducers is of special importance in flow imaging. This has made the phased array the choice for cardiological investigations through the ribs.
More advanced arrays are being introduced in these years with the increase in the number of elements and digital beamforming. Especially elevation focusing (out of the imaging plane) is important. A curved surface as
shown in Fig. 3.12 is used for obtaining the elevation focusing essential for an improved image quality. Electronic
beamforming can also be used in the elevation direction by dividing the elements in the elevation direction. The
elevation focusing in receive can then be dynamically controlled for e.g. the array shown in Fig. 3.13.
3.5 Simulation of ultrasound imaging

One of the first steps in designing an ultrasound system is the selection of the appropriate number of elements for
the array transducers and the number of channels for the beamformer. The focusing strategy in terms of number
of focal zones and apodization must also be determined. These choices are often not easy, since it is difficult
to determine the effect in the resulting images of increasing the number of channels and selecting more or less
advanced focusing schemes. It is therefore beneficial to simulate the whole imaging system in order to quantify
the image quality.
The program Field II was rewritten to make it possible to simulate the whole imaging process with time varying focusing and apodization as described in [33] and [16].

Figure 3.12: Elevation focused convex array transducer for obtaining a rectangular cross-sectional image, which is focused in the out-of-plane direction. The curvature in the elevation direction is exaggerated in the figure for illustration purposes.

Figure 3.13: Elevation focused convex array transducer with element division in the elevation direction. The curvature in the elevation direction is exaggerated in the figure for illustration purposes.

This has paved the way for doing realistic simulated imaging
with multiple focal zones for transmission and reception and for using dynamic apodization. It is hereby possible
to simulate ultrasound imaging for all image types including flow images, and the purpose of this section is to
present some standard simulation phantoms that can be used in designing and evaluating ultrasound transducers,
beamformers and systems. The phantoms described can be divided into ordinary string/cyst phantoms, artificial
human phantoms, and flow imaging phantoms. The ordinary computer phantoms include both a string phantom
for evaluating the point spread function as a function of spatial position and a cyst/string phantom. Arti-
ficial human phantoms of a fetus in the third month of development and an artificial kidney are also shown. The
simulation of flow and the associated phantoms will be described in Section 4.7. All the phantoms can be used
with any arbitrary transducer configuration like single element, linear, convex, or phased array transducers, with
any apodization and focusing scheme.
The first simple treatment of ultrasound is often based on the reflection and transmission of plane waves. It is
assumed that the propagating wave impinges on plane boundaries between tissues with different mean acoustic
properties. Such boundaries are rarely found in the human body, and seldom show on ultrasound images. This
is demonstrated by the image shown in Fig. 3.14. Here the skull of the fetus is not clearly marked. It is quite
obvious that there is a clear boundary between the fetus and the surrounding amniotic fluid. The skull boundary
is not visible in the image, because the angle between the beam and the boundary has a value such that the sound
bounces off in another direction, and, therefore, does not reach the transducer. Despite this, the extent of the head
can still be seen. This is due to the scattering of the ultrasound wave. Small changes in density, compressibility,
and absorption give rise to a scattered wave radiating in all directions. The backscattered field is received by the
transducer and displayed on the screen. One might well argue that scattering is what makes ultrasound images
useful for diagnostic purposes, and it is, as will be seen later, the physical phenomenon that makes detection of
blood velocities possible. Ultrasound scanners are, in fact, optimized to show the backscattered signal, which
is considerably weaker than that found from reflecting boundaries. Such reflections will usually be displayed as
bright white on the screen, and can potentially saturate the receiving circuits in the scanner. An example can be
seen at the neck of the fetus, where a structure is perpendicular to the beam. This strong reflection saturates the
input amplifier of this scanner.

Figure 3.14: Ultrasound image of a 13th week fetus. The markers at the border of the image indicate one centimeter
(from [4], Copyright Cambridge University Press). The labels point out the amniotic fluid, the boundary of the
placenta, and the foot, mouth, and spine of the fetus.

Typical boundary reflections are encountered from the diaphragm, blood vessel
walls, and organ boundaries.
An enlarged view of an image of a liver is seen in Fig. 3.15. The image has a grainy appearance, and not a
homogeneous gray or black level as might be expected from homogeneous liver tissue. This type of pattern is
called speckle. The displayed signals are the backscatter from the liver tissue, and are due to connective tissue,
cells, and fibrous tissue in the liver. These structures are much smaller than one wavelength of the ultrasound,
and the speckle pattern displayed does not directly reveal physical structure. It is rather the constructive and
destructive interference of scattered signals from all the small structures. So it is not possible to visualize and
diagnose microstructure, but the strength of the signal is an indication of pathology. A strong signal from liver
tissue, making a bright image, is, e.g., an indication of a fatty or cirrhotic liver.
As the scattered wave emanates from numerous contributors, it is appropriate to characterize it in statistical terms.
The amplitude distribution follows a Gaussian distribution [34], and is, thus, fully characterized by its mean and
variance. The mean value is zero since the scattered signal is generated by differences in the tissue from the mean
acoustic properties.
Although the backscattered signal is characterized in statistical terms, one should be careful not to interpret the
signal as random in the sense that a new set of values is generated for each measurement. The same signal will
result, when a transducer is probing the same structure, if the structure is stationary. Even a slight shift in position
will yield a backscattered signal correlated with that from the adjacent position. The shift over which the signals
are correlated is essentially dependent on the extent of the ultrasound field. This can also be seen from the image
in Fig. 3.15, as the laterally elongated white speckles in the image indicate transverse correlation. The extent of
these speckle spots is a rough indication of the point spread function of the system.
The correlation between different measurements is what makes it possible to estimate blood velocities with ultra-
sound. As there is a strong correlation for small movements, it is possible to detect shifts in position by comparing
or, more strictly, correlating successive measurements of moving structure, e.g., blood cells.
Since the backscattered signal depends on the constructive and destructive interference of waves from numerous
small tissue structures, it is not meaningful to talk about the reflection strength of the individual structures. Rather,
it is the deviations within the tissue and the composition of the tissue that determine the strength of the returned
signal. The magnitude of the returned signal is, therefore, described in terms of the power of the scattered signal.
It is, thus, important that the simulation approach models the scattering mechanisms in the tissue. This is essentially
what the model derived in Chapter 2 does. Here the received signal from the transducer is:

pr(t) = vpe(t) ⋆t fm(~r1) ⋆r hpe(~r1, t),

where ⋆r denotes spatial and ⋆t temporal convolution. vpe is the pulse-echo impulse, which includes the transducer
excitation and the electro-mechanical impulse response during emission and reception of the pulse. fm accounts
for the inhomogeneities in the tissue due to density and propagation velocity perturbations, which give rise to the
scattered signal. hpe is the pulse-echo spatial impulse response that relates the transducer geometry to the spatial
extent of the scattered field. Explicitly written out, the two latter terms are:

fm(~r1) = ∆ρ(~r1)/ρ0 − 2∆c(~r1)/c0, hpe(~r1, t) = ht(~r1, t) ⋆t hr(~r1, t),

where ht and hr are the spatial impulse responses of the transmitting and receiving apertures, respectively.
So the received response can be calculated by finding the spatial impulse response for the transmitting and receiving
transducer and then convolving with the impulse response of the transducer. A single RF line in an image can be
calculated by summing the response from a collection of scatterers in which the scattering strength is determined
by the density and speed of sound perturbations in the tissue. Homogeneous tissue can thus be made from a
collection of randomly placed scatterers with a scattering strength with a Gaussian distribution, where the variance
of the distribution is determined by the backscattering cross-section of the particular tissue. This is the approach
taken in these notes.
The computer phantoms typically consist of 100,000 or more discrete scatterers, and simulating 50 to 128 RF
lines can take several days depending on the computer used. It is therefore beneficial to split the simulation into
concurrently run sessions. This can easily be done by first generating the scatterers' positions and amplitudes and
then storing them in a file. This file can then be used by a number of workstations to find the RF signal for
different imaging directions, which are then stored in separate files; one for each RF line. These files are then used
to assemble an image. This is the approach used for the simulations shown here, in which 3 Pentium Pro 200 MHz
PCs can generate one phantom image overnight using Matlab 5 and the Field II program.
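The split-simulation workflow above can be sketched in Python (a stand-in for the Matlab/Field II scripts; the phantom dimensions, file name, and function names are assumptions):

```python
import numpy as np

def make_scatterers(n=100_000, dx=40e-3, dy=15e-3, z=(30e-3, 90e-3), seed=1):
    """Generate random scatterer positions in a box and Gaussian
    amplitudes for a homogeneous tissue phantom (dimensions assumed)."""
    rng = np.random.default_rng(seed)
    pos = np.column_stack([
        (rng.random(n) - 0.5) * dx,            # lateral (x)
        (rng.random(n) - 0.5) * dy,            # elevation (y)
        z[0] + rng.random(n) * (z[1] - z[0]),  # axial (z)
    ])
    amp = rng.standard_normal(n)               # scattering strength
    return pos, amp

pos, amp = make_scatterers()
# Store once; each workstation then loads the same file, simulates only
# its assigned RF lines, and writes one result file per line.
np.savez("phantom.npz", pos=pos, amp=amp)
```

Each workstation then only needs the line indices it is responsible for, and the final image is assembled from the per-line files.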
The first synthetic phantom consists of a number of point targets placed with a distance of 5 mm starting at 15
mm from the transducer surface. A linear sweep image of the points is then made and the resulting image is
compressed to show a 40 dB dynamic range. This phantom is suited for showing the spatial variation of the point
spread function for a particular transducer, focusing, and apodization scheme.
Twelve examples using this phantom are shown in Fig. 3.16. The top graphs show imaging without apodization
and the bottom graphs show images when a Hanning window is used for apodization in both transmit and receive.
A 128 element transducer with a nominal frequency of 3 MHz was used. The element height was 5 mm, the
width was a wavelength and the kerf 0.1 mm. The excitation of the transducer consisted of 2 periods of a 3 MHz
sinusoid with a Hanning weighting, and the impulse response of both the emit and receive aperture also was a two
cycle, Hanning weighted pulse. In the graphs A – C, 64 of the transducer elements were used for imaging, and
the scanning was done by translating the 64 active elements over the aperture and focusing in the proper points. In
graphs D and E, 128 elements were used and the imaging was done solely by moving the focal points.
Graph A uses only a single focal point at 60 mm for both emission and reception. B also uses reception focusing
Figure 3.16: Point target phantom imaged for different set-up of transmit and receive focusing and apodization.
See text for an explanation of the set-up.
The focusing scheme used for E and F applies a new receive profile for every 2 mm. For analog beamformers
this is a small zone size; for digital beamformers it is a large zone size. Digital beamformers can be programmed
for each sample, and thus a "continuous" beam tracking can be obtained. In imaging systems, focusing is used to
obtain high detail resolution and high contrast resolution, preferably constant for all depths. This is not possible,
so compromises must be made. As an example, graph F shows the result for multiple transmit and receive zones,
like E, but now with a restriction put on the active aperture. The size of the aperture is controlled by dynamic
apodization to maintain a constant F-number (depth of focus in tissue divided by width of aperture) of 4 for
transmit and 2 for receive. This gives a more homogeneous point spread function throughout the full depth,
especially for the apodized version. Still, it can be seen that the composite transmit can be improved in order to
avoid the increased width of the point spread function at, e.g., 40 and 60 mm.
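The constant F-number aperture control can be illustrated with a small sketch (the 0.5 mm pitch and the rounding rule are assumptions):

```python
def active_elements(depth, f_number, pitch, n_total):
    """Number of active elements that gives a roughly constant F-number
    (focal depth divided by aperture width), capped by the array size."""
    aperture = depth / f_number            # desired aperture width [m]
    return min(max(round(aperture / pitch), 1), n_total)

# With an assumed 0.5 mm pitch, the receive aperture (F# = 2) grows
# twice as fast with depth as the transmit aperture (F# = 4).
for depth_mm in (20, 40, 60):
    d = depth_mm * 1e-3
    print(depth_mm, active_elements(d, 4, 0.5e-3, 128),
          active_elements(d, 2, 0.5e-3, 128))
```

Once the computed aperture exceeds the physical array, the F-number can no longer be kept constant, which is why the point spread function widens at large depths.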
The next phantom consists of a collection of point targets, five cyst regions, and five highly scattering regions. This
can be used for characterizing the contrast-lesion detection capabilities of an imaging system. The scatterers in
the phantom are generated by finding their random positions within a 60 × 40 × 15 mm cube, and then ascribing a
Gaussian distributed amplitude to each scatterer. If the scatterer resides within a cyst region, the amplitude is set to
zero. Within the highly scattering regions the amplitude is multiplied by 10. The point targets have a fixed amplitude
of 100, compared to the standard deviation of 1 for the Gaussian distribution. A linear scan of the phantom was
done with a 192 element transducer, using 64 active elements with a Hanning apodization in transmit and receive.
The element height was 5 mm, the width was a wavelength, and the kerf 0.05 mm. The pulses were the same as
used for the point phantom mentioned above. A single transmit focus was placed at 60 mm, and receive focusing
was done at 20 mm intervals from 30 mm from the transducer surface. The resulting image for 100,000 scatterers
is shown in Fig. 3.17. A homogeneous speckle pattern is seen along with all the features of the phantom.
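The amplitude assignment for such a cyst phantom can be sketched as follows (the region positions are illustrative, and the mapping of the 60 × 40 × 15 mm volume onto the axes is an assumption):

```python
import numpy as np

def cyst_amplitudes(pos, cysts, bright, rng):
    """Gaussian scatterer amplitudes: zero inside cyst regions, ten-fold
    inside the highly scattering regions (region lists are assumptions)."""
    amp = rng.standard_normal(len(pos))
    for center, radius in cysts:
        amp[np.linalg.norm(pos - center, axis=1) < radius] = 0.0
    for center, radius in bright:
        amp[np.linalg.norm(pos - center, axis=1) < radius] *= 10.0
    return amp

rng = np.random.default_rng(0)
# Scatterers uniform in a 60 x 15 x 40 mm (x, y, z) volume.
pos = rng.random((100_000, 3)) * np.array([60e-3, 15e-3, 40e-3]) \
      - np.array([30e-3, 7.5e-3, 0.0])
cysts  = [(np.array([0.0, 0.0, 20e-3]), 3e-3)]     # illustrative regions
bright = [(np.array([10e-3, 0.0, 20e-3]), 3e-3)]
amp = cyst_amplitudes(pos, cysts, bright, rng)
```

The point targets would be appended with a fixed amplitude of 100 relative to the unit standard deviation of the speckle-generating scatterers.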
The anatomic phantoms are attempts to generate images as they will be seen from real human subjects. This is done
by drawing a bitmap image of scattering strength of the region of interest. This map then determines the factor
multiplied onto the scattering amplitude generated from the Gaussian distribution, and models the difference in the
density and speed of sound perturbations in the tissue. Simulated boundaries were introduced by making lines in
the scatterer map along which the strong scatterers were placed. This is marked by completely white lines shown
in the scatterer maps. The model is currently two-dimensional, but can readily be expanded to three dimensions.
Currently, the elevation direction is merely made by making a 15 mm thickness for the scatterer positions, which
are randomly distributed in the interval.
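The scatterer-map scaling can be sketched as follows (the function and argument names and the toy map are assumptions; the real maps are bitmaps drawn from anatomy):

```python
import numpy as np

def map_amplitudes(pos, amp, smap, extent):
    """Scale Gaussian scatterer amplitudes by a 2D scatterer map, a
    bitmap of relative scattering strength over the (x, z) plane."""
    x0, x1, z0, z1 = extent
    nz, nx = smap.shape
    ix = np.clip(((pos[:, 0] - x0) / (x1 - x0) * nx).astype(int), 0, nx - 1)
    iz = np.clip(((pos[:, 2] - z0) / (z1 - z0) * nz).astype(int), 0, nz - 1)
    return amp * smap[iz, ix]

rng = np.random.default_rng(0)
pos = rng.random((1000, 3)) * 0.1             # toy positions [m]
amp = rng.standard_normal(1000)
smap = np.ones((64, 64))
smap[:32, :] = 2.0                            # toy map: top half twice as strong
scaled = map_amplitudes(pos, amp, smap, (0.0, 0.1, 0.0, 0.1))
```

The strong boundary scatterers are added separately along the white lines of the map, on top of this amplitude scaling.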
Two different phantoms have been made: a fetus in the third month of development and a left kidney in a longi-
tudinal scan. For both, 200,000 scatterers randomly distributed within the phantom were used, with a Gaussian
distributed scatterer amplitude with a standard deviation determined by the scatterer map. The phantoms were
scanned with a 5 MHz 64 element phased array transducer with λ/2 spacing and Hanning apodization. A single
transmit focus 70 mm from the transducer was used, and focusing during reception was done from 40 to 140 mm
in 10 mm increments. The images consist of 128 lines with 0.7 degrees between lines.
Fig. 3.19 shows the artificial kidney scatterer map on the left and the resulting image on the right. Note especially
the bright regions where the boundary of the kidney is orthogonal to the ultrasound beam, so that a large signal is
received. Note also the fuzziness of the boundaries where they are parallel to the ultrasound beam, which is also
seen on actual ultrasound scans. Fig. 3.18 shows the fetus. Note how the anatomy can be clearly seen at the level
of detail of the scatterer map. The same boundary features as for the kidney image are also seen.
The images have many of the features of real scan images, but still lack detail. This can be ascribed to the low
level of detail in the bitmap images and to the use of a purely two-dimensional model. But the images do show
great potential for making powerful, fully synthetic phantoms that can be used for image quality evaluation.
Figure 3.17: Computer phantom with point targets, cyst regions, and strongly reflecting regions.
Figure 3.18: Simulated image of the fetus phantom (axes in mm).

Figure 3.19: Scatterer map and resulting simulated image of the artificial kidney (axes in mm).
CHAPTER
FOUR
Medical ultrasound scanners can be used for both displaying gray-scale images of the anatomy and for visualizing
the blood flow dynamically in the body. The systems can interrogate the flow at a single position in the body and
there find the velocity distribution over time. They can also show a dynamic color image of velocity at up to 20 to
60 frames a second. Both measurements are performed by repeatedly pulsing in the same direction and then using
the correlation from pulse to pulse to determine the velocity. This chapter gives a simple model for the interaction
between the ultrasound and the moving blood. The calculation of the velocity distribution is then explained along
with the different physical effects influencing the estimation. The estimation of mean velocities using auto- and
cross-correlation for color flow mapping is also described. Finally, the simulation of these flow systems using
spatial impulse responses is described.
The chapter is a revised version of [35] and [32] with parts originating from [4].
4.1 Introduction
It is possible with modern ultrasound scanners to visualize the blood flow at either one sample volume in the body
or over a cross-sectional region. The first approach gives a display of the velocity distribution at the point as a
function of time. The second method gives a color flow image superimposed on the gray-scale anatomic image.
The color coding shows the velocity towards or away from the transducer, and this can be shown with up to 20
images a second. Thus, the dynamics of the blood flow in, e.g., the vessels and the heart and across heart valves can be
diagnosed.
The history of ultrasound velocity measurement dates back to the mid-1950s, when experiments by Satomura
[36], [37] in Japan demonstrated that continuous wave (CW) ultrasound was capable of detecting motion. CW
ultrasound cannot locate the depth of the motion, and several pulsed wave solutions were therefore developed by
Baker [38], Peronneau and Leger [39], and Wells [40] around 1970. These systems could detect the motion at a
specific location. The Baker system could show the velocity distribution over time and across the vessel lumen.
This was later extended by Kasai and co-workers [41], [42] to generate actual cross-sectional images of velocity
in real time. They used an autocorrelation velocity estimator adapted from radar to find the mean velocity from
only 8 to 16 pulse-echo lines. Color flow mapping systems using a cross-correlation estimator were also suggested
by Bonnefous and co-workers [43]. From these early experimental systems the scanners have now evolved into
systems for routine use in nearly all hospitals, and ultrasound is one of the most common means of diagnosing
hemodynamic problems today.
This chapter will give a brief description of the main features of all these systems. The chapter starts by deriving a
basic model for ultrasound’s interaction with a point scatterer, which shows that the frequency of the received sam-
pled signal is proportional to velocity. Systems for finding the velocity distribution in a vessel are then described
and finally the newest color flow mapping methods are explained. A more comprehensive treatment of the systems
can be found in [4].
51
Figure 4.1: Coordinate system for finding the movement of blood particles (from [4], Copyright Cambridge Uni-
versity Press).
The data for the velocity measurement is obtained by emitting a short ultrasound pulse with 4 to 8 cycles at a
frequency of 2 to 10 MHz. The ultrasound is then scattered in all directions by the blood particles and part of the
scattered signal is received by the transducer and converted to a voltage signal. The blood velocity is found through
the repeated measurement at a particular location. The blood particles will then pass through the measurement gate
and give rise to a signal with a frequency proportional to velocity.
A coordinate system for the measurement is shown in Fig. 4.1. The vector ~r1 indicates the position of one scatterer
when the first ultrasound pulse interacts with the scatterer. The vector ~r2 indicates the position at the interaction
with the next ultrasound pulse emitted Tprf seconds later. The movement of the scatterer in the z-direction away
from the transducer in the time interval between the two pulses is

∆z = |~r2 − ~r1| cos(θ) = |~v| cos(θ) Tprf, (4.1)
where θ is the angle between the ultrasound beam and the particle’s velocity vector ~v. The traveled distance gives
rise to a delay in the second signal compared to the first. Denoting the first received signal as r1(t) and the second
as r2(t) gives

r2(t) = r1(t − ts),
ts = 2∆z/c = (2|~v| cos(θ)/c) Tprf = (2vz/c) Tprf, (4.2)
where c is the speed of sound. Emitting a sinusoidal pulse p(t) with a frequency of f0 gives a received signal of
p(t) = g(t) sin(2π f0 t) for 0 < t < M/f0,
r1(t) = p(t − t0),
t0 = 2d/c, (4.3)

where d is the depth of interrogation, and g(t) is the envelope of the pulse. Repeating the measurement a number
of times gives a received, sampled signal for a fixed depth of

ri(t) = p(t − t0 − i ts). (4.4)
Figure 4.2: RF sampling of simulated signal from blood vessel. The left graph shows the different received RF
lines, and the right graph is the sampled signal. The dotted line indicates the time when samples are acquired (from
[4], Copyright Cambridge University Press).
Here t is time relative to the pulse emission. Taking the measurement at the same time tx relative to the pulse
emission, corresponding to a fixed depth in tissue, gives

r(i) = ri(tx) = p(tx − t0 − i ts), (4.5)

assuming the measurement is taken at times when the envelope g(t) of the pulse is constant. Such a measurement
will yield one sample for each pulse-echo RF line, and thus samples the slow movement of the blood scatterers
past the measurement position or range gate as shown in Fig. 4.2. The sampled signal r(i) can be written as:
r(i) = sin(−2π (2vz/c) f0 i Tprf + Θx), (4.6)

showing that the frequency of this signal is

fp = (2vz/c) f0, (4.7)

which is proportional to the projected velocity of the blood in the direction of the ultrasound beam.
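This relation can be checked numerically: generating the sampled slow-time signal for a known axial velocity and locating the spectral peak recovers fp = 2vz f0/c (the parameter values are illustrative):

```python
import numpy as np

# Illustrative values: 3 MHz pulse, fprf = 5 kHz, vz = 0.3 m/s away
# from the transducer, well below the aliasing limit c*fprf/(4*f0).
c, f0, Tprf, vz = 1540.0, 3e6, 1 / 5e3, 0.3
i = np.arange(128)
r = np.sin(-2 * np.pi * (2 * vz / c) * f0 * i * Tprf)   # sampled slow-time signal

# Locate the peak of the sampled signal's spectrum.
spectrum = np.abs(np.fft.rfft(r))
f_est = np.fft.rfftfreq(len(r), Tprf)[int(np.argmax(spectrum))]
print(f_est, 2 * vz / c * f0)   # both close to 1.17 kHz
```

The spectral peak lands within one frequency bin (fprf/128 ≈ 39 Hz here) of the predicted fp.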
The received signal will not only consist of a single frequency, since an ultrasound pulse is emitted and received.
This pulse will be sampled, when it moves past the range gate, and the corresponding digital signal will thus have
a spectrum determined by the pulse shape. The frequency axis of the pulse spectrum will also be multiplied by the
2vz /c factor due to the sampling operation. An example is shown in Fig. 4.3. The spectrum of the RF pulse is
shown on the top, and the resulting frequency axis for the received digital signal is shown on the bottom.
This simple model can also be used for determining the effect of velocity aliasing, limited observation time, atten-
uation, and non-linear propagation. The signal must be sampled according to the Nyquist limit, so that frequency
components in the spectrum are below half the sampling frequency. Here the sampling frequency is fprf = 1/Tprf,
and the maximum velocity that can be detected without aliasing is thus vmax = c fprf/(4 f0).

Figure 4.3: Frequency scaling of the received RF signal from the sampling for a number of pulse-echo lines (from
[4], Copyright Cambridge University Press).
Obtaining only a limited number of pulse-echo lines will truncate the digital version of the measurement pulse.
This corresponds to multiplication with a rectangular window, and the resulting spectrum is convolved with the
Fourier transform of the window leading to a broadening of the spectrum.
Ultrasound propagating in tissue is attenuated due to scattering and absorption. The attenuation is proportional to
depth and frequency and is typically in the range from 0.5 to 1 dB/[MHz cm]. For a pulsed system this leads
to a loss of signal energy and to a decrease in the center frequency of the pulse field. Often a shift in frequency of
100 to 200 kHz can be experienced for a 5 MHz pulse. The shift is relative to the pulse center frequency and is,
thus, multiplied with the scaling factor 2vz/c, giving rise to only a minor shift in the measured frequency of the
sampled signal. This is also the case for non-linear propagation and scattering effects, which then also scale
proportionally to the velocity. The multi-pulse measurement technique is, thus, very robust to the different physical
effects encountered in medical ultrasound, and is therefore preferred to finding the Doppler shift of the pulse
spectrum during the interaction with the moving scatterer.
The velocity can be both towards and away from the transducer, and this should also be included in the estimation
of velocity. The sign can be found by using a pulse with a one-sided spectrum corresponding to a complex signal
with a Hilbert transform relation between the imaginary and real part of the signal. The one sided spectrum is then
scaled by 2vz /c and has a unique peak in the spectrum from which the velocity can be found. The complex signal
can be made by Hilbert transforming the received signal and using this for the imaginary part of the signal.
A Hilbert transform is difficult to implement with analog electronics, and two other implementations are shown in
Fig. 4.4. The top scheme performs the demodulation by a complex multiplication with exp(j2π f0 t) followed by
lowpass filtering to remove the peak in the spectrum at 2 f0. Matching the bandwidth of the lowpass filters to the
bandwidth of the pulse also gives an improvement in signal-to-noise ratio. The second solution obtains the signal
by quadrature sampling with a quarter wave delay between the two channels. Both implementations give complex
signals that can readily be used in the estimators for determining the velocity with a correct sign [4].
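The first scheme can be sketched digitally as follows (a simple moving average stands in for the matched lowpass filter, and the parameter values are illustrative):

```python
import numpy as np

def iq_demodulate(rf, f0, fs):
    """Complex demodulation: multiply the RF signal by exp(j 2 pi f0 t)
    and lowpass filter to remove the spectral peak at 2 f0. A simple
    moving average stands in for a matched lowpass filter."""
    t = np.arange(len(rf)) / fs
    mixed = rf * np.exp(2j * np.pi * f0 * t)
    k = int(fs / f0)                     # about one carrier period long
    return np.convolve(mixed, np.ones(k) / k, mode="same")

# A 5 MHz test tone sampled at 100 MHz demodulates to a constant 0.5.
fs, f0 = 100e6, 5e6
t = np.arange(1000) / fs
iq = iq_demodulate(np.cos(2 * np.pi * f0 * t), f0, fs)
```

In a scanner the lowpass filter would be matched to the pulse bandwidth rather than this crude averaging.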
Figure 4.4: Demodulation schemes for obtaining a complex signal for determining the sign of the velocity (from
[4], Copyright Cambridge University Press).
Using a number of pulse-echo lines and sampling at the depth of interest thus gives a digital signal with a
frequency proportional to velocity. The movement of a collection of scatterers with different velocities then
gives a superposition of the contributions from the individual scatterers, and gives rise to a spectral density of
the signal equal to the density of the velocities.
The profiles given in (4.9) are for a stationary flow in a rigid tube. More complicated flow patterns are found
in the human body since the flow is pulsatile and the vessels curve and branch (see Section 4.5). The power
density spectrum, however, still corresponds to the velocity distribution, and ultrasound measurements can be used
for revealing the velocity distribution over time.
An example for the carotid artery supplying the brain is shown in Fig. 4.5. The left part shows the anatomic
gray scale image of the artery with the placement of the range gate. The right side shows the spectrogram, which
gives the velocity distribution over time. The spectrogram is found from the Fourier transform of the measured
sampled signal. Usually a new spectral estimate is calculated every 5 to 10 ms and the estimates are presented in
a rolling display with the gray level corresponding to the number of scatterers with the particular velocity. The
non-stationary nature of the distribution is clearly seen as is the periodicity with the heart cycle.
Images of velocity can also be made using ultrasound. This is done by acquiring 8 to 16 lines of data in one
direction and then estimating the velocities along that direction. The beam is then moved to another direction and
the measurement is repeated. Doing this for all directions in an image gives a mapping of the velocity, which is
displayed in color flow mapping (CFM) systems.
A system diagram for a CFM system is shown in Fig. 4.6. The RF signal from the transducer is demodulated to
yield a complex signal, which is sampled by the ADCs at all the positions along the direction where the velocity
must be found. The demodulation step also serves as a matched filter to suppress noise. The digital signal is then
passed on to the filter for removing stationary signals. The delay line canceler (DLC) subtracts the samples from
the previous line from the current line to remove the stationary signal. The tissue surrounding the vessels often
generate a scattered signal that is 40 dB larger than the blood signals, and this will seriously deteriorate the velocity
estimates. The tissue signal is, however, stationary and can therefore be removed by subtracting samples at the
same depth in tissue for consecutive pulse-echo lines.
The sampled data for one direction is then divided into a number of segments, one for each pixel in the velocity
image. Each segment holds 8 to 16 complex samples from which the velocity must be estimated.

Figure 4.6: Block diagram of color flow mapping system (from [4], Copyright Cambridge University Press).

The mean angular frequency of the received signal is found as

ω̄ = ∫ω P(ω) dω / ∫P(ω) dω, (4.12)

where P(ω) is the power density spectrum of the received, complex signal. The velocity is then given by:
vz = ω̄ c/(4π f0). (4.13)
The velocity can also be calculated from the phase shift between the pulse emissions. The complex digital signal
can be written
r(i) = x(i) + jy(i) (4.14)
where x is the real signal and y the imaginary signal. Using an autocorrelation estimator the velocity estimate is
[42]
vz = − (c/(4π f0 Tprf)) arctan( ∑_{i=1}^{N−1} [y(i)x(i − 1) − x(i)y(i − 1)] / ∑_{i=1}^{N−1} [x(i)x(i − 1) + y(i)y(i − 1)] ), (4.15)
where N is the number of samples. This implementation is very efficient, since only few calculations are performed
for each estimate, and this is the approach preferred in most scanners. The method is also robust in terms of noise,
which is critical since the scattering from blood is quite weak compared to the surrounding tissue.
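The estimator in (4.15) is conveniently written with the complex lag-one autocorrelation R(1) = ∑ r*(i−1) r(i), whose imaginary and real parts are exactly the two sums in the equation. A sketch, tested on a synthetic slow-time signal following (4.6) with illustrative parameter values:

```python
import numpy as np

def autocorr_velocity(iq, f0, Tprf, c=1540.0):
    """Autocorrelation estimate of the axial velocity, eq. (4.15): the
    numerator and denominator sums are the imaginary and real parts
    of the lag-one autocorrelation of the complex samples."""
    R1 = np.sum(np.conj(iq[:-1]) * iq[1:])
    return -c / (4 * np.pi * f0 * Tprf) * np.arctan2(R1.imag, R1.real)

# Synthetic complex slow-time signal for vz = 0.2 m/s (assumed values).
c, f0, Tprf, vz = 1540.0, 3e6, 1 / 5e3, 0.2
i = np.arange(16)
iq = np.exp(1j * (-2 * np.pi * (2 * vz / c) * f0 * i * Tprf))
print(autocorr_velocity(iq, f0, Tprf))   # close to 0.2
```

Only one complex multiply-accumulate per sample is needed, which is why this estimator suits real-time implementation.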
An example of a color flow image of the artery supplying the brain is shown in Fig. 4.7. Both positive and negative
velocities can be seen. The jugular vein on the top has a blue color coding showing that the movement is towards
the transducer, whereas the carotid artery below contains red colors for movement away from the transducer and
to the brain.
The estimation of velocity can also be done by finding the time-shift between two consecutive RF signals directly
[43], [44], [45]. This is the approach taken in the cross-correlation system shown in Fig. 4.8.
The received RF signal is amplified and filtered with a matched filter to remove noise. An RF sampling at 20 to 30
MHz for a 5 MHz transducer is then performed and the removal of stationary signals is done by subtracting two
consecutive RF lines.
The data is divided into segments as shown in Fig. 4.9. The velocity is found in each segment by cross-correlation
with data from the previous pulse-echo line. The received signal can be written as:
r1(t) = p(t) ⋆ s(t), (4.16)

Figure 4.8: System for cross-correlation flow estimation (from [4], Copyright Cambridge University Press).

Figure 4.9: Segmentation of RF data prior to cross-correlation (from [4], Copyright Cambridge University Press).
where p(t) is the emitted pulse and s(t) is the scattering signal from the blood. Cross-correlating two received
signals then gives:

R12(τ) = Rpp(τ) ⋆ Rss(τ − ts), (4.17)

where Rpp is the autocorrelation of the pulse and Rss is the autocorrelation of the blood scattering signal. s(t) can
be assumed to be a white, Gaussian stochastic process, so that the cross-correlation can be written as:

R12(τ) = σs² Rpp(τ − ts), (4.18)

where σs² is the power of the blood signal. The autocorrelation of the pulse has a unique maximum at Rpp(0), and
the time-shift can thus be found from the unique maximum in the cross-correlation function. The velocity is then
given by:
vz = (c/2) (ts/Tprf). (4.19)
The velocity estimates are then presented by the same method as mentioned for the autocorrelation approach.
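The time-shift estimation can be sketched as follows (a white Gaussian sequence stands in for the blood scattering signal, and the segment lengths and parameter values are illustrative):

```python
import numpy as np

def xcorr_shift(r1, r2, fs):
    """Estimate the time shift of r2 relative to r1 from the peak of
    their discrete cross-correlation (cf. eqs. 4.18 and 4.19)."""
    xc = np.correlate(r2, r1, mode="full")
    lag = int(np.argmax(xc)) - (len(r1) - 1)
    return lag / fs

# The second segment is the first delayed by 5 samples.
rng = np.random.default_rng(0)
fs, Tprf, c = 20e6, 1 / 5e3, 1540.0
s = rng.standard_normal(256)
r1, r2 = s[5:205], s[0:200]        # r2(t) = r1(t - 5/fs)
ts = xcorr_shift(r1, r2, fs)
vz = c / 2 * ts / Tprf             # eq. (4.19)
```

In a scanner the correlation is computed over each range segment and often interpolated around the peak to obtain sub-sample time shifts.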
The cross-correlation approach is optimized by sending out a narrow pulse, which also makes it easier to obtain a
higher resolution in the color flow mapping images. The price is, however, that the peak intensities must be limited
due to the safety limits imposed on the scanners. Therefore cross-correlation scanners can send out less energy and
are often less sensitive than scanners using the autocorrelation approach.
Another disadvantage of the cross-correlation approach is the large amount of calculations that must be performed
per second. For a real time system it can approach one billion calculations per second making it necessary to use
only the sign of the signals in the calculation of the cross-correlation [46].
Several different flow patterns can be found in the body and this section will give a brief overview of how to model
a simple pulsatile flow.
The steady flow of a Newtonian fluid in a long, rigid tube is shown in Fig. 4.10. The flow will be laminar, so that
the velocity profile across the tube is parabolic:

v(r) = v0 (1 − (r/R)²), (4.20)

where v0 is the peak velocity at the center of the tube, r is the radial distance from the center, and R is the tube radius.
The viscosity of the fluid will give a resistance to flow. A pressure difference is, thus, needed to overcome this for
maintaining a steady flow. The relation between velocity and pressure difference is given by
∆P = R f Q, (4.21)
where R f is the viscous resistance. The relation is named after Poiseuille, who studied flow in capillary tubes and
published his findings in 1846 [47].
For a laminar flow with a parabolic velocity profile the viscous resistance is

    Rf = 8µl / (πR⁴),                                                           (4.22)
where l is the distance over which the pressure drop is found. The resistance is highly dependent on vessel radius.
Decreasing the radius by a factor of two increases the resistance by a factor of 16. The resistance given in the
equation is only valid for a parabolic profile. Different values are encountered for other profiles.
Poiseuille’s law is equivalent to Ohm’s law for electrical circuits, with ∆P equivalent to voltage and Q to current.
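The strong radius dependence can be verified numerically. A minimal sketch of (4.21)–(4.22); the default viscosity of 4 mPa·s is an assumed value for whole blood, not taken from the text:

```python
import math

def poiseuille_flow(delta_p, radius, length, mu=4e-3):
    """Volume flow rate Q = dP / Rf for steady laminar tube flow,
    with Rf = 8*mu*l/(pi*R^4) from Eqs. (4.21)-(4.22).
    mu is the viscosity in Pa*s (default: assumed blood viscosity)."""
    r_f = 8.0 * mu * length / (math.pi * radius**4)
    return delta_p / r_f
```

Halving the radius reduces the flow by a factor of 16 for the same pressure difference, as stated above.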
The flow in the human body is generated by the pulsatile action of the heart, and a pulsatile rather than steady flow
is encountered throughout the circulatory system. The velocity at a point in a vessel is, therefore, dependent on
time and experiences acceleration and deceleration. This affects the velocity profile, which cannot be considered
parabolic, even when a steady state of pulsation is reached.
Poiseuille’s law can be reformulated for pulsatile flow by assuming linearity, i.e., that the flow pattern can be
decomposed into sinusoidal components and then added to obtain the velocity variation in time and space. This
assumes that the fluid is Newtonian. Further, entrance effects are discarded, and it is assumed that the flow has
attained a steady state of pulsation. The relation between pressure difference and volume flow rate is then for a
single, sinusoidal component [48]:
    Q(t) = (8 M′10 / (Rf α²)) ∆P sin(ωt − φ + ε′10)

    α = R √(ρω/µ)                                                               (4.23)

    Rf = 8µl / (πR⁴)
when the pressure difference is ∆P cos(ωt − φ). Here ε′10 determines the phase of the wave and M′10 its amplitude.
They are both complex functions of α; the expressions are given in Appendix A of [4]. Womersley’s number α
indicates how far the pressure–flow-rate relation of the pulsatile flow is removed from that of steady flow. For α < 1
the flow rate and pressure difference are in phase, whereas for α > 1 the flow rate lags the pressure difference.
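Womersley’s number is easy to evaluate for a given vessel. A small sketch; the density and viscosity defaults are assumed values for blood, not taken from the text:

```python
import math

def womersley_number(radius, f, rho=1060.0, mu=4e-3):
    """alpha = R*sqrt(rho*omega/mu) from (4.23), with omega = 2*pi*f.
    radius in m, f in Hz; rho and mu default to assumed blood values."""
    omega = 2.0 * math.pi * f
    return radius * math.sqrt(rho * omega / mu)
```

Larger vessels and higher harmonics give larger α, moving the pressure–flow-rate relation further from the steady-flow case.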
Figure 4.11: The Womersley parameters α²/M′10 and ε′10 as a function of α.
An increase in α thus diminishes the volume flow rate. The phase difference between pressure and flow rate will
also increase and approach 90° for high frequencies.
Assuming a heart rate of 62 beats per minute, α is equal to 10.2 for the first harmonic in the aorta, and 1.69 in a
small peripheral vessel (R = 0.5 cm). The pressure difference and volume flow rate will, thus, be nearly 90° out of
phase in the aorta and nearly in phase in the peripheral vessel. Higher harmonics of the flow rate in the aorta will
be severely attenuated due to viscosity, as can be seen in the graph for α²/M′10. A discussion of the validity of the
model can be found in [47].
From a knowledge of the volume flow rate, Evans [49] has shown that it is possible to calculate the velocity profile
for the steady-state pulsatile flow, when entrance effects are neglected. The relation is given by:
    vm(t, r/R) = (Qm / (πR²)) |ψm(r/R, τm)| cos(ωm t − φm + χm)

    ψm(r/R, τm) = (τm J0(τm) − τm J0((r/R) τm)) / (τm J0(τm) − 2 J1(τm))        (4.24)

    χm = ∠ψm(r/R, τm)

    τm = j^(3/2) R √(ρωm/µ),
where Jn(x) is the nth-order Bessel function. Also, ∠ψ(r/R, τm) denotes the angle of the complex function ψ, and
|ψ| denotes its amplitude. The function ψ depends on the radial position in the vessel, the angular frequency, and the
fluid. It describes how the velocity changes with time and position over a cycle of the sinusoidal flow. Thus, from
one measurement of the volume flow rate the whole velocity profile can be calculated.
The evolution of ∠ψ and |ψ| for different radial positions and values of α is shown in Fig. 4.12. Note that the
velocity is always zero at the vessel boundary. For low α a nearly parabolic profile is obtained, and the profile
becomes progressively more blunt with an increase in α, so that the central core of the fluid moves as a whole.
The volume flow rate is related to the mean spatial velocity by:

    Q(t) = πR² v̄(t).                                                            (4.25)
Figure 4.12: |ψ| (top) and ∠ψ (bottom) as a function of the normalized radius for α = 2, 5, 10, and 20.
Assuming that the fluid is Newtonian, so that linearity holds, it is possible to superimpose the different
sinusoidal components to obtain the time evolution of the pulsatile flow. The reverse process is also possible: here
Q(t) or v̄(t) is observed, and the individual sinusoidal components are found by Fourier decomposition:
    Vm = (1/T) ∫₀ᵀ v̄(t) exp(−jmωt) dt.                                          (4.26)
Using (4.24) and (4.27) it is now possible to reconstruct the time evolution of the velocity profile for the pulsatile
flow by:

    v(t, r/R) = 2v0 (1 − (r/R)²) + Σ_{m=1..∞} |Vm| |ψm| cos(mωt − φm + χm).     (4.28)
The first term is the steady flow.
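Equations (4.24) and (4.28) translate directly into code; scipy’s Bessel function `jv` accepts the complex argument τm. A sketch, with blood density ρ = 1060 kg/m³ and viscosity µ = 4 mPa·s as assumed default values:

```python
import numpy as np
from scipy.special import jv

def womersley_psi(y, tau):
    """Profile function psi(r/R, tau_m) from Eq. (4.24); y = r/R."""
    return (tau * jv(0, tau) - tau * jv(0, y * tau)) / \
           (tau * jv(0, tau) - 2.0 * jv(1, tau))

def velocity_profile(t, y, R, v0, Vm, phi, omega, rho=1060.0, mu=4e-3):
    """Reconstruct v(t, r/R) from Eq. (4.28): the steady parabolic term
    plus the superimposed Womersley components. Vm and phi hold |Vm| and
    phi_m for m = 1, 2, ...; rho and mu are assumed blood parameters."""
    v = 2.0 * v0 * (1.0 - y**2)
    for m, (vm, ph) in enumerate(zip(Vm, phi), start=1):
        tau = 1j**1.5 * R * np.sqrt(rho * m * omega / mu)   # tau_m in (4.24)
        psi = womersley_psi(y, tau)
        v = v + vm * np.abs(psi) * np.cos(m * omega * t - ph + np.angle(psi))
    return v
```

Evaluating the function over a grid of y in [0, 1] for successive t reproduces profiles of the kind shown later for the femoral and carotid arteries.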
By observing the spatially averaged flow volume, the whole profile can be reconstructed. This can be attained
by ultrasound systems insonifying the whole vessel and processing the returned signal. Examples of the mean
velocities from the common femoral and carotid arteries are shown in Fig. 4.13 and the different values for Vp and
φ p are given in Table 4.1. The resulting velocity profiles are shown in Fig. 4.14. Profiles for the whole cardiac
cycle are shown with time increasing toward the top. The dotted lines indicate zero velocity.
The derivation given in this chapter generally assumes that the single scatterer stays within a region of uniform
insonation. This is quite a crude assumption for realistic beams, as shown in Chapter 2. Both continuous and
Figure 4.13: Spatial mean velocities from the common femoral (top) and carotid arteries (bottom). The phase
indicated is that within a single cardiac cycle (from [4], Copyright Cambridge University Press).
Table 4.1: Fourier components for flow velocity in the common femoral and common carotid arteries (data from
[50])
Figure 4.14: Velocity profiles from the common femoral and carotid arteries. The profiles at time zero are shown
at the bottom of the figure and time is increased toward the top. One whole cardiac cycle is covered and the dotted
lines indicate zero velocity (from [4], Copyright Cambridge University Press).
pulsed fields vary with position and this needs to be taken into account using the Tupholme–Stepanishen field
model.
The received voltage trace can then be written as

    vr(t) = vpe(t) ∗t fm(~r2) ∗r hpe(~r2, t),                                   (4.29)

where fm accounts for the scattering by the medium, hpe describes the spatial distribution of the ultrasound field,
and vpe is the one-dimensional pulse emanating from the pulse excitation and the conversion from voltage to pressure
and back again. Here ∗t denotes temporal and ∗r spatial convolution. The model in Section 4.2 states that the
scatterer will move during the interaction with the
ultrasound, giving rise to a Doppler shift. The most important feature is, however, the interpulse movement, because
this is used by the pulsed scanners for detecting velocity. The approximation is then to include the small Doppler
shift into the one-dimensional pulse v pe , shifting its frequency content to f 0 = (1 + 2vz /c) f , and assuming that
the field interacting with the scatterer stays constant. Usually the pulse duration is a few microseconds, so that
the scatterers only move a fraction of a millimeter, even for high blood velocities, during the interaction, and the
field can safely be assumed to be constant over this distance. The received voltage trace is then found directly
from (4.29) with~r2 indicating the position of the scatterers. Note that a spatial convolution takes place and that the
received response is a summation of contributions from numerous scatterers. The scatterers move to the position

    ~r2(i + 1) = ~r2(i) + Tprf ~v(~r2(i), t)

when the next field from the subsequent pulse emission impinges on the scatterers. Here i denotes the pulse
or line number and ~v(~r2 (i),t) is the velocity of the scatterer at the position indicated by ~r2 (i) at time t. This
assumes that the scatterers do not accelerate during the interaction. The movement between pulses is Tpr f~v(~r2 (i),t),
and the new position of the scatterers, ~r2 (i + 1), can then be inserted in (4.29) and used for calculating the next
voltage trace. The received signals for multiple pulse emissions can, thus, be found by these two equations. The
actual calculation is rather complicated in three dimensions, but is easy to handle by a computer. The different
velocities for the scatterers necessitate that separate Doppler shifts are included in v pe for each scatterer. This can
be done in a computer simulation of each scatterer before the contributions for all scatterers are summed, or a mean
Doppler shift can be used. The effect of this Doppler shift is minor in pulsed systems and can, at least to a first
approximation, be neglected. A fairly realistic simulation should, thus, be possible with this approach.
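The interpulse position update described above can be sketched directly. A minimal illustration, assuming the velocity field is supplied as a function of position and time; all names are chosen for the example:

```python
import numpy as np

def propagate_scatterers(pos, t, velocity_field, t_prf):
    """Move scatterers one pulse-repetition interval ahead:
    r2(i+1) = r2(i) + Tprf * v(r2(i), t), assuming the velocity is
    constant during the short pulse-scatterer interaction.
    pos is an (N, 3) array of positions in meters."""
    return pos + t_prf * velocity_field(pos, t)
```

Between emissions the positions are simply advanced with the local velocity; for a plug flow of 0.5 m/s along the beam axis and Tprf = 200 µs, each scatterer moves 0.1 mm per pulse.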
A phantom was developed for evaluating color flow imaging. It generates data for flow in a vessel with properties
like the carotid artery. The velocity profile is close to parabolic, which is a fairly good approximation during most
of the cardiac cycle [4] for a carotid artery. The phantom generates 10 files with the positions of the scatterers at
the corresponding time steps. From file to file the scatterers are propagated to the next position as a function
of their velocity and the time between pulses. The ten files are then used for generating the RF lines for the
different imaging directions and for ten different times. A linear scan of the phantom was made with a 192-element
transducer using 64 active elements with a Hanning apodization in transmit and receive. The element height was
5 mm, the width was a wavelength, and the kerf 0.05 mm. The pulses were the same as used for the point
phantom mentioned in Section 3.6. A single transmit focus was placed at 70 mm, and receive focusing was done at
20 mm intervals starting 30 mm from the transducer surface. The resulting signals were then used in a standard
autocorrelation estimator [4] for finding the velocity image. The resulting color flow image is shown in Fig. 4.15.
Note how the vessel appears larger at the bottom than at the top.
Another phantom is used for simulating data for a spectral Doppler system. It calculates RF data as measured
from the femoral artery (arteria femoralis) in the upper leg. The phantom generates a number of files for particular
time instances. The scatterers are then propagated between pulses according to the Womersley model mentioned in
Section 4.5, thereby generating a three-dimensional, pulsed model of the flow in the femoral artery. The scatterer
position files are all generated before the simulation of the fields takes place. For a pulse repetition frequency of
5 kHz, 5000 files are generated per second of simulation time, so a large amount of data is involved. The scatterer
position files are then used during the simulation for generating one RF signal for each pulse emission. These files
can then be used for either spectral analysis or for estimating the velocity profile in the artery.
The simulation is made so that multiple workstations can work on the problem simultaneously, as long as they have
access to the same directory. This is done by checking which RF files have been generated. The first one not yet
simulated is reserved by the program by writing a dummy file, after which the simulation is performed. Multiple
workstations can then work simultaneously on the problem and generate a result quickly. The resulting spectrogram
is shown in Fig. 4.16.
Figure 4.15: Color flow image of vessel with a parabolic flow profile.
Figure 4.16: Spectrogram of the simulated flow data from the femoral artery (velocity axis in [m/s]).
[1] J. A. Jensen. A model for the propagation and scattering of ultrasound in tissue. J. Acoust. Soc. Am., 89:182–
191, 1991a.
[2] J. A. Jensen. A new calculation procedure for spatial impulse responses in ultrasound. J. Acoust. Soc. Am., in press, 1999.
[3] J. A. Jensen, D. Gandhi, and W. D. O’Brien. Ultrasound fields in an attenuating medium. In Proc. IEEE
Ultrason. Symp., pages 943–946, 1993.
[4] J. A. Jensen. Estimation of Blood Velocities Using Ultrasound: A Signal Processing Approach. Cambridge
University Press, New York, 1996.
[5] A. D. Pierce. Acoustics, An Introduction to Physical Principles and Applications. Acoustical Society of
America, New York, 1989.
[6] P. M. Morse and K. U. Ingard. Theoretical Acoustics. McGraw-Hill, New York, 1968.
[7] P. R. Stepanishen. Pulsed transmit/receive response of ultrasonic piezoelectric transducers. J. Acoust. Soc.
Am., 69:1815–1827, 1981.
[8] L. E. Kinsler, A. R. Frey, A. B. Coppens, and J. V. Sanders. Fundamentals of Acoustics. John Wiley & Sons,
New York, third edition, 1982.
[9] P. R. Stepanishen. Wide bandwidth near and far field transients from baffled pistons. In Proc. IEEE Ultrason.
Symp., pages 113–118, 1977.
[10] L. Råde and B. Westergren. Beta Mathematics Handbook. Chartwell-Bratt, Ltd., Kent, England, 1990.
[11] M. R. Spiegel. Mathematical handbook of formulas and tables. McGraw-Hill, New York, 1968.
[12] M. Arditi, F. S. Foster, and J. Hunt. Transient fields of concave annular arrays. Ultrason. Imaging, 3:37–61,
1981.
[13] A. Penttinen and M. Luukkala. The impulse response and nearfield of a curved ultrasonic radiator. J. Phys.
D: Appl. Phys., 9:1547–1557, 1976.
[14] J. C. Lockwood and J. G. Willette. High-speed method for computing the exact solution for the pressure
variations in the nearfield of a baffled piston. J. Acoust. Soc. Am., 53:735–741, 1973.
[15] J. L. S. Emeterio and L. G. Ullate. Diffraction impulse response of rectangular transducers. J. Acoust. Soc.
Am., 92:651–662, 1992.
[16] J. A. Jensen. Ultrasound fields from triangular apertures. J. Acoust. Soc. Am., 100(4):2049–2056, 1996a.
[17] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling. Numerical Recipes in C: The Art of
Scientific Computing. Cambridge University Press, Cambridge, 1988.
[18] G. R. Harris. Transient field of a baffled planar piston having an arbitrary vibration amplitude distribution. J.
Acoust. Soc. Am., 70:186–204, 1981.
[19] P. R. Stepanishen. Acoustic transients from planar axisymmetric vibrators using the impulse response ap-
proach. J. Acoust. Soc. Am., 70:1176–1181, 1981.
[20] J. Naze Tjøtta and S. Tjøtta. Nearfield and farfield of pulsed acoustic radiators. J. Acoust. Soc. Am., 71:824–
834, 1982.
[21] J. W. Goodman. Introduction to Fourier optics. McGraw Hill Inc., New York, second edition, 1996.
[22] J.C. Gore and S. Leeman. Ultrasonic backscattering from human tissue: A realistic model. Phys. Med. Biol.,
22:317–326, 1977.
[23] M. Fatemi and A. C. Kak. Ultrasonic B-scan imaging: Theory of image formation and a technique for restora-
tion. Ultrason. Imaging, 2:1–47, 1980.
[24] L.A. Chernov. Wave propagation in a random medium. Academic Press, 1960.
[25] S. Leeman, V. C. Roberts, P. E. Chandler, and L. A. Ferrari. Inverse imaging with strong multiple scattering.
In M. A. Viergever and A. Todd-Pokropek, editors, Mathematics and computer science in medical imaging,
pages 279–289. Springer-Verlag, 1988.
[26] G. E. Tupholme. Generation of acoustic pulses by baffled plane pistons. Mathematika, 16:209–224, 1969.
[27] P. R. Stepanishen. The time-dependent force and radiation impedance on a piston in a rigid infinite planar
baffle. J. Acoust. Soc. Am., 49:841–849, 1971.
[28] P. M. Morse and H. Feshbach. Methods of theoretical physics, part I. McGraw-Hill, New York, 1953.
[29] P. R. Stepanishen. Transient radiation from pistons in an infinite planar baffle. J. Acoust. Soc. Am., 49:1629–
1638, 1971.
[30] A. C. Kak and K. A. Dines. Signal processing of broadband pulse ultrasound: measurement of attenuation of
soft biological tissues. IEEE Trans. Biomed. Eng., BME-25:321–344, 1978.
[31] K. V. Gurumurthy and R. M. Arthur. A dispersive model for the propagation of ultrasound in soft tissue.
Ultrason. Imaging, 4:355–377, 1982.
[32] J. A. Jensen and P. Munk. Computer phantoms for simulating ultrasound B-mode and CFM images. In S. Lees
and L. A. Ferrari, editors, Acoustical Imaging, volume 23, pages 75–80, 1997.
[33] J. A. Jensen. Field: A program for simulating ultrasound systems. Med. Biol. Eng. Comp., 10th Nordic-Baltic
Conference on Biomedical Imaging, Vol. 4, Supplement 1, Part 1:351–353, 1996b.
[34] R. F. Wagner, S. W. Smith, J. M. Sandrick, and H. Lopez. Statistics of speckle in ultrasound B-scans. IEEE
Trans. Son. Ultrason., 30:156–163, 1983.
[35] J. A. Jensen. Ultrasound systems for blood velocity estimation. In MMAR 98 Conference proceedings, pages
815–820, 1998.
[36] Shigeo Satomura. Ultrasonic Doppler method for the inspection of cardiac functions. J. Acoust. Soc. Am.,
29:1181–1185, 1957.
[37] Shigeo Satomura. Study of the flow patterns in peripheral arteries by ultrasonics. J. Acoust. Soc. Jap.,
15:151–158, 1959.
[38] D. W. Baker. Pulsed ultrasonic Doppler blood-flow sensing. IEEE Trans. Son. Ultrason., SU-17:170–185,
1970.
[39] Pierre A. Peronneau and Fernand Leger. Doppler ultrasonic pulsed blood flowmeter. In Proc. 8th Int. Conf.
Med. Biol. Eng., pages 10–11, 1969.
[40] Peter N. T. Wells. A range gated ultrasonic Doppler system. Med. Biol. Eng., 7:641–652, 1969.
[41] K. Namekawa, C. Kasai, M. Tsukamoto, and A. Koyano. Realtime bloodflow imaging system utilizing
autocorrelation techniques. In R. A. Lerski and P. Morley, editors, Ultrasound ’82, pages 203–208, New York,
1982. Pergamon Press.
[42] C. Kasai, K. Namekawa, A. Koyano, and R. Omoto. Real-time two-dimensional blood flow imaging using
an autocorrelation technique. IEEE Trans. Son. Ultrason., 32:458–463, 1985.
[43] O. Bonnefous and P. Pesqué. Time domain formulation of pulse-Doppler ultrasound and blood velocity
estimation by cross correlation. Ultrason. Imaging, 8:73–85, 1986.
[44] D. Dotti, E. Gatti, V. Svelto, A. Uggè, and P. Vidali. Blood flow measurements by ultrasound correlation
techniques. Energia Nucleare, 23:571–575, 1976.
[45] P. M. Embree and W. D. O’Brien. The accurate ultrasonic measurement of volume flow of blood by time-
domain correlation. In Proc. IEEE Ultrason. Symp., pages 963–966, 1985.
[46] J. A. Jensen. Implementation of ultrasound time-domain cross-correlation blood velocity estimators. IEEE
Trans. Biomed. Eng., 40:468–474, 1993a.
[47] W. W. Nichols and M. F. O’Rourke. McDonald’s Blood Flow in Arteries, Theoretical, Experimental and
Clinical Principles. Lea & Febiger, Philadelphia, 1990.
[48] J. R. Womersley. Oscillatory motion of a viscous liquid in a thin-walled elastic tube. I: The linear approxi-
mation for long waves. Phil. Mag., 46:199–221, 1955.
[49] David H. Evans. Some aspects of the relationship between instantaneous volumetric blood flow and continu-
ous wave Doppler ultrasound recordings III. Ultrasound Med. Biol., 9:617–623, 1982b.
[50] D. H. Evans, W. N. McDicken, R. Skidmore, and J. P. Woodcock. Doppler Ultrasound, Physics, Instrumen-
tation, and Clinical Applications. John Wiley & Sons, New York, 1989.