John J. Friel - Practical Guide To Image Analysis (2000, ASM International)
ASM International®
Materials Park, OH 44073-0002
www.asminternational.org
Copyright © 2000
by
ASM International®
All rights reserved
No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means,
electronic, mechanical, photocopying, recording, or otherwise, without the written permission of the copyright
owner.
Great care is taken in the compilation and production of this Volume, but it should be made clear that NO
WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, ARE GIVEN IN CONNECTION
WITH THIS PUBLICATION. Although this information is believed to be accurate by ASM, ASM cannot
guarantee that favorable results will be obtained from the use of this publication alone. This publication is
intended for use by persons having technical skill, at their sole discretion and risk. Since the conditions of product
or material use are outside of ASM’s control, ASM assumes no liability or obligation in connection with any use
of this information. No claim of any kind, whether as to products or information in this publication, and whether
or not based on negligence, shall be greater in amount than the purchase price of this product or publication in
respect of which damages are claimed. THE REMEDY HEREBY PROVIDED SHALL BE THE EXCLUSIVE
AND SOLE REMEDY OF BUYER, AND IN NO EVENT SHALL EITHER PARTY BE LIABLE FOR
SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES WHETHER OR NOT CAUSED BY OR RESULTING FROM THE NEGLIGENCE OF SUCH PARTY. As with any material, evaluation of the material under
end-use conditions prior to specification is essential. Therefore, specific testing under actual conditions is
recommended.
Nothing contained in this book shall be construed as a grant of any right of manufacture, sale, use, or
reproduction, in connection with any method, process, apparatus, product, composition, or system, whether or not
covered by letters patent, copyright, or trademark, and nothing contained in this book shall be construed as a
defense against any alleged infringement of letters patent, copyright, or trademark, or as a defense against liability
for such infringement.
Comments, criticisms, and suggestions are invited, and should be forwarded to ASM International.
ASM International staff who worked on this project included E.J. Kubel, Jr., Technical Editor; Bonnie Sanders,
Manager, Production; Nancy Hrivnak, Copy Editor; Kathy Dragolich, Production Supervisor; and Scott Henry,
Assistant Director, Reference Publications.
ISBN 0-87170-688-1
SAN: 204-7586
ASM International®
Materials Park, OH 44073-0002
www.asminternational.org
Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
CHAPTER 1: Image Analysis: Historical Perspective . . . . . . . . . 1
Don Laferty, Objective Imaging Ltd.
Video Microscopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Beginnings: 1960s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Growth: 1970s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Maturity: 1980s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Desktop Imaging: 1990s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Truly Digital: 2000 and Beyond . . . . . . . . . . . . . . . . . . . . . . . 12
CHAPTER 2: Introduction to Stereological Principles . . . . . . . 15
George F. Vander Voort, Buehler Ltd.
Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Specimen Preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Volume Fraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Number per Unit Area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Intersections and Interceptions per Unit Length . . . . . . . . . . . . . 23
Grain-Structure Measurements . . . . . . . . . . . . . . . . . . . . . . . . 23
Inclusion Content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Measurement Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Image Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
CHAPTER 3: Specimen Preparation for Image Analysis . . . . . . 35
George F. Vander Voort, Buehler Ltd.
Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Sectioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Specimen Mounting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Grinding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Polishing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Examples of Preparation Procedures . . . . . . . . . . . . . . . . . . . . 56
Etching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Preface
Man has been using objects made from metals for more than 3000 years, ranging from domestic utensils, artwork, and jewelry to weapons made of brass alloys, silver, and gold. The alloys used for these objects were developed empirically, through centuries of trial and error. Prior to the late 1800s, engineers had no concept of the relationship between a material's properties and its structure. In most human endeavors, empirical observations are used to create things, and the scientific principles that govern how the materials behave lag far behind. Moreover, once the scientific concepts are understood, practicing metallurgists often have been slow to apply the theory to advance their industries.
The origins of the art of metallography date back to Sorby's work in 1863. Although his metallographic work was ignored for 20 years, the procedures he developed for revealing the microstructures of metals directly led to some of today's well-established relationships between structure and properties. During the past 140 years, metallography has
transformed from an art into a science. Concurrent with the advances in
specimen preparation techniques has been the development of method-
ologies to better evaluate microstructural features quantitatively.
This book, as its title suggests, is intended to serve as a “practical
guide” for applying image analysis procedures to evaluate microstructural
features. Chapters 1 and 2 present, respectively, a historical overview of how quantitative image analysis developed into today's television- and computer-based analysis systems, and the science of stereology. The third chapter provides details of how metallographic
specimens should be properly prepared for image analysis. Chapters 4
through 7 consider the principles of image analysis, what types of
measurements can be made, the characteristics of particle dispersions, and
methods for analysis and interpretation of the results. Chapter 8 illustrates
how macro programs are developed to perform several specific image
analysis applications. Chapter 9 illustrates the use of color metallography
for image analysis problems.
This book considers most of the aspects required to apply image analysis to materials problems. It should be useful to engineers, scientists, and technicians who need to extract quantitative information from material systems. The principles discussed can be applied to typical quality-control problems and standards, as well as to problems that may be encountered in research and development projects. In many image
CHAPTER 1
Image Analysis:
Historical Perspective
Don Laferty
Objective Imaging Ltd.
Video Microscopy
1960s. From the limited hardware systems first used for quantitative metallographic characterization to modern, highly flexible image processing software applications, image analysis (IA) has found a home in an enormous range of industrial and biomedical applications. The popularity of digital imaging
in various forms is still growing. The explosion of affordable computer
technologies during the 1990s coupled with recent trends in digital-image
acquisition devices places a renewed interest in how digital images are
created, managed, processed, and analyzed. There now is an unprec-
edented, growing audience involved in the digital imaging world on a
daily basis. Who would have imagined, in the early days of IA, when an imaging system cost many tens, if not hundreds, of thousands of dollars, that image processing software of such power would be available today for a few hundred dollars at the corner computer store?
Yet, beneath all these new technological developments, underlying
common elements that are particular to the flavor of imaging referred to
as “scientific image analysis” have changed little since their very
beginnings. Photographers say that good pictures are not “taken” but
instead are carefully composed and considered; IA similarly relies on
intelligent decisions regarding how a given subject—the specimen—
should be prepared for study and what illumination and optical configu-
rations provide the most meaningful information.
In acquiring the image, the analyst who attends to details such as
appropriate video levels and shading correction ensures reliable and
repeatable results, which build confidence. The myriad digital image
processing methods can be powerful allies when applied to both simple
and complex imaging tasks. The real benefits of image processing,
however, only come when the practitioner has the understanding and
experience to choose the appropriate tools, and, perhaps more impor-
tantly, knows the boundaries inside which tool use can be trusted.
The goal of IA is to distill a manageable set of meaningful quantitative descriptions from the specimen (or, better, a set of specimens). In practice, successful quantification depends on an understanding of the nature of these measurements, so that when the proper parameters are selected, accuracy, precision, and repeatability, as well as the efficiency of the whole process, are maximized.
Set against the current, renewed emphasis on digital imaging is a
history of TV-based IA that spans nearly four decades. Through various
generations of systems, techniques for specimen preparation, image
acquisition, processing, measurement, and analysis have evolved from the
first systems of the early 1960s into the advanced general purpose systems
of the 1980s, finally arriving at the very broad spectrum of imaging
options available into the 21st century. A survey of the methods used in
practice today shows that for microstructure evaluations, many of the
actual image processing and analysis techniques used now do not differ all
that much from those used decades ago.
Beginnings: 1960s
information for a period longer than that of the video frame rate.
Typically, only a few lines of video were being stored at one time. In one
regard, these systems were very simple to use. For example, to gage the
area percentage result using the original Quantimet A, the investigator
needed to simply read the value from the continuously updated analog
meter. Compared with the tedium of manual point counting using grid
overlays, the immediate results produced by this new QTM gave a hint of
the promise of applying television technology to microstructure charac-
terization.
The first system capable of storing a full black and white image was the
Bausch and Lomb QMS introduced in 1968 (Ref 11). Using a light pen,
the operator could measure properties of individual objects, now referred to as feature-specific properties, for the first time.
The second major foundation of IA in these early days was mathematical morphology, developed primarily by the French mathematicians J. Serra and G. Matheron at the Ecole des Mines de Paris (Ref 12). The
mathematical framework for morphological image processing was intro-
duced by applying topology and set theory to problems in earth and
materials sciences. In mathematical morphology, the image is treated in a
numerical format as a set of valued points, and basic set transformations
such as the union and intersection are performed. This results in concepts
such as the erosion and dilation operations, which are, in one form or
another, some of the most heavily used processing operations in applied
IA even today.
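The erosion and dilation operations can be sketched directly from their set definitions. The following minimal NumPy illustration, using a hypothetical 3 × 3 square structuring element and a synthetic binary image, is a sketch of the concept only, not any particular system's implementation:

```python
import numpy as np

def erode(img, se):
    # Binary erosion: a pixel stays set only if the structuring
    # element, centered on it, fits entirely inside the foreground.
    H, W = img.shape
    k = se.shape[0] // 2
    padded = np.pad(img, k)                  # zero padding outside the image
    out = np.ones_like(img)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            if se[dy + k, dx + k]:
                out &= padded[k + dy : k + dy + H, k + dx : k + dx + W]
    return out

def dilate(img, se):
    # Binary dilation: a pixel is set if the reflected structuring
    # element, centered on it, hits any foreground pixel.
    H, W = img.shape
    k = se.shape[0] // 2
    padded = np.pad(img, k)
    out = np.zeros_like(img)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            if se[dy + k, dx + k]:
                out |= padded[k - dy : k - dy + H, k - dx : k - dx + W]
    return out

se = np.ones((3, 3), dtype=np.uint8)    # 3 x 3 square structuring element
img = np.zeros((7, 7), dtype=np.uint8)
img[2:5, 2:5] = 1                       # a 3 x 3 square "particle"
assert erode(img, se).sum() == 1        # erosion shrinks it to its center pixel
assert dilate(erode(img, se), se).sum() == 9   # dilation of that pixel restores it
```

Chaining the two (an "opening") is the classic way such systems removed small noise features while preserving larger particles.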
Growth: 1970s
By the 1970s, the field of IA was prepared for rapid growth into a wide
range of applications. The micro-Videomat system was introduced by
Carl Zeiss (Ref 13), and the Millipore MC particle-measurement system
was being marketed in America and Europe (Ref 14). The first IA system
to use mathematical morphology was the Leitz texture analysis system
(TAS) introduced in 1974. Also, a new field-specific system named the Histotrak image analyzer was introduced by the British firm Ealing-Beck (Ref 15).
In the meantime, Metals Research had become IMANCO (for Image
Analyzing Computers), and its Quantimet 720 system offered a great deal
more flexibility than the original systems of the 1960s (Fig. 2). Still
hardware based, this second generation of systems offered many new and
useful features. The Q720 used internal digital-signal processing hard-
ware, a built-in binary morphological image processor with selection of
structuring element and size via dials on the front panel, and advanced
feature analysis with the size and shape of individual objects measured
and reported on-screen in real time. The system was also flexible, due to
programmability implemented via a logic matrix configured using sets of
twisted pairs of wire. Other impressive innovations included a light pen
for direct editing of the image on the video monitor and automated control
of microscope stage and focus. Other systems offered at the time, such as the TAS and the pattern analysis system (PAS) (Bausch and Lomb, USA), had many similar processing and measurement capabilities.
The performance of early hardware-based systems was very high, even
by the standards of today. Using analog tube-video cameras, high-
Fig. 2 Q720 (IMANCO, Cambridge, U.K.) image analyzer with digital image
processing hardware
Maturity: 1980s
• Use the best procedures possible to prepare the specimens for analysis. No amount of image enhancement can correct problems created by poorly prepared specimens.
• Take care in acquiring high-quality images.
• Apply only the image processing necessary to reveal what counts.
• Keep the final goals in mind when deciding what and how to measure.
References
CHAPTER 2
Introduction to
Stereological Principles
George F. Vander Voort
Buehler Ltd.
Sampling
Volume Fraction
data are used to calculate the area covered by the second phase, which then is divided by the image area to obtain the area fraction. All three methods give a precise measurement of the area fraction of one field, but an enormous amount of effort must be expended per field. However, it is well recognized that the field-to-field variability in volume fraction has a larger influence on the precision of the volume fraction estimate than the error in rating a specific field, regardless of the procedure used. So, it is not
PP = Pα / PT (Eq 1)
where Pα is the number of grid points lying inside the feature of interest, α, plus one-half the number of grid points lying on particle boundaries, and PT is the total number of grid points. Studies show that the point fraction is equivalent to the lineal fraction, LL, and the area fraction, AA, and all three are unbiased estimates of the volume fraction, VV, of the second-phase particles:

PP = LL = AA = VV (Eq 2)
Point counting is much faster than lineal or areal analysis and is the
preferred manual method. Point counting is always performed on the
minor phase, where VV < 0.5. The amount of the major (matrix) phase can
be determined by the difference.
The fields measured should be selected at locations over the entire
polished surface and not confined to a small portion of the specimen
surface. The field measurements should be averaged, and the standard
deviation can be used to assess the relative accuracy of the measurement,
as described in ASTM E 562.
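The point-count procedure lends itself to a short numerical sketch. In the example below, the binary "micrograph" and the 10 × 10 grid are hypothetical choices of ours (the grid pitch is illustrative, not a value from ASTM E 562); the point fraction PP from the sparse grid and the area fraction AA from all pixels both estimate the same volume fraction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "micrograph": binary second-phase image with known VV = 0.15
img = rng.random((1000, 1000)) < 0.15      # True = second-phase pixel

# Overlay a sparse 10 x 10 point grid and count hits (point counting)
ys = np.linspace(50, 949, 10).astype(int)
xs = np.linspace(50, 949, 10).astype(int)
hits = sum(img[y, x] for y in ys for x in xs)
PP = hits / (len(ys) * len(xs))            # point fraction, estimates VV

AA = img.mean()                            # area fraction from all pixels
print(f"PP = {PP:.3f}, AA = {AA:.4f}")     # both estimate VV = 0.15
```

Averaging PP over many such fields, and using the standard deviation of the field results for the confidence interval, is the essence of the E 562 procedure.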
A = VV / NA (Eq 3)
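A quick numeric illustration of Eq 3, using hypothetical values of ours for the volume fraction and the particle count per unit area:

```python
# Eq 3 with hypothetical values: mean particle cross-sectional area
# A = VV / NA (volume fraction over particles per unit test area)
VV = 0.15          # volume fraction (dimensionless)
NA = 1200.0        # particles per mm^2 of section plane
A_mean = VV / NA   # mean area per particle, in mm^2
print(f"A = {A_mean * 1e6:.1f} um^2 per particle")   # -> 125.0 um^2
```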
Grain-Structure Measurements
l = 1 / PL = 1 / NL (Eq 4)
SV = 2PL (Eq 5)
and
LA = (π/2) PL (Eq 6)

The degree of orientation, Ω, is determined from counts made with test lines perpendicular and parallel to the orientation axis:

Ω = (PL⊥ − PL∥) / (PL⊥ + 0.571 PL∥) (Eq 7)
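A numeric sketch of Eq 5 to 7, with hypothetical intersection counts of our choosing:

```python
import math

# Hypothetical boundary-intersection counts per mm of test line
PL = 50.0          # average over randomly oriented test lines
PL_perp = 60.0     # test lines perpendicular to the orientation axis
PL_par = 40.0      # test lines parallel to the orientation axis

SV = 2.0 * PL                    # Eq 5: boundary area per unit volume, mm^2/mm^3
LA = (math.pi / 2.0) * PL        # Eq 6: boundary length per unit area, mm/mm^2
omega = (PL_perp - PL_par) / (PL_perp + 0.571 * PL_par)   # Eq 7

print(f"SV = {SV:.0f} mm^2/mm^3, LA = {LA:.1f} mm/mm^2, Omega = {omega:.3f}")
```

A fully random structure gives PL⊥ = PL∥ and hence Ω = 0; a fully oriented one gives Ω = 1.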
t = r / 2 (Eq 8)

r = 1 / NL (Eq 9)
The mean true spacing, t, is 1⁄2 r. To make accurate measurements, the
lamellae must be clearly resolved; therefore, use of transmission electron
microscope (TEM) replicas is quite common.
NL measurements also are used to measure the interparticle spacing in a two-phase alloy, such as the spacing between carbides or intermetallic precipitates. The mean center-to-center planar spacing between particles over 360°, σ, is the reciprocal of NL. For the second-phase particles in the idealized two-phase structure shown in Fig. 1, a count of the number of particles intercepted by the horizontal and vertical test lines yields 22.5 interceptions. The total line length is 1743 µm; therefore, NL = 0.0129 per µm, or 12.9 per mm, and σ is 77.5 µm, or 0.0775 mm.
The mean edge-to-edge distance between such particles over 360°, known as the mean free path, λ, is determined in like manner but requires knowledge of the volume fraction of the particles. The mean free path is calculated from:

λ = (1 − VV) / NL (Eq 10)

For the structure illustrated in Fig. 1, the volume fraction of the particles was estimated as 0.147. Therefore, λ is 66.1 µm, or 0.066 mm.
The mean lineal intercept distance, lα, for these particles is determined by:

lα = σ − λ (Eq 11)

For this example, lα is 11.4 µm, or 0.0114 mm. This value is smaller than
the caliper diameter of the particles because the test lines intercept the
particles at random, not only at the maximum dimension. The calculated
mean lineal intercept length for a circle with a 15 µm diameter is 11.78
µm. Again, stereological field measurements can be used to determine a
characteristic dimension of individual features without performing indi-
vidual particle measurements.
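The worked example above can be reproduced in a few lines; the variable names are ours, and the relations used are σ = 1/NL together with Eq 10 and 11:

```python
# Worked spacing example from the text: 22.5 particle interceptions
# over 1743 um of test line, with an estimated VV = 0.147
n_intercepts = 22.5
line_length_um = 1743.0
VV = 0.147

NL = n_intercepts / line_length_um   # interceptions per um of test line
sigma = 1.0 / NL                     # mean center-to-center spacing, um
lam = (1.0 - VV) / NL                # mean free path, Eq 10, um
l_alpha = sigma - lam                # mean lineal intercept, Eq 11, um

print(f"sigma = {sigma:.1f} um, lambda = {lam:.1f} um, l_alpha = {l_alpha:.1f} um")
```

The printed values round to the 77.5, 66.1, and 11.4 µm quoted in the text.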
Grain Size. Perhaps the most common quantitative microstructural measurement is that of the grain size of metals, alloys, and ceramic materials.
The metric grain size number, GM, is slightly lower than the ASTM grain size number, G, for the same structure. This very small difference usually can be ignored (unless the value is near a specification limit).
Planimetric Method. The oldest procedure for measuring the grain size
of metals is the planimetric method introduced by Zay Jeffries in 1916
based upon earlier work by Albert Sauveur. A circle of known size
(generally 79.8 mm diameter, or 5000 mm2 area) is drawn on a
micrograph or used as a template on a projection screen. The number of
grains completely within the circle, n1, and the number of grains
intersecting the circle, n2, are counted. For accurate counts, the grains
must be marked off as they are counted, which makes this method slow.
The number of grains per square millimeter at 1×, NA, is determined by:

NA = f (n1 + n2/2) (Eq 15)

where f is the magnification squared divided by 5000 (the circle area). The
average grain area, A, in square millimeters, is:

A = 1 / NA (Eq 16)

and the mean grain diameter, d, is:

d = (A)^1/2 = 1 / (NA)^1/2 (Eq 17)
The ASTM grain size, G, can be found by using the tables in ASTM E 112 or by the following equation:

G = 3.322 log10 NA − 2.954
the test line, n2, however, is slightly different. In this method, grains will
intercept the four corners of the square or rectangle. Statistically, the
portions intercepting the four corners would be in parts of four such
contiguous test patterns. So, when counting n2, the grains intercepting the
four corners are not counted but are weighted as 1. Count all of the other
grains intercepting the test square or rectangle (of known size). Equation 15 is modified as follows:

NA = f (n1 + n2/2 + 1)

where n1 is still the number of grains completely within the test figure (square or rectangular grid), n2 is the number of grains intercepting the sides of the square or rectangle, but not the four corners, the 1 accounts for the corner grain interceptions, and f is the magnification squared divided by the area of the square or rectangle grid.
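The Jeffries calculation is simple enough to sketch directly. The grain counts below are hypothetical values of ours (not the counts from Fig. 2), and the G relation uses the ASTM E 112 constants:

```python
import math

def jeffries_NA(n_inside, n_intercepted, magnification, circle_area_mm2=5000.0):
    # Jeffries planimetric method, Eq 15: NA = f * (n1 + n2/2),
    # where f = M^2 / test-figure area (5000 mm^2 for the 79.8 mm circle)
    f = magnification ** 2 / circle_area_mm2
    return f * (n_inside + n_intercepted / 2.0)

def astm_G_from_NA(NA):
    # ASTM E 112 relation between NA (grains per mm^2 at 1x) and G
    return 3.321928 * math.log10(NA) - 2.954

# Hypothetical counts for one field at 200x
NA = jeffries_NA(n_inside=263, n_intercepted=76, magnification=200)
print(f"NA = {NA:.0f} grains/mm^2, G = {astm_G_from_NA(NA):.2f}")
```

Because G is logarithmic in NA, a one-unit change in G corresponds to a doubling of the number of grains per unit area.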
Fig. 2 The ferrite grain size of a carbon sheet steel (shown at 500×, 2% nital etch) was measured at 200, 500, and 1000× using the Jeffries planimetric method (79.8 mm diameter test circle). This produced NA values (using Eq 15) of 2407.3, 2674.2, and 3299 grains per mm² (ASTM G values of 8.28, 8.43, and 8.73, respectively) for the 200, 500, and 1000× images, respectively. The planimetric method was also performed on these three images using the full rectangular image field and the alternate grain counting method. This produced NA values of 2400.4, 2506.6, and 2420.2 grains per mm² (ASTM G values of 8.28, 8.34, and 8.29, respectively). This experiment shows that the standard planimetric method is influenced by the number of grains counted (n1 was 263, 39, and 10 for the 200, 500, and 1000× images, respectively). In practice, more than one field should be evaluated due to the potential for field-to-field variability.
The average NL value is obtained from the cube root of the product of
the three directional NL values. G is determined by reference to the tables
in ASTM E 112 or by use of Eq 20 (l is the reciprocal of NL; see Eq 4).
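The conversion from mean lineal intercept to G (Eq 20) can be sketched numerically; the constants below are the ASTM E 112 values, and the intercept lengths are those reported in Fig. 3:

```python
import math

def astm_G_from_intercept(l_bar_um):
    # G from the mean lineal intercept length (converted to mm);
    # constants per the ASTM E 112 relation (Eq 20 in this chapter)
    l_mm = l_bar_um / 1000.0
    return -6.643856 * math.log10(l_mm) - 3.288

# Mean lineal intercept lengths reported in Fig. 3, in micrometers
for l_um in (17.95, 17.56, 17.45):
    print(f"l = {l_um} um -> G = {astm_G_from_intercept(l_um):.2f}")
```

Note the sign: because G grows as grains get finer, the coefficient on log10 of the intercept length is negative.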
Two-Phase Grain Structures. The grain size of a particular phase in
a two-phase structure requires determination of the volume fraction of the
Fig. 3 The ferrite grain size of the specimen analyzed using the Jeffries method in Fig. 2 (shown at 200× magnification, 2% nital etch) was measured by the intercept method with a single test circle (79.8 mm diameter) at 200, 500, and 1000× magnifications. This yielded mean lineal intercept lengths of 17.95, 17.56, and 17.45 µm (for the 200, 500, and 1000× images, respectively), corresponding to ASTM G values of 8.2, 8.37, and 8.39, respectively. These are in reasonably good agreement. In practice, more than one field should be evaluated due to the field-to-field variability of specimens.
lα = (VV)(L/M) / Nα (Eq 21)
where L is the test-line length, M is the magnification, and Nα is the number of α grains intercepted by the line. The ASTM grain size number can be determined from the tables in ASTM E 112 or by use of Eq 20. The method is illustrated in Fig. 4.
Inclusion Content
Measurement Statistics
95% CL = ts / N^1/2 (Eq 22)
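Equation 22 in use: the five field measurements below are hypothetical values of ours, and the t multiplier (here for N − 1 = 4 degrees of freedom) is read from a Student's t table:

```python
import math
from statistics import mean, stdev

def ci95_halfwidth(values, t_value):
    # Eq 22: 95% CL = t * s / N^(1/2), where s is the standard deviation
    # of the field measurements and N is the number of fields
    return t_value * stdev(values) / math.sqrt(len(values))

# Five hypothetical volume-fraction field measurements, in percent
fields = [10.2, 11.1, 9.8, 10.7, 10.4]
t_4df = 2.776                    # t for 95% confidence, 4 degrees of freedom
half = ci95_halfwidth(fields, t_4df)
print(f"mean = {mean(fields):.2f} +/- {half:.2f} (95% CL)")
```

Because the half-width shrinks only as 1/√N, halving the confidence interval requires measuring roughly four times as many fields.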
Image Analysis
Conclusions
References
CHAPTER 3
Specimen Preparation
for Image Analysis
George F. Vander Voort
Buehler Ltd.
Sampling
Sectioning
Bulk samples for sectioning may be removed from larger pieces or parts
using methods such as core drilling, band and hack sawing, flame cutting,
and so forth. When these techniques must be used, the microstructure will
be heavily altered in the area of the cut. It is necessary to resection the
piece in the laboratory using an abrasive-wheel cutoff system to establish
the location of the desired plane of polish. In the case of relatively brittle
materials, sectioning may be accomplished by fracturing the specimen at
the desired location.
Abrasive-Wheel Cutting. By far the most widely used sectioning
devices in metallographic laboratories are abrasive cut-off machines (Fig.
1). All abrasive-wheel sectioning should be done wet; direct an ample
flow of water containing a water-soluble oil additive for corrosion
protection into the cut. Wet cutting produces a smooth surface finish and,
most importantly, guards against excessive surface damage caused by
overheating. Abrasive wheels should be selected according to the recom-
mendations of the manufacturer. In general, the bond strength of the
material that holds the abrasive together in the wheel must be decreased
with increasing hardness of the workpiece to be cut, so the bond material
can break down and release old dulled abrasive and introduce new sharp
abrasive to the cut. If the bond strength is too high, burning results, which
severely damages the underlying microstructure. The use of proper bond
strength eliminates the production of burnt surfaces. Bonding material
may be a polymeric resin, a rubber-based compound, or a mixture of the
two. In general, rubber-bonded wheels offer the lowest bond strength and are used to cut the most difficult materials. Such cuts are characterized by an odor that can become rather strong. In such cases, there should be provisions to
properly exhaust and ventilate the saw area. Specimens must be fixtured
securely during cutting, and cutting pressure should be applied carefully
to prevent wheel breakage. Some materials, such as commercial purity
(CP) titanium (Fig. 2), are more prone to sectioning damage than many
other materials.
Precision Saws. Precision saws (Fig. 3) commonly are used in
metallographic preparation and may be used to section materials intended
for IA. As the name implies, this type of saw is designed to make very
precise cuts. They are smaller than the typical laboratory abrasive cut-off saw and use much smaller blades, typically from 8 to 20 cm (3 to 8 in.) in diameter. These blades are most commonly of the nonconsumable
type, made of copper-base alloys and having diamond or cubic boron
nitride abrasive bonded to the periphery of the blade. Consumable blades
incorporate alumina or silicon carbide abrasives with a rubber bond and
only work on a machine that operates at speeds higher than 1500 rpm.
These blades are much thinner than abrasive cutting wheels. The load
applied during cutting is much less than that used for abrasive cutting,
and, therefore, much less heat is generated during cutting, and depth of
damage is very shallow.
Small pieces that would normally be sectioned with an abrasive cutter can be cut with a precision saw; cutting time is appreciably greater, but the depth of damage is much less. These saws are
widely used to section sintered carbides, ceramic materials, thermally-
sprayed coatings, printed circuit boards, and electronic components.
Specimen Mounting
Fig. 4 Poor edge retention due to shrinkage gap between metal specimen and the resin mount caused by water cooling
a hot-ejected thermosetting resin mount. Specimen is carburized AISI 8620 alloy steel, etched using 2% nital.
Fig. 6 Visibility problem caused by plating the specimen surface with a compatible metal (electroless nickel in this
case) to help edge retention. It is difficult to discern the free edge of (a) a plated nitrided AISI 1215 steel specimen,
due to poor image contrast between the nickel plate and the nitrided layer. By comparison, (b) the unplated specimen
reveals good image contrast between specimen and thermosetting epoxy resin mount, which allows clear distinction of the
nitrided layer. Etchant is 2% nital.
Fig. 7 Etching stains emanating from gaps between the specimen and resin
mount. Specimen is M2 high-speed steel etched with Vilella’s reagent.
Fig. 8 These nitrided 1215 specimens were prepared in the same holder as
those specimens shown in Fig. 6 but did not exhibit acceptable edge
retention due to the choice of mounting compound. Both thermosetting and
thermoplastic mounting resins can result in poor edge retention if proper
polishing techniques are not used, as seen in (a) thermosetting phenolic mount
and (b) thermoplastic methyl methacrylate resin mount. Specimens were etched with 2% nital.
Fig. 9 Examples of perfect edge retention of two different materials in Epomet (Buehler Ltd., Lake Bluff, IL)
thermosetting epoxy mounts. (a) Ion-nitrided H13 tool steel specimen etched with 2% nital. (b) Coated carbide
tool specimen etched with Murakami’s reagent
Grinding
Grinding should commence with the finest grit size that will establish an
initially flat surface and remove the effects of sectioning within a few
minutes. An abrasive grit size of 180 or 240 is coarse enough to use on
specimen surfaces sectioned using an abrasive cut-off wheel. Rough
surfaces, such as those produced using a hacksaw and bandsaw, usually
require abrasive grit sizes in the range of 60 to 180 grit. The abrasive used
for each succeeding grinding operation should be one or two grit sizes
Polishing
rough polishing step may be required. For such materials, initial rough
polishing may be followed by polishing with 1 µm diamond on a napless,
low-nap, or medium-nap cloth. A compatible lubricant should be used
sparingly to prevent overheating and/or surface deformation. Intermediate
polishing should be performed thoroughly to keep final polishing to a
minimum. Final polishing usually consists of a single step but could
involve two steps, such as polishing using 0.3 µm and 0.05 µm alumina,
or a final polishing step using alumina or colloidal silica followed by
vibratory polishing, using either of these two abrasives.
Fig. 13 Examples of residual sectioning/grinding damage in polished specimens. (a) Waspaloy etched with Fry’s
reagent. (b) Commercially pure titanium etched with Kroll’s reagent. Differential interference-contrast (DIC)
illumination
Fig. 15 Comet tailing at hard nitride precipitates in AISI H13 tool steel. Differential interference-contrast illumination emphasizes topographical detail.
Fig. 17 Examples of relief (in this case, height differences between different constituents) at hypereutectic silicon
particles in Al-19.85% Si aluminum alloy. (a) Excessive relief. (b) Minimum relief. Etchant is 0.5% HF
(hydrofluoric acid).
Fig. 18 Relief (in this case, height differences between constituents and holes) in microstructure of a braze. (a)
Excessive relief. (b) Low relief. Etchant is glyceregia.
Fig. 19 Vibratory polisher for final polishing. Its use produces image-
analysis and publication-quality specimens.
54 / Practical Guide to Image Analysis
because coarse abrasive can carry over to a finer abrasive stage and
produce problems.
Automatic Polishing. Mechanical polishing can be automated to a
high degree using a wide variety of devices ranging from relatively
simple systems (Fig. 20) to rather sophisticated, minicomputer-controlled
or microprocessor-controlled devices (Fig. 21). Units also vary in
capacity from a single specimen to a half-dozen or more at a time. These
systems can be used for all grinding and polishing steps and enable an
Table 1 Traditional method used to prepare most metal and alloy metallographic specimens

Polishing surface        Abrasive type                  Size       Load, N (lb)   Speed, rpm   Direction(a)    Time, min
Waterproof paper         SiC (water cooled)             120 grit   27 (6)         240–300      Complementary   Until plane
Waterproof paper         SiC (water cooled)             240 grit   27 (6)         240–300      Complementary   1–2
Waterproof paper         SiC (water cooled)             320 grit   27 (6)         240–300      Complementary   1–2
Waterproof paper         SiC (water cooled)             400 grit   27 (6)         240–300      Complementary   1–2
Waterproof paper         SiC (water cooled)             600 grit   27 (6)         240–300      Complementary   1–2
Canvas                   Diamond paste with extender    6 µm       27 (6)         120–150      Complementary   2
Billiard or felt cloth   Diamond paste with extender    1 µm       27 (6)         120–150      Complementary   2
Microcloth pad           Aqueous α-alumina slurry       0.3 µm     27 (6)         120–150      Complementary   2
Microcloth pad           Aqueous γ-alumina slurry       0.05 µm    27 (6)         120–150      Complementary   2

(a) Complementary, in the same direction in which the wheel is rotating.
papers through a series of grits, then rough polishing with one or more
sizes of diamond abrasive, followed by fine polishing with one or more
alumina suspensions of different particle size. This procedure will be
called the “traditional” method and is described in Table 1.
This procedure is used for manual preparation as well as with a
machine, but the force applied to the specimen in manual preparation
cannot be controlled as accurately and as consistently as with
a machine. Complementary motion means that the specimen holder is
rotated in the same direction as the platen and does not apply to manual
preparation. Some machines can be set so that the specimen holder rotates
in the direction opposite to that of the platen, called “contra.” This
provides a more aggressive action but was not part of the traditional
approach. This action is similar to the manual polishing procedure of
running the specimen in a circular path around the wheel in a direction
opposite to that of the platen rotation. The steps of the traditional method
are not rigid, as other polishing cloths may be substituted and one or more
of the polishing steps might be omitted. Times and pressures can be
varied, as well, to suit the needs of the work or the material being
prepared. This is the art side of metallography.
Contemporary Methods. During the 1990s, new concepts and new
preparation materials were introduced that enabled metallographers
to shorten the process while producing better, more consistent
results. Much of the effort focused on reducing or eliminating the use of
silicon carbide paper in the five grinding steps. In all cases, an initial
grinding step must be used, but there is a wide range of materials that can
be substituted for SiC paper. If a central-force automated device is used,
the first step must remove the sectioning damage on each specimen and
bring all of the specimens in the holder to a common plane perpendicular
to the axis of the specimen-holder drive system. This first step is often
called planar grinding, and SiC paper can be used, although more than
one sheet may be needed. Alternatives to SiC paper include the following:
O Alumina paper
O Alumina grinding stone
O Metal-bonded or resin-bonded diamond discs
O Wire mesh discs with metal-bonded diamond
O Stainless steel mesh cloths (diamond is applied during use)
O Rigid grinding discs (RGD) (diamond is applied during use)
O Lapping platens (diamond is applied and becomes embedded in the
surface during use)
(175 HV, for example), although some softer materials can be prepared
using them. This disc can also be used for the planar grinding step. An
example of such a practice applicable to nearly all steels (results are
marginal for solution annealed austenitic stainless steels) is given in Table
3. The first step of planar grinding could also be performed using the rigid
grinding disc and 30 µm diamond. Rigid grinding discs contain no
abrasive; they must be charged during use. Suspensions are the easiest
way to do this. Polycrystalline diamond suspensions are favored over
monocrystalline synthetic diamond suspensions for most metals and
alloys due to their higher cutting rate.
As examples of tailoring these types of procedures to other metals,
alloys, and materials, the following three methods are shown in Tables 4
to 6 for sintered carbides (these methods also work for ceramics),
aluminum, and titanium alloys. Because sintered carbides and ceramics
are cut with a precision saw that produces very little deformation and an
excellent surface finish, a coarser grit diamond abrasive is not needed for
planar grinding (Table 4). Pressure-sensitive-adhesive-backed silk cloths
are excellent for sintered carbides. Nylon is also quite popular.
A four-step practice for aluminum alloys is presented in Table 5. While
MgO was the preferred final polishing abrasive for aluminum and its
alloys, it is a difficult abrasive to use and is not available in very fine sizes,
and colloidal silica has replaced magnesia. This procedure retains all of
the intermetallic precipitates observed in aluminum and its alloys and
minimizes relief. Synthetic napless cloths may also be used for the final
step with colloidal silica, and they will introduce less relief than a low-nap
or medium-nap cloth but may not remove fine polishing scratches as well.
For very pure aluminum alloys, this procedure could be followed by
vibratory polishing to improve the surface finish, as these are quite
difficult to prepare totally free of fine polishing scratches.
The contemporary practice for titanium and its alloys (Table 6)
demonstrates the use of an attack-polishing agent added to the final
polishing abrasive to obtain the best results, especially for commercially
pure titanium, a rather difficult metal to prepare free of deformation for
color etching, heat tinting, and/or polarized light examination of the grain
structure. Attack-polishing solutions added to the abrasive slurry or
suspension must be treated with great care to avoid burns. (Caution: use
good, safe laboratory practices and wear protective gloves.) This
three-step practice could be modified to four steps by adding a 3 µm or 1
µm diamond step.
There are a number of attack-polishing agents for use on titanium. The
simplest is a mixture of 10 mL of 30% hydrogen peroxide
(caution: avoid skin contact) and 50 mL of colloidal silica. Some metallographers
add either a small amount of Kroll's reagent to this mixture or a
few milliliters of nitric and hydrofluoric acids—these latter additions may
cause the suspension to gel. In general, these acid additions do little to
Etching
Fig. 23 Examples of conditions that obscure the true microstructure. (a) Improper drying of the specimen. (b) Water
stains emanating from shrinkage gaps between 6061-T6 aluminum alloy and phenolic resin mount. Both
specimens viewed using differential interference-contrast (DIC) illumination.
Fig. 24 Examples of different behavior of etchants on the same low-carbon steel sheet. (a) 2% nital etch reveals ferrite
grain boundaries and cementite. (b) 4% picral etch reveals cementite aggregates and no ferrite grain
boundaries. (c) Tint etching with Beraha’s solution colors all grains according to their crystallographic orientation. All
specimens are viewed using bright field illumination. For color version of Fig. 24(c), see endsheets of book.
Fig. 25 Examples of selective etching of ferrite-cementite-iron phosphide ternary eutectic in gray cast iron. (a)
Picral/nital etch reveals the eutectic surrounded by pearlite. (b) Boiling alkaline sodium-picrate etch colors
only the cementite phase. (c) Boiling Murakami’s reagent etch darkly colors the iron phosphide and lightly colors cementite
after prolonged etching. All specimens are viewed using bright field illumination.
iron phosphide darkly and lightly colors the cementite after prolonged
etching. The ferrite could be colored preferentially using Klemm I
reagent.
Selective etching has been commonly applied to stainless steels to
detect, identify, and measure δ-ferrite, ferrite in dual-phase grades, and
σ-phase. Figure 26 shows examples of the use of a number of popular
etchants to reveal the microstructure of 7Mo Plus (Carpenter Technology
Corporation, Reading, PA) (UNS S32950), a dual-phase stainless steel, in
the hot-rolled and annealed condition. Figure 26(a) shows a well-
delineated structure when the specimen was immersed in ethanolic 15%
HCl for 30 min. All of the phase boundaries are clearly revealed, but there
is no discrimination between ferrite and austenite, and twin boundaries in
the austenite are not revealed. Glyceregia, a popular etchant for stainless
steels, is not suitable for this grade because it appears to be rather
orientation-sensitive (Fig. 26b). Many electrolytic etchants are used to
etch stainless steels, but only a few have selective characteristics. Of the
four shown in Fig. 26 (c to f), only aqueous 60% nitric acid produces any
gray-level discrimination between the phases, and even that is weak. However,
all nicely reveal the phase boundaries. Two electrolytic reagents are
commonly used to color ferrite in dual-phase grades and δ-ferrite in
martensitic grades (Fig. 26 g, h). Of these, aqueous 20% sodium
hydroxide (Fig. 26g) usually gives more uniform coloring of the ferrite.
Murakami’s and Groesbeck’s reagents also are used for this purpose. Tint
etchants developed by Beraha nicely color the ferrite phase, as illustrated
in Fig. 26(i).
Selective etching techniques have been more thoroughly developed for
use on iron-base alloys than other alloy systems but are not limited to
iron-base alloys. For example, selective etching of β-phase in α-copper
alloys is a popular subject. Figure 27 illustrates coloring of β-phase in
naval brass (UNS C46400) using Klemm I reagent. Selective etching has
long been used to identify intermetallic phases in aluminum alloys; the
method was used for many years before the development of energy-dispersive
spectroscopy. It still is useful for image analysis work. Figure
28 shows selective coloration of θ-phase, CuAl2, in the Al-33% Cu
eutectic alloy. Figure 29 illustrates the structure of a simple sintered
tungsten carbide (WC-Co) cutting tool. In the as-polished condition (Fig.
29a), the cobalt binder is faintly visible against the more grayish tungsten
carbide grains, and a few particles of graphite are visible. Light relief
polishing brings out the outlines of the cobalt binder phase, but this image
is not particularly useful for image analysis (Fig. 29b). Etching in a
solution of hydrochloric acid saturated with ferric chloride (Fig. 29c)
attacks the cobalt and provides good uniform contrast for measurement of
the cobalt binder phase. A subsequent etch using Murakami’s reagent at
room temperature reveals the edges of the tungsten carbide grains, which
is useful to evaluate grain size (Fig. 29d).
Fig. 26 Examples of selective etching to identify different phases in hot-rolled, annealed 7MoPlus duplex stainless steel
microstructure. Chemical etchants used were (a) immersion in 15% HCl in ethanol/30 min and (b) glyceregia/2
min. Electrolytic etchants used were (c) 60% HNO3/1 V direct current (dc)/20 s, platinum cathode; (d) 10% oxalic acid/6
V dc/75 s; (e) 10% CrO3/6 V dc/30 s; and (f) 2% H2SO4/5 V dc/30 s. Selective electrolytic etchants used were (g) 20%
NaOH/Pt cathode/4 V dc/10 s and (h) 10 N KOH/Pt/3 V dc/4 s. (i) Tint etch, 200×. See text for description of microstructures.
For color version of Fig. 26(i), see endsheets of book.
(g) (h) (i)
(a) (b)
Fig. 27 Selective etching of naval brass with Klemm I reagent reveals the β-phase (dark constituent) in the α-
copper alloy. (a) Transverse section. (b) Longitudinal section
Fig. 29 Selective etching of sintered tungsten carbide-cobalt (WC-Co) cutting tool material. (a) Some graphite
particles are visible in the as-polished condition. (b) Light relief polishing outlines cobalt binder phase. (c)
Hydrochloric acid saturated with ferric chloride solution etch darkens the cobalt phase. (d) Subsequent Murakami’s
reagent etch reveals edges of WC grains. Viewed using bright field illumination
Conclusions
References
Image Considerations
Fig. 1 Image analysis process steps. Each step has a decision point before the
next step can be achieved.
Fig. 2 Actual image area with corresponding magnified view. The individual
pixels are arranged in x, y coordinate space with gray level, or
intensity, associated with each one.
Principles of Image Analysis / 77
mid to late 1980s, 512 × 512 pixel arrays were the standard. Older
systems typically had 64 (2⁶) gray levels, whereas at the time of this
publication, all commercial systems offer at least 256 (2⁸) gray levels,
although there are systems having 4096 (2¹²) and 65,536 (2¹⁶) gray
levels. These are often referred to as 6-bit, 8-bit, 12-bit, and 16-bit cameras,
respectively.
The process of converting an analog signal to a digital one has some
limitations that must be considered during image quantification. For
example, pixels that straddle the edge of a feature of interest can affect the
accuracy and precision of each measurement because an image is
composed of square pixels having discrete intensity levels. Whether a
pixel resides inside or outside a feature edge can be quite arbitrary and
dependent on positioning of the feature within the pixel array. In addition,
the pixels along the feature edge effectively contain an intermediate
intensity value that results from averaging adjacent pixels. Such consid-
erations suggest a desire to minimize pixel size and increase the number
of gray levels in a system—particularly if features of interest are very
small relative to the entire image—at the most reasonable equipment cost.
Resolution versus Magnification. Two of the more confusing aspects
of a digital image are the concepts of resolution and magnification.
Resolution can be defined as the smallest separation at which two
distinct features can still be distinguished. For example, the theoretical
limit at which it is no longer possible to distinguish two adjacent lines
using light as the imaging method is at a separation distance of about
0.3 µm. Magnification, on the other
hand, is the ratio of an object dimension in an image to the actual size of
the object. Determining that ratio sometimes can be problematic, espe-
cially when the actual dimension is not known.
The displayed dimension of pixels is determined by the true magnifi-
cation of the imaging setup. However, the displayed pixel dimension can
vary considerably with display media, such as on a monitor or hard-copy
(paper) print out. This is because a typical screen resolution is 72 dots per
inch (dpi), and unless the digitized image pixel resolution is exactly the
same, the displayed image might be smaller or larger than the observed
size due to the scaling of the visualizing software. For example, if an
image is digitized into a computer having a 1024 × 1024 pixel array, the
dpi could be virtually any number, depending on the imaging program
used. If that same 1024 × 1024 image is converted to 150 dpi and viewed
on a standard monitor, it would appear to be twice as large as expected
due to the 72 dpi monitor resolution limit.
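The scaling described above can be sketched with simple arithmetic. The numbers below (a 1024-pixel image tagged at 150 dpi, a 72 dpi monitor) come from the example in the text; the variable names are illustrative:

```python
# Apparent-size change when display dpi differs from the image's stored dpi.
image_px = 1024                          # image width in pixels
image_dpi = 150                          # resolution recorded in the file
screen_dpi = 72                          # typical monitor resolution

intended_in = image_px / image_dpi       # size the file intends, ~6.8 in.
displayed_in = image_px / screen_dpi     # size the monitor draws, ~14.2 in.
scale = displayed_in / intended_in       # ~2.1, i.e., about twice as large
```

The ratio reduces to image_dpi/screen_dpi, which is why a 150 dpi image looks roughly doubled on a 72 dpi screen.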
The necessary printer resolution for a given image depends on the
number of gray levels desired, the resolution of the image, and the
specific print engine used. Typically, printers require a 4 × 4 dot array for
each pixel if 16 shades of gray are needed. An improvement in output dpi
by a factor of 1.5 to 2 is possible with many printers by optimizing the
raster, which is a scanning pattern of parallel lines that form the display
number of pixels (Fig. 4). If the user is doing more than just determining
whether or not a feature exists, the relative accuracy of a system is the
limiting factor in making any physical property measurements or corre-
lating a microstructure.
When small features exist within an array of larger features, increasing
the magnification to improve resolving power forces the user to system-
atically account for edge effects and significantly increases the need for a
larger number of fields to cover the same area that a lower magnification
can cover. Again, the tradeoff has to be balanced with the accuracy
needed, the system cost, and the speed desired for the application. If a
high level of shape characterization is needed, a greater number of pixels
may be needed to resolve subtle shape variations.
Fig. 3 Small features magnified over 25 times, showing the differences in the size and number density of
pixels within features when comparing a 760 × 560 pixel camera and a 1024 × 1024 pixel
camera
Image Acquisition
O Do the size and shape of features change with position within the
camera?
O Is the feature gray-level range the same over time?
Users generally turn to dc power supplies, which isolate power
from house current to minimize subtle voltage irregularities. Also, some
systems contain feedback loops that continuously monitor the amount of
light emanating from the light source and adjust the voltage to compen-
sate for intensity fluctuations. Another way of achieving consistent
intensities is to create a sample that can be used as a standard when setting
up the system. This can be done by measuring either the actual intensity
or feature size of a specified area on the sample.
Image Processing
Fig. 5 Rank-order processing used to create a pseudoreference image. (a) Image without any features
in the light path showing dust particles and shading of dark regions to light regions going from
the upper left to the lower right. (b) Same image after shading correction. (c) Image of particles without
shading correction. (d) Same image after shading correction showing uniform illumination across the
entire image
Fig. 6 Reflected bright-field image of an oxide coating before and after use of a gamma curve transformation that
translates pixels with lower intensities to higher intensities while keeping the original lighter pixels near the same
levels
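A gamma curve of the kind described in the caption can be sketched in a few lines. This is not the system used in the book; it is a minimal illustration (NumPy assumed) of a gamma transform with gamma < 1, which lifts darker pixels while leaving the brightest nearly unchanged:

```python
import numpy as np

def gamma_transform(img, gamma=0.5):
    """Map 8-bit gray levels through a gamma curve; gamma < 1 raises
    low intensities while keeping 0 and 255 fixed."""
    norm = img.astype(np.float64) / 255.0
    return np.clip(np.round(255.0 * norm ** gamma), 0, 255).astype(np.uint8)

sample = np.array([[0, 64, 255]], dtype=np.uint8)
lifted = gamma_transform(sample)   # the mid-dark pixel rises; endpoints hold
```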
O Sharpening an image
O Eliminating noise
O Smoothing edges
O Finding edges
O Accentuating subtle features
Fig. 8 Reflected-light image of an aluminum-silicon alloy before and after gray-level histogram equalization, which
significantly improves contrast of the subtle smaller silicon particles by uniformly distributing intensities
Fig. 11 Defect shown with different image enhancements. (a) High-resolution image from a transmission electron
microscope of silicon carbide defect in silicon showing the alignment of atoms. (b) Power spectrum after
application of fast Fourier transform (FFT) showing dark peaks that result from the higher-frequency periodic silicon
structure. (c) Defect after masking the periodic peaks and performing an inverse FFT
Feature Discrimination
There are more issues to consider when thresholding color images for
features of interest. Most systems use red, green, and blue (RGB)
channels to establish a color for each pixel in an image. It is difficult to
determine the appropriate combination of red, green, and blue signals to
distinguish features. Some systems allow the user to point at a series of
points in a color image and automatically calculate the RGB values,
which are used to threshold the entire image. A better methodology than
RGB color space for many applications is to view a color image in hue,
intensity, and saturation (HIS) space. The advantage of this method is that
color information (hue and saturation) is separated from brightness
(intensity). Hue essentially is the color a user observes, while the
saturation is the relative strength of the color. For example, translating
"dark green" to an HIS perspective would use dark as the level of
saturation (generally expressed as a value between 0 and 100%) and green as
Fig. 14 Thresholding gray levels in an image by selecting the gray-level peaks that are characteristic
of the features of interest
the hue observed. While saturation describes the relative strength of color,
intensity is associated with the brightness of the color. Intensity is
analogous to thresholding of gray values in black and white space. Hue,
intensity, and saturation space also is described as hue, lightness, and
saturation (HLS) space, where L quantifies the dark-light aspect of
colored light (see Chapter 9, “Color Image Processing”).
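The RGB-to-HLS conversion described above is available in the Python standard library; the snippet below is a minimal sketch (the "dark green" pixel value is an illustrative assumption, not from the book):

```python
import colorsys

# colorsys expects RGB floats in [0, 1]; scale 8-bit values accordingly.
r, g, b = 0, 128, 0                               # a "dark green" pixel
h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
# h lands in the green third of the hue circle (~0.33); l (lightness)
# is low because the color is dark; s (saturation) is full strength.
```

Separating hue and saturation from lightness this way lets a threshold select "green" features regardless of how brightly they are lit.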
Nonuniform Segmentation. Selecting the threshold range of gray
levels to segment foreground features sometimes results in overdetecting
some features and underdetecting others. This is due not only to varying
brightness across an image, but often also to the gradual change of
gray levels while scanning across a feature. Delineation enhancement is
a useful gray-level enhancement tool in this situation (Fig. 15). This
algorithm processes the pixels that surround features by transforming
their gradual change in gray level to a much steeper curve. In this way, as
features initially fall within the selected gray-level range, the apparent
size of the feature will not change much as a wider band of gray levels is
selected to segment all features.
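Segmentation by gray-level range, as discussed above and in Fig. 14, reduces to a simple per-pixel test. The sketch below (NumPy assumed; the image values are illustrative) selects the pixels falling inside a chosen band:

```python
import numpy as np

def threshold(img, lo, hi):
    """Binary segmentation: pixels whose gray level falls in [lo, hi]
    become 1 (feature); all others become 0 (background)."""
    return ((img >= lo) & (img <= hi)).astype(np.uint8)

img = np.array([[30, 120, 250],
                [110, 180, 40]], dtype=np.uint8)
mask = threshold(img, 100, 200)    # selects only the mid-gray peak
```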
There are other gray-level image processing tools that can be used to
delineate edges prior to segmentation and to improve contrast in certain
regions of an image, and their applicability to a specific application can
be determined by experimenting with them.
Watershed Segmentation. Watershed transformations are iterative
processes performed on images that have space-filling features, such as
grains. The enhancement usually starts with the basic eroded point or the
last point that exists in a feature during successive erosions, often referred
to as the ultimate eroded point. Erosion/dilation is the removal and/or
addition of pixels to the boundary of features based on neighborhood
relationships. The basic eroded point is dilated until the edge of the
dilating feature touches another dilating feature, leaving a line of
separation (watershed line) between touching features.
Another much faster approach is to create a Euclidean distance map
(EDM), which assigns successively brighter gray levels to each dilation
iteration in a binary image (Ref 2). The advantage of this approach is that
the periphery of each feature grows until impeded by the growth front of
another feature. Although watershed segmentation is a powerful tool, it is
fraught with application subtleties when applied to a wide range of
images. The reader is encouraged to refer to Ref 2 and 3 to gain a better
understanding of the proper use and optimization of this algorithm and for
a detailed discussion on the use of watershed segmentation in different
applications.
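The Euclidean distance map at the heart of this faster approach can be computed directly with SciPy; the two-disk test image below is an illustrative stand-in for touching particles, not an example from the book:

```python
import numpy as np
from scipy import ndimage

# Binary image: two overlapping disks standing in for touching particles.
yy, xx = np.mgrid[0:40, 0:70]
blob = (((xx - 25) ** 2 + (yy - 20) ** 2) < 150) | \
       (((xx - 45) ** 2 + (yy - 20) ** 2) < 150)

# EDM: each feature pixel is assigned its distance to the nearest
# background pixel; background pixels stay at 0.
edm = ndimage.distance_transform_edt(blob)
# The two brightest EDM peaks (near the disk centers) approximate the
# ultimate eroded points that seed a watershed line between the disks.
```

A watershed routine (for example, in scikit-image) would then grow regions from those peaks until their fronts meet, leaving the line of separation.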
Texture Segmentation. Many images contain texture, such as lamellar
structures, and features of widely varying size, which may or may not be
the features of interest. There are several gray-level algorithms that are
particularly well suited to images containing texture because of the
inherent frequency or spatial relationships between structures. These
operators usually transform gradually varying features (low frequency) or
highly varying features (high frequency) into an image with significantly
less texture.
Algorithms such as Laplacian, Variance, Roberts, Hurst, and Frei and
Chen operators often are used either alone or in combination with other
processing algorithms to delineate structures based on differing textures.
Methodology to characterize banding and orientation microstructures of
metals and alloys is covered in ASTM E 1268 (Ref 4).
Pattern-matching algorithms are powerful processing tools used to
discriminate features of interest in an image. Usually, they require prior
knowledge of the general shape of the features contained in the image.
For instance, if there are cylindrical fibers orientated in various ways
within a two-dimensional section of a composite, a set of boundaries can
be generated that correspond to the angles at which a cylinder might occur
in three-dimensional space. The resulting boundaries are matched to the
actual fibers that exist in the section, and the resulting angles are
calculated based on the matched patterns (Fig. 16). Generally, pattern-
matching algorithms are used when required measurements cannot be
directly made or calculated from the shape of a binary feature of interest.
These basic four often are combined in various ways to obtain a desired
result, as illustrated in Fig. 17.
A simple way to represent Boolean logic is by using a truth table, which
shows the criteria that must be fulfilled to be included in the output image.
When comparing two images, the AND Boolean operation requires that
the corresponding pixels from both images be ON (1 = ON, 0 = OFF).
Such a truth table would look like this:

Image 1   Image 2   AND output
0         0         0
0         1         0
1         0         0
1         1         1
compared between images (Fig. 18). The resultant image contains the
entire feature instead of just the parts of a feature that are affected by the
Boolean comparison. Feature-based logic uses artificial features, such as
geometric shapes, and real features, such as grain boundaries, to ascertain
information about features of interest.
There are a plethora of uses for Boolean operators on binary images and
also in combination with gray-scale images. Examples include coating
thickness measurements, stereological measurements, contiguity of
phases, and location detection of features.
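On binary images stored as Boolean arrays, these operators map directly onto element-wise logic. The sketch below (NumPy assumed; the tiny 2 × 2 images are illustrative) shows the four basic operations:

```python
import numpy as np

img1 = np.array([[1, 1], [0, 0]], dtype=bool)   # features detected in image 1
img2 = np.array([[1, 0], [1, 0]], dtype=bool)   # features detected in image 2

and_img = img1 & img2   # ON only where the pixel is ON in both images
or_img = img1 | img2    # ON where the pixel is ON in either image
xor_img = img1 ^ img2   # ON where the pixel is ON in exactly one image
not_img = ~img1         # inverts image 1
```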
Morphological Binary Processing. Beyond combining images in
unique ways to achieve a useful result, there also are algorithms that alter
individual pixels of features within binary images. There are hundreds of
specialized algorithms that might help particular applications and merit
further experimentation (Ref 2, 3). Several of the most popular algorithms
are mentioned below.
Hole filling is a common tool that removes internal “holes” within
features. For example, one technique completely fills enclosed regions of
features (Fig. 19a, b) using feature labeling. This identifies only those
features that do not touch the image edge, and these are combined with
the original image using the Boolean OR operator to reconstruct the
original inverted binary image with the holes filled in. There is no limit on
how large or tortuous a shape is. The only requirement for hole filling is
that the hole is completely contained within a feature.
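Hole filling of this kind is a one-call operation in SciPy; the ring-shaped test feature below is an illustrative assumption, not a figure from the book:

```python
import numpy as np
from scipy import ndimage

# A ring-shaped feature whose interior is an enclosed hole.
ring = np.zeros((7, 7), dtype=bool)
ring[1:6, 1:6] = True
ring[2:5, 2:5] = False          # the internal hole

# Background connected to the image edge is left alone; only the
# enclosed region is filled, matching the feature-labeling description.
filled = ndimage.binary_fill_holes(ring)
```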
A variation of this is morphological-based hole filling. In this technique,
the holes are treated as features in the inverted image and processed in the
desired way before inverting the image back. For example, if only holes
of a certain size are to be filled, the image is simply inverted, features
below the desired size are eliminated, and then the image is inverted back
(Fig. 19a, c, d). It also is possible to fill holes based on other shape
criteria.
Erosion and Dilation. Common operations that use neighborhood
relationships between pixels include erosion and dilation. These opera-
tions simply remove or add pixels to the periphery (both externally and
internally, if it exists) of a feature based on the shape and location of
neighborhood pixels. Erosion often is used to remove extraneous pixels,
which may result when overdetection during thresholding occurs, because
some noise has the same gray-level range as the features of interest. When
used in combination with dilation (referred to as “opening”), it is possible
to separate touching particles. Dilation often is used to connect features
by first dilating the features followed by erosion to return the features to
their approximate original size and shape (referred to as “closing”).
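Opening and closing as described above can be sketched with SciPy's binary morphology routines. The toy image below, with two particles joined by a single-pixel bridge, is an illustrative assumption:

```python
import numpy as np
from scipy import ndimage

img = np.zeros((9, 9), dtype=bool)
img[2:7, 1:4] = True            # one particle
img[2:7, 5:8] = True            # a second particle
img[4, 4] = True                # single-pixel bridge joining them

# Opening (erosion then dilation) removes the thin bridge and restores
# the particles to roughly their original size, separating them.
opened = ndimage.binary_opening(img, structure=np.ones((3, 3)))

# Closing (dilation then erosion) does the opposite: it connects
# nearby features, so the bridge region becomes solidly joined.
closed = ndimage.binary_closing(img, structure=np.ones((3, 3)))
```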
Further Considerations
The binary operations described in this chapter are only a partial list of
the most frequently used operations and can be combined in useful ways
to produce an image that lends itself to straightforward quantification of
features of interest. Today, image analysis systems incorporate many
processing tools to perform automated, or at least fast-feature, analysis.
Creativity is the final tool that must be used to take full advantage of the
power of image analysis. The user must determine if the time spent in
developing a set of processing steps to achieve computerized analysis is
justified for the application. For example, if you have a complicated
image that has minimal contrast but somewhat obvious features to the
human eye and only a couple of images to quantify, then manual
measurements or tracing of the features might be adequate. However, the
benefit of automated image analysis is that sometimes-subtle feature
characterizations can yield answers that the user might never have
guessed based on cursory inspections of the microstructure.
References
CHAPTER 5
Measurements
John J. Friel
Princeton Gamma Tech
Contrast Mechanisms
tion and the contrast mechanism itself that will be used to enhance and
quantify. In routine metallography, bright-field reflected-light microscopy
is the usual signal, but it may carry many varied contrast types depending
on the specimen, its preparation, and etching. The mode of operation of
the microscope also affects the selection of contrast mechanisms. A list of
some signals and contrast mechanisms is given in Table 1.
Direct Measurements
Field Measurements
Field measurements usually are collected over a specified number of
fields, determined either by statistical considerations of precision or by
compliance with a standard procedure. Standard procedures, or norms, are
published by national and international standards organizations to con-
form to agreed-upon levels of precision. Field measurements also are the
output of comparison chart methods.
Statistical measures of precision, such as the 95% confidence interval (CI)
or percent relative accuracy (%RA), are determined on a field-to-field
basis. Typical field measurements include:
• Field number
• Field area
• Number of features
• Number of features excluded
• Area of features
• Area of features filled
• Area fraction
• Number of intercepts
• NL (number of intercepts per unit length of test line)
• NA (number of features divided by the total area of the field)
The area fraction, AA, is equivalent to the volume fraction, VV, for a field
of features that do not have preferred orientation.
The average area of features can be calculated from feature-specific
measurements, but it also is possible to derive it from field measurements
as follows:
Ā = AA/NA (Eq 1)
Fig. 1 Multiphase ceramic material in (a) gray scale and (b) pseudocolor.
Please see endsheets of book for color version.
L̄3 = LL/NL (Eq 2)
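As a numeric sketch of Eq 1 and 2; the field totals below are invented for illustration, not data from the text:

```python
# Hypothetical field totals (illustrative values only).
AA = 0.12    # area fraction of the phase
NA = 150.0   # features per unit area, mm^-2
LL = 0.12    # lineal fraction (equals AA for random sections)
NL = 9.0     # intercepts per unit length, mm^-1

A_bar = AA / NA    # mean feature area (Eq 1), mm^2
L3_bar = LL / NL   # mean lineal intercept (Eq 2), mm

print(round(A_bar, 6))   # 0.0008
print(round(L3_bar, 5))  # 0.01333
```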
by the selected parameter. For example, large features are not given any
more weight than small ones in a number histogram of area, but in an
area-weighted histogram, the dividing line between coarse and fine is
more easily observed. Using more than one operator to agree on the
selection point improves the precision of the analysis by reducing the
variance, but any inherent bias still remains. Determination of duplex
grain size described in the section “Grain Size” is an example of this
situation.
Feature orientation in an image constitutes a field measurement even
though it could be determined by measuring the orientation of each
feature and calculating the mean for the field. This is easily done using a
computer, but there is a risk that there might not be enough pixels to
sufficiently define the shape of small features. For this reason, orientation
measurements are less precise. Moreover, if all features are weighted
equally regardless of size, small ill-defined features will add significant
error to the results, and the measurement may not be truly representative.
Because orientation of features relates to material properties, measure-
ments of many fields taken from different samples are more representative
of the material than measurements summed from individual features. This
situation agrees nicely with the metallographic principle “do more, less
well”; that is, measurements taken from more samples and more fields give
a more representative result than many measurements on one field. A
count of intercepts, NL, made in two or more directions on the specimen
can be used either manually or automatically to derive a measure of
preferred orientation. The directions, for example, might be perpendicular
and parallel to the rolling direction in a wrought metal. The term
orientation as used here refers to an alignment of features recognizable in
a microscope or micrograph. It does not refer to crystallographic
orientation, as might be ascertained using diffraction methods.
ASTM E 1268 (Ref 3) describes a procedure for measuring and
reporting banding in metals. The procedure calls for measuring NL
perpendicular and parallel to the observed banding and calculating an
anisotropy index, AI, or a degree of orientation, Ω12, as follows (Ref 4):

AI = NL⊥/NL∥ (Eq 3)

and

Ω12 = (NL⊥ − NL∥)/(NL⊥ + 0.571NL∥) (Eq 4)
The mean free path, λ, follows from the area fraction and the intercept
count:

λ = (1 − AA)/NL (Eq 5)
In the case of space filling grains, the area fraction equals one, and,
therefore, the mean free path is zero. However, for features that do not
occupy 100% of the image, mean free path gives a measure of the
distance between features on a field-by-field basis.
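Equations 3 to 5 reduce to simple arithmetic; in this sketch the intercept counts and area fraction are assumed values, not measurements from the text:

```python
# Assumed intercept counts and area fraction (illustrative values).
NL_perp = 12.0   # NL perpendicular to the banding, mm^-1
NL_par = 4.0     # NL parallel to the banding, mm^-1
AA = 0.15        # area fraction of the banded phase

AI = NL_perp / NL_par                                      # Eq 3
omega12 = (NL_perp - NL_par) / (NL_perp + 0.571 * NL_par)  # Eq 4
mfp = (1 - AA) / NL_perp                                   # Eq 5, mean free path

print(AI)                 # 3.0
print(round(omega12, 3))  # 0.56
print(round(mfp, 4))      # 0.0708
```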
Surface Area. There is at least one way to approximate the surface
area from two images using stereoscopy. If images are acquired from two
different points of view, and the angle between them is known, the height
at any point, Z, can be calculated on the basis of displacement from the
optic axis as follows:
Z = P/(2M sin(α/2)) (Eq 6)
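As a sketch of Eq 6, taking P as the measured parallax, M the magnification, and α the tilt angle between the two views (these symbol meanings are inferred from standard stereoscopy, and the numbers are illustrative):

```python
import math

# Assumed stereo-pair parameters (illustrative values).
P = 0.5                      # parallax (displacement between the two images), mm
M = 1000.0                   # magnification
alpha = math.radians(10.0)   # tilt angle between the two views

Z = P / (2 * M * math.sin(alpha / 2))  # height at the point (Eq 6), mm
print(round(Z * 1000, 2))              # 2.87 (height in micrometers)
```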
the scale represents the fractal dimension of the surface. Fractal dimen-
sion is discussed further in the section “Derived Measurements.”
Direct measurement of surface area using methods such as profilometry
or the scanning probe microscopy techniques of scanning tunneling
microscopy (STM) and atomic force microscopy (AFM), will not be
considered here because they are not image analysis techniques.
Feature-Specific Measurements
Feature-specific measurements logically imply the use of an AIA
system. In the past, so-called semiautomatic systems were used in which
the operator traced the outline of features on a digitizing tablet. This type
of analysis is time consuming and is only useful to measure a limited
number of features. However, it does have the advantage of requiring the
operator to confirm the entry of each feature into the data set.
Although specimen preparation and image collection have been dis-
cussed previously, it should be emphasized again that automatic image
analysis is meaningful only when the image(s) accurately reflect(s) the
properties of the features to be measured. The feature finding program
ordinarily detects features using pseudocolor. As features are found, their
position, area, and pixel intensity are recorded and stored. Other primitive
measures of features can be made by determining Feret diameters
(directed diameters, or DD) at some discrete number of angles. These
include measures of length, such as longest dimension, breadth, and
diameter. Theoretically, shape determination becomes more accurate with
increasing use of Feret diameters. However, in practice, resolution,
threshold setting, and image processing are more likely to be limiting
factors than is the number of Feret diameters. A list of some feature-
specific primitive measurements follows:
O Position x and y
O Area
O Area filled
O Directed diameters (including maximum, minimum, and average)
O Perimeter
O Inscribed x and y (including maximum and minimum)
O Tangent count
O Intercept count
O Hole count
O Feature number
O Feature angle
the threshold setting. For any given pixel resolution, the smaller the
feature, the less precise is its area measurement. This problem is even
greater for shape measurements, as described in the section “Derived
Measurements.”
If a microstructure contains features of significantly different sizes, it
may be necessary to perform the analysis at two different magnifications.
However, there is a magnification effect in which more features are
detected at higher magnification, which may cause bias. Underwood (Ref
7) states in a discussion of magnification effect that the investigator “sees”
more at higher magnifications. Thus, more grains or particles are counted
at higher magnification, so values of NA are greater. The same is true for
lamellae, but spacings become smaller as more lamellae are counted.
Other factors that can influence area measurement include threshold
setting, which can affect the precision of area measurement, and specimen
preparation and image processing, which can affect both the precision and
bias of area measurement.
Length. Feature-specific descriptor functions such as maximum, mini-
mum, and average are readily available with a computer, and are used to
define longest dimension (max DD), breadth (min DD), and average
diameter. Average diameter as used here refers to the average directed
diameter of each feature, rather than the average over all of the features.
Length measurements of individual features are not readily accommo-
dated using manual methods, but they can be done. For example, the
mean lineal intercept distance, L, can be determined by averaging the
chord lengths measured on each feature.
As with area measures, the precision of length measurements is limited
by pixel resolution, the number of directed diameters constructed by the
computer, and threshold setting. Microstructures containing large and
small features may have to be analyzed at two different magnifications, as
with area measurements.
Bias in area and length measurements is influenced by threshold setting
and microscope and image analyzer calibration. Calibration should be
performed at a magnification as close to that used for the analysis as
possible, and, for SEMs, x and y should be calibrated separately (Ref 8).
Perimeter. Measurement of perimeter length requires special consid-
eration because representation of the outer edge of features in a digital
image consists of steps between adjacent pixels, which either are square
or some other polygonal shape. The greater the pixel resolution, the closer
will be the approximation to the true length of a curving perimeter.
Because the computer knows the coordinates of every pixel, an approxi-
mation of the perimeter can be made by calculating the length of the
diagonal line between the centers of each of the outer pixels and summing
them. However, this approach typically still underestimates the true
perimeter of most features. Therefore, AIA systems often use various
adjustments to the diagonal distance to minimize bias.
Alternatively, the perimeter can be estimated from counts of intersections
between the feature outline and a grid of test lines (the Crofton method):

L = (π/4)(nr/2) (Eq 7)

L = (π/4)(1/2)[r(n0 + n90) + (r/√2)(n45 + n135)] (Eq 8)

where n is the total number of intersections, r is the grid spacing, and n0,
n45, n90, and n135 are the intersection counts with lines at the respective
angles.
Figure 5 shows two grids having the same arbitrary curve of unknown
length superimposed. The grid lines in Fig. 5(a) are equally spaced;
therefore, those at 45° and 135° do not necessarily coincide with the
diagonals of the squares. In Fig. 5(b), the squares outlined by black lines
represent pixels in a digital image, and the 45° and 135° lines in blue are
constructed along the pixel diagonals. Equation 7 applies to the intersec-
tion count from Fig. 5(a), and Eq 8 applies to Fig. 5(b).
For example, in Fig. 5(a), the number of intersections of the red curve
with the grid equals 56. Therefore, L = 22.0 in accordance with Eq 7. By
comparison, in Fig. 5(b), n⊥ = 31 and n× = 36, where n⊥ refers to the
number of intersections with the black square gridlines, and n× refers to
the number of intersections with the diagonal lines.
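The counts quoted for Fig. 5 can be substituted into Eq 7 and 8 directly (the grid spacing r is taken as 1 for illustration):

```python
import math

r = 1.0   # grid spacing (illustrative units)

# Eq 7: equally spaced grid, n = total intersections at 0, 45, 90, 135 degrees.
n = 56
L_eq7 = (math.pi / 4) * (n * r / 2)

# Eq 8: square pixel grid plus diagonals, counted separately.
n_perp = 31   # intersections with the horizontal/vertical gridlines
n_diag = 36   # intersections with the 45/135-degree diagonal lines
L_eq8 = (math.pi / 4) * 0.5 * (r * n_perp + (r / math.sqrt(2)) * n_diag)

print(round(L_eq7, 1))  # 22.0
print(round(L_eq8, 1))  # 22.2
```

The two estimates agree closely, as expected for the same curve.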
Fig. 5 Grids used to measure Crofton perimeter. (a) Equally spaced grid lines.
(b) Rectilinear lines and diagonal lines. Please see endsheets of book
for color versions.
Derived Measurements
Field Measurements
Stereological Parameters. Stereology is a body of knowledge for
characterizing three-dimensional features from their two-dimensional
representations in planar sections. A detailed review of stereological
relationships can be found in Chapter 2, “Introduction to Stereological
Principles,” and in Ref 4 and 10. The notation uses subscripts to denote
a ratio. For example, NA refers to the number of features divided by the
total area of the field. A feature could be a phase in a microstructure, a
particle in a dispersion, or any other identifiable part of an image. Volume
fraction, VV, is a quantity derived from the measured area fraction, AA,
although in this case, the relationship is one of identity, VV = AA. There
are various stereological parameters that are not directly measured but
that correlate well with material properties. For example, the volume
fraction of a dispersed phase may correlate with mechanical properties,
and the length per area, LA, of grain boundaries exposed to a corrosive
medium may correlate with corrosion resistance.
The easiest measurements to make are those that involve counting
rather than measuring. For example, if a test grid of lines or points is used,
the number of lines that intercept a feature of interest or the number of
points that lie on the feature are counted and reported as NL or point
count, PP. ASTM E 562 describes procedures for manual point counting
and provides a table showing the expected precision depending on the
number of points counted, the number of fields, and the volume fraction
of the features (Ref 1). Automatic image analysis systems consider all
pixels in the image, and it is left to the operator to tell the computer which
pixels should be assigned to a particular phase by using pseudocolor. It
also is easy to count the number of points of interest that intersect lines
in a grid, PL. If the objects of interest are discrete features, such as
particles, then the number of times the features intercept the test lines
gives NL. For space filling grains, PL = NL, and for particles, PL = 2NL.
In an AIA system, the length of the test line is the entire raster; that is, the
total length of lines comprising the image in calibrated units of the
microstructure. Similarly, it is possible to count the number per area, NA,
but this has the added difficulty of having to rigorously keep track of each
feature or grain counted to avoid duplication.
All of the parameters above that involve counting are directly measur-
able, and several other useful parameters can be derived from these
measurements. For example, the surface area per volume, SV, can be
calculated as follows:
SV = 2PL (Eq 9)

LA = (π/2)PL (Eq 10)

Ā = AA/NA (Eq 11)
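A worked sketch of Eq 9 to 11; the counts and fractions below are assumed values, not data from the text:

```python
import math

# Assumed counted quantities (illustrative values).
PL = 20.0    # boundary intersections per unit length of test line, mm^-1
AA = 0.30    # measured area fraction
NA = 400.0   # features per unit area, mm^-2

SV = 2 * PL              # surface area per volume (Eq 9), mm^-1
LA = (math.pi / 2) * PL  # boundary length per area (Eq 10), mm^-1
A_bar = AA / NA          # mean feature area (Eq 11), mm^2

print(SV)                  # 40.0
print(round(LA, 2))        # 31.42
print(round(A_bar, 5))     # 0.00075
```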
grains in a given area of the specimen, such as the number of grains per
square inch at 100× magnification. For more on grain size measurement,
see Chapter 2, “Introduction to Stereological Principles;” Chapter 7,
“Analysis and Interpretation;” Chapter 8, “Applications,” and Ref 11.
Realizing the importance of accurate grain size measurement, ASTM
Committee E 4 on Metallography took on the task of standardizing grain
size measurement. ASTM E 112 is the current standard for measuring
grain size and calculating an ASTM G value (Ref 12). The relationship
between G and the number of grains per square inch at a magnification of
100×, n, follows:

n = 2^(G−1)

or, expressed in terms of the number of grains per unit area,
G = (3.321928 log10 NA) − 2.954, where NA is in mm−2.
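Both relationships can be coded in a few lines; this is a sketch assuming the ASTM E 112 planimetric constants, with illustrative example values:

```python
import math

def n_per_sq_inch_at_100x(G):
    """Number of grains per square inch at 100x for ASTM grain size G."""
    return 2 ** (G - 1)

def G_from_NA(NA_mm2):
    """ASTM G from the number of grains per mm^2 at 1x (E 112 constants)."""
    return 3.321928 * math.log10(NA_mm2) - 2.954

# G = 8 corresponds to 128 grains/in^2 at 100x, which is about
# 1984 grains/mm^2 at 1x; the two relations are consistent.
print(n_per_sq_inch_at_100x(8))      # 128
print(round(G_from_NA(1984.0), 1))   # 8.0
```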
The procedures prescribed in ASTM E 112 assume an approximately
log-normal grain size distribution. There are other conditions in which
grain size needs to be measured and reported differently, such as a
situation in which a few large grains are present in a finer-grained matrix.
This is reported as the largest grain observed in a sample, expressed as
ALA (as large as) grain size. The procedure for making this measurement
is described in ASTM E 930.
Duplex grain size is an example of features distinguished by their size,
shape, and position discussed previously. ASTM E 1181 describes various
duplex conditions, such as bimodal distributions, wide-range conditions,
necklace conditions, and ALA. Figure 7 shows an image containing
bimodal duplex grain size in an Inconel Alloy 718 (UNS N07718)
nickel-base superalloy. Simply counting grains and measuring their
average grain size (AGS) yields 1004 grains having an ASTM G value of
9.2. However, such an analysis completely mischaracterizes the sample
because the grain distribution is bimodal.
Figure 8 shows an area-weighted histogram of the microstructure in
Fig. 7, which suggests a division in the distribution at an average diameter
of approximately 50 µm (Ref 14). The number percent and area percent
histograms are superimposed, and the area-weighted plot indicates the
bimodal nature of the distribution. The number percent of the coarse
grains is only 2%, but the area percent is 32%. Repeating the analysis for
grains with a diameter greater than 50 µm yields 22 grains having a G
value of 4.9. The balance of the microstructure consists of 982 grains
having a G value of 9.8. The report on grain size, as specified by ASTM
E 1181 on Duplex Grain Size, is given as: Duplex, Bimodal, 68% AGS
ASTM No. 10, 32% AGS ASTM No. 5.
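The coarse/fine split can be sketched as below; the grain list is synthetic (not the measured Alloy 718 data), but the mechanics of the number-percent versus area-percent comparison are the same:

```python
import math

# Synthetic equivalent diameters in micrometers (NOT the Alloy 718 data).
diameters = [12.0] * 96 + [80.0] * 4   # 96 fine grains, 4 coarse grains
areas = [math.pi * (d / 2) ** 2 for d in diameters]

# Split the population at an assumed 50 um dividing line.
coarse = [a for d, a in zip(diameters, areas) if d > 50.0]

num_pct_coarse = 100.0 * len(coarse) / len(diameters)
area_pct_coarse = 100.0 * sum(coarse) / sum(areas)

# A few large grains dominate the area even though they are rare by count.
print(num_pct_coarse)               # 4.0
print(round(area_pct_coarse, 1))    # 64.9
```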
The fractal dimension of a surface, such as a fracture surface, can be
derived from measurements of microstructural features. Although fractal
dimension is not a common measure of roughness, it can be calculated
from measurements such as the length of a trace or the area of a surface.
The use of fractal measurements in image analysis is described by Russ
(Ref 15) and Underwood (Ref 16).
The concept of fractals involves a change in some dimension as a
function of scale. A profilometer provides a measure of roughness, but the
scale is fixed by the size of the tip. A microscope capable of various
magnifications provides a suitable way to change the scale. An SEM has
a large range of magnification over which it can be operated, which makes
it an ideal instrument to measure length or area as a function of scale. Two
linear measurements that can be made are length of a vertical profile of a
rough surface and length of the outline of features in serial sections. These
and other measurements for describing rough surfaces are extensively
reviewed by Underwood and Banerji (Ref 6).
If you can measure or estimate the surface area, using, for example,
stereoscopy discussed above, then the fractal dimension can be calcu-
lated. Such an analysis on a fracture surface is described by Friel and
Pande (Ref 17). Figure 9 shows a pair of images of the alloy described in
Ref 17 taken at two different tilt angles using an SEM. From stereopairs
Fig. 10 Fractal plot of fracture surface area versus scanning electron micro-
scope magnification
Fig. 11 Images of clustered, ordered, and random features with their tessel-
lated counterparts
Fig. 12 Tessellation cells constructed from features in a microstructure. (a) Scanning electron microscope
photomicrograph of porosity in TiO2. (b) Tessellation cells constructed based on pores
Because the image consists only of space filling grains, the measurement
of area fraction and mean free path are not meaningful. However, in a real
digital image, the grain boundaries comprise a finite number of pixels.
Therefore, the measured area fraction is less than one, and the calculated
mean free path is greater than zero. Even when image processing is used
to thin the boundaries, their fraction is still not zero, and the reported
results should consider this. Tables 3 and 4 show typical field and feature
measurements, respectively, made on a binary version of the image in
Fig. 13.
Standard Methods
Throughout this chapter, standard procedures have been cited where
appropriate. These procedures are the result of consensus by a committee
of experts representing various interested groups, and as such, they are a
good place to start. A list of various relevant standards from ASTM is
given in Tables 5 and 6. International Organization for Standardization (ISO) and
References
CHAPTER 6
Characterization of
Particle Dispersion
Mahmoud T. Shehata
Materials Technology Laboratory/CANMET
The results obtained for a particle dispersion by all five techniques are
compared in each case with results for ordered, random, and clustered
dispersions. In addition, the techniques are evaluated, and their usefulness
and limitations are discussed.
Fig. 1 Point patterns showing different point dispersions. (a) Ordered. (b)
Random. (c) Clustered
The procedure involves counting the number of particles per unit area,
NA, in successive locations or fields. For example, the number of
inclusions is counted in various locations on a polished section of a steel
sample (Fig. 2).
image analyzer equipped with an automated stage, so measurements on
successive locations of a large sample are achieved automatically. A
quantitative measure for the degree of inhomogeneity of the dispersion is
the standard deviation, σ, defined as:

σ = √[(1/n) Σi (NAi − N̄A)²] (Eq 1)

where NAi is the observed number of inclusions per unit area in the ith
location (field of view) and N̄A is the average number of particles per unit
area viewed on the sample. Maximum homogeneity is characterized by a
minimum standard deviation; thus, the degree of homogeneity increases
with decreasing standard deviation. To compare the relative homogeneity
of samples with different number densities, the standard deviation must be
normalized by the mean, giving the coefficient of variation, V, defined as
V = σ/N̄A.
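A direct sketch of Eq 1 and the coefficient of variation; the field counts are invented for illustration:

```python
import math

# Assumed NA counts in successive fields (illustrative values).
NA_fields = [22, 25, 19, 31, 24, 18, 27, 26]

n = len(NA_fields)
NA_mean = sum(NA_fields) / n
sigma = math.sqrt(sum((x - NA_mean) ** 2 for x in NA_fields) / n)  # Eq 1
V = sigma / NA_mean   # coefficient of variation

print(NA_mean)          # 24.0
print(sigma)            # 4.0
print(round(V, 3))      # 0.167
```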
E(Δ) = 1/(2√NA) (Eq 3)

and

E(s)² = [(4 − π)/4π] · (1/NA) (Eq 4)
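For an assumed point density, the random-dispersion expectations of Eq 3 and 4 evaluate as follows (Δ here denotes the nearest-neighbor spacing; the density value is illustrative):

```python
import math

NA = 100.0   # points per unit area (illustrative)

E_delta = 1.0 / (2.0 * math.sqrt(NA))          # Eq 3: expected mean spacing
E_s2 = (4.0 - math.pi) / (4.0 * math.pi * NA)  # Eq 4: expected variance

print(E_delta)           # 0.05
print(round(E_s2, 6))    # 0.000683
```

Comparing the observed mean and variance of nearest-neighbor spacings against these expectations indicates whether a dispersion is ordered, random, or clustered.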
For example, the comparison shown in Fig. 3 indicates that the observed
distribution is composed of clusters superimposed on random dispersion
(Q = 0.8 and R = 2.83).
The nearest-neighbor spacing technique can be very useful in describ-
ing the observed dispersion as being ordered, random, clustered, or
Most digital image analysis systems have the capability to erode and
dilate features detected from an image. Because the detected image is a
binary image, erosion or dilation is simply removing or adding, respec-
tively, one pixel all around the detected feature. The dilation capability,
together with the capability of programming the image analysis system, is
used to characterize particle dispersion, a procedure called dilation and
counting technique. When any two particles are successively dilated, the
particles start to merge and overlap when the amount of dilation is equal
to, or greater than, one half the clear spacing between the two particles.
When particles overlap, they appear in the binary image as one larger
particle and are counted as one larger feature.
The procedure involves successive dilation and counting of the number
of features after each dilation step until all the features are joined and
become one feature. The procedure is shown schematically in Fig. 4 for
a random dispersion of points. Results are plotted as the loss in the
number of counted features for each increment of dilation or cycle. This
corresponds exactly to a frequency-versus-spacing distribution for the
particle dispersion. Figure 5 shows the results of the procedure for the
ordered, random, and clustered dispersions in Fig. 1.
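The dilate-and-count loop can be sketched with SciPy as below; the point pattern is randomly generated, so the intermediate counts depend on the seed:

```python
import numpy as np
from scipy import ndimage

# Random point dispersion (seeded so the run is repeatable).
rng = np.random.default_rng(0)
img = np.zeros((200, 200), dtype=bool)
img[rng.integers(0, 200, 40), rng.integers(0, 200, 40)] = True

# Successively dilate and count features until everything merges into one.
counts = [ndimage.label(img)[1]]
work = img.copy()
while counts[-1] > 1:
    work = ndimage.binary_dilation(work)   # one dilation cycle
    counts.append(ndimage.label(work)[1])  # features remaining

# Loss in feature count per cycle approximates the frequency-versus-
# spacing distribution of the dispersion.
losses = [counts[i - 1] - counts[i] for i in range(1, len(counts))]
print(counts[0], len(counts) - 1)  # initial features, cycles to full merge
```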
For the ordered dispersion, a peaked distribution is centered around a
spacing D = 1/√NA. In fact, for a perfectly ordered hexagonal grid of
points, the distribution is a delta function at exactly the D spacing. For the
random dispersion shown in Fig. 1, the frequency spacing distribution is
roughly constant up to a value of 2D where it starts to decrease rapidly at
Fig. 4 Schematic of dilation procedure for a random point pattern after (a) 5,
(b) 10, and (c) 15 dilation cycles
higher spacings. For the clustered dispersion, on the other hand, the
frequency spacing distribution has a large peak at small spacings and a
small peak (or peaks) at very large spacings. The first peak corresponds
to spacings of particles inside the clusters, whereas the second peak(s)
correspond(s) to spacing among clusters.
The dilation and counting technique is very useful to characterize
particle dispersion and clustering. It has been applied to reveal both the
presence and degree of clustering in several aluminum-silicon alloy sheet
materials, where dispersion and clustering are critical to sheet formability (Ref 5).
Note that in this technique, the particle-to-particle spacing is the clear
(edge-to-edge) spacing between particles and not center-to-center spac-
ing. Both spacings are very similar in the case of very small particles and
very large spacings as, for example, inclusions in a steel sample.
However, they are significantly different in the case of larger particles and
smaller spacings like, for example, second-phase particles in a metal-
matrix composite (Fig. 6).
The edge-to-edge spacing obtained using the dilation and counting
technique has more relevance to modeling properties because the tech-
nique looks at clear spacing and takes into account particle morphologies
(long stringer versus round particles). Particle morphology is ignored
when center-to-center spacing is considered. A limitation of the dilation
and counting technique is that it does not provide local volume fraction
values, which are useful in modeling fracture. Such information can only
be provided using tessellation techniques described below.
Fig. 8 Dirichlet network for inclusion dispersion in the steel sample in Fig. 2
constructed for the steel sample and the random dispersion is shown in
Fig. 10. Note that the area distribution of the cells constructed for a
randomly generated point dispersion follows a Poisson distribution
(Ref 6). Therefore, comparisons can be made directly with the Poisson
distribution without generating random dispersions.
An important advantage of the Dirichlet tessellation method is that it
can be used to yield parameters that relate more directly to fracture
properties, namely local volume fraction of particles, which is equivalent
to the local area fraction. The local area fraction takes into account two
important parameters that relate directly to the fracture process. The size
of the particle (inclusion or void) and the size of the Dirichlet cell give an
indication of near-neighbor distances, which relate to crack propagation.
The area fraction for each particle is calculated as the ratio of the area of
the particle to the area of the cell around it. For this reason, the areas of
the particles are recorded together with the coordinates of the centroids,
and the computer program was expanded to obtain the local area fraction
for the inclusion dispersion.
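A discrete nearest-centroid (Dirichlet) tessellation and the resulting local area fractions can be sketched as follows; the centroids and particle areas are invented for illustration:

```python
import numpy as np

# Invented particle centroids (row, col) and particle areas, in pixels.
centroids = np.array([[20.0, 20.0], [25.0, 70.0], [70.0, 40.0]])
particle_areas = np.array([50.0, 120.0, 80.0])

# Assign every pixel of a 100 x 100 field to its nearest centroid; the
# resulting regions are the discrete equivalent of Dirichlet (Voronoi) cells.
yy, xx = np.mgrid[0:100, 0:100]
d2 = ((yy[..., None] - centroids[:, 0]) ** 2
      + (xx[..., None] - centroids[:, 1]) ** 2)
cell_of = d2.argmin(axis=-1)   # cell index for each pixel

# Local area fraction = area of the particle / area of its Dirichlet cell.
cell_areas = np.bincount(cell_of.ravel(), minlength=len(centroids))
local_area_fraction = particle_areas / cell_areas

print(int(cell_areas.sum()))   # 10000: the cells tile the whole field
print(local_area_fraction)
```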
Fig. 10 Area distribution for Dirichlet networks for inclusion dispersion and
corresponding random dispersion for the steel sample in Fig. 2

In addition, random dispersions of local area fractions also are
generated (Ref 6), and the local area fraction distributions for both the
random dispersion and the steel sample are shown in Fig. 11. Note that the
local area fraction distributions follow a log-normal distribution (approximately
a straight line on the logarithmic probability plot in Fig. 11). The
standard deviation and coefficient of variation, as represented by the slope
of the line (Fig. 11), are higher for the steel sample than for the random
case and are a measure of inhomogeneity. Another advantage of the
Dirichlet tessellation method is that one can identify clustered regions and
the degree of clustering from local area fraction values of the Dirichlet
regions for the whole inclusion dispersion. An example is shown in Fig.
12, where clustered regions are identified by comparing the local area
fraction for each Dirichlet region with the average local area fraction for
the particle dispersion shown in Fig. 8.
All the limitations of the Dirichlet tessellation technique stem from the
fact that the network is based on particle centroids and, therefore, is not
affected by either particle size or shape. This usually is acceptable in the
case where particles are round and very small compared with spacings. A
problem arises when particles are as large as or larger than the free
spacing between them. For example, if a small particle is close to a large
particle whose size exceeds the free spacing between them, the cell
boundary based on centroids will cut across the large particle. Therefore,
for a metal-matrix composite like that shown in Fig. 6, cell boundaries
must be constructed at the mid-edge-to-edge spacing rather than midway
between the centroids. This is achieved using tessellation by dilation.
Fig. 12 Dirichlet network for the inclusion dispersion of the steel sample, as
in Fig. 8, identifying clustered regions having local area fraction
higher than the average local area fraction
Fig. 13 Tessellation by conditional dilation using a square grid for inclusion dispersion
(matrix); that is, the background is successively eroded, but not the last
pixel, until it is eroded to a line (skeleton). For this reason, the technique
also is called background erosion technique, background thinning tech-
nique, and background skeletonization technique. For simplicity, here it is
referred to as dilation technique. It should be noted that the cell
boundaries using the dilation technique are constructed at the mid-edge-
to-edge spacing between a particle and all near neighbors of that particle.
Therefore, it does take into account particle size and shape, as
discussed above, which is one of the main advantages of the dilation
technique. Figure 11 in Chapter 5, “Measurements,” shows an example of
a tessellation network produced using this technique corresponding to
clustered, ordered, and random point patterns.
Note that the boundary constructed between any two particles is not
the normal bisector line, as in Dirichlet tessellation, but rather a number
of line segments approximating the normal bisector. The reason is that
the dilation follows a particular square, hexagonal, or octagonal grid.
Therefore, boundary lines can only be constructed at particular angles
depending on the grid used. For a square grid, the boundary lines can be
constructed only at angles every 45° (0, 45, 90 and 135°), and, therefore,
the normal bisector (that can take any angle) is approximated by segments
of lines at those particular angles, as shown in Fig. 13. This can result in
polygons with irregular shapes. The shape irregularity decreases by using
a hexagonal or an octagonal grid, where line segments can be constructed
every 30 and 22.5°, respectively. However, in any case, the area of the
polygons constructed using the dilation technique is very similar to that
constructed by the Dirichlet tessellation technique. This also results in a
local area fraction (area of the particle divided by the area of the cell
around it) that is very similar. In addition, the situation improves
significantly when the particles are larger and spacings are smaller, as in
the case of metal matrix composites, for example (Fig. 6). In this case, the
dilation technique becomes the most appropriate technique to measure
local area fractions in the particle dispersion.
Fig. 14 Use of a measuring frame smaller than the image frame to overcome
edge effects
One of the limitations of the dilation technique is the edge effect, where
polygons around particles at the edge of the field cannot be constructed
based on the particles present in neighboring fields. However, this is
overcome by eliminating in each field the measurements for those
particles lying at the edge of the field. This is achieved by making the
measuring frame smaller than the image frame, as shown in Fig. 14. In
this case, only particles that are inside the measuring frame are measured.
The measuring frame can then be chosen so that only the particles that
have the correct cells constructed around them are contained in the
measuring frame. In some cases, this can significantly reduce the
measuring frame compared with the image frame. Then it becomes a
matter of measuring more fields to cover the sample. This is accomplished
rapidly by automatic image analysis systems.
Conclusions
References
CHAPTER 7
Analysis and
Interpretation
Leszek Wojnar
Cracow University of Technology
Krzysztof J. Kurzydłowski
Warsaw University of Technology
Microstructure-Property Relationships
Various physical and mathematical models are used to link each set of
properties with relevant microstructural features. Assumed randomness of
the microstructure implies that the models predict relationships that
concern basically mean values (and eventually other statistical moments)
of the measured parameters. Most relationships currently used in mate-
rials science are relatively simple and fall into one of three categories:
O Linear: Y = aX + b
O Square root: Y = a·X^(1/2) + b
O Inverse square root: Y = b + a/X^(1/2)
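Because each form is linear in a suitably transformed variable, the coefficients can be recovered with an ordinary least-squares line fit. A minimal sketch using synthetic, noise-free data generated from the inverse square root form (the numbers are illustrative, not measured values):

```python
# Illustrative sketch: all three relationship forms are linear in a
# transformed variable, so a least-squares line fit recovers a and b.
# The data below are synthetic, generated exactly from Y = b + a/sqrt(X).
import math

def fit_line(xs, ys):
    """Least-squares fit of ys ~ a*xs + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

grain_size = [10.0, 20.0, 40.0, 80.0]                          # X, e.g. in um
strength = [300.0 + 200.0 / math.sqrt(x) for x in grain_size]  # exact model

# Inverse square root: Y = b + a/sqrt(X) is linear in t = 1/sqrt(X)
t = [1.0 / math.sqrt(x) for x in grain_size]
a, b = fit_line(t, strength)
print(a, b)   # recovers a = 200, b = 300 on this noise-free data
```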
of the matrix. The tensile and yield strengths of the ductile iron
considered here (the volume fraction of the graphite is 11%) should be
approximately 89% of the corresponding strength of silicon-ferrite, which
is confirmed (Fig. 3) over a wide range of temperatures.
Both phases in question have entirely different ductilities, which also
can be explained using the model presented in Fig. 1. Assume the fracture
process is controlled by the energy absorbed in the material during its
deformation. In nodular cast iron, the graphite nodules are very weakly
bonded to the metallic matrix and, therefore, function almost like pores.
These pores break the metallic matrix into elements whose sizes are
defined by the distance between neighboring nodules, as illustrated in
Fig. 2 (Ref 2). The distance is equal to the mean free path
between graphite nodules (mean free path is described in Chapter 2,
“Introduction to Stereological Principles”).
Based on the hypothesis mentioned above and theoretical consider-
ations lying outside the scope of this text, the lower and upper limits for
the relationship between the mean free path and fracture toughness
(measured by the J-integral) are established. Almost all the experimental
data and literature available at the time of the study fit nicely within
these theoretically calculated bounds, as illustrated in Fig. 4.
Fig. 4 Effect of graphite mean free path on the fracture toughness of ferritic
ductile iron
O VV: volume fraction, the total volume of features analyzed per unit
volume of a material
O SV: specific surface area, total surface area of features analyzed per unit
volume of a material
O LV: specific length, total length of lineal features analyzed per unit
volume of a material
O NV: numerical density, mean number of features analyzed per unit
volume of a material
O AA: area fraction, total surface area of intercepted features per unit test
area of a specimen
O LA: total length of lineal features analyzed per unit test area of a
specimen
initial particle. Similar to the convex hull is the bounding rectangle (Fig.
5k), suitable for shape characterization. Figure 5(l) illustrates the number
of holes. Other characteristics also are available, for example, the Euler
number (suitable for convexity/concavity quantification) and deviation
moments. These are the most common measurements, and most image
analysis software can be used to obtain their values.
Quantification of some of these measurements is very difficult, if not
impossible, without the use of a computer.
One of the most important tasks during microstructure quantification is
to develop appropriate links between the classical stereological param-
eters (which usually have a well-elaborated theoretical background) with
parameters offered by image analysis. Specific problems arise when
taking into account the digital (discontinuous) nature of computerized
images. Examples of application difficulties are measurements along
curvilinear test lines (used in the method of vertical sections) and errors
in perimeter evaluation.
Any microstructure should be described in a qualitative way prior to
quantitative characterization because the latter simply expresses the
former in numbers. Otherwise, large sets of meaningless values are
Fig. 5 Basic measures of a single particle: (a) initial particle, (b) area, (c)
perimeter, (d) and (e) Feret diameters, (f) maximum width, (g) intercept,
(h) coordinates of the center of gravity, (i) coordinates of the first point, (j) convex
hull, (k) bounding rectangle, and (l) number of holes
black phase. Note that there is no difference in size (even the size
distribution is identical), shape, and arrangement (in both cases the
arrangement is statistically uniform) between the images in Fig. 6(a) and
6(b). The structure in Fig. 6(c) is obtained by changing only the size of the
objects, without altering their amount, shape, and arrangement. Changing
the arrangement of structure in Fig. 6(c) results in a structure shown in
Fig. 6(d), which differs from Fig. 6(a) in size and arrangement of the
objects. Keeping the amount, size, and arrangement of the black phase in
Fig. 6(a) and changing only the shape yields results shown in Fig. 6(e).
Finally, the structure shown in Fig. 6(f) differs from that in Fig. 6(a) in all
the characteristics; that is, amount, size, shape, and arrangement.
Rationale for classifying into four basic characteristics presented above
is not proven theoretically. However, in all cases known to the authors,
characterization of the structure can be performed within the framework
of amount, size, shape, and arrangement. The most important advantage
of this simplification is that structural constituents usually require only
four characteristics versus a host of parameters (such as those listed at the
beginning of this Chapter) supplied by classical stereological methods,
and even more parameters offered by contemporary image analysis. The
following discussion provides some guidelines both on how to choose the
optimal parameter subsets to quantify a structure and how to interpret the
results obtained.
Fig. 7 Illustration of the effect of lost grain boundary lines (denoted by
circles) on the results of grain size quantification. The lost grain boundary
segments significantly increase the mean section area (by ~25%), whereas the
mean intercept length (intercepts enlarged due to the absence of some grain
boundaries are plotted using broken lines) is increased by only approximately 5%.
f1 = a/b (Eq 1)
where a and b are the length and width of the minimum bounding
rectangle, or a is the maximum Feret diameter, while b is the Feret
diameter measured perpendicular to it. These two sets of values used to
determine elongation can produce slightly different values of the shape
factor due to the digital nature of computer images.
Aspect ratio reaches a minimum value of 1 for an ideal circle or square
and has higher values for elongated shapes. Unfortunately, elongation is
not useful for assessing irregularity: particles with profoundly different
shapes can all have an f1 value very close to 1, as illustrated in Fig. 8(b).
Circularity, one of the most popular shape factors, offers a good solution
in this situation:
f2 = L^2/(4πA) (Eq 2)

where L is the perimeter and A is the area of the analyzed particle
(Fig. 9). The f2 shape factor is very sensitive to any irregularity of the
shape of circular objects. It has a minimum value of 1 for a circle and
higher values for all other shapes. However, it is much less sensitive to
elongation.
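A minimal sketch of the two shape factors, with Eq 2 written with the 4πA denominator so that an ideal circle gives exactly 1, evaluated here analytically for a square of side s (perimeter 4s, area s²):

```python
# Minimal sketch of the f1 and f2 shape factors, evaluated analytically
# for a square of side s: f1 = s/s = 1 (not elongated), while
# f2 = (4s)^2 / (4*pi*s^2) = 4/pi, or about 1.27 (not a circle).
import math

def elongation(a, b):
    """f1 = a/b: length over width of the bounding rectangle (Eq 1)."""
    return a / b

def circularity(perimeter, area):
    """f2 = L^2/(4*pi*A): 1 for a circle, larger for any other shape."""
    return perimeter ** 2 / (4 * math.pi * area)

s = 10.0
print(elongation(s, s))            # 1.0
print(circularity(4 * s, s * s))   # about 1.273
```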
f3 = d2/d1 (Eq 3)
affect the results. Errors include improper specimen sampling (the first
possible source of bias), polishing artifacts, over-etching, staining,
specimen corrosion, errors induced by the optical system, and the effects
of magnification and inhomogeneity.
Improper polishing is the main source of bias in images. Soft materials
and materials containing both very soft and very hard constituents are
especially difficult to prepare and are sensitive to smearing and relief (see
Fig. 13 and also Chapter 3, “Specimen Preparation for Image Analysis”).
Ferritic gray cast iron is such a material; graphite is very soft, ferrite is
soft and easily deformed, and the iron-phosphorus eutectic is hard and
brittle, characteristics which tend to produce relief, even in the early
stages of grinding on papers (again, see Chapter 3, “Specimen Preparation
for Image Analysis,” for further information). Such polishing artifacts
cannot be corrected during further polishing. Also, plastic deformation
recognizes at least 256 gray levels, and its sensitivity to different gray
levels is stable, whereas human gray-level sensitivity is nonlinear. Therefore,
an image analysis system registers all these subtle variations, which
affects the results of the analysis. New-generation optics in contemporary
microscopes compensate for the errors mentioned, leading to perfect
images (Fig. 16).
While sources of bias are difficult to quantify objectively, the following
guidelines help to minimize bias:
d = 1.22λ/(2·NA) (Eq 4)

where λ is the wavelength of the light and NA is the numerical aperture of the objective.
modern microscopes is 22 mm (0.9 in.) (20 mm, or 0.8 in., for earlier
models).
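The diffraction-limited resolution of Eq 4 is easy to evaluate. A small sketch, assuming green light with λ = 0.55 µm (an illustrative value) and a few typical numerical apertures:

```python
# Sketch of Eq 4: diffraction-limited resolution d = 1.22*lambda/(2*NA).
# The 0.55 um wavelength (green light) is an assumed illustrative value.

def resolution_um(wavelength_um, numerical_aperture):
    """Smallest resolvable distance, in micrometers."""
    return 1.22 * wavelength_um / (2 * numerical_aperture)

for na in (0.25, 0.65, 0.90, 1.30):
    print(na, round(resolution_um(0.55, na), 3))
```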
Next, it is necessary to compute the physical resolution of a CCD
image. A typical low-cost camera containing a 13 mm (0.5 in.) CCD
element with an image size of 640 × 480 pixels yields 15.875 µm per
pixel. Comparing this value with the theoretical cell size in Table 2
suggests that the simplest camera is sufficient if no additional lenses are
placed between the camera and the objective (this is the most common
case). Many metallographic microscopes allow an additional zoom up to
2.5×, which, if applied, leads to a smaller theoretical cell size than that
offered by the camera discussed above, even if it is ideally inscribed into
the field of view.
For light microscopy, using a camera having a resolution higher than
1024 × 1024 pixels generally results only in a greater amount of data and
longer analysis time—without providing any additional information!
Figure 18 illustrates the effects of camera resolution. An "over-sampled"
image (for example, one acquired with a very high-resolution camera)
appears smooth, but reveals nothing more than an optimally grabbed
image (Table 2). Reducing resolution loses some data; however, major
microstructural features still are detected using half the optimal
resolution.
Image processing can begin as soon as the image is placed in the
computer memory. Note that almost all image processing procedures lose
a part of initial information. This occurs because the largest amount of
information always is in the initial image, even if its quality is poor (this
reinforces why the image quality is so important). Also, some information
is lost at every step of image processing, even if the image looks
satisfactory to the human eye. Thus, it is strongly recommended that the
number of processing steps be minimized.
Two tricks can be used to improve the image quality without significant
(if any) loss of information. In the case of poor illumination, the image
grabbed by a camera can be very noisy, but grabbing a series of frames
and subsequently averaging them produces much better image quality
(this solution is sometimes built into the software for image acquisition).
In the case of a noisy image, it often is helpful to digitize the image using
a resolution two times higher than necessary and, after appropriate
filtering, decrease the resolution to obtain a clean image. This technique
is used, for example, in scanning printed images, which removes the
printer raster. In general, image quality can be significantly improved, or,
in other words, some information can be recovered by knowing exactly
how the image was distorted prior to digitization.
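The frame-averaging trick can be sketched with a simulated camera: averaging n grabs of the same scene reduces the noise standard deviation by roughly √n. All values below (gray level, noise level, frame count) are illustrative:

```python
# Sketch of the frame-averaging trick with a simulated camera: the mean of
# n grabs of the same scene has its noise reduced by roughly sqrt(n).
import random

random.seed(0)
TRUE_LEVEL = 120.0   # gray level of a uniform scene
NOISE_SD = 10.0      # per-frame sensor noise (standard deviation)
N_FRAMES = 16
N_PIXELS = 2000

def grab_frame():
    return [random.gauss(TRUE_LEVEL, NOISE_SD) for _ in range(N_PIXELS)]

def sd(values):
    m = sum(values) / len(values)
    return (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5

frames = [grab_frame() for _ in range(N_FRAMES)]
averaged = [sum(px) / N_FRAMES for px in zip(*frames)]

print(sd(frames[0]), sd(averaged))   # noise drops by about sqrt(16) = 4
```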
There is no perfect procedure to detect microstructural features.
Depending on the particular properties of the image, as well as the
algorithms applied, some grain boundaries or particles will be lost and
some nonexistent grain boundaries or particles will be detected. This is
illustrated in the example of polystyrene foam, shown in Fig. 19. There is
no specific solution to this problem, as each operator will draw the grain
boundaries in a different manner, and the resulting number of grains will
show some scatter.
Figure 20 compares results of the number of grains in an austenitic steel
measured manually and automatically, which shows very good agreement
O They are at least one order of magnitude faster than manual methods;
thus, they follow the well-known rule of stereology, "do more, less
well," mentioned elsewhere in this book.
O They are completely repeatable; that is, repeated analysis of any image
always yields the same results.
O They require almost no training or experience to execute once the
proper procedures have been established.
Fig. 21 Effect of different methods of choosing the binary threshold on binarization results. (a) Initial image; (b)
profile with threshold level indicated by an arrow and corresponding binary image; (c) histogram with
threshold level indicated by an arrow and corresponding binary image
For an unbiased particle count, all particles totally included within the
guard frame and touching or crossing its right and bottom edges are
considered, while all particles touching or crossing the left and upper
frame edge, as well as the upper right-hand or lower left-hand corners, are
removed. This particle selection method (gray particles in Fig. 23b) may
seem artificial, but it ensures that size distribution is not distorted. Note
that when using a series of guard frames touching each other, the selection
rules ensure that every particle is counted only once.
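The guard-frame selection rule can be sketched as follows. This simplified illustration reduces particles to axis-aligned bounding boxes and folds the corner cases into the left/top rejection rule; all coordinates are invented:

```python
# Simplified sketch of the unbiased counting rule: keep particles fully
# inside the guard frame or touching its right/bottom edge; reject any
# touching the left/top edge. Particles are reduced to bounding boxes
# (xmin, ymin, xmax, ymax), with y increasing downward.

def count_unbiased(particles, frame):
    fx0, fy0, fx1, fy1 = frame
    kept = 0
    for x0, y0, x1, y1 in particles:
        if x1 < fx0 or x0 > fx1 or y1 < fy0 or y0 > fy1:
            continue        # entirely outside the frame
        if x0 < fx0 or y0 < fy0:
            continue        # touches or crosses the left or top edge
        kept += 1           # fully inside, or on the right/bottom edge
    return kept

frame = (10, 10, 90, 90)
particles = [
    (20, 20, 30, 30),   # fully inside        -> counted
    (85, 50, 95, 60),   # crosses right edge  -> counted
    (5, 50, 15, 60),    # crosses left edge   -> rejected
    (50, 5, 60, 15),    # crosses top edge    -> rejected
]
print(count_unbiased(particles, frame))   # 2
```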
Other methods of particle selection for further analysis are given in Ref
4. However, the general idea is always the same, and the results obtained
are equivalent. If only the number of particles (features) is of interest,
this is easily determined using the Jeffries planimetric method, described
in Chapter 2, “Introduction to Stereological Principles,” and Chapter 5,
“Measurements.” It is not recommended to count particles in images after
removing particles crossed by the image edge (even if recommended in
standard procedures) because this always introduces a systematic error.
Moreover, the error is impossible to estimate a priori because it is a
function of the number, size, and shape of the analyzed features.
Area Fraction. The classical stereological approach to measure area
fraction is based on setting a grid of test points over the image and
counting the points hitting the phase constituent of interest. With image
analysis, the digital image is a collection of points, so it is unnecessary to
overlay a grid of test points, and it is sufficient to simply count all the
points belonging to the phase under consideration. This counting method
evaluates the area directly. The statistical significance of such
analysis is different from that performed using classical stereological
methods. In other words, the “old” rules for estimating the necessary
number of test points are no longer valid, and other rules for statistical
evaluation of the scatter of results (for example, based on the standard
deviation) should be applied.
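Pixel counting on a binary image reduces to a few lines in practice. A tiny illustrative sketch (the image below is invented):

```python
# Illustrative sketch: with a digital image every pixel is a test point,
# so the area fraction A_A is simply the fraction of pixels belonging to
# the phase of interest. The tiny binary image below is invented.

image = [
    [0, 0, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]   # 1 = phase of interest

phase_pixels = sum(sum(row) for row in image)
total_pixels = sum(len(row) for row in image)
area_fraction = phase_pixels / total_pixels
print(area_fraction)   # 6/20 = 0.3
```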
Fig. 23 Concept of a guard frame. Schematic of (a) initial image and guard
frame and (b) selection based on particles crossing the edge of the
frame
x̄ = (1/n) Σ(i=1 to n) xi (Eq 5)

s = [(1/(n − 1)) Σ(i=1 to n) (xi − x̄)²]^(1/2) (Eq 6)

CL = tα,n−1 · s (Eq 7)

where tα,n−1 is the value of the t-statistic (Student) for the significance
level α and n − 1 degrees of freedom. Exact values of t-statistics can be
found in statistical tables, or are easily accessible in any software for
statistical evaluation, including most popular spreadsheets.
Estimating the confidence level for α = 0.05, for example, means that
when repeating the measurements, 95% (1 − α) of the results will be
greater than x̄ − CL and smaller than x̄ + CL. Another useful conclusion
is that if the number of measurements is large (>30), then approximately
99% of the results should not exceed the confidence level defined as
follows:
CL0.99 = 3s (Eq 8)

CV = s/x̄ (Eq 9)
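Equations 5 to 9 can be sketched in a few lines. The measurement values below are invented, and the t value 2.262 (α = 0.05, n − 1 = 9 degrees of freedom) is taken from standard statistical tables:

```python
# Sketch of Eq 5 to 9 for a set of area-fraction measurements. The data
# are invented, and the t value 2.262 (alpha = 0.05, n - 1 = 9 degrees
# of freedom) is taken from standard statistical tables.
import math

x = [11.2, 10.8, 11.5, 10.9, 11.1, 11.4, 10.7, 11.3, 11.0, 11.1]
n = len(x)

mean = sum(x) / n                                            # Eq 5
s = math.sqrt(sum((xi - mean) ** 2 for xi in x) / (n - 1))   # Eq 6
t_value = 2.262                                              # t(0.05, 9)
cl = t_value * s                                             # Eq 7
cv = s / mean                                                # Eq 9

print(mean, s, cl, cv)
```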
Both samples have the same mean graphite content, but the deviation in
content (individual results) for sample B is four times higher than for
sample A. Moreover, the scatter of results for sample B is so large that the
99% confidence level is approximately equal to half of the measured
value. One interpretation of results is that the graphite content in sample
B is inhomogeneous. Another is that the measurements for sample B were
performed with different precision. If these interpretations cannot be
Data Interpretation
Fig. 28 Experimental data illustrating the relationships between material properties and microstructures.
(a) Plot of ceramic density against volume fraction of pores. Courtesy P. Marchlewski. (b) Plot of
Brinell hardness of an austenitic stainless steel against the inverse of square root of the mean intercept length,
or ( l̄ )–1/2, µm–1/2. Courtesy J.J. Bucki
Fig. 29 Binary image of grains in austenitic steel ready for digital measure-
ments. All artifacts are removed, and the continuous network of
grain boundaries is only one pixel wide.
Fig. 30 Grain size area (in pixels) distribution obtained from binary image in
Fig. 29
In conclusion, note that the scope and depth of interpretation of the data
describing the microstructural features of a material also depends on the
purpose of quantification, including:
O Quality control
O Modeling materials properties
O Quantitative description of a microstructure
the types of plots in Fig. 33. The distribution functions can be used to
determine the degree to which they agree with theoretical functions, and
changes in distribution functions can be interpreted in terms of a shift in
the mean value and possible increase in their width. In the case of grain
size, the shift is an indication of grain growth. If the shape of the
normalized distribution function does not change (compare Fig. 33a and
b), the shift could be interpreted as normal grain growth. By comparison,
changes in the shape of normalized distribution functions indicate
abnormal grain growth (compare Fig. 33a and c).
Due to the randomness of the size of grain sections, the relatively large
grain sections that occupy a significant fraction of the image account for
a small fraction of the total population; two large sections in Fig. 32(c)
account for approximately 15% of the image area. Weighted plots of
relevant values make it easier to visualize the effect of such grains
appearing with low frequency. The area-weighted plot of A, or A · f(A),
is more sensitive to the presence of large grain sections (Fig. 34).
The results of grain section area frequently are converted into circle
equivalent grain section diameters. Both the mean value of grain section,
Ā, and equivalent diameter, d̄, can be used as a measure of grain size.
However, it should be pointed out that these two parameters do not
account for the 3-D character of grains and can be used only in
comparative studies.
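The conversion from section area to circle-equivalent diameter, d = (4A/π)^(1/2), can be sketched directly (the section areas below are illustrative):

```python
# Sketch of the circle-equivalent diameter: a grain section of area A is
# assigned the diameter of the circle with the same area. The section
# areas below are illustrative values in square micrometers.
import math

def equivalent_diameter(area):
    return math.sqrt(4 * area / math.pi)

section_areas = [120.0, 450.0, 90.0, 800.0]
diameters = [equivalent_diameter(a) for a in section_areas]
mean_d = sum(diameters) / len(diameters)
print([round(d, 1) for d in diameters], round(mean_d, 1))
```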
Mean intercept length is a more rigorous stereological measure of grain
size. In the case of an isotropic (uniform properties in all directions)
structure, it can be measured using a system of parallel test lines on
random sections of the material. Anisotropic (different properties in
different axial directions) structures require the use of vertical sections
and a system of test cycloids, which provide only a mean value, and not
a distribution of intercept length.
Fig. 32 Three typical microstructures of a single-phase polycrystalline material, differing in the grain size (grain size in
images a, b, and c is in ascending order) and in grain-size homogeneity (grains in image c are more
inhomogeneous compared to those in a and b).
Fig. 33 Histograms of grain section areas for microstructures in Fig. 32. (a)
Plotted in linear form; (b) normalized by the mean area; (c) in log
scale
Fig. 35). Mean values of shape factors can be obtained for grain sections
and appropriate distribution functions. As a rule of thumb, 10% or less
variation of the mean value of shape factors can be viewed as insignifi-
cant.
Two-Phase Materials. Figure 36 shows examples of two-phase mi-
crostructures, which basically are described in terms of the volume
fraction of phases, (VV)a and (VV)b. Measurements of volume fractions
usually are easily carried out using image analyzers, and the precision of
such measurements is estimated using model reference images. In
practical applications, the major source of error in estimating volume
fraction is biased selection of images for analysis. Images must be
randomly sampled in terms of sectioning of the material and positioning
of the observation field. Some other sources of error, such as over-etching,
have been discussed previously.
Estimates of volume fraction are directly used to interpret the physical
and mechanical properties of several materials; for example, density,
conductivity, and hardness. Some properties also are influenced by the
degree of phase dispersion (see also Chapter 6, “Characterization of
Particle Dispersion”). Under the condition of constant volume fractions in
particulate systems, the dispersion of phases can be described by the size
of individual particles (the concept of particle size cannot be applied to
the interpenetrating structures of some materials). Another approach to
the description of dispersion is based on measurements of the specific
surface area, SV, of interphase boundaries, that is, their total surface
area per unit volume.
Values of SV for interphase boundaries can be obtained relatively easily
with the help of image analyzers for isotropic structures. In this case, one
can apply a grid of parallel test lines. On the other hand,
O Contiguity of phases
O Morphological anisotropy
O Randomness in spatial distribution
O Plastic deformation influences not only the inclusions, but also the
metallic matrix, and material anisotropy affects the fracture toughness.
O Inclusions tested on transverse sections appear significantly smaller
than those on longitudinal sections, which may result in underestima-
tion of the projected length on transverse sections.
sections. On the other hand, small scatter of the points indicates that the
total projected length is a well-chosen parameter for analysis of the
fracture process of HSLA steels.
Effect of Boron Addition on the Microstructure of Sintered Stainless
Steel. Sintering metal powders makes it possible to obtain both exotic and
traditional (e.g., tool steel) materials having properties impossible to
obtain via other processing routes. One of the problems arising during sintering
of stainless steels is the presence of porosity in the final product. This
problem can be overcome by liquid-phase sintering by means of the
addition of boron. This requires determining the optimum boron content,
which is discussed subsequently based on part of a larger project devoted
to this problem (Ref 18).
A typical example of a microstructure of a material produced via
liquid-phase sintering is shown in Fig. 40(a), which contains both pores
and a eutectic phase. From a theoretical point of view, it is important to
quantify the development of the eutectic network, which is evaluated by
counting the number of closed eutectic loops per unit test area. Measurement
results of detected loops (denoted in Fig. 40b as white regions) are
shown in Fig. 41. The number of closed loops increases rapidly with
increasing boron content. Therefore, this unusual parameter seems to be
a good choice for interpretation of the process of network formation. Note
that no eutectic phase is detected for boron additions smaller than 0.4%.
Liquid-phase sintering causes a decrease in porosity, as shown in Fig. 42.
The amount of pores is easily quantified using simple area fraction
measurements. Based on these relatively simple measurements, it is
clearly visible that a boron addition greater than 0.6% does not result in
further decrease in porosity. This observation was also confirmed by
means of classical density measurements (Ref 18).
Fig. 42 Effect of boron addition on the volume fraction of pores. White bars
denote sintering at 1240 °C (2265 °F) and black bars denote
sintering at 1180 °C (2155 °F). Source: Ref 18
Conclusions
References
Polish)
15. “Standard Practice for Determining Inclusion Content of Steel and
Other Metals by Automatic Image Analysis,” E 1245-89, Annual
Book of ASTM Standards, ASTM, 1989
16. “Standard Practice for Preparing and Evaluating Specimens for
Automatic Inclusion Assessment of Steel,” E 768-80, Annual Book of
ASTM Standards, ASTM, 1985
17. A. Zaczyk, W. Dziadur, S. Rudnik, and L. Wojnar, Effect of
Non-Metallic Inclusions Stereology on Fracture Toughness of HSLA
Steel, Acta Stereol., 1986, 5/2, p 325–330
18. R. Chrza˛szcz, “Application of Image Analysis in the Inspection of the
Sintering Process,” master’s thesis, Cracow University of Technology,
Cracow, 1994 (in Polish)
CHAPTER 8
Applications
Dennis W. Hetzner
The Timken Co.
Gray Images
The quality and integrity of the specimens to be evaluated are probably
the most critical factors in image analysis (IA). No IA system can compensate for
poorly prepared specimens. While IA systems of today can perform gray
image transformations at very rapid rates relative to systems manufac-
tured in the 1970s and 1980s, gray image processing should be used as a
last resort for metallographic specimens. Different etches and filters (other
than the standard green filter used in metallography) should be evaluated
prior to using gray image transformations. To obtain meaningful results,
the best possible procedures for polishing and etching the specimens to be
observed must be used. Factors such as inclusion pullout, comet tails, or
poor or variable contrast caused by improper etching cannot be eliminated
by the IA system. There is no substitute for properly prepared metallographic specimens.
Fig. 2 Analysis of ideal gray level square in Fig. 1: (a) histogram; (b) cumulative gray level histogram
Fig. 3 Analysis of ideal gray level ellipse in Fig. 1: (a) histogram; (b)
cumulative gray level histogram
particular, consider the pixel with a gray level of 120 (the center pixel in
the 3 × 3 matrix shown). Applying a filter of the form
a b c
d e f
g h i
240 110 19
255 120 20
270 130 21
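Applying a 3 × 3 kernel to this neighborhood amounts to summing the element-wise products and normalizing. The sketch below uses the Gaussian smoothing kernel from Table 1 (normalized by its sum, 16) on the neighborhood shown above:

```python
# Sketch of a 3x3 kernel applied to the 3x3 neighborhood above: the new
# center-pixel value is the sum of element-wise products, here using the
# Gaussian smoothing kernel from Table 1 (normalized by its sum of 16).

neighborhood = [
    [240, 110, 19],
    [255, 120, 20],
    [270, 130, 21],
]
gaussian = [
    [1, 2, 1],
    [2, 4, 2],
    [1, 2, 1],
]

total = sum(neighborhood[r][c] * gaussian[r][c]
            for r in range(3) for c in range(3))
new_value = total / 16
print(new_value)   # 128.75, the smoothed center-pixel value
```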
Fig. 4 Scanner response: (a) electron-beam machined circles; (b) AM 350 precipitation-hardenable stainless steel (UNS
S35000)
the indentation where a contrast change occurs, the image is bright. Thus,
the Sobel filter reveals the edges of the indentation, but not the remainder
of the image. Table 1 lists several common filters and corresponding
kernels; these are just a few of the possible kernels that can be
constructed. For example, at least five different 3 × 3 kernels can be
Fig. 5 Vickers microindentation in ingot-iron specimen. (a) Regular image. (b) Sobel transformation of image
Gaussian (smoothing):
1 2 1
2 4 2
1 2 1

Laplacian (edge enhancement):
0 −1 0
−1 4 −1
0 −1 0

Gradient (edge enhancement):
0 1 0
1 0 −1
0 −1 0

Smooth (noise reduction, scaled by 1/4):
0 1 0
1 0 1
0 1 0
Image Measurements
Consider again only the square consisting of pixels with a gray level of
125 in Fig. 1. If all the pixels with a gray level of less than 200 were
counted, only the pixels within the square would be used for measure-
ments. A binary image of this field of view can be formed by letting the
detected pixels be white and the remaining pixels be black. Assume that
each pixel is 2 µm long and 2 µm high. The perimeter of the square is
5 + 5 + 5 + 5, or 20, pixels, so the actual perimeter is 20 pixels ×
2 µm, or 40 µm. Similarly, the area of the square is 5 × 5, or 25, square
pixels, and with each pixel covering 4 µm², the actual area is
100 µm².
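The calibrated measurement above reduces to a couple of multiplications. A minimal sketch of the 5 × 5-pixel square with a 2 µm pixel pitch:

```python
# Sketch of the calibrated measurement above: a 5 x 5-pixel square with a
# 2 um pixel pitch has a 20-pixel (40 um) perimeter and a 25-pixel
# (100 um^2) area.

PIXEL_SIZE_UM = 2.0
side_pixels = 5

perimeter_pixels = 4 * side_pixels
area_pixels = side_pixels ** 2

perimeter_um = perimeter_pixels * PIXEL_SIZE_UM
area_um2 = area_pixels * PIXEL_SIZE_UM ** 2
print(perimeter_um, area_um2)   # 40.0 and 100.0
```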
This is the simplest problem that could be encountered. Because most
images contain more than one object or feature and a range of gray levels,
different parameters describing the image can be measured. Two types of
generic measurements can be made for all IA problems: field measure-
ments and feature-specific measurements (see also Chapter 5, “Measure-
ments,” and Chapter 7, “Analysis and Interpretation”). Field measure-
ments refer to bulk properties of all features or portions of features
contained within a measurement frame. Feature-specific measurements
refer to properties associated with individual objects within a particular
measurement frame.
There are several different methods of defining the measurement frame.
The simplest approach is to measure everything that is observed on the
TV monitor. The measured features are represented by the white dots in
Fig. 6(a). The use of this type of measuring frame is well suited for field
measurements but has some limitations when feature-specific measure-
ments are required. When making feature-specific measurements, it is
very important to understand how the selection of the measurement frame
can affect the accuracy of measurements. Even if the features being
measured do not vary greatly in size or shape, measuring everything
within the frame border will bias the results because any feature on the
frame border is truncated in size, as shown in Fig. 6 (a). Similarly, if only
the objects totally inside the image frame are measured, long objects that
extend beyond the boundaries of the measurement frame are disregarded,
as shown in Fig. 6(b). All objects can be properly measured if the
measuring frame is sized to be somewhat smaller than the input frame, as
shown in Fig. 6(c). However, in situations where contiguous fields of
view are analyzed, use of the frame in Fig. 6(c) may result in some
features being measured two times. A possible solution in this situation is
to measure objects extending beyond the top and right side of the frame
(Fig. 6d) or to measure objects extending beyond the bottom and left side
of the frame (Fig. 6e). These types of measuring frames are well suited for
analysis using a motorized stage on the microscope.
Fig. 6 Measurement frames used for image analysis. (a) All features displayed
on the monitor are measured. (b) Features only within the frame are
measured. (c) Features inside and touching the frame are measured. (d) Features
inside and touching the right and top sides of the frame are measured. (e) Features
inside and touching the left and bottom sides of the frame are measured. (f)
Features only inside the circular frame are measured. (g) Features inside and
touching the circular frame are measured.
Applications / 213
y = a0 + a1 · x + a2 · x²

where x corresponds to the gray level and y is the gray level frequency
value.
The first derivative of this curve is y' = a1 + 2 · a2 · x. Setting y' equal
to 0 yields the relative minimum point of the curve, x = −a1/(2 · a2). The
relative minimum is the proper threshold setting. This type of analysis can
be performed by exporting the gray level distribution to a spreadsheet.
Alternatively, the relative minimum can be visually approximated by
carefully observing the gray level histogram.
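A minimal numpy sketch of this analysis, assuming the valley region of the histogram has already been isolated. The synthetic counts below are constructed so that the minimum falls at gray level 137, the AM 350 value quoted in the text.

```python
# Fit a second-degree polynomial y = a0 + a1*x + a2*x**2 to the valley
# of a gray-level histogram; the threshold is the relative minimum.
import numpy as np

def threshold_from_valley(gray_levels, counts):
    # np.polyfit returns coefficients highest power first: [a2, a1, a0]
    a2, a1, a0 = np.polyfit(gray_levels, counts, 2)
    return -a1 / (2.0 * a2)   # x at which y' = a1 + 2*a2*x vanishes

# Synthetic valley centered at gray level 137
x = np.arange(100, 175)
y = 0.5 * (x - 137.0) ** 2 + 20.0
print(round(threshold_from_valley(x, y)))  # 137
```

In practice the fit would be restricted to the neighborhood of the visually apparent valley, since the full histogram is not parabolic.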
When a line scan from this specimen is compared to that from the
electron-beam machined circles in Fig. 4(a), the phase to be detected
varies in gray level in addition to showing beam overshoot and under-
shoot. Despite wide scatter in the gray level histogram (Fig. 8a), it can be
represented by a second-degree polynomial (Fig. 8b). Based on this
analysis, the threshold setting for measuring the area fraction of delta
ferrite in AM 350 is 137. No image editing or image amendment was used
for this analysis.
Fig. 8 AM 350 stainless steel gray levels: (a) histogram; (b) second-degree
polynomial curve
Fig. 10 Annealed AISI type 1018 carbon steel etched with picral: (a) gray
histogram; (b) second-degree least-squares fit
Fig. 11 Steps in threshold setting AISI type 1018 carbon steel: (a) microstructure revealed using picral etch; (b) pearlite
detected; (c) holes filled
1. SYSTEM SETUP
a. Clear any previously used images, graphics, and databases.
b. Set the magnification or calibration constant.
c. Define the image frame to be used (Fig. 6a).
d. Define the field measurements to be made (area percentage).
e. Load the shading corrector image.
f. Define parameters to control the stage and auto-focus.
g. Define the number of fields to be evaluated (Field Number = 100).
2. ACQUIRE TV INPUT
3. APPLY SHADING CORRECTION
4. IMAGE SEGMENTATION: based on previously described methods
5. BINARY IMAGE PROCESSING: FILL HOLES
6. MEASUREMENTS: field pearlite area percent
7. STORE MEASUREMENT IN DATABASE
In fact, this very simple program may not operate properly on every
system. The early computer control systems used on IA systems were
quite slow compared with the high-speed processor-based systems of
today. Furthermore, from the early stages of automation through today's
systems, hardware components such as the automatic stage, automatic
focus, and TV camera have always responded at a much slower rate than
the computer can send instructions to them. It often is necessary to
artificially slow down the IA system to achieve proper performance.
For example, consider what happens as the computer instructs the stage
to move to the next position. The stage rapidly responds and some
vibration occurs when the stage stops moving. Even before the stage has
started to move, the computer is instructing the microscope to focus or
move to the proper focusing position. As this instruction is being carried
out, the computer is further instructing the system to acquire an image.
Image processing begins at this point, and soon the stage is again moving.
Several problems can develop as the program runs. Once the stage
changes position, no instructions should be processed until any vibrations
created by the stage motion have subsided. At this point, it may be
necessary for the TV camera to have one or two seconds exposure to the
next field of view to allow the automatic gain control (if included in the
camera system) to set up the proper white level for the input signal. In
addition, automatic focusing should not be attempted until any possible
vibrations from the stage movement have subsided and the camera has
stabilized. After a little time passes, the auto-focus instruction can be
initiated and the image can be captured. This may be accomplished by
replacing instruction 2 in the macro by the following group of instruc-
tions:
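The replacement instruction group itself is not reproduced in this extraction. Below is a hypothetical Python sketch of the sequence the text describes; move_stage, auto_focus, and acquire_image are stand-ins (no real IA system exposes these exact names), and the delay values are placeholders for the one-to-two-second settling times mentioned above.

```python
import time

log = []  # records the order in which the hardware is exercised

def move_stage():
    log.append("move")

def auto_focus():
    log.append("focus")

def acquire_image():
    log.append("acquire")

def acquire_next_field(settle_s=0.01, camera_s=0.01):
    move_stage()
    time.sleep(settle_s)  # wait for stage vibrations to subside
    time.sleep(camera_s)  # let the camera's automatic gain control settle
    auto_focus()          # focus only after stage and camera are stable
    acquire_image()

acquire_next_field()
print(log)  # ['move', 'focus', 'acquire']
```

The essential point is the ordering: no focusing or acquisition is attempted until the stage and camera delays have elapsed.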
When green illumination is used to analyze CDA brass (Fig. 13a), the minimum gray level between
the two phases is not well defined (Fig. 13b). However, illuminating the
same field of view with red light yields a much sharper distinction
between the constituents (Fig. 13c). To appreciate the difference between
the two sources of illumination, the reader should try the following
experiment. Using a 32× objective, first focus the microscope using a
green filter. Then, replace the green filter with a red filter and refocus the
microscope. Notice that there is a difference in the focal plane (z position)
of the stage of approximately 8 µm.
Fig. 13 Thresholding brass: (a) microstructure of cold-drawn and annealed brass etched with Klemm’s reagent; (b) gray
level histogram using green filter; (c) gray level histogram using red filter
Image Amendment
Fig. 14 Steps in threshold setting AISI type 1018 carbon steel: (a) microstructure revealed using 2% nital etch; (b) image
detected; (c) holes filled, followed by a square open of 2
Function   Instruction
Accept     Analyze selected features
Reject     Disregard selected features
Line       Draw a line
Cut        Separate feature
Cover      Add to the detected image
Erase      Delete part of the detected feature
Fig. 16 Ingot-iron image reconstruction: (a) microstructure of ingot iron etched with Marshall’s reagent; (b) grain
boundary detection and image inversion; (c) holes filled; (d) image inverted; (e) close 2, octagon performed;
(f) image inverted; (g) binary exoskeleton performed; (h) grain boundary detection and three circles; (i) grain boundaries and
circle intercepts; (j) grain areas within a rectangular image frame are measured; (k) number of grains within a circular image
are counted
Having obtained the grain boundary image, the area of each grain can
easily be measured by inverting the binary image. This process makes the
boundaries black and the grains white. The area of each grain can then be
measured as a feature-specific property. For this type of analysis, selection
of the proper measuring frame is important. Only grains that are
Similarly, the number of grains within a known image frame can be used
to obtain the stereological parameter NA, the mean number of features
counted divided by the total test area. In this analysis, a circular image
frame with a radius of 215 pixels (165.44 µm) is used. The area of this
frame is 85,989 µm² (or 0.0860 mm²). When this type of analysis is
performed using manual methods, each grain completely within the
circular measuring frame is counted as one grain, and each grain only
partially within the frame is counted as one-half of a grain. The number
of grains to use for NA is then the number of whole grains inside the
frame plus one-half the number of grains intersecting the frame. When
using automated IA procedures, only the grains that are completely within
the measuring frame are counted, and NA is determined from the number
of grains within the measuring frame and the actual sum of the areas of
these grains. Because the IA system can measure the actual area of the
grains, this method produces more accurate results than counting each
grain partially within the measuring frame as one-half grain.
The average number of grains within the test frame is 122.6, and the
standard deviation is 14.4. The average area of the grains within the
measuring frame for the 10 fields of view is 73,859 µm², and the standard
deviation is 3,762 µm² (Fig. 16k). Therefore, NA = 122.6/0.073859 mm²
= 1659 mm⁻². The grain size is calculated from:
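The grain-size relation itself is not reproduced in this extraction. The standard ASTM E 112 expression in terms of NA (grains per square millimetre at 1×) is assumed here; it reproduces a grain size of 7.74, consistent with the 7.68 and 0.06 figures discussed in the next paragraph.

```python
import math

def astm_grain_size(n_a_per_mm2):
    # Assumed relation: G = 3.322 * log10(N_A) - 2.954 (ASTM E 112 form)
    return 3.321928 * math.log10(n_a_per_mm2) - 2.954

print(round(astm_grain_size(1659), 2))  # 7.74
```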
In this particular example, the average area of the grains plus grain
boundaries is 77,424 µm², approximately 4.8% larger than the measured
grain area. If this value were used for the measured grain area, the
calculated grain size would be 7.68. Thus, the grain boundary area can
account for approximately 0.06 of a grain size number.
To ensure the same fields of view are used to measure grain size using
each of the three methods, a set of 10 images is first stored on the
computer hard drive. After storage, the images are recalled to proceed
with the grain size analysis.
Macro Development. Before developing the macros used to solve
these different problems, the different types of variables used are
considered. Four types of variables commonly encountered in IA macro
routines, and for most computer languages, are:
• Integer
• Real
• Character
• Boolean
Integer and real variables are used in situations where numbers are
required to describe a certain property. Examples of integer variables in
IA applications are an image number, field of view number, indexing
variable controlling a loop, and a line in a database. Variables such as the
system calibration constant or the area, length, and perimeter of an object
are examples of real variables. Real variables often are referred to as
floating-point variables, which means there is a decimal point associated
with the variables. Character-type variables include letters of the alphabet
and other symbols used to describe the properties of something. Character
often is abbreviated as “Char.” A group of characters placed together in
a specific order is referred to as a string. There are only two Boolean
variables, true and false. Boolean variables are used in macros to help
control how instructions are performed as the macro is running. Some
examples of the different types of variables are:
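The printed table of example variables did not survive this extraction; a sketch of the four types in Python terms (all names illustrative):

```python
image_number = 2           # integer: an image, field, or loop index
cal_const = 0.7695         # real (floating point): µm per pixel
channel = 'R'              # character
filename = "grain1.TIF"    # string: characters placed in a specific order
stage_ready = True         # Boolean: only True or False
print(type(cal_const).__name__)  # float
```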
The following examples briefly discuss how these variables may be used
in a macro routine. Again, each system may handle these types of
variables in a slightly different manner. For a more complete discussion of
variable types, the reader should refer to a basic computer textbook.
MACRO STORE IMAGES
1. SYSTEM SETUP
a. Delete previous images and graphics.
b. Load the shading corrector.
c. Set up the path for storing the images:
“C:\ASM_BOOK\GRAINS.”
2. Define String Variables:
a. s1 = ‘grain’
b. s2 = ‘.TIF’
FOR loop = 1; loop <= 10; loop = loop + 1
3. TVINPUT 1
4. Apply shading correction; resulting image is 2
5. s = s1 + string(loop) + s2
6. Write s
7. Save Image 2, s
8. Move to next field of view
END FOR
1. Delete from the system any images and graphics from previous
applications. Load shading corrector for later use. Set path to instruct
the computer where to store the images that will be used for further
analysis. In this example, images will be stored on the C-drive in
folder “ASM_BOOK\GRAINS.”
2. Define two string variables (their use is explained later). Create a loop
instruction; the loop control variable, or loop, varies from 1 through
10.
3. Once in the loop, acquire a live TV image numbered 1.
4. Apply the shading corrector to the live image (the resulting image is
identified as number 2).
5. Define a new string variable s, which is the sum of three other string
variables.
6. Computer is instructed to write the variable s. The first time through
the loop, loop = 1; thus, s = grain1.TIF. The second time through the
loop, s has the value grain2.TIF; the third time through the loop, s =
grain3.TIF, and so forth.
7. Rename image 2 as s and store to the computer C-drive in the
appropriate folder. Referring to the setup instructions, the first image
stored is “C:\ASM_BOOK\GRAINS\grain1.TIF,” the second image is
stored as “C:\ASM_BOOK\GRAINS\grain2.TIF,” and so forth.
8. Start the process again.
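The filename-building part of MACRO STORE IMAGES (instructions 2, 5, and 6) can be sketched in Python; the save-to-disk and stage-move steps are represented by the name alone, and forward slashes are used in place of the DOS backslashes for portability.

```python
s1, s2 = "grain", ".TIF"
names = []
for loop in range(1, 11):          # FOR loop = 1; loop <= 10; ...
    s = s1 + str(loop) + s2        # instruction 5: concatenate the strings
    names.append("C:/ASM_BOOK/GRAINS/" + s)
print(names[0])  # C:/ASM_BOOK/GRAINS/grain1.TIF
print(names[9])  # C:/ASM_BOOK/GRAINS/grain10.TIF
```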
Stored images can later be recalled for grain size analysis. Because the
grain size is determined using three different methods, grain boundaries
are reconstructed and the reconstructed grain images are saved to disk for
additional analysis. The macro used to reconstruct the grains is:
MACRO RECONSTRUCT GRAINS
1. SYSTEM SETUP
a. Delete previous images and graphics.
b. Load the shading corrector.
c. Set up the path for storing the images:
“C:\ASM_BOOK\GRAINS.”
2. Define String Variables:
a. s1 = ‘grain’
b. s2 = ‘.TIF’
c. b1 = “bin_gr”
d. b2 = “.img”
FOR loop = 1; loop <= 10; loop = loop + 1
s = s1 + string(loop) + s2
bb = b1 + string(loop) + b2
write s, bb
System setup for this macro is similar to that for MACRO STORE
IMAGES. Two additional string variables, b1 and b2, are defined to store
the reconstructed images, while s1 and s2 are used to retrieve the
previously stored gray images of the grain boundaries. A loop is created
to process ten different fields of view using the same variables as the
previous macro. Step 3 loads the previously stored images from the
computer; the image to be loaded is defined as s, and the new image is
referred to as 1. The terminology used for image processing is image
process input, output. For any image processing step, the first number or
name is the input image, and the second number or name is the output
image. The new image is segmented so the grains are detected (instruction
4b). Next, a binary hole filling operation (instruction 4c) helps remove
undetected regions within the grains. The image is then inverted so that
boundaries are an operational part of the image (instruction 4d). The
octagonal binary close by 2 operator first dilates the grain boundaries by
two pixels. This joins any pixels that are within four pixels of each other.
Then the image is eroded by two pixels restoring most of the grain
boundaries to their original thickness (instruction 4e). Another image
inversion (instruction 4f) makes the grains the portion of the image that
can be processed. A binary exoskeletonize operation thins the grain
boundaries that were enlarged as a result of the earlier operations
(instruction 4g). The reconstructed binary image is saved to the computer
drive, and the image number is defined by the value of the variable bb.
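The close operation of instruction 4(e) can be sketched with plain numpy. A real IA system supplies these operators as built-ins with octagonal structuring elements; the square 3 × 3 neighborhood and single close pass below are only an illustrative stand-in.

```python
import numpy as np

def dilate(a):
    # OR of the 3x3 neighborhood of each pixel (zero-padded borders)
    p = np.pad(a, 1)
    out = np.zeros_like(a)
    for dy in range(3):
        for dx in range(3):
            out |= p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out

def erode(a):
    # AND of the 3x3 neighborhood of each pixel
    p = np.pad(a, 1)
    out = np.ones_like(a)
    for dy in range(3):
        for dx in range(3):
            out &= p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out

def binary_close(a, n=1):
    for _ in range(n):
        a = dilate(a)     # join pixels separated by a small gap
    for _ in range(n):
        a = erode(a)      # restore the boundaries to near-original width
    return a

boundaries = np.zeros((5, 5), dtype=bool)
boundaries[2, :] = True
boundaries[2, 2] = False           # a one-pixel break in a boundary line
closed = binary_close(boundaries)
print(bool(closed[2, 2]))  # True  (the gap has been bridged)
```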
1. SYSTEM SETUP
a. Previously used variables are created.
b. Calibration factor for 32× objective lens; Calconst = 0.7695
µm/pixel
c. FIELD FEATURES to be measured are defined as “COUNT.”
d. DRAW measured field features in color.
e. LOAD IMAGE “3_circles.tif,” “circles.”
f. b1 ⫽ “bin_gr”
g. b2 ⫽ “.img”
FOR loop = 1; loop <= 10; loop = loop + 1
bb = b1 + string(loop) + b2
Feature-Specific Distributions
total observed area of 54.3 mm² (0.084 in.²). Length (maximum Feret),
thickness (minimum Feret), area, perimeter, anisotropy (length divided by
thickness), and roundness (R = P²/4πA) are measured for each inclusion.
These types of measurements are referred to as feature specific because
properties associated with each individual constituent in the image frame
are measured. In contrast, field measurements represent the sum of a
particular parameter over an entire field of view.
The specimen is oriented on the microscope stage so the longitudinal
plane of deformation (i.e., the rolling direction) is coincident with the
y-axis of the stage (Fig. 17). Shading correction is applied after acquiring
an image, and binary operations are used following image segmentation.
Any holes within the inclusions are filled (Fig. 17b), and a vertical close
operation of 3 pixels is used to join inclusions separated from each other
by a distance of approximately 4.6 µm in the deformation direction (Fig.
17c). A vertical open of 1 pixel is used to eliminate from the population
any inclusions less than 1.5 µm long (Fig. 17d), and a square binary close
of 2 pixels is used to join any inclusions within 3 µm of each other in any
direction (Fig. 17e). Results of measurements for each field of view are
written to a database, and statistical parameters, such as sums of
parameters and sums of the squares of the parameters, are tabulated (see
Table 4).
The inclusion length histogram (Fig. 18) is skewed to the right; that is,
there are many more size classes of long inclusions than of short
inclusions relative to the mode of the distribution (see data in Table 5). A plot of the
log of inclusion length as a function of cumulative probability (Fig. 19)
suggests that the inclusion length has a log-normal distribution. The
statistical parameters of the log-normal distribution are calculated using
the length values listed in Table 4. The mean value of the inclusion length
is given by:
µ = (Σ_{i=1}^{n} L_i) / n   (Eq 9)
Fig. 17 AISI type 8620 alloy steel bar (UNS G86200): (a) inclusions; (b) holes filled; (c) vertical close of 3 pixels;
(d) vertical open of 1 pixel; (e) square close of 2 pixels
σ² = [n · Σ_{i=1}^{n} L_i² − (Σ_{i=1}^{n} L_i)²] / [n · (n − 1)]   (Eq 10)
Using these parameters, the mean of the log-normal distribution, α, can be
calculated by:

α = ln [µ / √((σ/µ)² + 1)]   (Eq 11)

and the standard deviation of the log-normal distribution, β, by:

β = √(ln [(σ/µ)² + 1])   (Eq 12)
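Equations 9 to 12 can be checked numerically; the helper below computes the sample statistics and then the log-normal parameters. The name beta for the Eq 12 result is an assumption, since the printed symbol did not survive this extraction, and the input lengths are illustrative values, not data from Table 4.

```python
import math

def lognormal_params(lengths):
    n = len(lengths)
    total = sum(lengths)
    mu = total / n                                                    # Eq 9
    var = (n * sum(L * L for L in lengths) - total ** 2) / (n * (n - 1))  # Eq 10
    alpha = math.log(mu / math.sqrt(var / mu ** 2 + 1.0))             # Eq 11
    beta = math.sqrt(math.log(var / mu ** 2 + 1.0))                   # Eq 12
    return mu, var, alpha, beta

mu, var, alpha, beta = lognormal_params([5.0, 10.0, 20.0, 40.0])
print(round(mu, 2))  # 18.75
```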
Fig. 18 Inclusion-length (Feret max) histogram for AISI type 8620 alloy steel
containing 0.029% sulfur
Table 5 Inclusion length distribution data of an AISI 8620 alloy steel containing 0.029% sulfur

Length, µm   Frequency   Cumulative frequency   Cumulative probability, %
1. SETUP
a. Load calibration factors.
b. Load shading corrector.
c. Define feature properties to be measured.
d. Define color set to display measured features.
e. Measurement frame definition
f. Define stage scanning parameters (15 X fields, 20 Y fields).
g. Focus stage at corner positions.
WHILE (STAGE)
END WHILE
D = √(4 · A / π)   (Eq 16)
Table 7 Statistical parameters from feature-specific grain data for ingot iron

Parameter   Area, µm²       Perimeter, µm   Feret_min, µm   Feret_max, µm   Anisotropy, length/thickness   Roundness, P²/4π·A
Mean        537.87          90.02           20.18           33.06           1.64                           1.43
Stdev       560.16          47.74           9.89            17.54           0.36                           0.20
Sum X       1,034,315       173,105         38,812          63,568          3156.8                         2754.3
Sum X²      1,159,395,781   19,963,446      971,502         2,692,816       5429.1                         4023.4
Min         7.11            9.86            2.33            3.84            1.09                           1.09
Max         5660.59         370.76          65.45           133.82          3.50                           2.62
P, perimeter; A, area
Fig. 21 Carbide distribution in AISI M 42 high-speed tool steel: (a) microstructure revealed by Marble’s reagent etch
(4% nital) in optical micrograph; (b) secondary electron image of etched specimen; (c) backscattered electron
image of unetched specimen; (d) digitized backscattered electron image (four grays)
AI = NL⊥ / NL∥   (Eq 17)

Ω₁₂ = (NL⊥ − NL∥) / (NL⊥ + 0.571 · NL∥)   (Eq 18)

SB⊥ = 1 / NL⊥   (Eq 19)

λ⊥, or mean free path (mean edge-to-edge spacing of the bands), is given
by:

λ⊥ = (1 − VV) / NL⊥   (Eq 20)
AI = F90 / F0   (Eq 21)
For a field of view containing n objects, the anisotropy index for all the
objects is:
AI = (Σ_{i=1}^{n} F90_i) / (Σ_{i=1}^{n} F0_i)   (Eq 22)
However, for simple geometric shapes such as this, the anisotropy index
can be represented by the corresponding field parameters:
AI = HP / VP   (Eq 23)
Because one horizontal scan line contains M pixels, the true length of a
horizontal scan line follows. (Note that braces, { }, are used to indicate
units.)
Fig. 23 Schematic view of monitor having M pixels along the horizontal axis
and N pixels along the vertical axis
Y = HP / cc, {pixels}   (Eq 27)
For this field of view, the number of intercepts per unit length is the total
number of intercepts divided by the total length of the test line:
nL = Y / L = (HP / cc) / (M · N · cc)   (Eq 28)
or
nL = HP / (M · N · cc²)   (Eq 29)
Because M × N is the total number of pixels in the frame and cc² is the
true area of each pixel:
nL⊥ = HP / Frame Area, {µm⁻¹}   (Eq 30)

nL∥ = VP / Frame Area, {µm⁻¹}   (Eq 31)
Consequently, by making two rapid field measurements and using the true
area of the image frame, the stereological parameters described in Eq 17
through 20 are easily calculated using an IA system. Other stereological
parameters can be obtained in a similar manner.
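The banding parameters of Eq 17 through 20 can be sketched directly from field counts. NL_perp and NL_par are interceptions per unit test-line length perpendicular and parallel to the banding, and VV is the volume fraction of the banded phase; the input values below are hypothetical, not data from the text.

```python
def banding_parameters(nl_perp, nl_par, vv):
    ai = nl_perp / nl_par                                      # Eq 17
    omega12 = (nl_perp - nl_par) / (nl_perp + 0.571 * nl_par)  # Eq 18
    sb_perp = 1.0 / nl_perp                                    # Eq 19
    lambda_perp = (1.0 - vv) / nl_perp                         # Eq 20
    return ai, omega12, sb_perp, lambda_perp

ai, omega12, sb_perp, lambda_perp = banding_parameters(0.05, 0.02, 0.25)
print(round(ai, 2))  # 2.5
```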
1. SETUP
a. Clear old images and graphics.
b. Define variables.
xstart = 0
ystart = 0
xend = 0
yend = 480
FOR N = 1; N <= 639; N = N + 2
xstart = N
xend = N
Draw Vector: xstart, ystart, xend, yend, color = 15
END FOR
This is a very simple macro to construct. For all passes through the loop,
ystart equals 0 and yend equals 480. The first time through the loop, xstart
is N and xend is N. The draw vector function draws a line from the pixel
having screen coordinates P1 = (xstart, ystart) to the pixel having
coordinates P2 = (xend, yend). Numerically, the coordinates are P1 (1, 0)
and P2 (1, 480). The second time through the loop, N = 3 and the new
vector is drawn through points P1 (3, 0) and P2 (3, 480). Repeating the
process eventually results in a grid of 320 vertical lines. If the particular
IA system being used does not have these graphic functions, a measuring
frame having a width of one pixel can be created. A similar grid of lines
is constructed by detecting everything within this frame and moving it
along.
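The same grid can be sketched with a numpy array standing in for the 640 × 480 screen; lines are drawn at every odd x coordinate.

```python
import numpy as np

screen = np.zeros((480, 640), dtype=np.uint8)
for n in range(1, 640, 2):   # N = 1, 3, 5, ..., 639
    screen[:, n] = 15        # Draw Vector from (n, 0) to (n, 480)

print(int(screen.any(axis=0).sum()))  # 320
```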
Carpet Fibers. Manufacturers of carpet fibers routinely use IA
techniques to check quality control (Ref 18). Carpet fibers have a trilobal
shape in a cross-sectional, or transverse, view (Fig. 25a). One measure of
carpet-fiber quality is the modification ratio, or MOD ratio (related to
carpet-fiber wear properties), wherein two circles are placed on a fiber
using either manual or automated procedures. The larger circle is sized so
it circumscribes the fiber, while the smaller circle is inscribed within the
fiber (Fig. 25b), and the MOD ratio is the area of the large circle divided
by the area of the small circle.
The MOD ratio decreases with decreasing sharpness of the lobes (Fig.
25c).
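As a worked example of the definition above (the two radii are hypothetical measured values in µm, not data from the text):

```python
import math

def mod_ratio(r_circumscribed, r_inscribed):
    area_big = math.pi * r_circumscribed ** 2
    area_small = math.pi * r_inscribed ** 2
    return area_big / area_small   # equal to (R / r) ** 2

print(round(mod_ratio(24.0, 12.0), 6))  # 4.0
```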
Large-Area Mapping (LAM). In most IA applications, looking at one
field of view of a particular specimen is like being in a submarine;
microstructural details are observed through a 100× porthole, but the big
picture is missing. In more scientific terms, high resolution is required to
detect, record, and evaluate microstructural details (Ref 19), but lower
magnification is required to present and retain the big picture, that is, the
context of the details on the macroscopic scale.
An ideal solution would be a “megamicroscope” to present a complete
picture of the sample at a low magnification, such as 100×. Large-area
mapping, frequently referred to as image tiling, is a step in this direction.
The specimen is completely analyzed at high magnification, storing the
Fig. 24 Coating thickness determination: (a) plasma-sprayed coating; (b) graphically created vertical measuring lines;
(c) detected image and holes filled; (d) vertical close of 10 pixels; (e) coating thickness formed by Boolean
operator image (d) <AND> image (b)
Fig. 25 Carpet fiber analysis: (a) bundle of carpet fibers; (b) trilobal analysis, circles inscribed in and circumscribed
around fibers; (c) trilobal analysis, modification ratios of two fibers
images in the memory of the computer. The images then are displayed at
a reduced resolution to show the big picture, while regions of particular
interest can be analyzed at the higher resolution. The following examples
illustrate the use of this procedure.
Detection of Aluminosilicate Inclusions. During the process of continuous
casting of steel, the metal flowing out of the weir is “washed” constantly
by blowing inert argon gas through it. Gas bubbles physically draw
nonmetallic material with them towards the surface where the slag can be
separated away from steel. Optimizing this process brings certain cost
advantages. Blowing too much gas is expensive, but decreasing the gas
flow too much results in lower steel quality (that is, higher inclusion
content). Proper process control minimizes gas use and maintains product
quality.
Aluminosilicate inclusions appear as finely dispersed clusters, visible
only at 100⫻ or higher magnification. Typical frequency is 1 cluster/cm2,
which means that only one field in about two hundred checked will
possibly be of interest.
The usual method to check material quality is to ultrasonically test the
product after hot working, which is very costly and time consuming.
Thus, this situation provides an excellent opportunity to apply LAM. A
large number of fields of view are scanned automatically, and field
analysis results are digitized and put into a composite picture to show the
microcharacteristics of the sample at a macroscale (Fig. 26a). After
completing the scan, the analyst can use the composite picture as a sort
of a map and position the sample to analyze points of interest (Fig. 26b
and c).
Detecting and Evaluating Cracks. A similar analysis can be applied to
long, thin objects, such as a crack, where the features are too long to be
detected completely as one single object at a magnification required to see
the thickness satisfactorily. The image shown in Fig. 27 illustrates a
typical case, and the solution to the problem is discussed subsequently.
The crack in the image is approximately 5 mm (0.2 in.) long with an
average thickness of approximately 50 µm, so a macro-objective must be
used to see the complete crack. However, a macro-objective reduces the
thickness of the crack to a few pixels or can, in some cases, make the
crack disappear, causing artifacts.
The problem is solved using a scanning stage and tiling neighboring
images into one complete image. The positioning accuracy of the
motorized stage generally is satisfactory; however, if accuracy is a
concern or if the stage is not motorized, then it is possible to let the
computer decide on the optimal tiling arrangement, as shown in the
sequence of images in Fig. 28.
The three images show a traverse over a steel sample containing
hydrogen-induced cracks. The tile positions (rectangular inserts) are
adjusted to get the optimal overlap, the quality of which is indicated in the
Fig. 26 Detection of aluminosilicate inclusions in steel using large-area mapping. (a) Composite picture of 2000 fields
of view at 100⫻ from a specimen with dimensions ⬃4 ⫻ 2 cm (1.5 ⫻ 0.75 in.). Arrows point to areas having
easily discernable inclusion clusters along with polishing artifacts, mostly scratches. (b) and (c) Inclusion clusters indicated
by arrows. Source: Ref 19
oval inserts. In most cases, the automated scanning stage is used to move
the sample from one field to another and to ensure that a statistically
significant number of measurements, either in terms of images analyzed
or in terms of objects of interest, is observed and evaluated. Automated
stage control is used in the preceding two cases to do large-area mapping
to achieve the required resolution while retaining the big picture.
References
CHAPTER 9
Color Image Processing
Vito Smolej
Carl Zeiss Vision
Modeling Color
Fig. 1 Color wheels. (a) Additive. (b) Subtractive. Please see endsheets of
book for color versions.
Color Spaces
Fig. 2 Color space models. (a) Red-green-blue (RGB) and cyan-magenta-yellow (CMY). (b) Lab space. (c)
Hue-lightness-saturation (HLS) space. (d) Munsell space. Please see endsheets of book for color
versions.
The RGB space model is the color system on which all TV cameras,
and thus all color digitization and color processing systems, are based. It
can be viewed as a cube with three independent directions (red, green,
and blue) spanning the available space. The CMY system of coordinates
is a straightforward complement of RGB space.
Lab Space Model. The body diagonal of the RGB cube contains the grays
(in other words, colorless information about how light or dark the signal
is). The Lab coordinate system is obtained by turning the body diagonal of
the RGB cube into a vertical position. The Lab color system is one of the
standard systems in colorimetry (i.e., the science of measuring colors).
Here L, or lightness coordinate, quantifies the dark-light aspect of the
colored light. The two other coordinate axes, a and b, are orthogonal to
the L axis. They are based on the opponent-colors approach. In fact, a
color cannot be red and green at the same time, or yellow and blue at the
same time, but can very well be, for example, both yellow and red, as in
oranges, or red and blue, as in purples. Thus, redness or greenness can be
expressed as a single number a, which is positive for red and negative for
green colors. Similarly the yellowness/blueness is expressed by the
coordinate b.
The YIQ and YUV systems used in television broadcasting are defined
in a similar manner. The Y component, called luminance, stands for the
color-independent component of the signal (what one would see on a
black and white TV screen) and is identical to the lightness of the Lab
system. The two remaining coordinates define the specific color. The
following set of equations defines, for instance, the RGB-YIQ conver-
sion:
Y = 0.299 R + 0.587 G + 0.114 B
I = 0.596 R − 0.274 G − 0.322 B
Q = 0.211 R − 0.523 G + 0.312 B
Note that Y is a weighted sum of the three intensities, with green standing
for 58.7% of the final value. Note also, that I is roughly the difference
between the red and the sum of green and blue. In other words, the I
coordinate spans colors between red and cyan. Q is analogously defined
as the difference between a combination of red and blue (i.e., magenta)
and green. If all three signals are equal, that is, R = G = B = L, then the
matrix will provide the following values: Y = L, I = 0, and Q = 0. This
corresponds to a black and white image of intensity L, as expected. A
similar matrix is used to transform from RGB to YUV. Here the U and V
axes span roughly the green-magenta and blue-yellow directions.
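The RGB-to-YIQ conversion can be written as a matrix product; the sketch below uses the coefficients given above and checks the gray-pixel property.

```python
import numpy as np

M = np.array([[0.299,  0.587,  0.114],
              [0.596, -0.274, -0.322],
              [0.211, -0.523,  0.312]])

def rgb_to_yiq(rgb):
    return M @ np.asarray(rgb, dtype=float)

# A gray pixel (R = G = B = L) maps to Y = L, I = 0, Q = 0:
y, i, q = rgb_to_yiq([0.5, 0.5, 0.5])
print(round(y, 3), abs(round(i, 3)), abs(round(q, 3)))  # 0.5 0.0 0.0
```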
As mentioned above, the RGB-to-YIQ signal conversion is useful for
transmission: the I and Q signals can be transmitted at lower resolutions
without compromising the signal quality, eventually improving the use of
the broadcast channel or, alternatively, allowing for enhanced Y signal
quality. When the signal is received, it can be transformed back into the
RGB space, for instance, to display it on the TV monitor.
HLS Space Model. Both RGB and YIQ/YUV coordinate systems are
hardware based. RGB comes from the way camera sensors and display
phosphors work, while YIQ/YUV stems from broadcast considerations.
Both are intricately connected—there is no RGB camera without broad-
casting and vice versa.
Color Image Processing / 261
Two aspects of human vision come up short in this context. The first one
is the hue, or color tone. For example, a specific blue color might be used
to color-stain a sample for image analysis. The results from staining may
be stronger or weaker, but it is still the same color hue. The second aspect,
which is, in a sense, complementary to the color hue, is color saturation,
that is, how brilliant or pure a given color is as opposed to washed-out,
weak, or grayish. In this case, we would be looking for a specific color
hue without any specific expectations regarding the color saturation and
lightness.
HLS color space, which takes into account these two aspects of human
vision, can be viewed as a double cone, hexagonal or circular, as shown
in Fig. 2. The axis of the cone is the gray scale progression from black to
white. The distance from the central axis is the saturation, and the
direction or the angle is the hue.
The transformation from RGB to HLS space and back again is
straightforward with digital images. Consequently, much interactive
image analysis is done in HLS space, because HLS approximates human
color perception.
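The transformation can be sketched with Python's standard colorsys module (channel values scaled to the 0-1 range; colorsys returns hue as a fraction of a full turn rather than as an angle in degrees):

```python
import colorsys

def rgb8_to_hls(r, g, b):
    """Convert 8 bit RGB values to (hue, lightness, saturation),
    each in the range 0-1."""
    return colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)

def hls_to_rgb8(h, l, s):
    """Inverse transformation back to 8 bit RGB."""
    r, g, b = colorsys.hls_to_rgb(h, l, s)
    return round(r * 255), round(g * 255), round(b * 255)
```

Pure red (255, 0, 0), for example, maps to hue 0, lightness 0.5, and full saturation 1.0.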
Munsell Space. This system is mentioned for the sake of completeness.
It consists of a set of standard colored samples used for visual
comparison of colors, with wide use in the textile industry, printing,
and forensics. The colors of the standard samples were chosen in a
perceptually "equidistant" fashion, so that a close (enough) match in
the Munsell set of samples can be found for every imaginable color (see
Fig. 2). Thus, the Munsell system is based on how the human eye, and
not some electronic component, sees colors. Note that the Munsell
system has more red than green samples: the human eye resolves reds far
better than greens.
Color Images
It has been mentioned elsewhere in this volume that black and white
(B/W) images are digitized at 8 bit resolution; that is, they are digitized
at 256 gray levels per image pixel. In case of a color image, essentially
three detectors are used: one for each of the red, green, and blue primary
colors. Thus, a color image is a triplet of gray-level images.
If every color component comprises 2^8 (256) gray levels, then the
number of possible colors equals 2^24 (≈16 million). In
reality, the number is much smaller, such as in the color image in Fig. 3,
where the number of colors is approximately 249. Note that these 249
colors should not be confused with 256 gray levels of a typical B/W
image; they contain information that would not be available were it not
for the color.
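Counting the colors actually present in an image amounts to counting distinct (R, G, B) triplets; a minimal numpy sketch:

```python
import numpy as np

def count_colors(rgb):
    """Count the distinct (R, G, B) triplets in an H x W x 3 uint8 image."""
    flat = rgb.reshape(-1, 3)          # one row per pixel
    return len(np.unique(flat, axis=0))
```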
Fig. 3 Color image and its red-green-blue (RGB) components. (a) Original image. (b) Red channel. (c) Green channel.
(d) Blue channel. The original color image is a test image used to check for color blindness; not seeing the
number 63 is one manifestation of color blindness. Please see endsheets of book for color versions.
Fig. 4 Conversion between red-green-blue (RGB) and hue-lightness-saturation (HLS) color space. (a) Original image. (b)
Red. (c) Green. (d) Blue. (e) Hue. (f) Lightness. (g) Saturation. Please see endsheets of book for color versions.
1. ConvertToHLS Input,2
2. SatImage = "2[1,3]"
3. LitImage = "2[1,1]"
4. Normalize SatImage, SatImage
5. Binnot LitImage, LitImage
6. Multiply LitImage, LitImage, LitImage, 255
7. Binnot LitImage, LitImage
8. ConvertToHLS 2,Output
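Read literally, the listing converts the image to HLS, stretches the saturation channel to the full range, and applies a squaring correction to the inverted lightness channel before converting back. Under the assumption that Binnot inverts an 8 bit channel (255 − x) and Multiply rescales the product by the final argument, the channel operations (steps 2 to 7) can be sketched in numpy:

```python
import numpy as np

def enhance_hls_channels(lightness, saturation):
    """Mimic steps 2-7 of the macro on 8 bit L and S channel arrays."""
    s = saturation.astype(np.float64)
    # Step 4: Normalize - stretch saturation to the full 0-255 range.
    s = (s - s.min()) * 255.0 / (s.max() - s.min())
    l = lightness.astype(np.float64)
    # Steps 5-7: invert lightness, square it (rescaled by 255), invert back.
    l = 255.0 - l
    l = l * l / 255.0
    l = 255.0 - l
    return l.astype(np.uint8), s.astype(np.uint8)
```

The net effect on lightness is to brighten midtones while leaving pure black and pure white unchanged.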
Fig. 5 Color image processing. (a) Original image. (b) After image processing. (c) Detail before image processing. (d)
Detail after image processing. Please see endsheets of book for color versions.
Color Discrimination
Fig. 6 Interactive color discrimination. (a) Original image; the color to be discriminated is circled by the user. (b) Points
with the indicated range of colors are marked green. The user now adds another region. (c) Interactive
discrimination is complete. Please see endsheets of book for color version.
Fig. 7 Example of a user interface for color discrimination. The user interface
offers all the quantitative data that might be of interest. Please see
endsheets of book for color version.
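What the interactive tool does internally amounts to a range test in HLS space. A sketch follows (function and parameter names are illustrative, not the actual interface of any particular package; hue is taken in degrees, and a range that wraps around 0/360, as reds do, is handled explicitly):

```python
import numpy as np

def discriminate(hue, lit, sat, hue_range, lit_range, sat_range):
    """Return a binary mask of pixels whose HLS values fall inside
    the given (low, high) ranges. Hue is in degrees, 0-360."""
    h_lo, h_hi = hue_range
    if h_lo <= h_hi:
        h_ok = (hue >= h_lo) & (hue <= h_hi)
    else:                        # range wraps around 0/360, e.g. reds
        h_ok = (hue >= h_lo) | (hue <= h_hi)
    l_ok = (lit >= lit_range[0]) & (lit <= lit_range[1])
    s_ok = (sat >= sat_range[0]) & (sat <= sat_range[1])
    return h_ok & l_ok & s_ok
```

Circling another region, as in Fig. 6(b), corresponds to widening these ranges and OR-ing the resulting masks together.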
Color Measurement
Thus far in the discussion, color has been used only to detect the actual
phase or part of the sample to be analyzed. Color also can be measured
and quantified by itself, in a manner similar to densitometric measure-
ments used to determine gray levels in gray images. For color-densito-
metric measurement, the original color image (Fig. 8a) and the image of
regions of objects (the binary image of spots in Fig. 8b) are required. Data
extracted (one set for every spot) include the x and y position, RGB
averages, and RGB standard deviations.
Given the RGB values, it is possible to calculate the values for
alternative color models, for instance, CMY or Lab. Note that to be able
to compare measurements, the experimental conditions, such as the color
temperature of the illumination, should be kept constant.
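Assuming the spots in the binary image have already been labeled, each spot carrying an integer identifier with 0 reserved for background, the per-spot data listed above can be sketched in numpy:

```python
import numpy as np

def spot_stats(rgb, labels):
    """For each labeled spot, return centroid (x, y), per-channel RGB
    mean, and per-channel RGB standard deviation."""
    stats = {}
    for spot_id in np.unique(labels):
        if spot_id == 0:                 # 0 = background
            continue
        ys, xs = np.nonzero(labels == spot_id)
        pixels = rgb[ys, xs].astype(np.float64)   # N x 3 array
        stats[int(spot_id)] = {
            "x": xs.mean(), "y": ys.mean(),
            "rgb_mean": pixels.mean(axis=0),
            "rgb_std": pixels.std(axis=0),
        }
    return stats
```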
Fig. 8 Color densitometry. Spot positions and their average colors (together
with the standard deviations) can be extracted from the two images. (a)
Color image to be evaluated. (b) Binary image of spots. Please see endsheets of
book for color version.
Fig. 9 Quantitative analysis of color sample to determine volume percent of phases. (a) Original sample. (b) Selection
of brown phase using interactive discrimination; brown phase is selected by circling a typical part of the phase.
(c) Results of color discrimination in hue-lightness-saturation (HLS) space; phase is black, background is white. (d) Two other
phases are detected and color coded to allow easy comparison. Please see endsheets of book for color versions.
Phase    Vol%
Red       8.80
Green     3.59
Blue     19.98
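The volume percentages above follow from the area fraction of each phase's binary image, using the standard stereological identity between area fraction and volume fraction. A minimal sketch:

```python
import numpy as np

def volume_percent(mask):
    """Volume percent of a phase from its binary image: the area
    fraction of detected pixels estimates the volume fraction."""
    return 100.0 * np.count_nonzero(mask) / mask.size
```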
Conclusions
Acknowledgment