2. Installation instructions 6
2.1. Installation in ImageJ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2. Installation in Fiji . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
4. Morphological filtering 13
4.1. Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.2. Grayscale morphological filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.3. Directional filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.4. Plugin Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
6. Watershed segmentation 30
6.1. Classic Watershed plugin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
6.2. Marker-controlled Watershed plugin . . . . . . . . . . . . . . . . . . . . . . . . . 33
6.3. Interactive Marker-controlled Watershed plugin . . . . . . . . . . . . . . . . . . 35
6.4. Morphological Segmentation plugin . . . . . . . . . . . . . . . . . . . . . . . . . . 38
6.5. Distance Transform Watershed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
7. Measurements 47
7.1. Region Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.2. Region Analysis 3D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
7.3. Intensity measurements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
7.4. Label Overlap measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
7.5. Microstructure analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
8. Binary images 69
8.1. Distance transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
9. Label images 79
9.1. Visualization of label images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
9.2. Visualization of region features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
9.3. Label image processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
9.4. Region adjacencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
9.5. Label Edition plugin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
10.Library interoperability 90
10.1.Library organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
10.2.Scripting MorphoLibJ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Bibliography 93
A. List of operators 97
• morphological filtering for 2D/3D and binary or grayscale images: erosion & dilation,
closing & opening, morphological gradient & Laplacian, top-hat...
The home page of the project is located at http://ijpb.github.io/MorphoLibJ/, and the source code can be found on GitHub: http://github.com/ijpb/MorphoLibJ. The exhaustive code documentation includes many use-case examples, and can be found online at http://ijpb.github.io/MorphoLibJ/javadoc/.
The reference publication for MorphoLibJ is Legland et al. (2016).
2. Installation instructions
2.1. Installation in ImageJ
To install the MorphoLibJ toolkit in ImageJ, you only need to download the latest release (in
the form of a JAR file) into ImageJ’s plugins folder and restart ImageJ.
All released versions can be found at https://github.com/ijpb/MorphoLibJ/releases.
2.2. Installation in Fiji

1. Open Fiji and select Help > Update... from the Fiji menu to start the updater.

2. Click on Manage update sites.

3. This brings up a dialog where you can activate additional update sites:
4. Activate the MorphoLibJ update site and close the dialog. Now you should see an
additional jar file for download:
5. Finally, click on Apply changes and restart Fiji. All the plugins should be available after
restarting.
Contents
3.1. Digital images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.1.1. Data types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.1.2. Image types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.1.3. 3D images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.2. Coordinate system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.3. Digital connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.3.1. 2D connectivities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.3.2. 3D connectivities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.1.3. 3D images
Most algorithms from mathematical morphology (and more generally, many image processing algorithms) extend naturally from two dimensions to three dimensions. Within ImageJ, 3D images are represented as “image stacks”, seen as stacks of 2D arrays. Elements of a 3D image are usually referred to as “voxels”, for volumetric picture elements.
3.3.1. 2D connectivities
For planar images, typical choices are the 4- and the 8-connectivities. The 4-connectivity
considers only the orthogonal neighbors of a given pixel (Figure 3.1-a). The 8-connectivity
also considers the diagonal pixels (Figure 3.1-b).
The discrete nature of images results in potential problems when considering geometric properties of reconstructed structures. A typical example is the Jordan curve theorem, which states that any curve in the plane that does not self-intersect divides the plane into exactly two regions: one interior and one exterior (Figure 3.2-a).
When choosing a digital connectivity and considering reconstructed curves, the theorem is not always true, as a 4-connected curve may create several disconnected interior regions (Figure 3.2-b). The common solution is to consider pairs of adjacencies, one for the foreground (here the curve) and one for the background. One can check that the two interior regions are 8-connected, making the Jordan curve theorem valid for the (4,8)-adjacency pair.

Figure 3.2.: Jordan curve theorem in the continuous plane and in the digital plane. The digitization of a 4-connected curve results in the creation of two inner regions that are not 4-connected.
3.3.2. 3D connectivities
In 3D, two connectivities are commonly used. The 6-connectivity considers only orthogonal
neighbors of the center voxel along the three principal directions (Figure 3.3-a). The 26-
connectivity considers all the possible neighbors within a 3 × 3 × 3 cube centered around the
reference voxel (Figure 3.3-b). As for the 2D case, one often considers pairs of complemen-
tary adjacencies for the foreground and for the background (Kong & Rosenfeld, 1989; Ohser
& Schladitz, 2009).
Contents
4.1. Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.2. Grayscale morphological filters . . . . . . . . . . . . . . . . . . . . . . . . 14
4.2.1. Grayscale erosion and dilation . . . . . . . . . . . . . . . . . . . . . 14
4.2.2. Opening and closing . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.2.3. Morphological gradients . . . . . . . . . . . . . . . . . . . . . . . . 16
4.2.4. Top-hats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.3. Directional filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.4. Plugin Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.4.1. Planar images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.4.2. 3D images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.4.3. Directional Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.1. Principles
The original idea was to define a methodology to describe shapes by using another shape as a test probe (Serra, 1982; Serra & Vincent, 1992). The test probe is usually referred to as the structuring element. Common structuring elements include squares, discrete disks, and octagons. Linear structuring elements of various orientations may also be used to assess the local orientation of the structures.
The most basic morphological operators are the morphological dilation and the morphological erosion. The principle of morphological dilation is to test, for each point of the plane, whether the structuring element centred on this point intersects the structure of interest (Fig. 4.1). It results in a set larger than the original set.
The principle of morphological erosion is to test, for each point of the plane, whether the structuring element centred on this point is contained within the original set. It results in a set smaller than the original set.
Figure 4.1.: Principle of morphological dilation and erosion on a binary set, using a disk-shaped structuring element. (a) Binary set X and structuring element B. (b) Dilation of X by B. (c) Erosion of X by B.
Morphological dilation and erosion change the size of the resulting set. They may also change its topology: after a dilation, components may merge and holes may be filled; after an erosion, components may disappear, or be separated into several parts.

4.2.1. Grayscale erosion and dilation

For grayscale images, the morphological dilation replaces each pixel value by the maximum value within the neighborhood (defined by the structuring element), whereas the morphological erosion considers the minimum value within the neighborhood (Fig. 4.2).
Figure 4.2.: Some examples of morphological filters on a grayscale image. Original image, result of dilation with a square structuring element, and result of erosion with the same structuring element.
Applying a dilation or an erosion changes the size of the structures in the image: the grains in the dilated image are larger. As for binary sets, these operations may also merge, separate, or remove some components of the image.
Figure 4.3.: Grayscale morphological closing and opening. (a) Original image. (b) Result of
morphological closing. (c) Result of morphological opening.
Figure 4.4.: Enhancing segmentation with morphological filters. (a) Original image representing a section of maize tissue. (b) Binarisation by thresholding (inverted LUT). (c) Morphological closing removes most holes within the bundles (inverted LUT). (d) Morphological opening removes small debris outside of the vascular bundles (inverted LUT).
The application of a morphological closing on the binary image consolidates the structures of interest by removing small holes within them (Figure 4.4-c). Then, a morphological opening removes the small particles that do not correspond to the vascular bundles (Figure 4.4-d). The result is a binary image showing only the vascular bundles.
Figure 4.5.: Some examples of morphological filters resulting from the combination of elementary morphological filters. (a) Morphological gradient on a grayscale image. (b) Morphological gradient on the binary image represented in Figure 4.4-d (inverted LUT). (c) Morphological Laplacian on a grayscale image.
4.2.4. Top-hats
The white top-hat operator first computes a morphological opening (removing bright structures smaller than the structuring element), then subtracts the result from the original image. When applied with a large structuring element, the result is a homogenization of the background, making bright structures easier to segment (Figure 4.6-c).
Figure 4.6.: Top-hat filtering. (a) Original grayscale image, showing inhomogeneous background. (b) Result of the White Top-Hat operator using a square with radius 20 as structuring element. (c) Profile plots along the line region of interest for each image.
Similarly, the dark top-hat can be used to enhance dark structures observed on a non-homogeneous background.
Figure 4.7.: Filtering of a thin structure. (a) Original image representing apple cells observed with confocal microscopy (Legland & Devaux, 2009). The application of a Gaussian filter (b) or median filter (c) results in noise reduction, but also in a loss of the signal along the cell walls. The directional filtering (d) better preserves the thickness of the structure.
Figure 4.8.: Principle of directional filtering of a thin structure. (a) and (b): result of a median filter using a horizontal and a vertical linear structuring element. (c) and (d): combination of the results obtained from two directions (horizontal and vertical) and four directions (by adding diagonal directions).
The results of oriented filters for each direction can be combined by computing the maximum value over all orientations (Fig. 4.7-d). Figures 4.8-c and 4.8-d show the results obtained when combining two or four directions. Here, 32 orientations of lines with length 25 were used. This results in the enhancement of the image while preserving the thickness of the bright structures.
Similar results may be obtained for enhancing dark curvilinear structures, by using morphological closings or median filters, and combining the results by computing the minimum over all directions.
erosion keeps the minimum value within the neighborhood defined by the structuring element.
dilation keeps the maximum value within the neighborhood defined by the structuring element.
closing consists in the succession of a dilation with an erosion. Morphological closing makes dark structures smaller than the structuring element disappear.
black top-hat consists in subtracting the original image from the result of a morphological closing, and results in the enhancement of dark structures smaller than the structuring element.
white top-hat consists in subtracting the result of a morphological opening from the original image, and results in the enhancement of bright structures smaller than the structuring element.
• disk
• square
• octagon
• diamond
4.4.2. 3D images
Morphological filters for 3D images are available under “Plugins > MorphoLibJ > Morphological Filters (3D)”. The dialog lets the user choose the structuring element shape and radius. The same list of operations as for planar images is provided. Planar structuring elements can be used (the operation is simply repeated on each slice), as well as a cubic or spherical structuring element. For most structuring elements, the size can be chosen for each direction.
Type to specify how to combine the results of the oriented filters (typically the maximum for bright structures, the minimum for dark ones)
Operation the operation to apply with each oriented structuring element
Line Length the approximate length of the linear structuring element
Direction Number the number of oriented structuring elements to consider; to be increased when the line length is large
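As an illustration, the directional filtering of Figure 4.8-d could be reproduced with a macro call along the following lines; the parameter names mirror the dialog options above but are assumptions to be confirmed with the macro recorder.

// median filter over 32 line orientations of length 25, combined by maximum
run("Directional Filtering", "type=Max operation=Median line=25 direction=32");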
Contents
5.1. Morphological reconstruction . . . . . . . . . . . . . . . . . . . . . . . . 22
5.1.1. Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.1.2. Applications to binary images . . . . . . . . . . . . . . . . . . . . . 22
5.1.3. Applications to grayscale images . . . . . . . . . . . . . . . . . . . . 23
5.1.4. Application to the segmentation of chromocenters in a 3D image . 23
5.1.5. Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.2. Regional and extended extrema . . . . . . . . . . . . . . . . . . . . . . . 25
5.2.1. Regional minima and maxima . . . . . . . . . . . . . . . . . . . . . 25
5.2.2. Extended minima and maxima . . . . . . . . . . . . . . . . . . . . . 26
5.2.3. Minima or maxima imposition . . . . . . . . . . . . . . . . . . . . . 26
5.2.4. Plugins usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
5.3. Attribute filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
5.3.1. Application to binary images . . . . . . . . . . . . . . . . . . . . . . 28
5.3.2. Application to grayscale images . . . . . . . . . . . . . . . . . . . . 28
5.3.3. Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
5.1. Morphological reconstruction
5.1.1. Principle
The principle of morphological reconstruction can be illustrated by applying conditional dilations or erosions until idempotence. A conditional dilation is the result of a dilation, combined with a mask image using a logical operation. Conditional dilations are repeated until no more modifications occur (idempotence condition).
Figure 5.1.: Principle of the morphological reconstruction algorithm. The original image is represented in gray, with the marker and the results of conditional dilations of increasing sizes superimposed in black.
Figure 5.1 shows several steps of a morphological reconstruction by dilation on a pair of binary images. The mask image is shown in gray, and the marker image is shown in black on the first image. The reconstructed images at each step are shown in black. The markers propagate within the mask until an additional conditional dilation no longer modifies the image. The result is the set of regions that contain the markers, with the same shape as in the original image.
In practice, morphological reconstruction is often implemented using more efficient algorithms based on queues, adding pixels or voxels to the queue based on their connectivity with already processed elements (see Section 3.3).
Figure 5.2.: Some applications of morphological reconstruction. From left to right: original
image, result of kill borders, result of fill holes.
Figure 5.3.: Some applications of morphological reconstruction on grayscale images. From left to right: original image with markers superimposed in red, result of morphological reconstruction by dilation, result of the border kill operation.
The border kill operation can also be applied on grayscale images, making it possible to rapidly remove structures touching the image borders, while keeping the intensity information within the structures.
the original image used as mask. The result of the morphological reconstruction retains the shape of the nucleus, while “clearing” the bright chromocenters (Fig. 5.4-c).
By computing the difference between the original image and the result of the morphological reconstruction, the chromocenters can be easily isolated and segmented (Fig. 5.4-d).
5.1.5. Usage
The reconstruction algorithm is often used within other operators. It is however provided as
a plugin to allow its inclusion in user-designed macros or plugins:
Morphological Reconstruction
Computes the reconstruction by erosion or dilation using a marker image and a mask image,
and a specified connectivity.
Morphological Reconstruction 3D
Computes the reconstruction by erosion or dilation on a 3D image (marker and mask images
are defined by the user among the open ones).
The kill borders and fill holes operations are also provided as plugins. Both work for 2D
and 3D images of 8, 16 or 32 bit-depth images.
Kill Borders
Removes the particles touching the border of a binary or grayscale image. See also the
Remove Border Labels Plugin (Section 9.3.1), that performs the same operation on label
maps but without requiring reconstruction.
Fill Holes
Removes holes inside particles in binary images, or removes dark regions surrounded by bright crests in grayscale images.
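A minimal macro sketch combining these plugins is shown below. The window titles "marker" and "mask" are hypothetical, and the parameter strings are assumptions based on the dialogs; use the macro recorder to obtain the exact syntax.

// reconstruction by dilation of a marker image under a mask image
run("Morphological Reconstruction", "marker=marker mask=mask type=[By Dilation] connectivity=4");
// remove particles touching the image borders
run("Kill Borders");
// fill holes inside particles
run("Fill Holes (Binary/Gray)");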
Figure 5.5.: Regional maxima on a grayscale image. (a) Original image with a line ROI superimposed. (b) Result of regional maxima, showing many spurious maxima. (c) Gray-level profile along the line ROI shown in (a).
MorphoLibJ uses the algorithm described in Breen & Jones (1996) for computing regional
minima and maxima.
For grayscale images, an extended maximum is defined as a connected region in which the difference between the value of each element and the maximal value within the region is lower than a given tolerance, and whose neighbors all have values smaller than this maximum minus the tolerance. This definition allows the identification of larger extrema, which better take into account the noise within the image (Fig. 5.7). The extended minima are defined in a similar way, and are efficiently used as a pre-processing step for watershed segmentation.
Both extended maxima and minima are computed using the morphological reconstruction
algorithm. More details can be found in the book of Soille (2003).
Figure 5.7.: Extended maxima on a grayscale image. (a) Original image with a line ROI superimposed. (b) Result of extended maxima computed with a dynamic of 10. (c) Gray-level profile along the line ROI shown in (a).
The minima (or maxima) imposition technique modifies the input image so that its regional minima (or maxima) are located only at the positions specified by a marker image. The result is a grayscale image whose regional minima or maxima are the same as the specified ones.
Figure 5.8.: Example of area opening on a binary image. (a) Original binary image. (b) Identification of connected components. (c) Only the connected components with a sufficient size (defined by their area) have been retained.
Figure 5.9.: Example of area opening on a grayscale image. (a) Original grayscale image of a leaf (image courtesy of Eric Biot, INRA Versailles). (b) Grayscale size opening making bright spots disappear. (c) Comparison with morphological closing using a square structuring element of radius 1: bright spots are removed, but some veins also disappear.
Grayscale attribute opening [resp. closing] removes the bright [resp. dark] structures whose attribute is smaller than a specified value. White [resp. Black] Attribute Top-Hat considers the difference of the attribute opening [resp. closing] with the original image, and can help identify bright [resp. dark] structures of small size.
5.3.3. Usage
So far, the following attribute filtering plugins are available within MorphoLibJ (under “Plugins > MorphoLibJ”):

Gray Scale Attribute Filtering opens a dialog to choose between attribute opening, closing, and black or white top-hat on a planar (2D) grayscale image. Two size criteria can be used: the area (number of pixels), or the diameter (length of the diagonal of the bounding box).

Gray Scale Attribute Filtering 3D opens a dialog to choose between attribute opening, closing, and black or white top-hat on a 3D grayscale image. The size criterion is the number of voxels.
Contents
6.1. Classic Watershed plugin . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
6.1.1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
6.1.2. Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
6.1.3. Over-segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
6.2. Marker-controlled Watershed plugin . . . . . . . . . . . . . . . . . . . . 33
6.2.1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
6.2.2. Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
6.3. Interactive Marker-controlled Watershed plugin . . . . . . . . . . . . . . 35
6.3.1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
6.3.2. Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
6.4. Morphological Segmentation plugin . . . . . . . . . . . . . . . . . . . . . 38
6.4.1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
6.4.2. Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
6.4.3. Macro language compatibility . . . . . . . . . . . . . . . . . . . . . 42
6.5. Distance Transform Watershed . . . . . . . . . . . . . . . . . . . . . . . . 43
6.5.1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
6.5.2. Distance Transform Watershed . . . . . . . . . . . . . . . . . . . . . 43
6.5.3. Distance Transform Watershed (3D) . . . . . . . . . . . . . . . . . . 45
6.1. Classic Watershed plugin
6.1.2. Usage
The Classic Watershed plugin runs on any grayscale image (8, 16 and 32-bit), in 2D or 3D. At least one image needs to be open for the plugin to run. In that case, a dialog like the following will pop up:
• Image parameters:
– Input image: grayscale image to flood, usually the gradient of an image.
– Mask image (optional): binary image of the same dimensions as the input image
which can be used to restrict the areas of application of the algorithm. Set to
"None" to run the method on the whole input image.
• Morphological parameters:
– Use diagonal connectivity: select to allow the flooding in diagonal directions
(8-connectivity in 2D and 26-connectivity in 3D, see Section 3.3).
– Min h: minimum grayscale value to start flooding from (by default, set to the
minimum value of the image type).
– Max h: maximum grayscale value to reach with flooding (by default, set to the
maximum value of the image type).
Output:
• Labeled image containing the resulting catchment basins (with integer values 1, 2,
3...) and watershed lines (with 0 value).
6.1.3. Over-segmentation
Typically, Classic Watershed will lead to an over-segmentation of the input image, especially for noisy images with many regional minima. In that case, it is recommended to either pre-process the image before running the plugin, or merge regions based on a similarity criterion afterwards. Several de-noising methods are available in Fiji/ImageJ, e.g. median filtering, Gaussian blur, bilateral filtering, etc.
Example: This short macro runs the plugin twice on the blobs sample, first without pre-processing and then after applying a Gaussian blur of radius 3:

// load the Blobs sample image
run("Blobs (25K)");
// invert LUT and pixel values to have dark blobs
run("Invert LUT");
run("Invert");
// run plugin on image
run("Classic Watershed", "input=blobs mask=None use min=0 max=150");
// apply LUT to facilitate result visualization
run("3-3-2 RGB");
// pre-process image with Gaussian blur
selectWindow("blobs.gif");
run("Gaussian Blur...", "sigma=3");
rename("blobs-blur.gif");
// apply plugin on pre-processed image
run("Classic Watershed", "input=blobs-blur mask=None use min=0 max=150");
// apply LUT to facilitate result visualization
run("3-3-2 RGB");
Figure 6.2.: Input and result images from the macro example. (a) Gaussian-blurred blobs image used as input (radius = 3). (b) Watershed segmentation on the original image (hmin = 0, hmax = 150). (c) Watershed segmentation on the Gaussian-blurred image (radius = 3, hmin = 0, hmax = 150).
6.2.2. Usage
Marker-controlled Watershed needs at least two images to run. If that’s the case, a dialog
like the following will pop up:
• Rest of parameters:
– Calculate dams: select to enable the calculation of watershed lines.
– Use diagonal connectivity: select to allow the flooding in diagonal directions (more rounded objects are usually obtained by unchecking this option).
Output:
• Labeled image containing the catchment basins and (optionally) watershed lines (dams).
6.3.2. Usage
Interactive Marker-controlled Watershed runs on any open grayscale image, either a single 2D image or a (3D) stack. If no image is open when calling the plugin, an Open dialog will pop up.
The user can pan, zoom in and out, or scroll between slices (if the input image is a stack)
in the main canvas as if it were any other ImageJ window. On the left side of the canvas
there are three panels of parameters, one with the watershed parameters, one for the output
(result) options and one for post-processing the result. All buttons, checkboxes and panels
contain a short explanation of their functionality that is displayed when the cursor lingers
over them.
Figure 6.5.: Example of interactive markers introduced by the user. From left to right: point
selections, rectangular selections and free-hand selections (stored in the ROI Manager).
• Connectivity: voxel connectivity (4-8 in 2D, and 6-26 in 3D). Selecting non-diagonal
connectivity (4 or 6) usually provides more rounded objects.
Finally, click on “Run” to launch the segmentation. If your segmentation is taking too
long or you want to stop it for any reason, you can do so by clicking on the same button
(which should read “STOP” during that process).
• Display: list of options to display the segmentation results (see Fig. 6.7).
– Overlaid basins: colored objects overlaying the input image (with or without
dams depending on the selected option in the Watershed Segmentation panel).
– Overlaid dams: overlay the watershed dams in red on top of the input image
(only works if “Calculate dams” is checked).
– Catchment basins: colored objects.
– Watershed lines: binary image showing the watershed lines in black and the
objects in white (only works if “Calculate dams” is checked).
• Create image button: create a new image with the results displayed in the canvas.
Similarly to the Results panel, this panel only gets enabled after running the segmentation
pipeline.
• Merge labels: merge together labels selected by either the “freehand” selection tool (on a single slice) or the point tool (on single or multiple slices). The zero-value label belongs to the watershed dams, therefore it will be ignored if selected. The first selected label value will be assigned to the rest of the selected labels, which will share its color.
Note: to select labels on different slices, use the point selection tool and keep the
SHIFT key pressed each time you click on a new label.
• Shuffle colors: randomly re-assign colors to the labels. This is a very handy option
whenever two adjacent labels present a similar color.
6.4.2. Usage
Morphological Segmentation runs on any open grayscale image, either a single 2D image or a (3D) stack. If no image is open when calling the plugin, an Open dialog will pop up.
The user can pan, zoom in and out, or scroll between slices (if the input image is a stack)
in the main canvas as if it were any other ImageJ window. On the left side of the canvas there
are four panels of parameters, one for the input image, one with the watershed parameters,
one for the output options and one for post-processing the resulting labels. All buttons,
checkboxes and panels contain a short explanation of their functionality that is displayed
when the cursor lingers over them.
Image pre-processing: some pre-processing is included in the plugin to facilitate the
segmentation task. However, other pre-processing may be required depending on the input
image. It is up to the user to decide what filtering may be most appropriate upstream.
First, you need to indicate the nature of the input image to process. This is a key param-
eter since the watershed algorithm is expecting an image where the boundaries of objects
present high intensity values (usually as a result of a gradient or edge detection filtering).
You should select:
• Border Image: if the borders of the objects already have higher intensity values than the rest of the voxels in the image (for instance, the result of a gradient or edge detection filter).
• Object Image: if the borders of the objects do not have higher intensity values than the rest of the voxels in the image.
When selecting “Object Image”, an additional set of options is enabled to choose the type of
gradient and radius (in pixels) to apply to the input image before starting the morphological
operations. Finally, a checkbox allows displaying the gradient image instead of the input
image in the main canvas of the plugin (only after running the watershed segmentation).
This panel is reserved to the parameters involved in the segmentation pipeline. By de-
fault, only the tolerance can be changed. Clicking on “Advanced options” enables the rest of
options.
• Tolerance: dynamic of intensity for the search of regional minima (the value h of the extended-minima transform, i.e. the regional minima of the H-minima transform). Increasing the tolerance value reduces the number of segments in the final result, while decreasing it produces more object splits.
Note: since the tolerance is an intensity parameter, it is sensitive to the input image
type. A tolerance value of 10 is a good starting point for 8-bit images (with 0-255 in-
tensity range) but it should be drastically increased when using image types with larger
intensity ranges. For example to ~2000 when working on a 16-bit image (intensity
values between 0 and 65535).
• Connectivity: voxel connectivity (4-8 in 2D, and 6-26 in 3D). Selecting non-diagonal
connectivity (4 or 6) usually provides more rounded objects.
• Create image button: create a new image with the results displayed in the canvas.
Similarly to the Results panel, this panel only gets enabled after running the segmentation
pipeline.
• Merge labels: merge together labels selected by either the “freehand” selection tool (on a single slice) or the point tool (on single or multiple slices). The zero-value label belongs to the watershed dams, therefore it will be ignored if selected. The first selected label value will be assigned to the rest of the selected labels, which will share its color.
Note: to select labels on different slices, use the point selection tool and keep the
SHIFT key pressed each time you click on a new label.
• Shuffle colors: randomly re-assign colors to the labels. This is a very handy option
whenever two adjacent labels present a similar color.
// select as object image
call("inra.ijpb.plugins.MorphologicalSegmentation.setInputImageType", "object");
// select as border image
call("inra.ijpb.plugins.MorphologicalSegmentation.setInputImageType", "border");
// Overlaid basins
call("inra.ijpb.plugins.MorphologicalSegmentation.setDisplayFormat", "Overlaid basins");
// Overlaid dams
call("inra.ijpb.plugins.MorphologicalSegmentation.setDisplayFormat", "Overlaid dams");
// Catchment basins
call("inra.ijpb.plugins.MorphologicalSegmentation.setDisplayFormat", "Catchment basins");
// Watershed lines
call("inra.ijpb.plugins.MorphologicalSegmentation.setDisplayFormat", "Watershed lines");

More details on the ImageJ macro language are available at http://imagej.net/developer/macro/macros.html.
Figure 6.8.: Basics of the Distance Transform Watershed algorithm. From left to right:
sample image of touching DAPI stained cell nuclei from a confocal laser scanning microscope,
binary mask calculated after filtering and thresholding input image, inverse of the distance
transform applied to the binary mask (Chamfer distance map using normalized Chessknight
weights and 32-bit output) and resulting labeled image after applying watershed to the inverse
distance image using the binary mask (dynamic of 1 and 4-connectivity).
The plugin parameters are divided between the distance transform and the watershed
options:
• Distance map options:
– Distances: allows selecting among a pre-defined set of weights that can be used to compute the distance transform using Chamfer approximations of the Euclidean metric (see section 8.1.1). They affect the location, but especially the shape, of the borders in the final result. The options are:
· Chessboard (1,1): weight equal to 1 for all neighbors.
· City-Block (1,2): weights 1 for orthogonal neighbors and 2 for diagonal neighbors.
· Quasi-Euclidean (1,1.41): weights 1 for orthogonal neighbors and √2 for diagonal neighbors.
• Watershed options:
– Dynamic: same as in the Morphological Segmentation plugin, this is the dynamic
of intensity for the search of regional minima in the inverse of the distance trans-
form image. Basically, by increasing its value there will be more object merges
and by decreasing it there will be more object splits.
– Connectivity: pixel connectivity (4 or 8). Selecting non-diagonal connectivity
(4) usually provides more rounded objects.
Finally, the result with the current plugin configuration can be visualized by clicking on the
Preview option.
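For use in macros, a call along the following lines can be recorded; the option strings below are assumptions matching the dialog options described above, to be checked with the macro recorder.

// distance transform watershed on the current binary image
run("Distance Transform Watershed", "distances=[Borgefors (3,4)] output=[16 bits] normalize dynamic=2 connectivity=4");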
The parameters are the same as in the 2D version but some of them are adapted for 3D
images:
• Watershed options:
– Dynamic: same as in the 2D version, this is the dynamic of intensity for the search
of regional minima in the inverse of the distance transform image. Basically, by
increasing its value there will be more object merges and by decreasing it there
will be more object splits.
– Connectivity: voxel connectivity (6 or 26). Selecting non-diagonal connectivity
(6) usually provides more rounded objects.
As it is usual in ImageJ, no preview is provided here since we are dealing with 3D images.
Contents
7.1. Region Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.1.1. Intrinsic volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.1.2. Geometric moments and equivalent ellipse . . . . . . . . . . . . . . 51
7.1.3. Feret diameter and oriented box . . . . . . . . . . . . . . . . . . . . 52
7.1.4. Geodesic measurements . . . . . . . . . . . . . . . . . . . . . . . . . 53
7.1.5. Thickness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
7.1.6. Quantification by shape indices . . . . . . . . . . . . . . . . . . . . 54
7.1.7. Plugins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
7.2. Region Analysis 3D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
7.2.1. Intrinsic volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
7.2.2. Geometric moments and equivalent Ellipsoid . . . . . . . . . . . . . 63
7.2.3. 3D Shape indices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
7.2.4. Plugins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
7.3. Intensity measurements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
7.4. Label Overlap measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
7.5. Microstructure analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
7.1. Region Analysis
• intrinsic volumes (section 7.1.1) encompass the area, the perimeter, and the Euler number, which quantifies the topology of the region
• geodesic features make it possible to investigate the complexity of the region, in particular its tortuosity (section 7.1.4)
• the use of the skeleton provides derived features such as the average thickness (section 7.1.5)
• features describing the size may be combined to obtain shape indices that describe the morphology of the regions independently of their size and location (section 7.1.6)
The following sections present these features. For each family of features, we first provide mathematical definitions of the implemented features, possibly complemented by the methods used to measure them from discrete images. A final section (section 7.1.7) presents the different plugins that allow measuring features from images.
7.1.1.1. Area
The area of a set X can be defined using an integral over the set:

\[ A(X) = \int_X \mathrm{d}x \tag{7.1} \]

In image analysis, measuring the area of a region simply consists in counting the number of pixels that constitute it, and multiplying by the area of a single pixel.
7.1.1.2. Perimeter
The perimeter of a region corresponds to the length of its boundary. Its formal definition involves an integral over the boundary ∂X:

\[ P(X) = \int_{\partial X} \mathrm{d}x \tag{7.2} \]
An intuitive approach for measuring the perimeter of a binary region is to count the number of boundary pixels (Figure 7.1). However, this approach is rather inaccurate. For example, when counting the boundary pixels of a discretized disk, one obtains the same perimeter value as the length of the bounding rectangle, leading to a relative error of more than 25% (see the note below).
Figure 7.1.: Measurement of perimeter within a discrete image of a disk. (a) Counting the number of boundary pixels. (b) Counting the number of intersections with parallel lines.
The discrete version of Crofton formula can provide a better estimate of the perimeter
than traditional boundary pixel count. The principle is to consider a set of parallel lines with
a random orientation, and to count the number of intersections with the region of interest
(Figure 7.1). The number of intersections is proportional to the line density and to the
perimeter (Serra, 1982; Legland et al., 2007; Ohser & Schladitz, 2009). By averaging over
all possible directions, an unbiased estimate can be obtained.
The perimeter can be measured using either two directions (horizontal and vertical) or four directions (by adding the diagonals). Restricting the number of directions introduces an estimation bias, with known theoretical bounds (Moran, 1966; Legland et al., 2007). The error made by counting intersections is usually smaller than that of the boundary pixel count (Lehmann & Legland, 2012).
Note: if d is the diameter of the circle, the expected perimeter is πd, whereas the number of boundary pixels measures the perimeter of the enclosing square, equal to 4d. The relative error is (4d − πd)/(πd) ≈ 27%.
Figure 7.2.: Euler Number for 2D regions. Particle A is composed of a single component, its
Euler number is 1. Particles B and C present one and two holes respectively. Their corresponding
Euler numbers are equal to 0 = 1 − 1 and −1 = 1 − 2.
In 2D, the Euler number of a region with smooth boundary also equals the integral of the curvature over the boundary of the set:

\[ \chi(X) = \frac{1}{2\pi} \int_{\partial X} \kappa(x)\,\mathrm{d}x \tag{7.3} \]
The measurement of the Euler number from binary images is usually performed by considering a reconstruction of the connected components that connects adjacent pixels or voxels (Fig. 7.3). The resulting reconstruction depends on the choice of the connectivity (see Section 3.3). For planar images, typical choices are the 4-connectivity, corresponding to the orthogonal neighbors (Fig. 7.3-b), and the 8-connectivity, which also considers the diagonal neighbors (Fig. 7.3-c). The connectivity is also considered for computing morphological reconstructions (section 5.1), or for computing the connected component labeling of a binary image (section 8.2).
Figure 7.3.: Determination of the Euler number from binary images. (a) Original binary image. (b) Euler number computed with the 4-connectivity: five components are detected. (c) Euler number computed with the 8-connectivity: three components are detected, two of them having holes.
Depending on the chosen connectivity, the resulting Euler number may differ. For example, the reconstruction presented in Figure 7.3-b has an Euler number equal to five (the number of connected components), whereas the Euler number of the reconstruction using the 8-connectivity is equal to 1: three connected components are obtained, minus two holes within them (Fig. 7.3-c).
\[ m_{pq}(X) = \iint I_X(x, y)\, x^p y^q \,\mathrm{d}x\,\mathrm{d}y \tag{7.4} \]

In particular, the zero-order moment corresponds to the area of the region:

\[ m_{00} = \iint I_X(x, y)\,\mathrm{d}x\,\mathrm{d}y = A(X) \]
The coordinates of the centroid of X can be expressed from the first-order moments:
\[ x_c = \frac{m_{10}}{m_{00}} = \frac{1}{A(X)} \iint I_X(x, y)\, x \,\mathrm{d}x\,\mathrm{d}y \]

\[ y_c = \frac{m_{01}}{m_{00}} = \frac{1}{A(X)} \iint I_X(x, y)\, y \,\mathrm{d}x\,\mathrm{d}y \]

The centered moments are obtained by referring the coordinates to the centroid:

\[ \mu_{pq}(X) = \iint I_X(x, y)\,(x - x_c)^p\,(y - y_c)^q \,\mathrm{d}x\,\mathrm{d}y \tag{7.5} \]
By computing the parameters of the ellipse that produces the same moments up to the second order, one obtains the equivalent ellipse (Burger & Burge, 2008). In particular, the orientation θ of the equivalent ellipse is given by θ = ½ atan2(2 · µ11, µ20 − µ02).
It is easy to realize that the search can be performed on the set of boundary points (Fig. 7.5).
In practice, the computation of Maximum Feret diameter can be accelerated by first comput-
ing the convex hull of the region. In MorphoLibJ, the maximal Feret diameter is computed
over the corners of the regions.
The Feret diameter can be measured in any direction. A typical choice is to measure the Feret diameter in the direction perpendicular to that of the maximum Feret diameter. The ratio of the largest Feret diameter over the Feret diameter measured in the perpendicular direction gives an indication of the elongation of the region.
The notion of object-oriented bounding box (sometimes denoted as “OOBB”) is closely
related to that of Feret diameter. One possible definition of oriented bounding box is the
rectangle with smallest width that totally encloses the region. An example is illustrated on
Figure 7.5. Another definition is the rectangle with smallest area: the two resulting rect-
angles are often similar, but may differ in some cases. Note that in general, the orientation
of the largest axis of the oriented box differs from both the direction of the largest Feret
diameter, and the direction of the equivalent ellipse.
Figure 7.6.: Geodesic diameter. Left: illustration of the geodesic diameter on a simple particle. Right: computation of the geodesic diameter on a segmented image from the DRIVE database (Staal et al., 2004). Each connected component was associated with a label, then the longest geodesic path within each connected component was computed and displayed as a red overlay.
7.1.5. Thickness
The thickness is also a convenient measure to describe a region. It may be evaluated as the diameter of the largest inscribed circle (see the corresponding plugin, page 59, Fig. 7.7). This method however often over-estimates the thickness, as it measures the thickness at the widest point of the region.
The MorphoLibJ library also proposes to measure the average thickness by first computing the skeleton of the region, and then measuring, for each pixel of the skeleton, the distance to the closest background point in the original image (Fig. 7.7). This way, the thickness measure is integrated over the whole region skeleton and is more representative.
Figure 7.7.: Evaluation of the thickness using two methods. On the left, computation of the
largest inscribed circle. On the right, computation of the average thickness over the skeleton.
7.1.6.1. Circularity
The circularity (or “shape factor”, or “isoperimetric deficit index”) is defined as the ratio of the area over the square of the perimeter, normalized such that the value for a disk equals one:

\[ \text{circularity} = 4\pi\,\frac{A}{P^2} \tag{7.7} \]

While the values of the isoperimetric deficit index theoretically range within the interval [0; 1], measurement errors on the perimeter may produce circularity values above 1 (Lehmann & Legland, 2012).
7.1.6.2. Convexity
The convexity (also known as “solidity”) is defined as the ratio of the area of the region over the area of its convex hull:

\[ \text{convexity} = \frac{A(X)}{A(\text{ConvHull}(X))} \]

The Convexify plugin (Section 8.3.1) also makes it possible to compute the convex image corresponding to a binary region.
7.1.6.3. Elongation
The MorphoLibJ library provides several shape indices describing the elongation of the regions. Their value is 1 for a disk, and increases for elongated regions.

Figure 7.8.: Convexity of a planar region. The convex area is the union of the blue and yellow regions.

The ellipse elongation factor, defined as the ratio of the largest over the smallest axis length of the equivalent ellipse, can be used to quantify the shape of the region.
The oriented box elongation is defined as the ratio of the length of the oriented bounding box over its width.
The geodesic elongation is defined as the ratio of the geodesic diameter over the diameter of the largest inscribed disk.
7.1.6.4. Tortuosity
The tortuosity is defined as the ratio of the geodesic diameter over the largest Feret diameter. It describes the complexity of the region. Its theoretical value is 1 for convex regions, and increases for regions with complex geometries.
7.1.7. Plugins
Most MorphoLibJ plugins consider the current image as input, which must be either binary (only one region is considered) or a label image (typically the result of a connected components labeling, see section 8.2). The output is a results table (ImageJ ResultsTable) containing one row for each label actually present within the image. The spatial calibration of the image is taken into account for most measurements. All plugins can be found under the “Plugins > MorphoLibJ > Analyze” menu.
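The plugins can also be invoked from a macro, selecting the features to measure; the feature flags below are assumptions following the checkbox labels of the dialog, and should be verified with the macro recorder.

// measure a few shape features on the current label image
run("Analyze Regions", "area perimeter circularity tortuosity");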
Perimeter the perimeter, measured using the discrete version of the Crofton formula (see section 7.1.1)
Circularity the normalized ratio of the area by the square of the perimeter: 4π·A/P². Values greater than 1 may appear due to discretization effects.
Equivalent Ellipse the position, size and orientation of the equivalent ellipse (see section 7.1.2)
Oriented Box the position, size and orientation of the oriented box with smallest width
Oriented Box Elong. the elongation of the oriented bounding box (ratio of length over width)
Tortuosity the tortuosity, as the ratio of the geodesic diameter over the largest Feret diameter
Max inscribed disc the position and the radius of the largest circle that can be inscribed within the region
Geodesic Elong. the ratio of the geodesic diameter over the diameter of the largest inscribed circle
The first columns of the resulting ResultsTable contain the label of the regions, to facilitate
their identification.
Label the label of the region measured on the current line (equal to 255 for binary
images)
It is also possible to draw the resulting bounding box(es) onto another image (Fig. 7.9-a).
Figure 7.9.: Examples of graphical outputs of region analysis plugins. (a) Bounding box. (b)
Equivalent ellipse. (c) Max Feret diameter.
Label the label of the region measured on the current line (equal to 255 for binary
images)
It is also possible to draw the resulting ellipse(s) onto another image (Fig. 7.9-b).
Label the label of the region measured on the current line (equal to 255 for binary images)
It is also possible to draw the resulting diameter as a line segment onto another image
(Fig. 7.9-c).
Label the label of the region measured on the current line (equal to 255 for binary images)
It is also possible to draw the resulting oriented box(es) onto another image (Fig. 7.10-a).
Figure 7.10.: Examples of graphical output of region analysis plugins. (a) Oriented bounding
box. (b) Largest geodesic diameter. (c) Largest inscribed circle.
Label the label of the region measured on the current line (equal to 255 for binary
images)
Radius the radius of the largest inscribed circle, which is computed during the algo-
rithm.
Geod. Elong. the ratio of the geodesic diameter over the diameter of the largest inscribed circle. The value equals 1 for nearly round particles and increases for elongated particles.
It is also possible to draw the resulting geodesic paths onto another image (Fig. 7.10-b).
Label the label of the region measured on the current line (equal to 255 for binary images)
It is also possible to draw the resulting circle(s) onto another image (Fig. 7.10-c).
7.2.1.1. Volume
The volume and the surface area of a set X with smooth surface ∂X can be defined using integrals over the set, or over its boundary:

\[ V(X) = \int_X \mathrm{d}x \tag{7.8} \]
The measurement of surface area from 3D binary images follows the same principle as the estimation of perimeter. One considers the intersections of the region with straight lines, and averages over all possible directions (Lang et al., 2001; Legland et al., 2007). The number of directions is typically chosen equal to 3 (the three main axes of the image) or 13 (by also considering the diagonals). As for perimeter estimation, surface area estimation is biased, but it is usually more precise than measuring the surface area of a polygonal mesh reconstructed from the binary image (Lehmann & Legland, 2012).
Figure 7.11.: Mean breadth of a convex (a) and of a non convex (b) set. In the case of a
non-convex set, the concavities are taken into account for measuring the diameters.
In the case of a non-convex set, the concavities are taken into account for measuring the diameters, resulting in the “total projected diameter” (Fig. 7.11-b). For 3D sets, the measurement of the total projected diameter consists in measuring the (2D) Euler number of the intersections of the set with planes orthogonal to the direction (Serra, 1982). The average of the total projected diameter over Ω, the set of all directions, results in the mean breadth b̄:
\[ \bar{b}(X) = \int_\Omega D_\omega(X)\,\mathrm{d}\omega \tag{7.10} \]

For a set X with smooth boundary ∂X, the mean breadth is also proportional to the integral of the mean curvature (the average of the two principal curvatures κ₁ and κ₂) over the boundary:

\[ \bar{b}(X) = \frac{1}{2\pi} \int_{\partial X} \frac{\kappa_1(x) + \kappa_2(x)}{2}\,\mathrm{d}x \tag{7.11} \]
In practice, the mean breadth may be computed from digital binary images by considering elementary configurations of 2×2×2 voxels, identifying the contribution of each configuration to the total mean breadth, and combining with the histogram of binary configurations within the image (Ohser & Schladitz, 2009; Legland et al., 2007).
Figure 7.12.: Euler number of a 3D particle. The Euler number equals −1, corresponding to one connected component minus two handles.
For a set X with smooth boundary ∂X, the 3D Euler number is proportional to the integral of the Gaussian curvature, corresponding to the product of the two principal curvatures:

\[ \chi(X) = \frac{1}{4\pi} \int_{\partial X} \kappa_1(x)\,\kappa_2(x)\,\mathrm{d}x \tag{7.12} \]
As for 2D images, the measurement of the Euler number from 3D images relies on the choice of a connectivity between adjacent voxels. In 3D, the 6-connectivity considers the neighbors in the three main directions within the image, whereas the 26-connectivity also considers the diagonals. Other connectivities have been proposed but are not implemented in MorphoLibJ (Ohser & Schladitz, 2009). Note that depending on the choice of connectivity, very different results may be obtained. Such differences may result from the small irregularities arising in images after segmentation. A typical work-around is to regularize the binary or label image, for example by applying a morphological opening and/or closing.
The zero-order moment m000(X) corresponds to the volume of the particle. The normalization of the first-order moments by the volume leads to the 3D centroid of the particle. The second-order moments can be used to compute an equivalent ellipsoid, defined as the ellipsoid with the same moments up to the second order as the region of interest (see appendix B).
Ellipsoid.Phi the azimuth of the projection of the major axis on the XY plane, in degrees (“yaw”)
Ellipsoid.Theta the elevation of the major axis with respect to the XY plane, in degrees (“pitch”)
Ellipsoid.Psi the rotation of the ellipsoid around its main axis (“roll”), in degrees
The three angles correspond to a succession of three rotations (see Figure 7.13):
1. a rotation Rx(ψ) about the x-axis by ψ degrees (positive when the y-axis moves towards the z-axis)
2. a rotation Ry(θ) about the y-axis by θ degrees (positive when the z-axis moves towards the x-axis)
3. a rotation Rz(ϕ) about the z-axis by ϕ degrees (positive when the x-axis moves towards the y-axis)
Figure 7.13.: Definition of angles for representing the orientation of equivalent ellipsoid.
Sphericity
The sphericity index is the generalisation of the circularity to 3D. It can be defined as the ratio of the squared volume over the cube of the surface area, normalized such that the value for a ball equals one:

\[ \text{sphericity} = 36\pi\,\frac{V^2}{S^3} \tag{7.14} \]
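As a quick worked check (not part of the original text), one can verify that a ball of radius r yields a sphericity of one:

\[ V = \frac{4}{3}\pi r^3, \qquad S = 4\pi r^2 \;\Rightarrow\; 36\pi\,\frac{V^2}{S^3} = 36\pi \cdot \frac{\tfrac{16}{9}\pi^2 r^6}{64\,\pi^3 r^6} = \frac{36 \cdot 16}{9 \cdot 64} = 1 \]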
Other shape factors may be obtained by computing normalized ratios of volume with mean
breadth, or surface with mean breadth. Their interpretation is however often not obvious.
Elongation
It is also possible to compute elongation factors from the ratios of the radii of the equivalent ellipsoid. With three radius values, three ratios can be computed. In MorphoLibJ, the larger of the two radii is used as numerator, resulting in the following ratios:

\[ e_{12} = \frac{r_1}{r_2}, \qquad e_{23} = \frac{r_2}{r_3}, \qquad e_{13} = \frac{r_1}{r_3}, \qquad \text{with } r_1 \geq r_2 \geq r_3 \]
7.2.4. Plugins
Analyze Regions 3D
The plugin “Analyze Regions 3D” gathers most 3D measures implemented within the MorphoLibJ library. It can be found under “Plugins > MorphoLibJ > Analyze > Analyze Regions 3D”. The results are provided in an ImageJ ResultsTable whose name contains the name of the original image.
Method the number of directions to use for the computation (either 3 or 13).
One of the two labels can have the value 0. In this case, the interface of the other label with
the background is measured.
If the two regions do not touch anywhere, the resulting value will be 0. The region adja-
cency graph (section 9.4.1) can be used to identify neighbor regions.
Denoting by S_r and T_r the sets of pixels belonging to region r in the source and target label images respectively, the following overlap measures are computed:

\[ TO_r = \frac{|S_r \cap T_r|}{|T_r|} \quad \text{(target overlap)} \]

\[ UO_r = \frac{|S_r \cap T_r|}{|S_r \cup T_r|} \quad \text{(union overlap, or Jaccard index)} \]

\[ MO_r = 2\,\frac{|S_r \cap T_r|}{|S_r| + |T_r|} \quad \text{(mean overlap, or Dice coefficient)} \]

\[ VS_r = 2\,\frac{|S_r| - |T_r|}{|S_r| + |T_r|} \quad \text{(volume similarity)} \]

\[ VS = 2\,\frac{\sum_r \left(|S_r| - |T_r|\right)}{\sum_r \left(|S_r| + |T_r|\right)} \quad \text{(total volume similarity)} \]

\[ FN_r = \frac{|T_r \setminus S_r|}{|T_r|} \quad \text{(false negative error)} \]

\[ FP_r = \frac{|S_r \setminus T_r|}{|S_r|} \quad \text{(false positive error)} \]
Figure 7.14.: Analysis of binary microstructure. (a) Original binary image. (b) Estimation of perimeter density.
To analyze such images, the notion of particle analysis is not relevant anymore. One possibility is to adapt the definitions of the intrinsic volumes, by considering that the image corresponds to a sampling window. This induces two modifications:
1. the integrals over boundaries are restricted to the boundary within the sampling window (Fig. 7.14-b);
2. the result is normalized by the area (or the volume) of the sampling window.
Contents
8.1. Distance transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
8.1.1. Distance transform . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
8.1.2. Chamfer distances . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
8.1.3. Geodesic distance transform . . . . . . . . . . . . . . . . . . . . . . 75
8.2. Connected component labeling . . . . . . . . . . . . . . . . . . . . . . . . 76
8.3. Binary images processing . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
8.3.1. Convexification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
8.3.2. Selection of binary regions . . . . . . . . . . . . . . . . . . . . . . . 77
8.1 Distance transforms
Figure 8.1.: Binary image, and result of computation of the distance transform.
Figure 8.2.: Computation of distance maps using chamfer masks. The distance associated to
each pixel is updated according to the distances associated to the neighbor pixels. Left: 3x3
neighborhood. Right: 5x5 neighborhood.
In practice, using integer weights may result in faster computations. It also makes it possible
to store the resulting distance map within a 16-bit grayscale image, resulting in more efficient
memory usage than floating-point computation.
Several typical choices for chamfer weights are illustrated on Figure 8.3:
• The simplest one is the “Chessboard” mask, which considers a distance equal to 1 for
both orthogonal and diagonal neighbors (Fig. 8.3-a). The propagation of distances
from a center pixel follows a square pattern.
• The “City-block” or “Manhattan” mask uses a weight equal to 2 for diagonal neighbors,
resulting in a diamond-like propagation of distances (Fig. 8.3-b).
• The “Borgefors” mask uses the weight 3 for orthogonal pixels and 4 for diagonal
pixels (Borgefors, 1986). It was claimed to be the best integer approximation when
considering the 3-by-3 neighborhood (Fig. 8.3-c).
• The “Chess-knight” distance (Das & Chatterji, 1988) also takes into account the pixels
located at a shift of (±1, ±2) in any direction (Fig. 8.3-d). It is usually the best
choice, as it considers a larger neighborhood.
• The MorphoLibJ library also provides chamfer masks with four weights, as proposed
by Verwer et al. (1989). The fourth weight corresponds to pixels located at a distance
of (±1, ±3) from the reference pixel, in any direction.
(a) Chessboard (1,1) (b) City-block (1,2) (c) Borgefors (3,4) (d) Chess-knight (5,7,11)
Figure 8.3.: Several examples of distance maps computed with different chamfer masks. For
each image, the distance to the lower-left pixel is represented both with numeric value and with
color code. Numbers in brackets correspond to the weights associated with orthogonal and
diagonal directions. For chess-knight weights, the weight 11 corresponds to a shift of (±1, ±2)
pixels in any direction.
As chamfer masks are only an approximation of the real Euclidean distance, some differ-
ences are expected compared to the actual Euclidean distance map. Using larger weights
reduces the relative error, but the largest possible distance (that depends on the bit-depth of
the output image) may be reached faster, leading to “saturation” of the distance map.
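From a script, distance maps are available through static methods of the inra.ijpb.binary.BinaryImages class. A minimal sketch using the chess-knight weights follows; the exact set of overloads may vary between versions, so check the javadoc:

import ij.process.ImageProcessor;
import inra.ijpb.binary.BinaryImages;

// 'binary' is an existing ImageProcessor with foreground pixels > 0;
// weights (5,7,11) correspond to the chess-knight mask, and 'true'
// normalizes the result by the first (orthogonal) weight
short[] weights = new short[] { 5, 7, 11 };
ImageProcessor distMap = BinaryImages.distanceMap(binary, weights, true);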
Figure 8.4.: Implementation of two-passes chamfer distances. The 5x5 chamfer mask is decom-
posed into two “half-masks”, one for each pass.
Figure 8.5.: Discrete 3D balls resulting from different chamfer distances. The distance map is
computed to the center voxel, then the result is thresholded and represented in 3D.
• The (3D) chessboard distance uses the same weight for all 26 neighbors. This
results in distance propagation with a cube-shaped pattern (Fig. 8.5-a).
• The (3D) Manhattan distance considers a weight triplet equal to (1, 2, 3), resulting in
an octahedron-like distance propagation (Fig. 8.5-b).
• The Borgefors weights (3,4,5) usually give a good approximation of the Euclidean
distance when considering only the 3-by-3-by-3 neighborhood (Fig. 8.5-c).
Figure 8.6.: Enhancement of 3D distance maps when using larger chamfer masks. Masks are
based on (a) four, (b) five, or (c) six weights. All images represent the threshold of the
distance map at value 100 from the image center.
• Svensson and Borgefors found that adding a fourth weight for voxels shifted by (±1, ±1, ±2)
in any direction results in a substantial error reduction while preserving a low com-
plexity of the algorithm (Svensson & Borgefors, 2002b,a).
• Starting from version 1.5, MorphoLibJ allows using up to six weights (corresponding
to the complete 5-by-5-by-5 neighborhood).
As in the 2D case, the choice of the chamfer mask is a compromise between the accuracy
(larger neighborhoods and weights are usually more accurate) and the largest distance that
can be represented.
Figure 8.7.: An example of a label image containing contiguous regions, and result of the dis-
tance transform applied on the label image.
Distances the weights of the chamfer mask used for computing the distance map
Output Type specifies whether the result should be stored in a 16-bit image (uses less memory)
or in a 32-bit image (larger distances can be represented).
Normalize specifies whether the resulting map should be divided by the weight associated with
orthogonal pixels or voxels (for example, with the Borgefors weights (3,4), the map is divided
by 3). This may be useful for masks with large weights, to obtain distances comparable to Euclidean ones.
Figure 8.8.: Computation of the geodesic distance map on a binary image from the DRIVE
database (Staal et al., 2004). (a) Original image with marker superimposed in red. (b) Color
representation of the geodesic distance map: hot colors correspond to large distances, cold colors
correspond to small distances.
Geodesic distance maps are computed by propagating chamfer weights, as for the compu-
tation of distance maps. As chamfer weights are only an approximation of the real Euclidean
distance, some differences are expected compared to the actual geodesic distance.
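From a script, a geodesic distance map can be computed in a similar way. The sketch below assumes the geodesicDistanceMap static method of inra.ijpb.binary.BinaryImages; variants taking explicit weights also exist, see the javadoc:

import ij.process.ImageProcessor;
import inra.ijpb.binary.BinaryImages;

// 'marker' contains the starting region(s); 'mask' constrains the
// propagation (e.g. the vessel network in Figure 8.8)
ImageProcessor geodDist = BinaryImages.geodesicDistanceMap(marker, mask);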
The result of a connected component labeling depends on the chosen connectivity, which
corresponds to the convention used to decide whether two pixels or voxels are connected or
not (Figure 8.11). See also Section 3.3.
For planar images, a typical choice is the 4-connectivity, which considers only the orthogonal
neighbors of a given pixel (Figure 8.11-b). When two components touch only via a
corner, they are not considered as the same component. The 8-connectivity is an alternative that
also considers the diagonal pixels as neighbors (Figure 8.11-c).
For 3D images, the 6-connectivity considers only orthogonal neighbors in the three main
directions, whereas the 26-connectivity considers all the neighbors of a given voxel in its
3-by-3-by-3 surrounding. The connectivity is also considered for computing morphological
reconstructions (section 5.1), and for computing the Euler number of binary or label images
(section 7.1.1.3).

Figure 8.11.: Impact of the connectivity on the result of connected component labeling. (a)
Original binary images. (b) Connected component labeling using the 4-connectivity. (c) Connected
component labeling using the 8-connectivity.
The number of labels that can be represented within a label map depends on the image
type: 255 for byte images, 65535 for short images, and around 16 million for 32-bit floating
point images (only the 24 bits of the mantissa are used for label representation, leading to
a maximum number of labels equal to 2²⁴).
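From a script, connected component labeling is exposed as a static method of the same BinaryImages class; a minimal sketch (check the javadoc for the exact overloads of your version):

import ij.process.ImageProcessor;
import inra.ijpb.binary.BinaryImages;

// label the foreground of 'binary' using the 4-connectivity, storing
// the labels in a 16-bit image (up to 65535 regions)
ImageProcessor labelMap = BinaryImages.componentsLabeling(binary, 4, 16);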
Convexify
for a binary image, generates the smallest convex region (the convex hull) containing the
foreground region of the image.

(a) Binary image. (b) Convexification. (c) Original image over its convex hull.
Figure 8.13.: Utilities for binary images. From left to right: original image, keep largest region,
remove largest region, apply size opening for keeping only regions with at least 150 pixels.
Size Opening
computes the size (area in 2D, volume in 3D) of each connected component, and removes all
particles whose size is below the value specified by the user. The algorithm works for both 2D
and 3D images. The default connectivity 4 (resp. 6) is used for 2D (resp. 3D) images.
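From a script, the same filtering is available for 2D images through the areaOpening static method; this is an assumption to verify against the javadoc of your version (the 3D counterpart operates on image stacks):

import ij.process.ImageProcessor;
import inra.ijpb.binary.BinaryImages;

// keep only the connected components covering at least 150 pixels
ImageProcessor filtered = BinaryImages.areaOpening(binary, 150);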
Contents
9.1. Visualization of label images . . . . . . . . . . . . . . . . . . . . . . . . . 80
9.1.1. Color maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
9.1.2. Plugins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
9.2. Visualization of region features . . . . . . . . . . . . . . . . . . . . . . . 82
9.3. Label image processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
9.3.1. Region and label selection . . . . . . . . . . . . . . . . . . . . . . . 83
9.3.2. Morphological filtering . . . . . . . . . . . . . . . . . . . . . . . . . 84
9.3.3. Edition of label maps . . . . . . . . . . . . . . . . . . . . . . . . . . 85
9.4. Region adjacencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
9.4.1. Region Adjacency Graph . . . . . . . . . . . . . . . . . . . . . . . . 86
9.4.2. Adjacent region boundaries . . . . . . . . . . . . . . . . . . . . . . . 87
9.4.3. Region adjacencies . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
9.5. Label Edition plugin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
9.1 Visualization of label images
(a) Using Colormap (b) Draw Labels as Overlay (c) Overlay Label Image
Figure 9.1.: Various visualization modes of label images. (a) Display regions using appropriate
colormap. (b) The label associated to each region is overlaid on another image. (c) Label map
or binary images can be overlaid on another grayscale image using user-defined opacity.
Figure 9.2.: Label images rendered with different colormaps and background colors: (a) Glasbey, black; (b) Glasbey Bright, white; (c) Glasbey Dark, white; (d) Golden Angle, black.
The “Glasbey” colormap, included within ImageJ, generates colors that are perceptually
contrasted (Figure 9.2-a). It however generates black and white colors, which are difficult
to distinguish from the background (in the example of Figure 9.2-a, the region in the top-right
corner is not visible anymore). Therefore, the “Glasbey Bright” and “Glasbey Dark”
colormaps were made available, to provide good contrast both between labels and with the
background (Glasbey et al., 2007; Kovesi, 2015). The choice of these colormaps was inspired
by the colorcet library1 . The “Golden Angle” colormap is an alternative to the Glasbey family
that provides more saturated colors (Figure 9.2-d). The contrast between colors is lower than
for the Glasbey colormaps, making adjacent regions potentially more difficult to distinguish.
9.1.2. Plugins
Several plugins allow controlling the appearance of label images. It is possible to choose
a given colormap, or to transform a label image into a color image. In both cases, the
background color can be specified, and the color order can be shuffled to facilitate the
discrimination of neighboring regions with similar labels.
Label To RGB
converts a label image to a true RGB image. Similar to the native ImageJ conversion, but this
plugin avoids confusion between background pixels and labels with small values when the
number of labels is large.
1
https://colorcet.holoviz.org/index.html
Figure 9.3.: Various possibilities to graphically visualize region features. Left: the table
used for display. Center: overlay of the elongation feature using the “Draw Label Values” plugin.
Right: color representation of the elongation using a color code, from dark purple
(circular, compact) to white (very elongated).
Figure 9.4.: Utilities for label images. From left to right: original label image, remove border
labels, remove largest region, apply size opening for keeping only regions with at least 150 pixels.
Select Label(s)
Takes a set of labels as input, and creates a new label image containing only the selected labels.
• when applying a dilation, only background pixels or voxels are updated, according
to the closest label within the neighborhood (Figure 9.5-b). When two regions are
adjacent without background pixels between them, the dilation does not transform
nor move their boundary.
• when applying an erosion, label pixels or voxels are removed depending on the labels
within the neighborhood and on the “Any Label” option:
– if true, the current label is removed if any other label (background or another region)
is contained within the neighborhood;
– if false, the current label is removed if and only if a background pixel or voxel is
contained within the neighborhood (Figure 9.5-c).
When the “Any Label” option is set to false, performing a morphological closing results in a
label map with more regular regions, with fewer gaps between regions, and preserved boundaries
between adjacent regions (Figure 9.5-d).
The computation of the neighborhood, and of the closest label, is based on chamfer masks
(see Section 8.1.2).
Figure 9.5.: Morphological filtering of label images. Morphological dilation (b) and erosion
(c) take care of adjacent regions.
Crop Label
creates a new binary image containing only the label specified by the user. The size of the
new image is fitted to the region.
Replace Value
replaces the value of a region by another value. It can be used to “clear” a label by replacing
its value by 0, or to merge two adjacent regions by replacing the value of a label by the value
of its neighbor.
Remap Labels
re-computes the label of each region such that the largest label equals the number of
labels (Figure 9.6). This tool can be useful to remove the “empty labels” that may occur after
the removal of one or several labels (merge labels, remove labels...).
Figure 9.6.: Illustration of the “Remap Labels” plugin. (a) Original label image. (b) Histogram
of original label image, showing missing labels. (c) Histogram of the remapped label image,
without missing labels.
Figure 9.6 presents a use case of the Remap Labels plugin. The left image is a label
image in which small labels, as well as labels touching the image borders, have been removed
(Fig. 9.6-a). The histogram of the image presents some “holes”, corresponding to the labels
that have been removed (Fig. 9.6-b). The application of the plugin results in a similar image,
without missing labels (Fig. 9.6-c). The number of labels in the image then corresponds to the
maximal value in the image (in this case, 71).
Figure 9.7.: Computation of the Region Adjacency Graph (RAG) on a microscopy image of plant
tissue. Left: original image. Middle: result of watershed segmentation. Right: overlay of edges
representing adjacent regions.
In the current implementation, MorphoLibJ considers that two regions are adjacent if they
are separated by 0 or 1 pixel (or voxel) in one of the two (or three) main directions.
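From a script, the adjacencies can be computed with the RegionAdjacencyGraph class of the inra.ijpb.label package. The sketch below assumes its computeAdjacencies static method and LabelPair inner class; the field names should be checked against the javadoc of your version:

import java.util.Set;
import ij.ImagePlus;
import inra.ijpb.label.RegionAdjacencyGraph;
import inra.ijpb.label.RegionAdjacencyGraph.LabelPair;

// compute the pairs of adjacent labels within a label image
Set<LabelPair> adjacencies = RegionAdjacencyGraph.computeAdjacencies(labelImagePlus);
for (LabelPair pair : adjacencies)
    System.out.println(pair.label1 + " - " + pair.label2);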
Figure 9.8.: Example use of the “Region Boundaries Labeling” plugin. Left: a label image
containing regions with various adjacencies. Right: a new label map was created, where each
label corresponds to the boundary between two or more regions within the original image.
The “Label Boundaries” plugin generates a binary image of the boundaries between regions.
The “Region Boundaries Labeling” plugin considers similar boundaries, but in addition it
associates a label to each boundary.
Label Boundaries
Creates a new binary image containing the value 255 for pixels/voxels having a neighbor with a
different value. The background is taken into account when computing boundaries. Only the
bottom-left neighborhood is checked during image iteration, in order to generate a thinner
boundary image.
Figure 9.9.: An example of a label image containing contiguous regions, and result of the selec-
tion of the neighbors of two regions (keeping the original regions).
• Manually merge labels after their selection using the point selection tool (in 2D and
3D).
• Apply morphological erosion, dilation, opening and closing with a square/cube of ra-
dius 1 as structuring element.
• Remove labels by area or volume (size opening operation), largest size, manual selec-
tion or border location.
All operations are performed “in place”, i.e., the input image gets directly modified. However,
the initial status of the label image can be recovered by clicking on “Reset”.
Figure 9.10.: Label Edition plugin. From left to right and from top to bottom: overview of the
plugin GUI on a 2D label image, result of label merging by manual selection, removal of selected
labels, label erosion and removal of largest label (in this case, the largest label corresponds to
the background).
• For end users, plugins provide graphical display and intuitive tuning of parameters.
Such plugins can easily be incorporated into a macro:

// Calls the Regional Min/Max plugin on the current ImagePlus instance
run("Regional Min & Max", "operation=[Regional Maxima] connectivity=4");
• For plugin developers, operators are available through collections of static methods,
making it possible to apply most operations with a single line of code. Example:

// Computes regional maxima using the 4-connectivity
ImageProcessor maxima = MinimaAndMaxima.regionalMaxima(image, 4);
In total, the library provides nearly two hundred classes and interfaces.
inra.ijpb.geometry contains utility functions for geometric computing, and several classes
for representing geometric primitives (ellipse, point pair...)
inra.ijpb.label contains the utilities for label images (cropping, size opening, removal of
border labels, etc.)
inra.ijpb.measure contains the tools for geometric and gray level characterization of 2D
or 3D images
inra.ijpb.plugins contains the set of plugins that is accessible from ImageJ/Fiji Plugins
menu
Most major operations are exposed through a general class with static methods that allow
calling them on 2D and 3D images in a transparent way. This modularity made it possible to
develop other plugins devoted to the analysis of nucleus images (Poulet et al., 2015), gray
level granulometries (Devaux & Legland, 2014), or the description of binary microstructures
(Silva et al., 2015).
        if (verbose)
            IJ.log("RGB(" + lut.getRed(labels[i]) + ", "
                + lut.getGreen(labels[i])
                + ", " + lut.getBlue(labels[i]) + ")");
        channels = new boolean[3];
        channels[0] = false;
        channels[1] = false;
        channels[2] = false;
        // add label image with corresponding color as an isosurface
        univ.addContent(labelImp, color, "label-" + labels[i], 0, channels, 2,
            ContentConstants.SURFACE);
    }
}

// launch smooth control
sc = new SmoothControl(univ);
At the end of the script, a dialog is shown to smooth the surfaces at will. Each label is
added to the 3D scene independently, with the name "label-X" where X is its label value.
Figure 10.1.: From left to right: input label image, script output, smoothed label surfaces and
example of individually translated surfaces in the 3D viewer.
Borgefors, G. (1986). Distance transformations in digital images. Comp. Vis. Graph. Im.
Proc., 34, 344–371.
Breen, E. J. & Jones, R. (1996). Attribute opening, thinnings, and granulometries. Computer
Vision and Image Understanding, 64(3), 377–389.
Burger, W. & Burge, M. J. (2008). Digital Image Processing, An algorithmic introduction using
Java. Springer.
Das, P. P. & Chatterji, B. N. (1988). Knight’s distance in digital geometry. Pattern Recognition
Letters, 7(4), 215–226.
Devaux, M.-F. & Legland, D. (2014). Microscopy: advances in scientific research and education,
chapter Grey level granulometry for histological image analysis of plant tissues, (pp. 681–
688). Formatex Research Center.
Doube, M., Kłosowski, M. M., Arganda-Carreras, I., Cordelières, F. P., Dougherty, R. P., Jackson,
J. S., Schmid, B., Hutchinson, J. R., & Shefelbine, S. J. (2010). BoneJ: free and
extensible bone image analysis in ImageJ. Bone, 47(6), 1076–1079.
Florindo, J. B., Bruno, O. M., Rossatto, D. R., Kolb, R. M., Gómez, M. C., & Landini, G.
(2016). Identifying plant species using architectural features in leaf microscopy images.
Botany, 94(1), 15–21.
Glasbey, C., van der Heijden, G., Toh, V. F. K., et al. (2007). Colour displays for categorical
images. Color Research and Application.
Heneghan, C., Flynn, J., O'Keefe, M., & Cahill, M. (2002). Characterization of changes
in blood vessel width and tortuosity in retinopathy of prematurity using image analysis.
Medical Image Analysis, 6(4), 407–429.
Kong, T. Y. & Rosenfeld, A. (1989). Digital topology: Introduction and survey. Computer
Vision, Graphics, and Image Processing, 48(3), 357–393.
Lang, C., Ohser, J., & Hilfer, R. (2001). On the analysis of spatial binary images. Journal of
Microscopy, 203(3), 303–313.
Lantuéjoul, C. & Beucher, S. (1981). On the use of geodesic metric in image analysis. Journal
of Microscopy, 121(1), 39–40.
Legland, D., Arganda-Carreras, I., & Andrey, P. (2016). MorphoLibJ: integrated library and
plugins for mathematical morphology with ImageJ. Bioinformatics, 32(22), 3532–3534.
Legland, D. & Devaux, M.-F. (2009). Détection semi-automatique de cellules de fruits charnus
observés par microscopie confocale 2d et 3d. Cahiers Techniques de l’INRA, numéro spécial
imagerie, 7–16.
Legland, D., Kiêu, K., & Devaux, M.-F. (2007). Computation of Minkowski measures on 2D
and 3D binary images. Image Analysis and Stereology, 26(6), 83–92.
Lehmann, G. & Legland, D. (2012). Efficient N-Dimensional surface estimation using Crofton
formula and run-length encoding. Technical report.
Luengo Hendriks, C. L. & van Vliet, L. J. (2003). Discrete morphology with line structuring
elements. In M. W. N. Petkov (Ed.), Computer Analysis of Images and Patterns - CAIP
2003 (Proc. 10th Int. Conf., Groningen, NL, Aug.25-27), volume 2756 of Lecture Notes in
Computer Science (pp. 722–729).: Springer Verlag, Berlin.
Ohser, J. & Schladitz, K. (2009). 3D Images of Materials Structures: processing and analysis.
WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Poulet, A., Arganda-Carreras, I., Legland, D., Probst, A. V., Andrey, P., & Tatout, C. (2015).
NucleusJ: an ImageJ plugin for quantifying 3D images of interphase nuclei. Bioinformatics,
31(7), 1144–1146.
Schindelin, J., Arganda-Carreras, I., Frise, E., Kaynig, V., Longair, M., Pietzsch, T., Preibisch,
S., Rueden, C., Saalfeld, S., Schmid, B., et al. (2012). Fiji: an open-source platform for
biological-image analysis. Nature methods, 9(7), 676–682.
Schneider, C. A., Rasband, W. S., Eliceiri, K. W., Schindelin, J., Arganda-Carreras, I., Frise,
E., Kaynig, V., Longair, M., Pietzsch, T., Preibisch, S., et al. (2012). NIH Image to ImageJ:
25 years of image analysis. Nature methods, 9(7).
Serra, J. (1982). Image Analysis and Mathematical Morphology. Volume 1. London: Academic
Press.
Serra, J. & Vincent, L. (1992). An overview of morphological filtering. Circuits, Systems and
Signal Processing, 11(1), 47–108.
Shoemake, K. (1994). Euler Angle Conversion. In P. S. Heckbert (Ed.), Graphics Gems (pp.
222 – 229). Academic Press.
Silva, J. V., Legland, D., Cauty, C., Kolotuev, I., & Floury, J. (2015). Characterization of
the microstructure of dairy systems using automated image analysis. Food Hydrocolloids,
44(0), 360 – 371.
Soille, P. & Vincent, L. M. (1990). Determining watersheds in digital pictures via flooding
simulations. In Lausanne-DL tentative (pp. 240–250).: International Society for Optics and
Photonics.
Staal, J., Abramoff, M., Niemeijer, M., Viergever, M., & van Ginneken, B. (2004). Ridge based
vessel segmentation in color images of the retina. IEEE Transactions on Medical Imaging,
23, 501–509.
Svensson, S. & Borgefors, G. (2002a). Digital distance transforms in 3d images using infor-
mation from neighbourhoods up to 5 x 5 x 5. Computer Vision and Image Understanding,
88(1), 24 – 53.
Svensson, S. & Borgefors, G. (2002b). Distance transforms in 3D using four different weights.
Pattern Recognition Letters, 23, 1407–1418.
Tustison, N. & Gee, J. (2009). Introducing Dice, Jaccard, and other label overlap measures
to ITK. Insight Journal, (pp. 1–4).
Verwer, B. J. H., Verbeek, P. W., & Dekker, S. T. (1989). An efficient uniform cost algorithm
applied to distance transforms. IEEE Trans. Pattern Anal. Mach. Intell., 11, 425–429.
Vincent, L. & Soille, P. (1991). Watersheds in digital spaces: an efficient algorithm based
on immersion simulations. IEEE Transactions on Pattern Analysis and Machine Intelligence,
13(6), 583–598.
Segmentation
Classic Watershed * * * * * * 31
Marker-controlled Watershed * * * * * * 33
Interactive Marker-controlled Watershed * * * * * * 35
Morphological Segmentation * * * * * * 38
Table A.1.: Image processing operators for grayscale and intensity images.
Plugin Name bin. 8 16 32 RGB 2D 3D GUI Page
Binary Images
Connected Components Labeling * n.a. n.a. n.a. n.a. * * * 76
Chamfer Distance Map * * * * n.a. * * 74
Chamfer Distance Map 3D * * * * n.a. * * 74
Geodesic Distance Map * * * * n.a. * * 76
Interactive Geodesic Distance Map * n.a. n.a. n.a. n.a. * * 76
Geodesic Distance Map 3D * * * * n.a. * * 76
Distance Transform Watershed * n.a. n.a. n.a. n.a. * * 43
Distance Transform Watershed 3D * n.a. n.a. n.a. n.a. * * 45
Convexify * n.a. n.a. n.a. n.a. * 77
Remove Largest Region * n.a. n.a. n.a. n.a. * * 77
Keep Largest Region * n.a. n.a. n.a. n.a. * * 77
Area Opening * n.a. n.a. n.a. n.a. * * 78
Size Opening 2D/3D * n.a. n.a. n.a. n.a. * * * 78
Label Images
Set Label Map n.a. * * * n.a. * * * 81
Labels To RGB n.a. * * * n.a. * * * 81
Draw Labels As Overlay n.a. * * * n.a. * 80
Assign Measure To Label * * * * n.a. * * * 81
Label Edition n.a. * * * n.a. * * * 89
Label Morphological Filters * * * * n.a. * * * 84
Remap Labels * * * * n.a. * * 85
Fill Label Holes * * * * n.a. * * 85
Expand Labels n.a. * * * n.a. * * * 85
Remove Border Labels n.a. * * * n.a. * * * 83
Replace / Remove Label(s) n.a. * * * n.a. * * * 83
Merge Labels n.a. * * * n.a. * * *
Select Label(s) n.a. * * * n.a. * * * 83
Crop Label n.a. * * * n.a. * * * 85
Remove Largest Label * * * * n.a. * * 85
Keep Largest Label * * * * n.a. * * 85
Label Size Filtering * * * * n.a. * * * 83
Region Adjacency Graph n.a. * * * n.a. * * 86
Region Boundaries Labeling n.a. * * * n.a. * * 86
Label Boundaries n.a. * * * n.a. * * 86
Select Neighbor Labels n.a. * * * n.a. * * * 88
Table A.2.: Image processing operators for binary and label images.
• the three radii r1, r2, r3, with r1 ≥ r2 ≥ r3, corresponding to the lengths of the
semi-axes
B.1. Notations
We consider Euclidean spaces of dimension 3. Let X be a 3D set representing the structure
of interest.
B.1.1. Moments
In 3D, the moments $m_{pqr}$ of order (p, q, r) are defined as:

$$m_{pqr} = \iiint I_X(x, y, z)\, x^p y^q z^r \cdot dx \cdot dy \cdot dz \tag{B.1}$$

where $I_X(x, y, z)$ is the indicator function of the set X, taking the value 1 if the specified point
is within the set X, and 0 otherwise. The moment $m_{000}$ corresponds to the volume of the
structure. The centered moments are expressed as:

$$\mu_{pqr} = \iiint I_X(x, y, z)\, (x - x_c)^p (y - y_c)^q (z - z_c)^r \cdot dx \cdot dy \cdot dz \tag{B.2}$$

where $(x_c, y_c, z_c) = \left( \frac{m_{100}}{m_{000}}, \frac{m_{010}}{m_{000}}, \frac{m_{001}}{m_{000}} \right)$ is the centroid of X. The matrix of second order moments
is the symmetric 3 × 3 matrix that combines all the centered second order moments:

$$M = \begin{pmatrix} \mu_{200} & \mu_{110} & \mu_{101} \\ \mu_{110} & \mu_{020} & \mu_{011} \\ \mu_{101} & \mu_{011} & \mu_{002} \end{pmatrix}$$
B.2 Computation of angles
The matrix M can be decomposed into:

$$M = Q \Lambda Q^T$$

where Q is the orthogonal matrix of the eigenvectors of M, and Λ is the diagonal matrix of its
eigenvalues λ1 ≥ λ2 ≥ λ3, ordered in decreasing order. The matrix Q can be used to extract the
orientation of the main axes of the structure (section B.2), whereas the eigenvalues λi can be
related to the particle dimensions along these axes (section B.3).
Such a matrix can be represented by a sequence of three successive rotations around the
main axes. As matrix multiplication does not commute, the order of the axes one rotates
about affects the result. In MorphoLibJ, we follow the choice of G. Slabaugh and consider
a rotation first about the x-axis, then about the y-axis, and finally about the z-axis. The
angles ψ, θ and ϕ then correspond to Euler angles: ψ corresponds to the “roll”, θ
to the “pitch”, and ϕ to the “yaw”.
The global rotation matrix can be written as follows:

$$R = R_z(\varphi) \cdot R_y(\theta) \cdot R_x(\psi) = \begin{pmatrix} \cos\theta\cos\varphi & \sin\psi\sin\theta\cos\varphi - \cos\psi\sin\varphi & \cos\psi\sin\theta\cos\varphi + \sin\psi\sin\varphi \\ \cos\theta\sin\varphi & \sin\psi\sin\theta\sin\varphi + \cos\psi\cos\varphi & \cos\psi\sin\theta\sin\varphi - \sin\psi\cos\varphi \\ -\sin\theta & \sin\psi\cos\theta & \cos\psi\cos\theta \end{pmatrix}$$
The problem is now to identify the three Euler angles ψ, θ and ϕ from the matrix coeffi-
cients. This results in nine equations.
B.2.2. Elevation
We start by considering the elevation θ, or “pitch”. From the element R31 of the matrix, one finds:

$$R_{31} = -\sin\theta \quad\Longrightarrow\quad \theta = -\sin^{-1}(R_{31})$$
B.2.3. Roll
The possible values of the roll ψ around the x-axis can be found from:

$$\frac{R_{32}}{R_{33}} = \tan\psi \quad\Longrightarrow\quad \psi = \operatorname{atan2}(R_{32}, R_{33})$$
where atan2(y, x) is the two-argument arc tangent function of y and x, which extends the
atan function to all four quadrants. The atan2 function is available in most programming
languages.
B.2.4. Azimuth

The azimuth ϕ, or “yaw”, can be obtained from:

$$\frac{R_{21}}{R_{11}} = \tan\varphi \quad\Longrightarrow\quad \varphi = \operatorname{atan2}(R_{21}, R_{11})$$
In the degenerate case cos θ = 0 (i.e. θ = ±π/2), the elements R11, R21, R32 and R33 all vanish,
and only a combination of ψ and ϕ can be recovered from R12 and R13, leading to:

$$\psi = \varphi + \operatorname{atan2}(R_{12}, R_{13}) \quad (\theta = \pi/2), \qquad \psi = -\varphi + \operatorname{atan2}(-R_{12}, -R_{13}) \quad (\theta = -\pi/2)$$

In this case, ϕ can be set arbitrarily (for instance to 0).
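For illustration, the extraction of the three angles from a rotation matrix takes only a few lines of code. The following plain-Java sketch is ours (row-major 3x3 array, angles in radians):

// Extracts the Euler angles (psi, theta, phi) from a rotation matrix
// R = Rz(phi).Ry(theta).Rx(psi); r[i][j] corresponds to element R(i+1)(j+1)
static double[] eulerAngles(double[][] r) {
    double psi, theta, phi;
    if (Math.abs(r[2][0]) < 1.0) {
        theta = -Math.asin(r[2][0]);
        psi = Math.atan2(r[2][1], r[2][2]);
        phi = Math.atan2(r[1][0], r[0][0]);
    } else if (r[2][0] <= -1.0) {
        // degenerate case theta = +pi/2: choose phi = 0
        theta = Math.PI / 2;
        phi = 0.0;
        psi = Math.atan2(r[0][1], r[0][2]);
    } else {
        // degenerate case theta = -pi/2: choose phi = 0
        theta = -Math.PI / 2;
        phi = 0.0;
        psi = Math.atan2(-r[0][1], -r[0][2]);
    }
    return new double[] { psi, theta, phi };
}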
$$x = \rho \cdot r_1 \cdot \cos\varphi \cdot \sin\theta, \qquad y = \rho \cdot r_2 \cdot \sin\varphi \cdot \sin\theta, \qquad z = \rho \cdot r_3 \cdot \cos\theta$$
The Jacobian matrix of the transformation is as follows (see example 3 in “Jacobian matrix
and determinant” on Wikipedia1 ):

$$J_\Phi(\rho, \theta, \varphi) = \begin{pmatrix} \frac{\partial x}{\partial \rho} & \frac{\partial x}{\partial \theta} & \frac{\partial x}{\partial \varphi} \\ \frac{\partial y}{\partial \rho} & \frac{\partial y}{\partial \theta} & \frac{\partial y}{\partial \varphi} \\ \frac{\partial z}{\partial \rho} & \frac{\partial z}{\partial \theta} & \frac{\partial z}{\partial \varphi} \end{pmatrix} = \begin{pmatrix} r_1\cos\varphi\sin\theta & \rho r_1\cos\varphi\cos\theta & -\rho r_1\sin\varphi\sin\theta \\ r_2\sin\varphi\sin\theta & \rho r_2\sin\varphi\cos\theta & \rho r_2\cos\varphi\sin\theta \\ r_3\cos\theta & -\rho r_3\sin\theta & 0 \end{pmatrix}$$

Its determinant equals $\rho^2\, r_1 r_2 r_3 \sin\theta$, leading to the volume:
$$m_{000} = r_1 r_2 r_3 \int_0^{2\pi}\!\!\int_0^{\pi}\!\!\int_0^{1} \rho^2 \sin\theta \cdot d\rho \cdot d\theta \cdot d\varphi = \frac{r_1 r_2 r_3}{3} \int_0^{2\pi}\!\!\int_0^{\pi} \sin\theta \cdot d\theta \cdot d\varphi = \frac{r_1 r_2 r_3}{3} \int_0^{2\pi} (-\cos\pi + \cos 0) \cdot d\varphi = \frac{2}{3}\, r_1 r_2 r_3 \int_0^{2\pi} d\varphi = \frac{4\pi}{3}\, r_1 r_2 r_3$$
The second-order moment $\mu_{200}$ is obtained in a similar way:

$$\mu_{200} = r_1 r_2 r_3 \int_0^{2\pi}\!\!\int_0^{\pi}\!\!\int_0^{1} \left( r_1 \rho \cos\varphi \sin\theta \right)^2 \rho^2 \sin\theta \cdot d\rho \cdot d\theta \cdot d\varphi = r_1^3 r_2 r_3 \int_0^{1} \rho^4 \, d\rho \int_0^{2\pi}\!\!\int_0^{\pi} \cos^2\varphi \, \sin^3\theta \cdot d\theta \cdot d\varphi = \frac{r_1^3 r_2 r_3}{5} \int_0^{2\pi} \cos^2\varphi \left( \int_0^{\pi} \sin^3\theta \cdot d\theta \right) d\varphi$$
1
https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant#Example_3:_spherical-Cartesian_transformation
The integral over θ can be developed using the linearisation $\sin^3\theta = \frac{3}{4}\sin\theta - \frac{1}{4}\sin(3\theta)$:

$$\int_0^{\pi} \sin^3\theta \, d\theta = \int_0^{\pi} \left( \frac{3}{4}\sin\theta - \frac{1}{4}\sin(3\theta) \right) d\theta = -\frac{3}{4}\left(\cos\pi - \cos 0\right) + \frac{1}{12}\left(\cos 3\pi - \cos 0\right) = \frac{3}{2} - \frac{1}{6} = \frac{4}{3}$$
Coming back to the moment integral:

$$\mu_{200} = \frac{r_1^3 r_2 r_3}{5} \cdot \frac{4}{3} \int_0^{2\pi} \cos^2\varphi \cdot d\varphi = \frac{r_1^3 r_2 r_3}{5} \cdot \frac{4}{3} \int_0^{2\pi} \left( \frac{1}{2} + \frac{1}{2}\cos(2\varphi) \right) d\varphi = \frac{r_1^3 r_2 r_3}{5} \cdot \frac{4}{3} \cdot \frac{1}{2} \, (2\pi - 0) = \frac{4\pi}{3} \, \frac{r_1^3 r_2 r_3}{5} = \frac{r_1^2}{5} \, m_{000}$$
Then, the central moments are expressed as:

$$\lambda_1 = \mu_{200} = \frac{r_1^2}{5}\, m_{000}, \qquad \lambda_2 = \mu_{020} = \frac{r_2^2}{5}\, m_{000}, \qquad \lambda_3 = \mu_{002} = \frac{r_3^2}{5}\, m_{000}$$
We therefore have $\lambda_i = \frac{r_i^2}{5} m_{000}$, leading to:

$$r_i = \sqrt{\frac{5\,\lambda_i}{m_{000}}} \tag{B.4}$$